Data Engineer II: 25-07190
Data engineer job in Seattle, WA
Process Skills: Data Pipeline (Proficient), SQL (Expert), Python (Proficient), ETL (Expert), QuickSight (Intermediate)
Contract Type: W2 Only
Duration: 5+ months (high possibility of conversion)
Pay Range: $65.00 - $70.00 Per Hour
Summary:
Join the Sustain.AI team as a Data Engineer to lead the pivotal migration of sustainability data to modern AWS infrastructure, fostering a net-zero future. This role involves transforming and automating sustainability workflows into data-driven processes, significantly impacting our environmental goals. Your work will directly contribute to reducing the carbon footprint by creating an automated, scalable data infrastructure and ensuring precise data quality and consistency.
Key Responsibilities:
Migrate ETL jobs from legacy systems to AWS, ensuring no operational disruptions.
Set up and manage AWS data services (Redshift, Glue) and orchestrate workflows.
Transform existing workflows into scalable, efficient AWS pipelines with robust validation.
Collaborate with various teams to understand and fulfill data requirements for sustainability metrics.
Document new architecture, implement data quality checks, and communicate with stakeholders on progress and challenges.
Must-Have Skills:
Advanced proficiency in ETL Pipeline and AWS data services (Redshift, S3, Glue).
Expertise in SQL and experience with Python or Spark for data transformation.
Proven experience in overseeing data migrations with minimal disruption.
Industry Experience Required:
Experience in sustainability data, carbon accounting, or environmental metrics is highly preferred. Familiarity with large-scale data infrastructure projects in complex enterprise environments is essential.
ABOUT AKRAYA
Akraya is an award-winning IT staffing firm consistently recognized for our commitment to excellence and a thriving work environment. Most recently, we were recognized on Inc.'s Best Workplaces 2024 list, named one of Silicon Valley's Best Places to Work by the San Francisco Business Journal (2024), and included in Glassdoor's Best Places to Work (2023 & 2022)!
Industry Leaders in IT Staffing
As staffing solutions providers for Fortune 100 companies, Akraya's industry recognitions solidify our leadership position in the IT staffing space. We don't just connect you with great jobs; we connect you with a workplace that inspires!
Join Akraya Today!
Let us lead you to your dream career and experience the Akraya difference. Browse our open positions and join our team!
Data Engineer
Data engineer job in Seattle, WA
A globally leading consumer device company based in Seattle, WA is looking for an ML Data Pipeline Engineer to join their dynamic team!
Job Responsibilities:
• Assist team in building, maintaining, and running essential ML Data Pipelines.
Minimum Requirements: 5+ years of experience
Experience with cloud platforms (AWS, Azure, GCP)
Strong Data quality focus - implementing data validation, cleansing, transformation
Strong proficiency in Python and related data science libraries (e.g., pandas, NumPy).
Experience with distributed computing
Experience dealing with image-type and array-type data
Able to automate repetitive tasks and processes
Experience with scripting and automation tools
Able to execute tasks independently and creatively
Languages: Fluent in English or Farsi
Type: Contract
Duration: 12 months with extension
Work Location: Seattle, WA (hybrid)
Pay Rate Range: $40.00 - $55.00 (DOE)
AWS Data Engineer
Data engineer job in Seattle, WA
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data.
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Business Intelligence Engineer
Data engineer job in Seattle, WA
Pay rate range - $55/hr. to $60/hr. on W2
Onsite role
Must Have -
Expert Python and SQL
Visualization and development
Required Skills
- 5-7 years of experience working with large-scale complex datasets
- Strong analytical mindset, ability to decompose business requirements into an analytical plan, and execute the plan to answer those business questions
- Strong working knowledge of SQL
- Background (academic or professional) in statistics, programming, and marketing
- SAS experience a plus
- Graduate degree in math/statistics, computer science or related field, or marketing is highly desirable.
- Excellent communication skills, equally adept at working with engineers as well as business leaders
Daily Schedule
- Evaluation of the performance of program features and marketing content along measures of customer response, use, conversion, and retention
- Statistical testing of A/B and multivariate experiments
- Design, build and maintain metrics and reports on program health
- Respond to ad hoc requests from business leaders to investigate critical aspects of customer behavior, e.g. how many customers use a given feature or fit a given profile, deep dive into unusual patterns, and exploratory data analysis
- Employ data mining, model building, segmentation, and other analytical techniques to capture important trends in the customer base
- Participate in strategic and tactical planning discussions
About the role
Understanding customer behavior is paramount to our success in providing customers with convenient, fast, free shipping in the US and international markets.
As a Senior Business Intelligence Engineer, you will work with our world-class marketing and technology teams to ensure that we continue to delight our customers.
You will meet with business owners to formulate key questions, leverage the client's vast Data Warehouse to extract and analyze relevant data, and present your findings and recommendations to management in a way that is actionable.
PAM Platform Engineer
Data engineer job in Seattle, WA
Job Title: Privileged Access Management - BeyondTrust Engineer
Duration: 06+ month contract (with possible extension)
Pay Rate: $97.31/hr on W2
Benefits: Medical, Dental, Vision.
Job Description:
Summary:
As a PAM Platform Engineer on Client's Identity & Access Management team, you'll be a key technical specialist responsible for designing, implementing, and maintaining our enterprise-wide Privileged Access Management infrastructure using BeyondTrust. You'll lead the rollout of BeyondTrust and support ongoing management of our privileged access solutions, including password management, endpoint privilege management, and session management capabilities across our retail technology ecosystem.
Join our cybersecurity team to drive enterprise-level PAM adoption while maintaining Client's commitment to innovation, security excellence, and work-life balance.
A day in the life...
PAM Platform Leadership: Serve as the primary technical expert for privileged access management solutions, including architecture, deployment, configuration, and optimization of password vaults and endpoint privilege management systems
Enterprise PAM Implementation: Design and execute large-scale PAM deployments across Windows, macOS, and Linux environments, ensuring seamless integration with existing infrastructure
Policy Development & Management: Create and maintain privilege elevation policies, credential rotation schedules, access request workflows, and governance rules aligned with security and compliance requirements
Integration & Automation: Integrate PAM solutions with ITSM platforms, SIEM tools, vulnerability scanners, directory services, and other security infrastructure to create comprehensive privileged access workflows
Troubleshooting & Support: Provide expert-level technical support for PAM platform issues, performance optimization, privileged account onboarding, and user access requests
Security & Compliance: Ensure PAM implementations meet PCI DSS and other requirements through proper audit trails, session recording and monitoring, and privileged account governance
Documentation & Training: Develop technical documentation, procedures, and training materials for internal teams and end users
Continuous Improvement: Monitor platform performance, evaluate new features, and implement best practices to enhance security posture and operational efficiency
You own this if you have...
Required Qualifications:
4-6+ years of hands-on experience implementing and managing BeyondTrust PAM at the enterprise level.
BeyondTrust certifications are preferred.
Deep expertise in privileged account discovery, credential management, password rotation, session management, and access request workflows using BeyondTrust
Strong understanding of Windows Server administration, Active Directory, Group Policy, and PowerShell scripting
Experience with Linux/Unix system administration and shell scripting for cross-platform BeyondTrust PAM deployments
Knowledge of networking fundamentals including protocols, ports, certificates, load balancing, and security hardening
Experience with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes)
Understanding of identity and access protocols (SAML, OIDC, OAuth, SCIM, LDAP) and their integration with PAM solutions
Preferred Qualifications:
Knowledge of DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, Ansible)
Familiarity with ITSM integration (ServiceNow, Jira) for ticket-driven privileged access workflows
Experience with SIEM integration and security monitoring platforms (Splunk, QRadar, etc.)
Understanding of zero trust architecture and least privilege access principles
Experience with secrets management platforms (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault)
Previous experience in retail technology environments or large-scale enterprise deployments
Industry certifications such as CISSP, CISM, or relevant cloud security certifications
Technical Skills:
PAM Platforms: Experience with BeyondTrust.
Operating Systems: Windows Server (2016/2019/2022), Windows 10/11, macOS, RHEL, Ubuntu, SUSE
Databases: SQL Server, MySQL, PostgreSQL, Oracle for PAM backend configuration
Virtualization: VMware vSphere, Hyper-V, cloud-based virtual machines
Scripting: PowerShell, Bash, Python for automation and integration tasks
Security Tools: Integration experience with vulnerability scanners, endpoint detection tools, and identity governance platforms
Hiring a PAM Platform Engineer (BeyondTrust) to lead enterprise-level privileged access security and drive our next-gen IAM transformation!
PAM Platform Engineer
Data engineer job in Seattle, WA
Job Title: BeyondTrust SME - PAM Platform Engineer
Job Type: Contract (6-12 Months, Extendable)
Experience: 7-12+ Years
We are hiring a Contract BeyondTrust SME / PAM Platform Engineer to support an enterprise Privileged Access Management program. The consultant will lead implementation, configuration, automation, and operational support for BeyondTrust PAM solutions across hybrid environments.
Responsibilities
BeyondTrust Engineering (Primary)
Install, configure, upgrade, and maintain BeyondTrust Password Safe, BeyondInsight, and Endpoint Privilege Management (EPM).
Configure credential vaulting, password rotation, SSH key management, and privileged session monitoring.
Onboard privileged accounts for servers, applications, DBs, and network devices.
Create and enforce Windows/Linux privilege elevation policies.
PAM Operations & Automation
Build automated onboarding workflows using PowerShell / Python / REST APIs.
Manage policy updates, access approvals, JIT access workflows, and audit reporting.
Troubleshoot PAM failures, authentication issues, session proxy errors, and integration problems.
Integration & Governance
Integrate BeyondTrust with Active Directory, Azure AD, LDAP, SIEM, and ServiceNow.
Ensure compliance with SOX, PCI, NIST, ISO requirements.
Provide documentation, architecture diagrams, SOPs, and operational runbooks.
Required Skills
3-5+ years of hands-on experience with BeyondTrust (Password Safe, BeyondInsight, EPM).
Strong understanding of PAM concepts: vaulting, rotation, session recording, privileged access workflows.
Scripting for automation: PowerShell / Python.
Strong Linux & Windows administration background.
Experience with IAM, RBAC, MFA, Kerberos, SAML, OAuth.
Integration experience with AD / Azure AD / LDAP.
Nice to Have
Experience with CyberArk, HashiCorp Vault, Thycotic, or Delinea.
Cloud platform knowledge (AWS, Azure, GCP).
Security certifications (Security+, CISSP, etc.).
Kettle Engineer
Data engineer job in Seattle, WA
Job Title: Kettle Engineer
Type: Contract
We're seeking a Kettle Engineer to design, operate, and continuously improve our Pentaho Data Integration (PDI/Kettle) platform and the data movement processes that underpin business-critical workflows. You will own end-to-end lifecycle management, from environment build and configuration to orchestration, monitoring, and support, partnering closely with application teams, production operations, and data stakeholders. The ideal candidate combines strong hands-on Kettle expertise with solid SQL, automation, and production support practices in a fast-moving, highly collaborative environment.
Primary Responsibilities
Platform ownership: Install, configure, harden, and upgrade Kettle/PDI components (e.g., Spoon) across dev/prod.
Process engineering: Migrate, re-engineer, and optimize existing jobs and transformations.
Reliability & support: Document all workflows, including ownership and escalation protocols. Conduct knowledge transfer with the Automation and Application Support teams.
Observability: Implement proactive monitoring, logging, and alerting for the Kettle platform including all dependent processes.
Collaboration: Partner with application, data, and infrastructure teams to deliver improvements to existing designs
Required Qualifications
Kettle/PDI expertise: Experience installing and configuring a Kettle instance (server and client tools, repositories, parameters, security, and upgrades).
Kettle/PDI expertise: Experience creating, maintaining, and supporting Kettle processes (jobs/transformations, error handling, recovery, and performance tuning).
4+ years of hands-on SQL experience (writing, diagnosing, and optimizing queries).
Strong communication skills for both technical and non‑technical audiences; effective at documenting and sharing knowledge.
Preferred Qualifications
Experience integrating Kettle with cloud platforms (AWS and/or Azure); familiarity with containers or Windows/Linux server administration.
Exposure to monitoring/observability stacks (e.g., DataDog, CloudWatch, or similar).
Scripting/automation for operations (Python, PowerShell, or Bash); experience with REST APIs within Kettle.
Background in financial services or other regulated/mission‑critical environments.
Key Outcomes (First 90 Days)
Stand up or validate a hardened Kettle environment with baseline monitoring and runbooks.
Migrate at least two high‑value Kettle workflows using shared templates and standardized error handling.
Azure & GCP Engineer
Data engineer job in Seattle, WA
Job Title: Azure & GCP Engineer
Duration: 6 months
We are seeking a highly skilled and experienced Azure and Google Cloud Engineer to join our team. The ideal candidate will be responsible for troubleshooting and managing Azure as well as Google Cloud solutions that drive business transformation. This role requires a deep understanding of Cloud services, strong architectural principles, and the ability to translate business requirements into scalable, reliable, and secure cloud solutions.
Key Responsibilities:
Technical Leadership:
o Lead and mentor a team of cloud engineers and developers, providing guidance on best practices and technical issues.
o Act as a subject matter expert for Azure, staying current with the latest services, tools, and best practices.
o Develop comprehensive cloud architectures leveraging GCP services such as Compute Engine, Kubernetes Engine, BigQuery, Pub/Sub, Cloud Functions, and others.
o Design scalable, secure, and cost-effective cloud solutions to meet business and technical requirements.
Cloud Strategy and Roadmap:
o Define a long-term cloud strategy for the organization, including migration plans, optimization, and governance frameworks.
o Assess and recommend best practices for cloud-native and hybrid cloud solutions.
Solution Implementation:
o Implement CI/CD pipelines, monitoring, and infrastructure-as-code (e.g., Terraform, Cloud Deployment Manager).
Collaboration and Leadership:
o Work closely with development, operations, and business teams to understand requirements and provide technical guidance.
o Mentor junior team members and foster a culture of continuous learning and innovation.
Performance Optimization:
o Optimize GCP services for performance, scalability, and cost efficiency.
o Monitor and resolve issues related to cloud infrastructure, applications, and services.
Documentation and Reporting:
o Create and maintain technical documentation, architectural diagrams, and operational runbooks.
o Provide regular updates and reports to stakeholders on project progress, risks, and outcomes.
Required Skills and Qualifications:
• Education: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
• Experience:
o 7+ years of experience in cloud architecture and 4+ years specifically with GCP.
o Proven expertise in designing and implementing large-scale cloud solutions.
o Experience with application modernization using containers, microservices, and serverless architectures.
• Technical Skills:
o Proficiency in GCP services (e.g., BigQuery, Cloud Spanner, Kubernetes Engine, Cloud Run, Dataflow).
o Strong experience with Infrastructure as Code (e.g., Terraform, Deployment Manager).
o Knowledge of DevOps practices, CI/CD pipelines, and container orchestration tools.
o Familiarity with databases (relational and NoSQL) and data pipelines.
• Certifications: GCP certifications such as Professional Cloud Architect or Professional Data Engineer are highly preferred.
Solution Development and Deployment:
Oversee the deployment, management, and maintenance of cloud applications.
Automate deployment and configuration management processes using Infrastructure as Code (IaC) tools such as ARM templates, Terraform, or Azure Bicep.
Develop disaster recovery and business continuity plans for cloud services.
Collaboration and Communication:
Collaborate with cross-functional teams including development, operations, and security to ensure seamless integration and operation of cloud systems.
Communicate effectively with stakeholders to understand business requirements and provide cloud solutions that meet their needs.
Performance and Optimization:
Monitor and optimize the performance of cloud systems to ensure they meet service level agreements (SLAs).
Implement cost management strategies to optimize cloud spending.
Security and Compliance:
Ensure that all cloud solutions comply with security policies and industry regulations.
Implement robust security measures, including identity and access management (IAM), network security, and data protection.
Continuous Improvement:
Drive continuous improvement initiatives to enhance the performance, reliability, and scalability of cloud solutions.
Participate in architecture reviews and provide recommendations for improvements.
Technical Skills:
Extensive experience with Google Cloud and Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Services, Azure Functions, Azure Kubernetes Service (AKS), and Azure SQL Database.
Proficiency in Azure networking, storage, and database services
Strong skills in Infrastructure as Code (IaC) tools such as ARM templates, Terraform, or Azure Bicep.
Experience with continuous integration/continuous deployment (CI/CD) pipelines using Azure DevOps or other similar tools.
Deep understanding of cloud security best practices and tools including Azure Security Center, Azure Key Vault, and Azure Policy.
Familiarity with compliance frameworks such as GDPR, HIPAA, and SOC 2.
Proficiency in scripting languages like PowerShell, Python, or Bash for automation tasks.
Experience with configuration management tools like Ansible or Chef is a plus.
Qualifications:
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field. Master's degree preferred.
Experience:
Minimum of 5 years of experience in cloud architecture with a focus on Microsoft Azure.
Proven track record of designing and deploying large-scale cloud solutions.
Certifications:
Relevant Azure certifications such as Microsoft Certified: Azure Solutions Architect Expert, Microsoft Certified: Azure DevOps Engineer Expert, or similar.
Founding Applied AI Engineer
Data engineer job in Seattle, WA
Founding Engineer - Early-Stage AI Startup
Join a venture-backed, seed-stage, fast-growing team led by repeat founders with a track record of successful AI company exits. They are building foundational AI infrastructure to solve one of the most pressing challenges in modern workflows: fragmented context. They help teams work smarter and deliver faster by building continuity between humans and AI.
Why You'll Want to Join
Be among the first engineering hires, shaping both the product and company culture alongside experienced founders and a world-class technical team.
Work on cutting-edge retrieval and memory systems (embeddings, vector search, graph-based architectures) powering context-aware AI applications.
Enjoy significant ownership and autonomy: drive projects from ideation to production, experiment with new approaches, and influence technical direction.
Competitive compensation and equity, with flexibility to tailor your package based on what matters most to you.
Comprehensive benefits, including generous healthcare coverage for you and your dependents, and flexible PTO.
Hybrid work model: collaborate in-person with the founding team in Seattle, with flexibility for remote days.
Opportunity to earn a “Founding Engineer” title for the right candidate, reflecting your impact and early contributions.
Backed by top-tier investors and fresh off a successful fundraise, you'll have the resources and support to build something meaningful.
The role
Architect and implement core AI systems for knowledge extraction, retrieval, and orchestration.
Design and optimize retrieval and memory infrastructure (e.g., GraphRAG, vector search) for low-latency, high-precision context delivery.
Build agentic systems for autonomous reasoning, synthesis, and evaluation.
Develop prompt and orchestration frameworks connecting multiple models, tools, and data sources.
Prototype and validate new product directions, collaborating closely with founders and cross-functional teammates.
The experience we seek
Strong Python engineering skills and experience building scalable, maintainable systems.
Deep knowledge of retrieval architectures (embeddings, vector DBs, hybrid/graph search).
Hands-on experience with LLM orchestration frameworks (LangChain, LlamaIndex, or similar).
Proven ability to build and tune LLM-based agents for complex reasoning tasks.
Startup DNA: thrive in ambiguity, take ownership, and move fast.
Passion for building in the AI space and excitement about joining an early-stage team.
If you're ready to have a massive impact, learn from proven founders, and help define the future of human-AI collaboration, we'd love to connect.
Prime Team Partners is an equal opportunity employer. Prime Team Partners does not discriminate on the basis of race, color, religion, national origin, pregnancy status, gender, age, marital status, disability, medical condition, sexual orientation, or any other characteristics protected by applicable state or federal civil rights laws. For contract positions, hired candidates will be employed by Prime Team for the duration of the contract period and be eligible for our company benefits. Benefits include medical, dental, and vision; employees are covered at 75%. We offer a 401K after 6 months. We do not provide paid holidays or PTO; sick time is offered in accordance with local laws.
Senior Staff Software Engineer
Data engineer job in Seattle, WA
We are seeking a highly experienced Senior Staff Software Engineer to lead and deliver complex technical projects from inception to deployment. This role requires a strong background in software architecture, hands-on development, and technical leadership across the full software development lifecycle.
This role is with a fast-growing technology company pioneering AI-driven solutions for real-world infrastructure. Backed by significant recent funding and valued at over $5 billion, the company is scaling rapidly across multiple verticals, including mobility, retail, and hospitality. Its platform leverages computer vision and cloud technologies to create frictionless, intelligent experiences, positioning it as a leader in the emerging Recognition Economy, a paradigm where physical environments adapt in real time to user presence and context.
Required Qualifications
10+ years of professional software engineering experience.
Proven track record of leading and delivering technical projects end-to-end.
Strong proficiency in Java or Scala.
Solid understanding of cloud technologies (AWS, GCP, or Azure).
Experience with distributed systems, microservices, and high-performance applications.
Preferred / Bonus Skills
Advanced expertise in Scala.
Prior experience mentoring engineers and building high-performing teams.
Background spanning FAANG companies or high-growth startups.
Exposure to AI/ML or general AI technologies.
BE Software Engineer (Block Storage)
Data engineer job in Seattle, WA
Backend Software Engineer (Block Storage)
W2 Contract
Salary Range: $114,400 - $135,200 per year
We are looking for collaborative, curious, and pragmatic Software Engineers to be part of this innovative team. You will be able to shape the product's features and architecture as it scales by orders of magnitude. Being part of our Cloud Infrastructure organization opens the door to exerting cross-functional influence and making a more significant organizational impact.
Requirements and Qualifications:
Proficient with UNIX/Linux
Coding skills in one or more of these programming languages: Rust, C++, Java or C#
Experience with scripting languages (Bash, Python, Perl)
Excellent knowledge of software testing methodologies & practices
2 years of professional software development experience
Strong ownership and track record of delivering results
Excellent verbal and written communication skills
Bachelor's Degree in Computer Science, an engineering-related field, or equivalent related experience.
Preferred Qualifications:
Proficiency in Rust
Experience with high-performance asynchronous IO systems programming
Knowledge of distributed systems
Bayside Solutions, Inc. is not able to sponsor any candidates at this time. Additionally, candidates for this position must qualify as a W2 candidate.
Bayside Solutions, Inc. may collect your personal information during the position application process. Please reference Bayside Solutions, Inc.'s CCPA Privacy Policy at *************************
Lead Data Scientist, Ad Platforms
Data engineer job in Seattle, WA
Technology is at the heart of Disney's past, present, and future. Disney Entertainment and ESPN Product & Technology is a global organization of engineers, product developers, designers, technologists, data scientists, and more - all working to build and advance the technological backbone for Disney's media business globally.
The team marries technology with creativity to build world-class products, enhance storytelling, and drive velocity, innovation, and scalability for our businesses. We are Storytellers and Innovators. Creators and Builders. Entertainers and Engineers. We work with every part of The Walt Disney Company's media portfolio to advance the technological foundation and consumer media touch points serving millions of people around the world.
Here are a few reasons why we think you'd love working here:
Building the future of Disney's media: Our Technologists are designing and building the products and platforms that will power our media, advertising, and distribution businesses for years to come.
Reach, Scale & Impact: More than ever, Disney's technology and products serve as a signature doorway for fans' connections with the company's brands and stories. Disney+. Hulu. ESPN. ABC. ABC News…and many more. These products and brands - and the unmatched stories, storytellers, and events they carry - matter to millions of people globally.
Innovation: We develop and implement groundbreaking products and techniques that shape industry norms, and solve complex and distinctive technical problems.
Ad Platforms is responsible for Disney's industry-leading ad technology and products - driving advertising performance, innovation, and value in Disney's sports, news, and entertainment content, across all media platforms.
The research/AI fleet is the dedicated AI/ML and data science arm of the Ad Platforms organization. Its mission is to advance AI and machine learning capabilities across Ad Platforms by delivering scalable, high-impact AI/ML and data science solutions that enhance ad decisioning, forecasting, experimentation, and ad experience. We are seeking a Lead Data Scientist to join this innovative team. This role is a critical leadership opportunity for a seasoned data science professional who thrives at the intersection of advanced analytics, machine learning, and business strategy. As a Lead Data Scientist, you will spearhead analytical initiatives that shape the future of advertising technology. You will be responsible for designing and executing data science solutions that drive measurable impact across ad decisioning, forecasting, experimentation, and user experience optimization.
WHAT YOU'LL DO
* Drive innovation and apply data science and statistical analysis in a variety of areas to enhance every aspect of advertising, including inventory forecasting, ad experience, ad pacing, pricing, targeting, and efficient ad delivery.
* Translate business questions into data science frameworks; develop data analyses and data models for complex ad challenges with fast turnaround.
* Define and evaluate key metrics in various product areas.
* Deliver data insights and provide data-driven recommendations to cross-team stakeholders.
* Design and analyze A/B tests for machine learning algorithm iterations.
* Establish strong partnerships with stakeholders, including product managers, engineers, and program managers, in a collaborative environment.
* Mentor team members and support their technical growth.
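The A/B testing responsibility above can be sketched concretely. Below is a minimal, stdlib-only Python example of a two-sided, two-proportion z-test for comparing conversion rates between a control and a treatment variant of an ad algorithm; all sample numbers are hypothetical and this is an illustration of the statistical technique, not Disney's actual tooling:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control vs. a new pacing algorithm.
z, p = two_proportion_ztest(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
```

In practice a team at this scale would also account for multiple comparisons and sequential peeking, but the core comparison is the one shown.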
WHAT TO BRING
* Bachelor's (MS or PhD preferred) in Computer Science, Software Engineering, or a related field
* At least 7 years of relevant industry experience in data science at leading internet companies. Experience in the advertising domain is preferred.
* Solid knowledge of data science and statistics. Strong communication skills and business acumen.
* Proficient in SQL, Python, Java, or R. Familiar with Tableau and/or other visualization tools.
* Passionate about understanding the ad business, applying data science to relevant business scenarios, and seeking innovation opportunities to enhance business effectiveness.
* Passionate about technology and experienced in building data-driven services and applications.
* A proven track record of thriving in a fast-paced, data-driven, and collaborative work environment is required.
NICE-TO-HAVES
* Experience in digital video advertising or digital marketing domain
* Experience with machine learning models.
* Experience with Airflow, data warehouse, Databricks or Sagemaker
#DISNEYTECH
The hiring range for this position in Los Angeles, CA area is $155,700 - $208,700 per year and Seattle Area is $163,100 - $218,700. The base pay actually offered will take into account internal equity and also may vary depending on the candidate's geographic region, job-related knowledge, skills, and experience among other factors. A bonus and/or long-term incentive units may be provided as part of the compensation package, in addition to the full range of medical, financial, and/or other benefits, dependent on the level and position offered.
About Disney Entertainment and ESPN Product & Technology:
At Disney Entertainment and ESPN Product & Technology, we're blending imagination and innovation to reimagine the ways people experience and engage with the world's most beloved stories and products. Our work is wide-ranging and deeply sophisticated. We create amazing experiences, transform the future of media, and build products and platforms that enable the connection between people everywhere and the stories and sports they love.
Disney's ability to marry world-class technology with one-of-a-kind creativity makes us unique. It is at the heart of our past, present, and future. We are Storytellers and Innovators. Creators and Builders. Entertainers and Engineers.
About The Walt Disney Company:
The Walt Disney Company, together with its subsidiaries and affiliates, is a leading diversified international family entertainment and media enterprise that includes three core business segments: Disney Entertainment, ESPN, and Disney Experiences. From humble beginnings as a cartoon studio in the 1920s to its preeminent name in the entertainment industry today, Disney proudly continues its legacy of creating world-class stories and experiences for every member of the family. Disney's stories, characters and experiences reach consumers and guests from every corner of the globe. With operations in more than 40 countries, our employees and cast members work together to create entertainment experiences that are both universally and locally cherished.
This position is with Disney Streaming Technology LLC, which is part of a business we call Disney Entertainment and ESPN Product & Technology.
Disney Streaming Technology LLC is an equal opportunity employer. Applicants will receive consideration for employment without regard to race, religion, color, sex, sexual orientation, gender, gender identity, gender expression, national origin, ancestry, age, marital status, military or veteran status, medical condition, genetic information or disability, or any other basis prohibited by federal, state or local law. Disney champions a business environment where ideas and decisions from all people help us grow, innovate, create the best stories and be relevant in a constantly evolving world.
Data Scientist, Product Analytics
Data engineer job in Seattle, WA
As a Data Scientist at Meta, you will shape the future of people-facing and business-facing products we build across our entire family of applications (Facebook, Instagram, Messenger, WhatsApp, Oculus). By applying your technical skills, analytical mindset, and product intuition to one of the richest data sets in the world, you will help define the experiences we build for billions of people and hundreds of millions of businesses around the world. You will collaborate on a wide array of product and business problems with a wide range of cross-functional partners across Product, Engineering, Research, Data Engineering, Marketing, Sales, Finance and others. You will use data and analysis to identify and solve product development's biggest challenges. You will influence product strategy and investment decisions with data, be focused on impact, and collaborate with other teams. By joining Meta, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
Product leadership: You will use data to shape product development, quantify new opportunities, identify upcoming challenges, and ensure the products we build bring value to people, businesses, and Meta. You will help your partner teams prioritize what to build, set goals, and understand their product's ecosystem.
Analytics: You will guide teams using data and insights. You will focus on developing hypotheses and employ a varied toolkit of rigorous analytical approaches, different methodologies, frameworks, and technical approaches to test them.
Communication and influence: You won't simply present data, but tell data-driven stories. You will convince and influence your partners using clear insights and recommendations. You will build credibility through structure and clarity, and be a trusted strategic partner.
**Required Skills:**
Data Scientist, Product Analytics Responsibilities:
1. Work with large and complex data sets to solve a wide array of challenging problems using different analytical and statistical approaches
2. Apply technical expertise with quantitative analysis, experimentation, data mining, and the presentation of data to develop strategies for our products that serve billions of people and hundreds of millions of businesses
3. Identify and measure success of product efforts through goal setting, forecasting, and monitoring of key product metrics to understand trends
4. Define, understand, and test opportunities and levers to improve the product, and drive roadmaps through your insights and recommendations
5. Partner with Product, Engineering, and cross-functional teams to inform, influence, support, and execute product strategy and investment decisions
**Minimum Qualifications:**
6. Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
7. A minimum of 6 years of work experience in analytics (minimum of 4 years with a Ph.D.)
8. Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience
9. Experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), and/or statistical/mathematical software (e.g. R)
**Preferred Qualifications:**
10. Master's or Ph.D. Degree in a quantitative field
**Public Compensation:**
$173,000/year to $242,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Java+ Big Data consultant with expertise in AWS
Data engineer job in Seattle, WA
Focus America Inc
Hello folks! We are actively looking for Senior Big Data resources for one of our direct clients. If you think you are a fit, please send me your updated resume and I will call you to help connect you with your dream job.
Java+ Big Data consultant with expertise in AWS
Location: Orlando, FL or Seattle, WA
Duration: 6 months extendable
Interview Process: 3 Skype rounds
JD
• The skills below are needed to help build the Data Management Cloud Platform:
• Looking for engineers with experience in Java, AWS Cloud, and Big Data/Data Management, plus experience in automation and/or building CI/CD pipelines.
• Experience in Java, AWS, and Big Data is critical.
• Experience in Docker, Terraform, Jenkins, and Python is optional.
• 10-12+ years of experience in Core Java, with at least 4 years of experience in Big Data.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Software Engineer - Data Engineering, Otter - Seattle
Data engineer job in Seattle, WA
Who we are
In the past, to be a successful restaurateur, you simply had to have a passion for food and a passion for people - but to succeed as a digital restaurateur you also need to have a passion for technology. We believe in the joy of serving others, and that's why we created Otter - to help restaurateurs succeed in online food delivery. Restaurants around the world, both large and small, including Chick-fil-A, Ben & Jerry's, KFC, and Eataly trust our software to power their delivery business. We increase sales, reduce order issues, and decrease delivery headaches.
What you'll do
Design, build, and maintain scalable batch data pipelines using SQLMesh (or equivalents such as DBT) for robust, reliable data delivery.
Own our OLAP data storage strategy and implement performant solutions for data access.
Develop and optimize relational data models to support analytical and operational needs.
Implement and monitor data quality strategies to ensure accuracy and consistency of delivered datasets.
Build internal platform tooling to streamline pipeline development, deployment, and monitoring.
Partner closely with business and analytics teams to gather requirements and translate them into effective pipeline logic.
Promote best practices in batch data engineering, data governance, and infrastructure optimizations.
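The incremental batch pipelines described above (whether built in SQLMesh, dbt, or similar) share one core idea: track a watermark of what has been processed and only (re)process the partitions after it. A hedged, pure-Python sketch of that scheduling logic, with hypothetical names and no claim about Otter's actual implementation:

```python
from datetime import date, timedelta

def daily_batches(last_processed, today):
    """Yield the daily date partitions an incremental run still needs to process.

    last_processed is the watermark: the most recent partition already loaded.
    """
    d = last_processed + timedelta(days=1)
    while d <= today:
        yield d
        d += timedelta(days=1)

# Example: the pipeline last ran through May 1; catching up to May 4
# should process exactly the May 2, 3, and 4 partitions.
pending = list(daily_batches(date(2024, 5, 1), date(2024, 5, 4)))
```

Frameworks like SQLMesh add interval tracking, backfills, and virtual environments on top, but the watermark-and-gap computation is the primitive underneath.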
What we're looking for
Experience designing and implementing batch data pipelines with SQLMesh, DBT, or similar frameworks.
Strong skills in SQL and relational data modeling concepts.
Proficiency in Python, Java, and/or other languages used for pipeline orchestration and platform tooling.
Familiarity with data quality approaches and monitoring in modern data platforms.
Collaborative mindset with ability to work closely with business and technical stakeholders to define logic.
Excellent problem-solving and communication skills, able to translate complex technical ideas to non-technical audiences.
Why join us
Demand for online food delivery is growing fast! In the last 5 years, just in the US, the overall market has expanded 10X from $10B to $100B, and could expand to $500B-$1T by 2030.
Changing the restaurant industry: You'll be part of a team that helps restaurants succeed in online food delivery.
Collaborative environment: You will receive support and guidance from experienced colleagues and managers, helping you to learn, grow and achieve your goals, and you'll work closely with other teams to ensure our customer's success.
What else you need to know
This role is based in our Seattle office. As a company driven by innovation and continuous change, close collaboration is essential. We're constantly reimagining our industry, creating new products, and refining our processes, and we do our best work together. That's why all of our office-based teams work onsite, five days a week.
The base salary range for this role is $167,000 - $230,000
Actual compensation will be determined on an individual basis and may vary depending on experience, skills, and qualifications. You may also be eligible for equity awards and an annual performance-based bonus.
Benefits Summary (USA Full-Time Exempt Employees):
Medical, dental, and vision insurance (multiple plans, incl. HSA options).
Company-paid life and disability insurance (short- and long-term).
Voluntary insurance: accident, critical illness, hospital indemnity.
Optional supplemental life insurance for self, spouse, and children.
Pet insurance discount.
401(k).
Time Off policies:
Discretionary vacation days
8 paid holidays per year
Paid sick time
Paid Bereavement leave
Paid Parental Leave
Health Savings Account (HSA)
Flexible Spending Accounts (Healthcare, Dependent Care, Commuter)
Ready to join us as we serve those who serve others?
#LI-Onsite
Staff Data Scientist, Marketing Science
Data engineer job in Seattle, WA
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
Millions of people across the world come to Pinterest to find new ideas every day. It's where they get inspiration, dream about new possibilities and plan for what matters most. Brands and products are a critical piece of this journey, enabling Pinners to move easily from inspiration to action and advertisers to realize value in connecting with users with high commercial intent.
As Pinterest continues to scale, we need insightful analyses and actionable recommendations to help maximize value to both Pinners and advertisers. Tasked with such vision, we are looking for data scientists to jump start Pinterest's Marketing Science team. You will take end-to-end ownership of designing, researching, building, and delivering data products as well as collaborate with XFNs to formulate, experiment, and evolve Pinterest's best practice guides to advertiser performance. You will support strategic planning, marketing events, earnings releases, and other efforts with powerful data narratives.
What you'll do:
Research and analyze what drives performance on our platform for advertisers.
Monitor key trends in ads performance to identify and scale platform-level improvement opportunities and/or flag and mitigate risks of reduction.
Equip the business with data-backed recommendations that deliver the strongest performance outcomes, tailored to key advertiser segments defined by dimensions such as marketing objective, campaign type, or measurement source of truth.
Create and track metrics around advertiser value.
Identify the right framework to quantify the value Pinterest provides to advertisers, and drive XFN alignment on implementing and monitoring those metrics as north stars of success.
Improve decision velocity and quality using the data science toolkit: experimentation, causal inference techniques, etc. Design measurement strategies, advise on experimentation best practices, identify flaws in experiment designs and results, and build tools for experiment analysis.
Leadership: Lead and mentor the scope of work for data scientists in the same area, demonstrating high-quality output of both yourself and others for whom you are responsible. Provide continuous and candid feedback, recognizing individual strengths and contributions and flagging opportunities to improve performance.
What we're looking for:
MS or PhD in a quantitative field or equivalent experience
8+ years of experience applying data science to solve real-world problems on web-scale data
Domain expertise about ads platforms and products, particularly around measurement and performance
Strong fundamentals in statistics, particularly experimentation and causal inference
Proficiency in common data science coding languages such as SQL and Python.
Proven track record of driving data science projects and/or initiatives in a cross-functional team including both technical and non-technical stakeholders.
Strong communication skills. Explains work and thought processes clearly, concisely and convincingly.
In-Office Requirement Statement:
We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection.
This role will need to be in the office for in-person collaboration once per week and therefore needs to be within commutable distance of one of the following offices: Seattle, San Francisco, and Palo Alto.
Relocation Statement:
This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-NM4
#LI-HYBRID
At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise.
Information regarding the culture at Pinterest and benefits available for this position can be found here.
US based applicants only: $193,759 - $339,078 USD
Our Commitment to Inclusion:
Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
Senior Data Engineer
Data engineer job in Seattle, WA
At MCG, we lead the healthcare community to deliver patient-focused care. We have a mission-driven team of talented physicians and technical experts developing our evidence-based content and innovating our products to accelerate improvements in healthcare. If you are driven to enhance the US healthcare system, MCG is eager to have you join our team. We cultivate a work environment that nurtures personal and professional growth, and this is a thrilling time to become a part of our organization. With dynamic roles that offer meaningful impact, you'll be able to fully realize your potential. Plus, you'll enjoy world-class benefits and the security, stability, and resources of our parent company, Hearst, with over 100 years of experience.
As a Senior Data Engineer you will be responsible for enabling efficient and effective data ingestion & delivery systems. Our team collaborates with data producers (application teams) and data consumers/stakeholders (Data Science, Product, Analytics & Reporting teams) to ensure the availability, quality, and accessibility of data through robust pipelines and storage platforms.
You will:
Explore, analyze, and onboard data sets from data producers to ensure they are ready for processing and consumption.
Develop and maintain scalable and efficient data pipelines for data collection, processing (quality checks, de-duplication, etc.), and integration into Data lake and Data warehouse systems.
Optimize and monitor data pipeline performance to ensure minimal downtime.
Implement data quality control mechanisms to maintain data set integrity.
Collaborate with stakeholders for seamless data flow and address issues or needs for improvement.
Manage the deployment and automation of pipelines and infrastructure using Terraform, Flyte, and Kubernetes.
Support strategic data analysis and operational tasks as needed.
Lead end-to-end data pipeline development - from initial data discovery and ingestion to transformation, modeling, and delivery into production-grade data platforms.
Integrate and manage data from 3+ distinct sources, designing efficient, reusable frameworks for multi-source data processing and harmonization.
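The processing steps named above (quality checks, de-duplication) can be sketched framework-agnostically. A minimal pure-Python illustration of the idea, with hypothetical field names and rules, not MCG's actual implementation:

```python
def deduplicate(records, key_fields):
    """Keep the first occurrence of each unique key tuple."""
    seen, out = set(), []
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            out.append(rec)
    return out

def quality_check(records, required_fields):
    """Split records into those passing basic completeness checks and rejects."""
    passed, rejects = [], []
    for rec in records:
        if all(rec.get(f) not in (None, "") for f in required_fields):
            passed.append(rec)
        else:
            rejects.append(rec)
    return passed, rejects

rows = [
    {"id": 1, "value": 10},
    {"id": 1, "value": 10},    # exact duplicate, should be dropped
    {"id": 2, "value": None},  # incomplete, should be rejected
]
unique = deduplicate(rows, ["id"])
clean, bad = quality_check(unique, ["id", "value"])
```

In production this logic would run inside Spark or a warehouse engine over partitioned data, but the contract, a cleaned output plus an auditable rejects stream, is the same.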
What We're Looking For:
Demonstrated ability to navigate ambiguous data challenges, ask the right questions, and design effective, scalable solutions.
Proficient in designing, building, and maintaining large-scale, reliable data pipeline systems.
Advanced SQL skills for querying and processing data.
Proficiency in Python, with experience in Spark for data processing.
3+ years of experience in data engineering, including data modeling and ETL pipelines.
Familiarity with cloud-based tools and infrastructure management using Terraform and Kubernetes is a plus.
Bonus:
Experience working with healthcare and clinical data sets
Experience with orchestration tools like Flyte
Pay Range: $136,000 - $190,400
Other compensation: Bonus Eligible
Perks & Benefits:
💻 Hybrid work
✈️ Travel expected 2-3 times per year for company-sponsored events
🩺 Medical, dental, vision, life, and disability insurance
📈 401K retirement plan; flexible spending and health savings account
🏝️ 15 days of paid time off + additional front-loaded personal days
🏖️ 14 company-recognized holidays + paid volunteer days
👶 up to 8 weeks of paid parental leave + 10 weeks of paid bonding leave
🌈 LGBTQ+ Health Services
🐶 Pet insurance
📣 Check out more of our benefits here: *******************************************
MCG Health is a Seattle, Washington-based company and is considering remote/hybrid candidates with some travel for company-sponsored events.
The ideal candidate should be comfortable balancing the independence of remote/hybrid work with the collaborative opportunities offered by periodic in-person engagements.
We embrace diversity and equal opportunity and are committed to building a team that represents a variety of backgrounds, perspectives, and skills. Only with diverse thoughts and ideas will we be able to create the change we want in healthcare. The more inclusive we are, the better our work will be for it.
All roles at MCG are expected to engage in occasional travel to participate in team or company-sponsored events for the purposes of connection and collaboration.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
MCG is a leading healthcare organization dedicated to patient-focused care. We value our employees' unique differences and are an Equal Employment Opportunity (EEO) employer. Our diverse workforce helps us achieve our goal of providing the right care to everyone. We welcome all qualified applicants without regard to race, religion, nationality, gender, sexual orientation, gender identity, age, marital status, veteran status, disability, pregnancy, parental status, genetic information, or political affiliation. We are committed to improving equity in healthcare and believe that a diverse workplace fosters curiosity, innovation, and business success. We are happy to provide accommodations for individuals. Please let us know if you require any support.
Data Engineer II
Data engineer job in Seattle, WA
Data Engineer II
Job ID: 25-12339
Pay rate range: $65/hr. to $70/hr., Onsite
Must Have: Data pipelines, ETL, SQL
Degree/Experience:
* Bachelor's Degree in Data Engineering or a related field
* 3-6 years in Data Engineering building and maintaining production data pipelines
* Experience in SQL and ETL
We're looking for a Data Engineer to join the Sustain.AI team for a focused engagement to lead the migration of critical data infrastructure that powers sustainability metrics, carbon footprint calculations, and Climate Pledge Friendly badging across Devices.
Responsibilities:
As a Data Engineer on the Sustain.AI team, you will be working with business-critical sustainability data pipelines.
You will be responsible for migrating production ETL jobs from legacy data platforms to modern AWS infrastructure, ensuring zero disruption to operations that drive corporate carbon footprinting, quarterly business reviews, and customer-facing sustainability communications.
You will establish the technical foundation for our data infrastructure by setting up Redshift clusters, AWS Glue, and workflow orchestration platforms in new AWS accounts.
You will translate existing data workflows into scalable AWS pipelines, implementing robust validation to ensure data consistency and quality throughout the migration.
You will work across multiple critical data domains including sales data, active device telemetry, transportation carbon emissions, energy consumption metrics, sustainability badging workflows, and executive reporting dashboards.
You will partner with Science team members who own carbon calculation methodologies and program teams who depend on these pipelines for daily operations, understanding their requirements and ensuring migrated solutions support their analytical and operational needs.
You will coordinate with upstream data owners to secure access permissions, collaborate with platform teams to troubleshoot integration issues, and communicate migration progress and risks to stakeholders across the organization.
Clear communication will be essential as you navigate dependencies across multiple teams and ensure alignment on migration timelines and success criteria.
You will be responsible for comprehensive documentation of the new architecture, including data lineage, dependencies, monitoring strategies, and operational runbooks.
This documentation will enable the AI team to maintain and evolve the infrastructure after your engagement completes.
You will implement data quality checks and alerting to proactively identify issues before they impact downstream consumers.
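The "robust validation to ensure data consistency" called for above usually means reconciling the legacy pipeline's output against the migrated pipeline's output before cutover. A hedged, simplified sketch of that reconciliation in plain Python; field names like kg_co2 are hypothetical, and real migrations would compare at the level of Redshift tables, row counts, and aggregate checksums:

```python
def reconcile(legacy_rows, migrated_rows, key="id"):
    """Compare legacy vs. migrated outputs; an empty result means they match."""
    issues = {}
    if len(legacy_rows) != len(migrated_rows):
        issues["row_count"] = (len(legacy_rows), len(migrated_rows))
    legacy_by_key = {r[key]: r for r in legacy_rows}
    for row in migrated_rows:
        src = legacy_by_key.get(row[key])
        if src is None:
            issues.setdefault("extra_keys", []).append(row[key])
        elif src != row:
            issues.setdefault("mismatched", []).append(row[key])
    return issues

# Hypothetical spot check: one emissions value drifted during migration.
legacy = [{"id": 1, "kg_co2": 4.2}, {"id": 2, "kg_co2": 7.1}]
migrated = [{"id": 1, "kg_co2": 4.2}, {"id": 2, "kg_co2": 7.0}]
problems = reconcile(legacy, migrated)
```

Wiring a check like this into alerting is what lets a migration claim "zero disruption" with evidence rather than hope.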
Required Skills & Experience:
3-6 years building and maintaining production data pipelines in complex enterprise environments
Strong SQL skills for writing complex transformations, optimizing query performance, and troubleshooting data quality issues across large datasets
Hands-on experience with AWS data services including Redshift, S3, Glue, and IAM for data access management
Deep understanding of ETL best practices including data validation, error handling, idempotency, and lineage tracking
Experience with Python or Spark for implementing data transformation logic
Strong analytical skills and ability to work with large, complex datasets spanning multiple source systems
Excellent communication and stakeholder management skills, with ability to coordinate across multiple technical teams and translate technical concepts for non-technical audiences
Ability to partner directly with technical stakeholders to understand requirements and deliver solutions that meet operational needs
Proven track record managing data migrations or large-scale infrastructure transitions with minimal business disruption
Preferred Qualifications:
Experience with workflow orchestration platforms
QuickSight experience for dashboard development
Background in large-scale data migrations
Familiarity with sustainability data, carbon accounting, or environmental metrics
Experience setting up monitoring and alerting for production data pipelines
DevOps Engineer
Data engineer job in Seattle, WA
A globally leading consumer device company based in Seattle, WA is looking for a DevOps Engineer, Cloud Infrastructure to join their dynamic team!
Job Responsibilities:
• Manage Kubernetes clusters by performing improvements and regular maintenance. Perform cloud infrastructure operational tasks.
• Perform Database administration tasks including migration, instrumenting of telemetry, performance monitoring, cost monitoring and consolidation.
• CI/CD-focused projects that include implementing features for GitOps + software lifecycle tooling and processes.
Required Skills:
• 5 years of relevant experience
• 3-5 years of experience with Kubernetes (configuration, operations, deployment) and related technologies: Helm, ArgoCD, GitOps
• 3-5 years of experience with AWS and database administration: Postgres, RDS (e.g., Aurora)
• 3 years of experience in Python and Bash scripting
• Operational experience with cloud-based service infrastructure (DNS, load balancing, ingress, telemetry, and logging)
Type: Contract
Duration: 9 months with extension
Work Location: Seattle, WA (Remote)
Pay range: $74.00 - $89.00 (DOE)
Databricks Engineer
Data engineer job in Seattle, WA
5+ years of experience in data engineering or similar roles.
Strong expertise in Databricks, Apache Spark, and PySpark.
Proficiency in SQL, Python, and data modeling concepts.
Experience with cloud platforms (Azure preferred; AWS/GCP is a plus).
Knowledge of Delta Lake, Lakehouse architecture, and partitioning strategies.
Familiarity with data governance, security best practices, and performance tuning.
Hands-on experience with version control (Git) and CI/CD pipelines.
Roles & Responsibilities:
Design and develop ETL/ELT pipelines using Azure Databricks and Apache Spark.
Integrate data from multiple sources into the data lake and data warehouse environments.
Optimize data workflows for performance and cost efficiency in cloud environments (Azure/AWS/GCP).
Implement data quality checks, monitoring, and alerting for pipelines.
Collaborate with data scientists and analysts to provide clean, curated datasets.
Ensure compliance with data governance, security, and privacy standards.
Automate workflows using CI/CD pipelines and orchestration tools (e.g., Airflow, Azure Data Factory).
Troubleshoot and resolve issues in data pipelines and platform components.
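The "integrate data from multiple sources" responsibility above reduces to merge/upsert semantics, which Delta Lake exposes as MERGE INTO. Below is a hedged, pure-Python sketch of just those semantics (insert new keys, overwrite existing ones); it is an illustration of the concept, not the PySpark/Delta code a TCS project would actually run:

```python
def upsert(target, updates, key="id"):
    """Merge updates into target by key: a minimal MERGE-style upsert."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # new key -> insert; existing key -> overwrite
    return sorted(merged.values(), key=lambda r: r[key])

# Hypothetical incremental load into a dimension table.
target = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
updates = [{"id": 2, "v": "B"}, {"id": 3, "v": "c"}]
result = upsert(target, updates)
```

Delta Lake's MERGE additionally handles ACID guarantees, file compaction, and conditional matched/not-matched clauses, but the key-based reconciliation shown is the core of the operation.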
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
#LI-RJ2
Salary Range - $100,000-$140,000 a year