
Data Engineer jobs at Hatch - 4746 jobs

  • Senior Software Engineer, Server Control Firmware

    Annapurna Labs (U.S.) Inc. 4.6 company rating

    Austin, TX jobs

    Annapurna Labs designs silicon and software that accelerates innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago, even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world. At Annapurna Labs we are at the forefront of hardware/software co-design, not just in Amazon Web Services (AWS) but across the industry. Our Chassis Software team is looking for candidates interested in diving deep into the different hardware technologies that power our Machine Learning servers and in developing the software and firmware to drive, support, and sustain these technologies as they evolve through concept and manufacturing and finally take their place in our rapidly expanding fleet of the cutting-edge Machine Learning products our customers demand.

    Key job responsibilities
    - Provide Baseboard Management Controller (BMC) and Satellite Management Controller (SMC) software and firmware for Machine Learning Accelerator (MLA) servers.
    - Continuously collaborate with other server and board software teams responsible for accelerator management firmware and other programmable logic devices.
    - Work within the larger MLA Systems Software group to support development of mission-mode firmware, exercisers for manufacturing and vetting, and automation for qualification and deployment.
    - Engage in new product development by participating in early concept design reviews, schematic approvals, offsite board bringup, and laboratory-based testing.

    A day in the life
    The MLA Chassis Software team was formed to focus on board firmware, primarily for mission-mode control of sensors and other board-level hardware. This includes debug, testing, qualification, and manufacturing. We touch technologies from device drivers to the I2C infrastructure pervasive in the server, and everything in between. We are not working on machine learning algorithms; rather, we work on the physical systems (hardware) which execute and accelerate those machine learning algorithms. Data paths, I2C, and device control are our bread and butter. Some of us know what a Tensor is, but really it's not what we do. (An illustrative I2C sensor-polling sketch follows this listing.)

    About the team
    Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge-sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise so you feel empowered to take on more complex tasks in the future.

    Diverse Experiences
    AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

    About AWS
    Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

    Inclusive Team Culture
    Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

    Work/Life Balance
    We value work-life harmony. Achieving success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

    Mentorship & Career Growth
    We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

    BASIC QUALIFICATIONS
    - 5+ years of non-internship professional software development experience
    - 5+ years of experience programming with at least one software programming language
    - 5+ years of experience leading the design or architecture (design patterns, reliability, and scaling) of new and existing systems
    - 5+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
    - Experience as a mentor, tech lead, or leader of an engineering team

    PREFERRED QUALIFICATIONS
    - Bachelor's degree in computer science or equivalent
    - Experience writing software for DDR/HBM controllers and PHYs

    Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

    Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $151,300/year in our lowest geographic market up to $261,500/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
    $151.3k-261.5k yearly 3d ago
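The listing above centers on board-level sensor control over I2C from a BMC/SMC. As a rough, hedged illustration of that kind of task (not Amazon's actual firmware, which is not public), the Python sketch below polls an LM75-style temperature sensor with the smbus2 library; the bus number, device address, and register are assumptions for a generic sensor.

```python
# Minimal illustration of board-level sensor polling over I2C, the kind of
# task BMC/SMC firmware performs. All addresses below are assumptions for a
# generic LM75-style temperature sensor, not any specific AWS hardware.
from smbus2 import SMBus

I2C_BUS = 1           # assumed I2C bus number
SENSOR_ADDR = 0x48    # typical LM75 7-bit address (assumption)
TEMP_REGISTER = 0x00  # LM75 temperature register

def read_temperature_c(bus: SMBus) -> float:
    """Read the 2-byte temperature register and convert to degrees Celsius."""
    raw = bus.read_word_data(SENSOR_ADDR, TEMP_REGISTER)
    # SMBus word reads return the low byte first; swap to MSB:LSB order.
    raw = ((raw & 0xFF) << 8) | (raw >> 8)
    # LM75: 9-bit two's-complement value in the top bits, 0.5 C per LSB.
    value = raw >> 7
    if value & 0x100:
        value -= 0x200
    return value * 0.5

if __name__ == "__main__":
    with SMBus(I2C_BUS) as bus:
        print(f"Board temperature: {read_temperature_c(bus):.1f} C")
```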

  • Staff Data Platform Engineer Hybrid - New York City, San Francisco

    Vercel.com 4.1 company rating

    San Francisco, CA jobs

    About Vercel
    Vercel gives developers the tools and cloud infrastructure to build, scale, and secure a faster, more personalized web. As the team behind v0, Next.js, and AI SDK, Vercel helps customers like Ramp, Supreme, PayPal, and Under Armour build for the AI-native web. Our mission is to enable the world to ship the best products. That starts with creating a place where everyone can do their best work. Whether you're building on our platform, supporting our customers, or shaping our story: you can just ship things.

    About the Role
    We are looking for a Principal Engineer to lead the architecture and development of Vercel's next-generation Data Platform. You will design the systems that power data across our products and internal teams, enabling real-time analytics, observability, and future AI/ML capabilities. This role combines hands-on engineering with technical leadership. You will set the vision for our data ecosystem, build scalable distributed systems using technologies like Kafka, ClickHouse, Tinybird, and Snowflake, and work across the company to align data strategy with product and engineering goals.

    What You Will Do
    Architect and build a scalable data platform for batch and real-time workloads. Design streaming and event-driven systems using Kafka and related tooling. Implement modern lakehouse foundations, including Iceberg-based storage and governance. Partner with engineering, product, and leadership to define data strategy and technical direction. Establish best practices for ingestion, modeling, storage, quality, and security. Write production-grade code and set engineering standards across the team. Improve reliability through strong observability, monitoring, and fault tolerance. Drive long-term architectural decisions and evaluate build-vs-buy tradeoffs. Support analytics and ML teams by delivering high-quality data systems and tooling. (A minimal Kafka micro-batching sketch follows this listing.)

    About You
    8+ years in data engineering or data architecture, including senior/principal-level work. Deep experience designing and operating large-scale distributed data systems. Strong expertise with Kafka and analytics/lakehouse technologies (ClickHouse, Tinybird, Snowflake, Iceberg). Strong cloud experience (AWS, GCP, or Azure). Experience with data governance, reliability, and secure data operations. Excellent communication skills and ability to influence across engineering and product teams. Demonstrated leadership through mentorship, design ownership, or technical direction.

    Benefits
    Competitive compensation package, including equity. Inclusive Healthcare Package. Learn and Grow - we provide mentorship and send you to events that help you build your network and skills. Flexible Time Off. We will provide you the gear you need to do your role, and a WFH budget for you to outfit your space as needed.

    The San Francisco, CA base pay range for this role is $196,000-$294,000. Actual salary will be based on job-related skills, experience, and location. Compensation outside of San Francisco may be adjusted based on employee location. The total compensation package may include benefits, equity-based compensation, and eligibility for a company bonus or variable pay program depending on the role. Your recruiter can share more details during the hiring process.

    Vercel is committed to fostering and empowering an inclusive community within our organization. We do not discriminate on the basis of race, religion, color, gender expression or identity, sexual orientation, national origin, citizenship, marital status, veteran status, disability status, or any other characteristic protected by law. Vercel encourages everyone to apply for our available positions, even if they don't necessarily check every box on the job description. #J-18808-Ljbffr
    $196k-294k yearly 2d ago
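The listing above emphasizes Kafka-based streaming feeding analytics stores such as ClickHouse. Here is a minimal, hedged sketch of that pattern (not Vercel's actual pipeline): events are consumed with kafka-python, micro-batched, and handed to a pluggable sink. The topic name, consumer group, and the stubbed `write_batch` sink are illustrative assumptions.

```python
# Hedged sketch of a Kafka -> analytics-store micro-batching consumer.
# Topic, group id, and the write_batch() sink are illustrative assumptions.
import json
from kafka import KafkaConsumer  # kafka-python

BATCH_SIZE = 500

def write_batch(rows: list[dict]) -> None:
    """Placeholder sink: a real pipeline would insert the batch into an
    analytics store (e.g. ClickHouse or a lakehouse table)."""
    print(f"flushed {len(rows)} events")

def run() -> None:
    consumer = KafkaConsumer(
        "web-events",                       # assumed topic name
        bootstrap_servers="localhost:9092",
        group_id="analytics-loader",        # assumed consumer group
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        enable_auto_commit=False,           # commit only after a successful flush
    )
    batch: list[dict] = []
    for message in consumer:
        batch.append(message.value)
        if len(batch) >= BATCH_SIZE:
            write_batch(batch)
            consumer.commit()               # at-least-once delivery
            batch.clear()

if __name__ == "__main__":
    run()
```

Committing offsets only after a successful flush gives at-least-once delivery, so the downstream store should deduplicate or tolerate replays.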
  • Remote Rust Engineer for AI Data & Evaluation

    Labelbox 4.3 company rating

    San Francisco, CA jobs

    A leading technology firm is seeking a Rust Developer to design and optimize high-performance systems supporting AI models. The ideal candidate has over 5 years of experience in production Rust programming, focusing on memory safety and concurrency, and must effectively communicate complex concepts in English. This position offers competitive hourly compensation and the flexibility of remote work. #J-18808-Ljbffr
    $126k-178k yearly est. 1d ago
  • Remote Backend Engineer for AI Data Platform (Contract)

    Labelbox 4.3 company rating

    San Francisco, CA jobs

    A leading data engine company seeks a backend developer to work remotely. This role involves reviewing AI-generated code, creating backend solutions, and clearly communicating coding concepts. Candidates should have strong expertise in server-side languages, 3-5 years of experience with backend frameworks, and a Bachelor's degree in Computer Science or a related field. The role offers a flexible workload while improving AI models used by top labs and companies. #J-18808-Ljbffr
    $126k-178k yearly est. 1d ago
  • Gen AI Engineer: Build Scalable Data Systems

    Scale Ai, Inc. 4.1 company rating

    San Francisco, CA jobs

    A leading AI data company is looking for a Software Engineer to design and implement features across various technologies. This full-time role is based in either San Francisco or New York City, requiring 3+ years of software engineering experience and a passion for solving complex problems in a fast-paced environment. The compensation ranges from $179,400 to $224,250 USD, with additional benefits and equity opportunities. #J-18808-Ljbffr
    $179.4k-224.3k yearly 2d ago
  • Staff Data Engineer, Energy

    Medium 4.0 company rating

    San Francisco, CA jobs

    About GoodLeap
    GoodLeap is a technology company delivering best-in-class financing and software products for sustainable solutions, from solar panels and batteries to energy-efficient HVAC, heat pumps, roofing, windows, and more. Over 1 million homeowners have benefited from our simple, fast, and frictionless technology that makes the adoption of these products more affordable, accessible, and easier to understand. Thousands of professionals deploying home efficiency and solar solutions rely on GoodLeap's proprietary, AI-powered applications and developer tools to drive more transparent customer communication, deeper business intelligence, and streamlined payment and operations. Our platform has led to more than $30 billion in financing for sustainable solutions since 2018. GoodLeap is also proud to support our award-winning nonprofit, GivePower, which is building and deploying life-saving water and clean electricity systems, changing the lives of more than 1.6 million people across Africa, Asia, and South America.

    Position Summary
    The GoodLeap team is looking for a hands-on Data Engineer with a strong background in API data integrations, Spark processing, and data lake development. The focus of this role will be on ingesting production energy data and helping get the aggregated metrics to the many teams in GoodLeap that need them. (A minimal PySpark rollup sketch follows this listing.) The successful candidate is a highly motivated individual with strong technical skills to create secure and performant data pipelines as well as support our foundational enterprise data warehouse. The ideal candidate is passionate about quality and has a bold, visionary approach to data practices in a modern finance enterprise. The candidate in this role will be required to work closely with cross-functional teams to effectively coordinate the complex interdependencies inherent in the applications. Typical teams we collaborate with are Analytics & Reporting, Origination Platform engineers, and AI developers. We are looking for a hardworking and passionate engineer who wants to make a difference with the tools they develop.

    Essential Job Duties and Responsibilities
    - Implement data integrations across the organization as well as with business applications
    - Develop and maintain data-oriented web applications with scalable web services
    - Participate in the design and development of projects, either independently or in a team
    - Utilize agile software development lifecycle and DevOps principles
    - Be the data stewards of the organization, upholding quality and availability standards for our downstream consumers
    - Be self-sufficient and fully own the responsibility of executing projects from inception to delivery
    - Provide mentorship to team members, including pair programming and skills development
    - Participate in data design and architecture discussions, considering solutions in the context of the larger GoodLeap ecosystem

    Required Skills, Knowledge & Abilities
    - 6-10 years of full-time Data Analysis and/or Software Development experience
    - Experience with an end-to-end reporting & analytics stack, from data warehousing (SQL, NoSQL) to BI/visualization (Tableau, Power BI, Excel)
    - Degree in Computer Science or related discipline
    - Experience with Databricks/Spark processing
    - Expertise with relational databases (including functional SQL/stored procedures) and non-relational databases (MongoDB, DynamoDB, Elasticsearch)
    - Experience with orchestrating data pipelines with modern tools such as Airflow
    - Strong knowledge and hands-on experience with open source web frameworks (e.g. Vue/React)
    - Solid understanding of performance implications and scalability of code
    - Experience with Amazon Web Services (IAM, Cognito, EC2, S3, RDS, CloudFormation)
    - Experience with messaging paradigms and serverless technologies (Lambda, SQS, SNS, SES)
    - Experience working with serverless applications on public clouds (e.g. AWS)
    - Experience with large, complex codebases and knowing how to maintain them

    $160,000 - $210,000 a year. In addition to the above salary, this role may be eligible for a bonus and equity.

    Additional Information Regarding Job Duties and Job Descriptions
    Job duties include additional responsibilities as assigned by one's supervisor or other managers related to the position/department. This job description is meant to describe the general nature and level of work being performed; it is not intended to be construed as an exhaustive list of all responsibilities, duties, and other skills required for the position. The Company reserves the right at any time, with or without notice, to alter or change job responsibilities, reassign or transfer job position, or assign additional job responsibilities, subject to applicable law. The Company shall provide reasonable accommodations of known disabilities to enable a qualified applicant or employee to apply for employment, perform the essential functions of the job, or enjoy the benefits and privileges of employment as required by the law. If you are an extraordinary professional who thrives in a collaborative work culture and values a rewarding career, then we want to work with you! Apply today! We are committed to protecting your privacy. To learn more about how we collect, use, and safeguard your personal information during the application process, please review our Employment Privacy Policy and Recruiting Policy on AI. #J-18808-Ljbffr
    $160k-210k yearly 2d ago
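The GoodLeap role centers on ingesting production energy data with Spark and publishing aggregated metrics to downstream teams. Below is a minimal, hedged PySpark sketch of that shape: it reads raw interval readings from an assumed S3 path, rolls them up to daily energy per system, and writes partitioned Parquet. The paths, column names, and schema are illustrative assumptions, not GoodLeap's actual data model.

```python
# Hedged PySpark sketch: roll raw energy interval readings up to daily
# per-system totals. Paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("energy-daily-rollup").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/energy_readings/")  # assumed path
# Assumed columns: system_id, reading_ts (ISO timestamp), energy_wh (interval energy)

daily = (
    raw
    .withColumn("reading_date", F.to_date("reading_ts"))
    .groupBy("system_id", "reading_date")
    .agg(
        F.sum("energy_wh").alias("energy_wh_total"),
        F.count("*").alias("interval_count"),
    )
)

(
    daily.write
    .mode("overwrite")
    .partitionBy("reading_date")
    .parquet("s3://example-bucket/curated/energy_daily/")  # assumed path
)
```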
  • Senior Energy Data Engineer - API & Spark Pipelines

    Medium 4.0 company rating

    San Francisco, CA jobs

    A technology finance firm in San Francisco is seeking an experienced Data Engineer. The role involves building data pipelines, integrating data across various platforms, and developing scalable web applications. The ideal candidate will have a strong background in data analysis, software development, and experience with AWS. The salary range for this position is between $160,000 and $210,000, with potential bonuses and equity. #J-18808-Ljbffr
    $160k-210k yearly 2d ago
  • Staff Data Engineer: Lead Data Infrastructure & Pipelines

    Medium 4.0 company rating

    San Francisco, CA jobs

    A leading SaaS company in healthcare, located in San Francisco, is seeking a Staff Data Engineer to maintain and improve its data infrastructure. The ideal candidate will have strong skills in Kubernetes, Python, and SQL, and should thrive in a collaborative environment. This role offers competitive compensation, including a salary range of $153,000 - $195,000, and provides equity along with full health benefits. Flexibility in work location with a hybrid model is also available. #J-18808-Ljbffr
    $153k-195k yearly 17h ago
  • Staff Data Engineer

    Prosper.com 4.5 company rating

    San Francisco, CA jobs

    Your role in our mission
    We're hiring a Staff Data Engineer with a solid software engineering background. You're strong in Python and comfortable with SQL in a DevOps environment. You'll design, build, and improve automated ETL/ELT pipelines that are reliable, scalable, and easy to maintain. Your work will move data safely and efficiently across internal and external systems so teams at Prosper can use trusted data when they need it. You'll partner with product, analytics, and business teams to understand their needs and make data easier to find, use, and share. We value engineers who solve problems, sweat details, and simplify complex systems. If you like improving tools and processes and want to help shape Prosper's culture of innovation and collaboration, we'd love to talk.

    How you'll make an impact
    Work with engineers, DBAs, infrastructure, product, data engineers, and analysts to learn Prosper's data ecosystem and keep it running fast and secure. Forge strong relationships with business stakeholders, analysts, and data scientists to grasp their needs and craft data solutions to meet them. Design and run self-checking ETL/ELT pipelines with logging, alerting, and automated tests. Develop pipelines on Google Cloud Platform (GCP) using Python, dbt, and Airflow (Composer), and additional tools as needed. Evaluate new tools and approaches; bring forward practical ideas that improve speed, quality, or cost. Bring curiosity, ownership, and clear thinking to tough engineering problems. (A minimal self-checking Airflow DAG sketch follows this listing.)

    Skills that will help you thrive
    Degree in Computer Science or related field, or equivalent experience. 8+ years of object-oriented programming in an enterprise setting. Deep experience in Python; experience with Java, C#, or Go is a plus. Proficiency in a SQL dialect (e.g. BigQuery, T-SQL, Redshift, PostgreSQL), with an interest in dimensional modeling and data warehouses. Solid Git/GitHub skills and familiarity with Agile and the SDLC. Strong communication and collaboration skills across technical and non-technical teams. DevOps experience with CI/CD, containers (Docker, Kubernetes), and infrastructure as code (Terraform or similar). Proficient with LLM-assisted development in IDEs such as Cursor. Commitment to an inclusive, learning-focused culture and continuous improvement.

    What we offer
    The opportunity to collaborate with a team of creative, fun, and driven colleagues on products that have an immediate and significant impact on people's lives. The opportunity to work in a fast-paced environment with experienced industry leaders. Flexible time off, comprehensive health coverage, competitive salary, paid parental leave, and other wellness benefits. A bevy of other perks including Udemy access, childcare assistance, pet insurance discounts, legal assistance, and additional discounts through Perkspot.

    Interview Process
    Recruiter Call: A brief screening to discuss your experience and initial questions. Department Interview: Deeper dive into technical skills and project alignment with the Hiring Manager or team member. Take-Home Assignment: Analyze a real-world problem, propose solutions, and present findings, evaluating analytical, strategic thinking, and presentation skills. Technical Interview: Deeper dive into coding skills. Team/Virtual Interview: Meet team members for collaborative discussions, problem-solving, or technical exercises.

    $204,000 - $228,000 a year. Compensation details: The salary for this position is $204k - $228k annually, plus bonus and generous benefits. In determining your salary, we will consider your location, experience, and other job-related factors. #LI-SK1

    About Us
    Founded in 2005 as the first peer-to-peer marketplace lending platform in the U.S., Prosper was built on a simple idea: connect people who want to borrow money with those who want to invest. Since inception, Prosper has helped more than 2 million people gain access to affordable credit with over $28 billion in loans originated through its platform. Our mission is to help our customers advance their financial well-being through a variety of products including personal loans, credit, home equity lines of credit (HELOC), and our newest product, HELoan. Our diverse culture rewards accountability and cross-functional teamwork because we believe this encourages innovative thinking and helps us deliver on our mission. We're on a mission to hire the very best, and we are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere. It is important to us that every hire connects with our vision, mission, and core values. Join a leading fintech company that's democratizing finance for all!

    Our Values
    - Diversity expands opportunities
    - Collaboration creates better solutions
    - Curiosity fuels our innovation
    - Integrity defines all our relationships
    - Excellence leads to longevity
    - Simplicity guides our user experience
    - Accountability at all levels drives results

    Applicants have rights under Federal Employment Laws: Family & Medical Leave Act (FMLA), Equal Employment Opportunity (EEO), Employee Polygraph Protection Act (EPPA). California applicants: please click here to view our California Consumer Privacy Act (“CCPA”) Notice for Applicants, which describes your rights under the CCPA.

    At Prosper, we're looking for people with passion, integrity, and a hunger to learn. We encourage you to apply even if your experience doesn't precisely match the job description. Your unique skill set and diverse perspective will stand out and set you apart from other candidates. Prosper thrives with people who think outside of the box and aren't afraid to challenge the status quo. We invite you to join us on our mission to advance financial well-being. Prosper is committed to an inclusive and diverse workplace. All aspects of employment, including the decision to hire, promote, discipline, or discharge, will be based on merit, competence, performance, and business needs. We do not discriminate on the basis of race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law, including the San Francisco Fair Chance Ordinance. Prosper will consider for employment qualified applicants who are non-US citizens and will provide green card sponsorship. Our Story & Team // Our Blog *************** #J-18808-Ljbffr
    $204k-228k yearly 1d ago
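The Prosper posting calls for self-checking ETL/ELT pipelines on GCP built with Python, dbt, and Airflow (Composer). The DAG below is a minimal, hedged sketch of that idea, not Prosper's actual pipeline: an extract-and-load task followed by a row-count data-quality gate. The task bodies are stubs, and the DAG id, schedule, and threshold are assumptions.

```python
# Hedged Airflow sketch of a "self-checking" pipeline: load, then fail loudly
# if a basic data-quality expectation is not met. Task bodies are stubs and
# the threshold values are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context) -> int:
    """Stub: pull rows from a source API and load them into the warehouse."""
    loaded_rows = 1200  # placeholder for the real load result
    return loaded_rows

def check_row_count(ti, **context) -> None:
    """Fail the run if the upstream load produced suspiciously few rows."""
    loaded_rows = ti.xcom_pull(task_ids="extract_and_load")
    if loaded_rows is None or loaded_rows < 1000:  # assumed minimum
        raise ValueError(f"Row-count check failed: {loaded_rows} rows loaded")

with DAG(
    dag_id="example_self_checking_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)
    load >> check
```

Failing the DAG run on a missed expectation is what makes the pipeline "self-checking": alerting and retries then follow the orchestrator's normal failure path.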
  • Rust Engineer - AI Data Trainer

    Labelbox 4.3 company rating

    San Francisco, CA jobs

    Rust Developer - $90/hr Remote - Alignerr

    About the job
    At Alignerr, we partner with the world's leading AI research teams and labs to build and train cutting-edge AI models.

    Organization: Alignerr
    Position: Rust Developer
    Type: Hourly Contract
    Compensation: $60-$90/hour
    Location: Remote
    Commitment: 10-40 hours/week

    Role Responsibilities (Training support will be provided)
    Design, build, and optimize high-performance systems in Rust to support AI evaluation workflows. Review AI-generated Rust code for memory safety, concurrency, and performance optimization. Develop backend services and tooling for large-scale data validation and quality control. Identify bottlenecks and edge cases in system behavior and implement scalable fixes.

    Requirements
    Bachelor's degree or higher in Computer Science or a related technical field. 5+ years of professional experience writing production Rust with a focus on memory safety and concurrency. Strong background in systems programming and performance optimization. Fluent in English with the ability to explain complex low-level programming concepts clearly.

    Preferred
    Experience with distributed systems or developer tooling.

    Application Process (Takes 15-20 min)
    Submit your resume, complete a short screening, then project matching and onboarding. #J-18808-Ljbffr
    $60-90 hourly 1d ago
  • Python Engineer - AI Data Trainer

    Labelbox 4.3 company rating

    San Francisco, CA jobs

    Python Developer - $90/hr Remote - Alignerr

    About the job
    At Alignerr, we partner with the world's leading AI research teams and labs to build and train cutting-edge AI models.

    Organization: Alignerr
    Position: Python Developer
    Type: Hourly Contract
    Compensation: $60-$90/hour
    Location: Remote
    Commitment: 10-40 hours/week

    Role Responsibilities (Training support will be provided)
    Review and evaluate AI-generated Python code for correctness, efficiency, and adherence to best practices. Solve complex algorithmic challenges and develop high-quality Python solutions. Provide clear, human-readable explanations for code logic and problem-solving strategies. Identify and flag edge cases, bugs, or ambiguities in AI responses.

    Requirements
    Bachelor's or better in Computer Science or a related technical field. Deep expertise in Python, including modern frameworks (Django, Flask, FastAPI) and advanced language features. 3-5+ years of professional experience writing production-level Python code. Exceptional written communication skills to explain abstract technical concepts.

    Preferred
    Prior experience with data annotation, RLHF, or evaluation systems.

    Application Process (Takes 15-20 min)
    Submit your resume, complete a short screening, then project matching and onboarding. $60 - $90 an hour #J-18808-Ljbffr
    $60-90 hourly 17h ago
  • Staff Data Engineer

    Tonal 4.1 company rating

    San Francisco, CA jobs

    Who We Are
    Tonal is the smartest home gym and personal trainer. With cutting-edge hardware, AI-driven coaching, and the world's largest dataset of real-world strength training, we're redefining how people train. Our platform combines data from millions of workouts, sensors, and integrations to deliver personalized training, enhance coaching, and measure progress in entirely new ways. We're passionate about transforming lives through strength, performance, and health. At Tonal, innovation, science, and data come together to create a training experience that's personal, motivating, and measurable.

    Overview
    As a Staff Data Engineer, you will shape the backbone of Tonal's data platform. You'll design and scale systems that bring together massive volumes of workout, sensor, and health-related data while ensuring security, reliability, and trust. This role requires a deep understanding of compliance and security standards and the ability to build infrastructure that protects sensitive information while fueling product innovation, AI, and analytics.

    What You Will Do
    Architect secure and scalable data systems that support Tonal's growth and meet regulatory standards. Build and optimize data models and pipelines across diverse sources: sensors, workouts, health integrations, CRM, payments, and content. Establish controls for access, encryption, anonymization, monitoring, and auditability. Define and enforce best practices for managing sensitive data, including PHI and PII. Collaborate with teams across Product, Engineering, Sports Science, and Healthcare to translate needs into compliant solutions. Conduct risk assessments and implement safeguards guided by NIST frameworks. Support SOC 2 audits by documenting and demonstrating effective security controls. Mentor engineers and scientists, setting high standards for secure data engineering. Continuously evolve the platform, introducing new tools and frameworks to balance innovation with strong regulatory posture. (A minimal PII-pseudonymization sketch follows this listing.)

    Who You Are
    8+ years of experience in data engineering, or 6+ years with a Master's degree (or equivalent). Strong skills in SQL, Python, and distributed data processing (Spark, Databricks, or similar). Experience building pipelines with dbt, Airflow, Fivetran, or related tools. Background in data modeling and warehousing with systems like Snowflake, Databricks, or Redshift. Hands-on experience working with regulated environments and sensitive data. Familiarity with frameworks such as HIPAA, SOC 2, and NIST for security and compliance. Skilled in access control design, audit logging, encryption, and governance. Excellent communicator who can explain complex tradeoffs to both technical and non-technical audiences. Known for technical leadership and mentoring, raising the bar for engineering quality.

    Extra Credit
    Experience with fitness, healthcare, IoT, or sensor data. Knowledge of privacy-preserving techniques (k-anonymity, l-diversity, differential privacy). Exposure to production ML/AI pipelines involving sensitive data. Background in connected fitness, digital health, or regulated healthcare products.

    At Tonal, we believe that the unique and varied lived experiences of our teammates contribute to our overall strength. We don't just appreciate differences, we celebrate them, and we always seek people that represent a wide variety of backgrounds. We're dedicated to adding new perspectives to the team and designing employee experiences that contribute to your growth as much as you do to ours. If your experience aligns with what we're looking for (even if you don't check every single box), send us your application. We would love to hear from you!

    Tonal is committed to meeting the diverse needs of people with disabilities in a timely manner that is consistent with the principles of independence, dignity, integration, and equality of opportunity. Should you have any accommodation requests, please reach out to us via our confidential email, accessibility@tonal.com. All requests will be addressed and responded to in accordance with Tonal's Accessibility Policy and local legislation. #J-18808-Ljbffr
    $126k-178k yearly est. 3d ago
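The Tonal role stresses anonymization and controlled handling of PHI/PII before data reaches analytics. As a small, hedged illustration of one such control (not Tonal's actual approach), the snippet below pseudonymizes user identifiers with a keyed HMAC and drops raw contact fields before a record is loaded; the field names and key source are assumptions.

```python
# Hedged sketch of one PII control: keyed pseudonymization of identifiers and
# removal of raw contact fields before loading. Field names and the key
# source are illustrative assumptions.
import hashlib
import hmac
import os

# In practice the key would come from a secrets manager, not an env-var default.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

SENSITIVE_FIELDS = {"email", "phone", "full_name"}  # assumed raw PII fields

def pseudonymize_id(user_id: str) -> str:
    """Deterministic, keyed pseudonym so joins still work without raw IDs."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def scrub_record(record: dict) -> dict:
    """Return a copy safe for analytics: hashed ID, raw PII removed."""
    clean = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    clean["user_id"] = pseudonymize_id(str(record["user_id"]))
    return clean

if __name__ == "__main__":
    raw = {"user_id": 42, "email": "a@example.com", "workout_minutes": 35}
    print(scrub_record(raw))  # {'user_id': '<hash>', 'workout_minutes': 35}
```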
  • Staff Data Engineer - Secure, Scalable Platform

    Tonal 4.1 company rating

    San Francisco, CA jobs

    A leading fitness technology company in California is looking for a Staff Data Engineer to design and scale secure data systems. The ideal candidate will have extensive experience in data engineering and a deep understanding of compliance and security standards. Responsibilities include building data pipelines and collaborating across teams to ensure regulatory compliance. The company values diverse backgrounds and encourages applications from all qualified candidates. #J-18808-Ljbffr
    $126k-178k yearly est. 3d ago
  • Principal Data Platform Engineer - Real-Time Analytics

    Vercel.com 4.1 company rating

    San Francisco, CA jobs

    A leading tech company is seeking a Principal Engineer to lead the development of a next-generation Data Platform. The role involves hands-on engineering and technical leadership, designing scalable systems using technologies like Kafka, ClickHouse, Tinybird, and Snowflake. Candidates should have 8+ years in data engineering, strong cloud experience, and excellent communication skills. The position offers a competitive salary, benefits, and a flexible work environment. #J-18808-Ljbffr
    $126k-178k yearly est. 2d ago
  • Staff Data Engineer

    Imprint 3.9 company rating

    San Francisco, CA jobs

    Who We Are
    Imprint is reimagining co-branded credit cards & financial products to be smarter, more rewarding, and truly brand-first. We partner with companies like Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to launch modern credit programs that deepen loyalty, unlock savings, and drive growth. Our platform combines advanced payments infrastructure, intelligent underwriting, and seamless UX to help brands offer powerful financial products, without becoming a bank. Co-branded cards account for over $300 billion in U.S. annual spend, but most are still powered by legacy banks. Imprint is the modern alternative: flexible, tech-forward, and built for today's consumer. Backed by Kleiner Perkins, Thrive Capital, and Khosla Ventures, we're building a world-class team to redefine how people pay, and how brands grow. If you want to work fast, solve hard problems, and make a real impact, we'd love to meet you.

    The Team
    The Data Engineering team at Imprint is responsible for building and scaling the data infrastructure that supports product development, analytics, operations, and machine learning across the company. We own the pipelines, platforms, and processes that empower our stakeholders to trust and act on our data.

    The Role
    As our Staff Data Engineer, you'll architect our data platform while solving our most complex technical challenges. You'll build the foundation for Imprint's next decade of growth: scaling infrastructure for explosive expansion, delivering insights that drive million-dollar decisions, enabling bulletproof partner data sharing, and transforming how every team leverages data. Join us in creating a platform that doesn't just meet today's needs, but anticipates tomorrow's possibilities.

    What You'll Do
    Build & Scale Infrastructure
    - Architect our next-generation data platform, optimizing Snowflake, dbt Cloud, and real-time CDC pipelines for enterprise scale
    - Design secure, compliant partner data delivery systems via Snowflake shares, S3/SFTP integrations, and Marketplace listings
    - Create mission-critical financial reporting pipelines with exceptional accuracy and reliability
    Drive Technical Excellence
    - Establish company-wide data standards for modeling, lineage, contracts, and orchestration
    - Champion data reliability, observability, and trust across all systems
    - Make strategic technology decisions that balance innovation with pragmatism
    Lead & Mentor
    - Elevate engineering practices across Analytics, Data, and Engineering teams
    - Conduct architecture reviews and build reusable frameworks
    - Share expertise through documentation, templates, and hands-on guidance

    What We Look For
    10+ years of experience in data engineering or related fields, with proven experience owning platform-level architecture and strategy. Deep expertise in Snowflake, dbt Cloud, Change Data Capture frameworks, orchestration tools (Airflow, dbt Cloud), and reverse ETL. Strong background in external data sharing and partner integrations, including Snowflake data shares, S3/SFTP pipelines, and Marketplace listings. Proven ability to design and implement data governance and observability systems, including data contracts, lineage tracking, anomaly detection, and automated monitoring (a minimal volume-anomaly check sketch follows this listing). Strong engineering skills in SQL and Python, emphasizing testing, CI/CD, and maintainability in complex data systems. Reputation as a mentor and technical authority who elevates the technical quality and rigor of peers and teams. Exceptional communication skills and the ability to influence technical and business leadership across multiple departments.

    Perks & Benefits
    - Competitive compensation and equity packages
    - Leading configured work computers of your choice
    - Flexible paid time off
    - Fully covered, high-quality healthcare, including fully covered dependent coverage
    - Additional health coverage includes access to One Medical and the option to enroll in an FSA
    - 16 weeks of paid parental leave for the primary caregiver and 8 weeks for all new parents
    - Access to industry-leading technology across all of our business units, stemming from our philosophy that we should invest in resources for our team that foster innovation, optimization, and productivity

    Imprint is committed to a diverse and inclusive workplace. Imprint is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. Imprint welcomes talented individuals from all backgrounds who want to build the future of payments and rewards. If you are passionate about FinTech and eager to grow, let's move the world forward, together. #J-18808-Ljbffr
    $124k-176k yearly est. 3d ago
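The Imprint listing highlights data contracts, lineage, anomaly detection, and automated monitoring. A minimal, hedged sketch of one such check follows: given a warehouse query function, it compares the latest day's row count for a table against a trailing average and flags large deviations. The `run_query` callable, table name, SQL dialect, and tolerance are assumptions, not Imprint's actual tooling.

```python
# Hedged sketch of a simple volume-anomaly check of the kind a data
# observability layer might run. run_query, the table name, and the
# tolerance are illustrative assumptions.
from statistics import mean
from typing import Callable, Sequence

def row_count_anomaly(
    run_query: Callable[[str], Sequence[Sequence]],
    table: str = "analytics.fct_transactions",   # assumed table
    lookback_days: int = 7,
    tolerance: float = 0.5,                      # allow +/-50% vs trailing mean
) -> bool:
    """Return True if the latest day's row count deviates sharply from recent history."""
    sql = f"""
        SELECT DATE(loaded_at) AS d, COUNT(*) AS n
        FROM {table}
        WHERE loaded_at >= CURRENT_DATE - INTERVAL '{lookback_days} day'
        GROUP BY 1 ORDER BY 1
    """
    rows = run_query(sql)
    if len(rows) < 2:
        return False  # not enough history to judge
    *history, latest = [n for _, n in rows]
    baseline = mean(history)
    return abs(latest - baseline) > tolerance * baseline

# Usage: pass any callable that executes SQL against your warehouse client
# and returns (date, count) rows; alert or fail the pipeline when it returns True.
```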
  • Data Infrastructure Engineer - Scale Spark, Kafka & AI Data

    Openai 4.2 company rating

    San Francisco, CA jobs

    A leading AI research organization located in San Francisco is seeking an experienced data infrastructure engineer to design and operate data infrastructure supporting extensive compute fleets. You will own these systems across their full lifecycle and ensure high performance, scalability, and security of data access. Candidates should have over 4 years of relevant experience and be comfortable in fast-paced environments. The position offers a hybrid work model and relocation assistance. #J-18808-Ljbffr
    $128k-178k yearly est. 17h ago
  • Lead Data Engineer, AI Platform (Relocation Included)

    Openai 4.2 company rating

    San Francisco, CA jobs

    A leading AI research organization is seeking a Data Engineer to build essential data pipelines in San Francisco. This role involves designing robust systems for data processing and collaborating with various teams to support key metrics tracking. Ideal candidates have extensive experience in data engineering and proficiency in technologies like Python and Spark. The organization values responsible AI use and offers relocation assistance. #J-18808-Ljbffr
    $128k-178k yearly est. 17h ago
  • Data Engineer, Analytics

    Openai 4.2 company rating

    San Francisco, CA jobs

    About the team
    The Applied team works across research, engineering, product, and design to bring OpenAI's technology to consumers and businesses. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.

    About the role
    We're seeking a Data Engineer to take the lead in building our data pipelines and core tables for OpenAI. These pipelines are crucial for powering analyses and safety systems that guide business decisions, drive product growth, and prevent bad actors. If you're passionate about working with data and are eager to create solutions with significant impact, we'd love to hear from you. This role also provides the opportunity to collaborate closely with the researchers behind ChatGPT and help them train new models to deliver to users. As we continue our rapid growth, we value data-driven insights, and your contributions will play a pivotal role in our trajectory. Join us in shaping the future of OpenAI!

    In this role, you will:
    - Design, build, and manage our data pipelines, ensuring all user event data is seamlessly integrated into our data warehouse.
    - Develop canonical datasets to track key product metrics including user growth, engagement, and revenue. (A minimal PySpark sketch of such a canonical rollup follows this listing.)
    - Work collaboratively with various teams, including Infrastructure, Data Science, Product, Marketing, Finance, and Research, to understand their data needs and provide solutions.
    - Implement robust and fault-tolerant systems for data ingestion and processing.
    - Participate in data architecture and engineering decisions, bringing your strong experience and knowledge to bear.
    - Ensure the security, integrity, and compliance of data according to industry and company standards.

    You might thrive in this role if you:
    - Have 3+ years of experience as a data engineer and 8+ years of any software engineering experience (including data engineering).
    - Are proficient in at least one programming language commonly used within Data Engineering, such as Python, Scala, or Java.
    - Have experience with distributed processing technologies and frameworks, such as Hadoop and Flink, and distributed storage systems (e.g., HDFS, S3).
    - Have expertise with ETL schedulers such as Airflow, Dagster, Prefect, or similar frameworks.
    - Have a solid understanding of Spark and the ability to write, debug, and optimize Spark code.

    This role is exclusively based in our San Francisco HQ. We offer relocation assistance to new employees.

    About OpenAI
    OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

    Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy

    At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology. #J-18808-Ljbffr
    $128k-178k yearly est. 17h ago
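This posting describes building canonical datasets for product metrics (growth, engagement, revenue) from user event data with Spark. The snippet below is a minimal, hedged sketch of that kind of job, not OpenAI's actual pipeline: it deduplicates raw events and computes daily active users per product surface. The event schema and paths are assumptions.

```python
# Hedged PySpark sketch of a canonical daily-active-users table built from
# raw user events. Event schema and paths are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("canonical-dau").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/user_events/")  # assumed path
# Assumed columns: event_id, user_id, surface, event_ts

dau = (
    events
    .dropDuplicates(["event_id"])                      # guard against replayed events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "surface")
    .agg(F.countDistinct("user_id").alias("daily_active_users"))
)

(
    dau.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/canonical/daily_active_users/")    # assumed path
)
```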
  • Data Acquisition Engineer - Scalable Distributed Systems

    Openai 4.2 company rating

    San Francisco, CA jobs

    A leading AI research company in San Francisco seeks a Software Engineer for their Data Acquisition team. You'll lead projects in data collection, collaborate with various teams, and develop scalable distributed systems. Candidates should hold a BS/MS/PhD in computer science with 4+ years of software development experience and strong skills in Kubernetes. The role offers a salary range of $325K to $405K plus equity. #J-18808-Ljbffr
    $128k-178k yearly est. 4d ago
  • Staff Machine Learning Data Engineer

    Backflip 3.7 company rating

    San Francisco, CA jobs

    Mechanical design, the work done in CAD, is the rate-limiter for progress in the physical world. However, there are only 2-4 million people on Earth who know how to CAD. But what if hundreds of millions could? What if creating something in the real world were as easy as imagining the use case, or sketching it on paper? Backflip is building a foundation model for mechanical design: unifying the world's scattered engineering knowledge into an intelligent, end-to-end design environment. Our goal is to enable anyone to imagine a solution and hit “print.” Founded by a second-time CEO in the same space (first company: Markforged), Backflip combines deep industry insight with breakthrough AI research. Backed by a16z and NEA, we raised a $30M Series A and built a deeply technical, mission-driven team. We're building the AI foundation that tomorrow's space elevators, nanobots, and spaceships will be built in. If you're excited to define the next generation of hard tech, come build it with us.

    The Role
    We're looking for a Staff Machine Learning Data Engineer to lead and build the data pipelines powering Backflip's foundation model for manufacturing and CAD. You'll design the systems, tools, and strategies that turn the world's engineering knowledge - text, geometry, and design intent - into high-quality training data. This is a core leadership role within the AI team, driving the data architecture, augmentation, and evaluation that underpin our model's performance and evolution. You'll collaborate with Machine Learning Engineers to run data-driven experiments, analyze results, and deliver AI products that shape the future of the physical world.

    What You'll Do
    Architect and own Backflip's ML data pipeline, from ingestion to processing to evaluation. Define data strategy: establish best practices for data augmentation, filtering, and sampling at scale. Design scalable data systems for multimodal training (text, geometry, CAD, and more). Develop and automate data collection, curation, and validation workflows. Collaborate with MLEs to design and execute experiments that measure and improve model performance. Build tools and metrics for dataset analysis, monitoring, and quality assurance. Contribute to model development through insights grounded in data, shaping what, how, and when we train. (A minimal dataset-curation sketch follows this listing.)

    Who You Are
    You've built and maintained ML data pipelines at scale, ideally for foundation or generative models, that shipped into production in the real world. You have deep experience with data engineering for ML, including distributed systems, data extraction, transformation, and loading, and large-scale data processing (e.g. PySpark, Beam, Ray, or similar). You're fluent in Python and experienced with ML frameworks and data formats (Parquet, TFRecord, HuggingFace datasets, etc.). You've developed data augmentation, sampling, or curation strategies that improved model performance. You think like both an engineer and an experimentalist: curious, analytical, and grounded in evidence. You collaborate well across AI development, infra, and product, and enjoy building the data systems that make great models possible. You care deeply about data quality, reproducibility, and scalability. You're excited to help shape the future of AI for physical design.

    Bonus points if:
    You are comfortable working with a variety of complex data formats, e.g. for 3D geometry kernels or rendering engines. You have an interest in math, geometry, topology, rendering, or computational geometry. You've worked in 3D printing, CAD, or computer graphics domains.

    Why Backflip
    This is a rare opportunity to own the data backbone of a frontier foundation model, and help define how AI learns to design the physical world. You'll join a world-class, mission-driven team operating at the intersection of research, engineering, and deep product sense, building systems that let people design the physical world as easily as they imagine it. Your work will directly shape the performance, capability, and impact of Backflip's foundation model, the core of how the world will build in the future. Let's build the tools the future will be made in. #J-18808-Ljbffr
    $126k-178k yearly est. 2d ago
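The Backflip role is about turning raw multimodal engineering data into filtered, high-quality training shards in formats like Parquet and HuggingFace datasets. Below is a small, hedged sketch of that curation step using the `datasets` library, not Backflip's actual pipeline: load a Parquet shard, drop records failing simple quality filters, and write the curated result back. The file paths, column names, and filter thresholds are assumptions.

```python
# Hedged sketch of a dataset-curation step for ML training data: load a
# Parquet shard, apply simple quality filters, and persist the curated split.
# Paths, columns, and thresholds are illustrative assumptions.
from datasets import load_dataset

raw = load_dataset(
    "parquet",
    data_files="raw_cad_text_pairs.parquet",  # assumed shard
    split="train",
)
# Assumed columns: description (text), step_count (int), parse_ok (bool)

def keep(example) -> bool:
    """Basic quality gates: parsable geometry, non-trivial text, sane size."""
    return (
        example["parse_ok"]
        and len(example["description"].strip()) >= 20
        and 1 <= example["step_count"] <= 10_000
    )

curated = raw.filter(keep)
print(f"kept {curated.num_rows} of {raw.num_rows} examples")
curated.to_parquet("curated_cad_text_pairs.parquet")  # assumed output path
```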

Learn more about Hatch jobs