Data Engineer jobs at Spectrum - 674 jobs

  • FPGA Engineer

    Lincoln Electric - 4.6 company rating

    Euclid, OH

    Lincoln Electric is the world leader in the engineering, design, and manufacturing of advanced arc welding solutions, automated joining, assembly and cutting systems, and plasma and oxy-fuel cutting equipment, and has a leading global position in brazing and soldering alloys. Lincoln is recognized as the Welding Expert™ for its leading materials science, software development, automation engineering, and application expertise, which advance customers' fabrication capabilities to help them build a better world. Headquartered in Cleveland, Ohio, Lincoln Electric is a $4.2B publicly traded company (NASDAQ:LECO) with over 12,000 employees around the world, operations in 71 manufacturing and automation system integration locations across 21 countries, and a worldwide network of distributors and sales offices serving customers in over 160 countries.

    Location: Euclid - 22801 | Employment Status: Salary Full-Time | Function: Engineering
    Pay Grade and Range: US10-E-31; Level III - Min: $105,560, Mid: $124,188; Level IV - Min: $133,043, Mid: $156,521
    Bonus Plan: AIP | AIP Target Bonus: Level III - 10%; Level IV - 15%

    Purpose
    Lincoln Electric is seeking a highly capable FPGA (Field-Programmable Gate Array) Design Engineer to join our R&D team. This role will focus on the architecture, design, and implementation of FPGA-based systems for embedded and high-performance applications. The ideal candidate will have deep experience with VHDL, timing analysis and closure, and integration of FPGA logic with ARM-based processing systems via AXI and other interconnect protocols. Familiarity with AMD (Xilinx), Intel (Altera), or Microchip (Microsemi) FPGA platforms is essential.

    Duties and Responsibilities
    FPGA Architecture & Design: Develop and maintain VHDL-based designs for control, signal processing, and communication subsystems. Architect modular and reusable IP blocks for integration into complex FPGA systems. Collaborate with hardware and software engineers to define functional requirements and partition logic between hardware, firmware, and software.
    Timing Analysis & Closure: Perform static timing analysis and achieve timing closure across multiple clock domains. Optimize designs for performance, area, and power using synthesis and place-and-route tools. Debug timing violations and implement constraints using industry-standard tools.
    Processor Interfacing & System Integration: Design and implement AXI-based interfaces to ARM processors and other embedded subsystems. Integrate FPGA logic with SoC platforms and manage data flow between programmable logic and software. Support development of device drivers and firmware for FPGA-accelerated functions.
    Verification & Validation: Develop testbenches and simulation environments using VHDL. Perform functional and formal verification of FPGA designs. Support hardware bring-up and lab testing using logic analyzers, oscilloscopes, and JTAG tools.
    Cross-Functional Collaboration: Work closely with embedded software, hardware, and systems teams to ensure seamless integration. Participate in design reviews and contribute to system-level architecture decisions. Document design specifications, test results, and performance metrics.
    Innovation & Continuous Improvement: Stay current with FPGA technologies, high-level synthesis, and hardware acceleration trends. Evaluate new tools, platforms, and methodologies to improve design efficiency and reliability.

    Basic Requirements
    Bachelor's degree in Electrical Engineering, Computer Engineering, or a related field. Level III: 5+ years of relevant experience; works independently and receives minimal guidance; may lead projects or project steps within a broader project or have accountability for ongoing activities or objectives. Level IV: 8+ years of relevant experience; recognized as an expert in own area within the organization; works independently, with guidance in only the most complex situations. 3+ years of experience in FPGA design and development using VHDL. Proficiency with AMD/Xilinx, Intel/Altera, and/or Microchip/Microsemi FPGA platforms. Strong understanding of timing analysis, constraints, and closure techniques. Experience with AXI interconnects and integration with ARM-based processing systems. Familiarity with simulation and verification tools such as VUnit or Vivado Simulator. Hands-on experience with lab equipment such as oscilloscopes and logic analyzers. Excellent problem-solving skills and ability to work in cross-functional teams. Strong written and verbal communication skills.

    Preferred Requirements
    Experience with high-speed interfaces (e.g., PCI, Ethernet, DDR). Knowledge of High-Level Synthesis tools. Familiarity with embedded Linux and device driver development. Understanding of security implications within FPGA-based embedded systems. Experience with FPGA-based control systems and digital signal processing.

    Lincoln Electric is an Equal Opportunity Employer. We are committed to promoting equal employment opportunity for applicants, without regard to their race, color, national origin, religion, sex (including pregnancy, childbirth, or related medical conditions, including, but not limited to, lactation), sexual orientation, gender identity, age, veteran status, disability, genetic information, and any other category protected by federal, state, or local law.
    $124.2k-156.5k yearly 1d ago

  • Senior Data Warehouse & BI Developer

    Ariat International - 4.7 company rating

    San Leandro, CA

    About the Role
    We're looking for a Senior Data Warehouse & BI Developer to join our Data & Analytics team and help shape the future of Ariat's enterprise data ecosystem. You'll design and build data solutions that power decision-making across the company, from eCommerce to finance and operations. In this role, you'll take ownership of data modeling and BI reporting using Cognos and Tableau, and contribute to the development of SAP HANA Calculation Views. If you're passionate about data architecture, visualization, and collaboration - and love learning new tools - this role is for you.

    You'll Make a Difference By
    Designing and maintaining Ariat's enterprise data warehouse and reporting architecture. Developing and optimizing Cognos reports for business users. Collaborating with the SAP HANA team to develop and enhance Calculation Views. Translating business needs into technical data models and actionable insights. Ensuring data quality through validation, testing, and governance practices. Partnering with teams across the business to improve data literacy and reporting capabilities. Staying current with modern BI and data technologies to continuously evolve Ariat's analytics stack.

    About You
    7+ years of hands-on experience in BI and Data Warehouse development. Advanced skills in Cognos (Framework Manager, Report Studio). Strong SQL skills and experience with data modeling (star schemas, dimensional modeling). Experience building and maintaining ETL processes. Excellent analytical and communication skills. A collaborative, learning-oriented mindset. Experience developing SAP HANA Calculation Views (preferred). Experience with Tableau (Desktop, Server) (preferred). Knowledge of cloud data warehouses (Snowflake, BigQuery, etc.). Background in retail or eCommerce analytics. Familiarity with Agile/Scrum methodologies.

    About Ariat
    Ariat is an innovative, outdoor global brand with roots in equestrian performance. We develop high-quality footwear and apparel for people who ride, work, and play outdoors, and care about performance, quality, comfort, and style.

    The salary range for this position is $120,000 - $150,000 per year. The salary is determined by the education, experience, knowledge, skills, and abilities of the applicant, internal equity, and alignment with market data for geographic locations. Ariat in good faith believes that this posted compensation range is accurate for this role at this location at the time of this posting. This range may be modified in the future.

    Ariat's holistic benefits package for full-time team members includes (but is not limited to): medical, dental, vision, and life insurance options; expanded wellness and mental health benefits; paid time off (PTO), paid holidays, and paid volunteer days; 401(k) with company match; bonus incentive plans; and a team member discount on Ariat merchandise. Note: Availability of benefits may be subject to location and employment type and may have certain eligibility requirements. Ariat reserves the right to alter these benefits in whole or in part at any time without advance notice.

    Ariat will consider qualified applicants, including those with criminal histories, in a manner consistent with state and local laws. Ariat is an Equal Opportunity Employer and considers applicants for employment without regard to race, color, religion, sex, orientation, national origin, age, disability, genetics, or any other basis protected under federal, state, or local law. Ariat is committed to providing reasonable accommodations to candidates with disabilities. If you need an accommodation during the application process, email *************************. Please see our Employment Candidate Privacy Policy at ********************* to learn more about how we collect, use, retain, and disclose Personal Information.

    Please note that Ariat does not accept unsolicited resumes from recruiters or employment agencies. In the absence of a signed Agreement, Ariat will not consider or agree to payment of any referral compensation or recruiter/agency placement fee. In the event a recruiter or agency submits a resume or candidate without a previously signed Agreement, Ariat explicitly reserves the right to pursue and hire those candidate(s) without any financial obligation to the recruiter or agency. Any unsolicited resumes, including those submitted directly to hiring managers, are deemed to be the property of Ariat.
    $120k-150k yearly 22h ago
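    The star-schema and dimensional-modeling skills the Ariat listing asks for can be sketched with a toy example. This is a minimal illustration using Python's built-in sqlite3; the table names, columns, and figures are invented for demonstration and are not Ariat's actual warehouse schema.

```python
import sqlite3

# Hypothetical minimal star schema: one fact table (fact_sales) joined to
# two dimension tables (dim_product, dim_date). All names and numbers are
# illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    units INTEGER, revenue REAL
);
INSERT INTO dim_product VALUES (1, 'Heritage Boot', 'Footwear'), (2, 'Team Jacket', 'Apparel');
INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
INSERT INTO fact_sales VALUES (1, 20240101, 3, 450.0), (1, 20240201, 2, 300.0), (2, 20240101, 1, 120.0);
""")
# A typical BI rollup: revenue by category and month, joining the fact
# table to its dimensions - the query shape star schemas are built for.
rows = cur.execute("""
    SELECT p.category, d.month, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY p.category, d.month
    ORDER BY p.category, d.month
""").fetchall()
```

    The same rollup pattern carries over to Cognos or a HANA Calculation View; only the engine changes, not the dimensional model.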
  • Software Engineer

    Plug - 3.8 company rating

    Santa Monica, CA

    Plug is the only wholesale platform built exclusively for used electric vehicles. Designed for dealers and commercial consignors, Plug combines EV-specific data, systems, and expertise to bring clarity and confidence to the wholesale buying and selling process. With the addition of Trade Desk™, dealers can quickly receive cash offers or list EV trade-ins directly into the auction, removing friction and maximizing returns. By replacing outdated wholesale methods with tools tailored to EVs, Plug empowers dealers to make faster and more profitable decisions with a partner they can trust. For more information, visit *****************

    The Opportunity
    This is an on-site role in Santa Monica, CA. We are looking for a Software Engineer to join our growing team: a full-stack software engineer who will report directly to our CTO and own entire customer-facing products. We're building systems like multi-modal AI-enabled data onramps for EVs, near-real-time API connectivity to the vehicles, and pricing intelligence tooling. As a member of the team, you'll help lay the technical and product foundation for our growing business. We're building a culture that cares about collaboration, encourages intellectual honesty, celebrates technical excellence, and is driven by careful attention to detail and planning for the future. We believe diversity of perspective and experience are key to building great technology and a thriving team. Sound cool? Let's work together.

    Key Responsibilities
    Collaborate with colleagues and be a strong voice in product design sessions, architecture discussions, and code reviews. Design, implement, test, debug, and document work on new and existing software features and products, ensuring they meet business, quality, and operational needs. Write clear, efficient, and scalable code with an eye towards flexibility and maintainability. Take ownership of features and products, and support their planning and development by understanding the ultimate goal and evaluating effort, risk, and priority in an agile environment. Own and contribute to team productivity and process improvements. Use and develop APIs to create integrations between Plug and third-party platforms. Be an integral part of a close team of developers; this is an opportunity to help shape a nascent team culture. The ideal candidate will be a high-growth individual able to grow their career as the team grows.

    Qualifications
    4-6 years of hands-on experience developing technical solutions. Advanced understanding of web application technologies, both backend and frontend, as well as relational databases. Familiarity with Cloud PaaS deployments. Familiarity with TypeScript or any other modern typed language. Familiarity with, and a positive disposition toward, code-generation AI tooling. Strong analytical and quantitative skills. Strong verbal and written communication skills with a focus on conciseness. A self-directed drive to deliver end-to-end solutions with measurable goals and results. Understanding and accepting of the ever-changing controlled chaos that is an early startup, and willing to work within that chaos to improve processes and outcomes. Experience balancing contending priorities and collaborating with colleagues to reach workable compromises. A proven track record of gaining trust and respect by consistently demonstrating sound critical thinking and a risk-adjusted bias toward action. You pride yourself on having excellent reliability and integrity. Extraordinary grit; a smart, creative, and persistent personality. Authorized to work in the US for any employer. Having worked in automotive or EV systems is a plus.

    Compensation and Benefits
    Annual Salary: $130K - $150K. Equity: TBD. Benefits: health, vision, and dental insurance; lunch stipend; parking. This full-time position is based in Santa Monica, CA. We welcome candidates from all locations to apply, provided they are willing to relocate for the role; relocation assistance will not be provided. Sponsorship not available at this time.

    Plug is an equal opportunity employer. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. And if you do, you suck.
    $108k-148k yearly est. 2d ago
  • Systems Software Engineer

    Sunbelt Controls - 3.3 company rating

    Pleasanton, CA

    Now Hiring: Systems Software Engineer II
    📍 Pleasanton, CA | 💰 $108,000 - $135,000 per year

    🏢 About the Role
    We're looking for an experienced Systems Software Engineer II to join Sunbelt Controls, a leading provider of Building Automation System (BAS) solutions across the Western U.S. In this role, you'll develop and program databases, create custom graphics, and integrate control systems for smart buildings. You'll also support project startups, commissioning, and troubleshooting - working closely with project managers and engineers to deliver high-quality, energy-efficient building automation solutions. If you have a passion for technology, problem-solving, and helping create intelligent building systems, this opportunity is for you.

    ⚙️ What You'll Do
    Design and program BAS control system databases and graphics for assigned projects. Lead the startup, commissioning, and troubleshooting of control systems. Work with networked systems and diagnose LAN/WAN connectivity issues. Perform pre-functional and functional system testing, including LEED and Title 24 requirements. Manage project documentation, including as-builts and commissioning records. Coordinate with project teams, subcontractors, and clients for smooth execution. Mentor and support junior Systems Software Engineers.

    🧠 What We're Looking For
    2-5 years of experience in Building Automation Systems or a related field. Associate's degree in a technical field (Bachelor's in Mechanical or Electrical Engineering preferred). Proficiency in MS Office, Windows, and basic TCP/IP networking. Strong organizational skills and the ability to manage multiple priorities. Excellent communication and customer-service skills. Valid California driver's license.

    💎 Why You'll Love Working With Us
    At Sunbelt Controls, we don't just build smart buildings - we build smart careers. As a 100% employee-owned company (ESOP), we offer a supportive, growth-oriented environment where innovation and teamwork thrive. What we offer: a competitive salary ($108K - $135K, based on experience); an employee-owned company culture with a family-oriented feel; comprehensive health, dental, and vision coverage; paid time off, holidays, and a 401(k)/retirement plan; professional growth, mentorship, and ongoing learning opportunities; and a veteran-friendly, Equal Opportunity workplace.

    🌍 About Sunbelt Controls
    Sunbelt Controls is a premier BAS solutions provider serving clients across multiple industries, including data centers, healthcare, education, biotech, and commercial real estate. We specialize in smart building technology, system retrofits, analytics, and energy efficiency - helping clients reduce operational costs and achieve sustainable performance.

    👉 Apply today to join a team that's shaping the future of intelligent buildings. #Sunbelt #BuildingAutomation #SystemsEngineer #HVACControls #BASCareers
    $108k-135k yearly 1d ago
  • Data Engineer, Baseball Systems

    MLB - 4.2 company rating

    Washington, DC

    Our Vision
    To become baseball's highest-performing organization - defined by our relentless pursuit of excellence, strengthened by our connection, and fueled by our positive energy.

    Our Core Values
    Joy. We want to be around people who like to have fun. We remain optimistic through the ups and downs, we enjoy the process, and we share in something bigger than ourselves. Humility. We don't have all the answers. We lead with curiosity, listen generously, and seek growth from every experience - especially the tough ones. We have gotten over ourselves. Integrity. We do the right thing, even when it's hard. We act with honesty, accountability, and respect for our teammates and ourselves. We treat the custodian like the king. Competitiveness. We embrace challenges and thrive in high-stakes environments. We prepare relentlessly. We are energized by the idea of keeping score.

    Position Summary
    The Washington Nationals are seeking a software engineer focused on data engineering and infrastructure to join our Baseball Systems team. The data engineer will help ensure our datasets are well organized and accessible for our R&D analysts and other stakeholders in Baseball Operations. We are looking for candidates who are passionate about building impactful solutions around data workflows and enthusiastic about working in a baseball front office. The Washington Nationals Baseball R&D group is responsible for deriving insights from our baseball datasets and building proprietary metrics and data products which are used to inform baseball decision-making and processes throughout our organization. As a data engineer in the Baseball Systems group, you'll have the opportunity to work with large, novel baseball datasets including video, pitch tracking, bat tracking, player tracking, biomechanical data, and performance data (e.g., from force plates, pressure mats, or wearable tech). These datasets present interesting engineering challenges given both the size of the datasets and the need to store the data in ways that are easy to access and use. We prefer candidates who are willing to relocate to the Washington, DC area for in-person work at Nationals Park, but we are willing to consider a fully remote option for exceptional candidates.

    Essential Duties and Responsibilities
    Build robust data pipelines and ETL processes that pull data from a variety of sources (HTTP APIs, cloud object stores like AWS S3, relational databases) and write to our internal data systems. Assist with the deployment, orchestration, and monitoring of our data pipelines and machine learning pipelines; we use Prefect for orchestration, utilizing AWS Fargate on ECS. Design and build solutions to make working with our internal datasets easier; this work includes maintaining database tables and views, building out our Apache Iceberg data lakehouse, merging datasets from different sources into consistent formats, and building internal APIs to make data more accessible. Develop validation processes to monitor data quality and flag potential sources of error. Assist with the maintenance of our cloud computing infrastructure: manage and configure servers, databases, and other systems. Research and advocate for any new tooling that can aid in timely, accurate, and accessible data delivery.

    Requirements
    Minimum education and experience: Bachelor's degree in computer science, computer engineering, information science, or a related field; 4+ years of relevant work experience. Knowledge, skills, and abilities necessary to perform essential functions: fluency in Python and SQL; experience with orchestration tools (e.g., Prefect, Airflow, Dagster); proficiency with MySQL, PostgreSQL, DuckDB, or other relational database systems; experience with AWS or other cloud providers; some experience with Terraform and/or Ansible is a plus; comfort working on the command line in a Linux environment; ability to work independently with close attention to detail; enthusiasm for working in baseball; authorization to work in the United States.

    Physical/Environmental Requirements
    Office: working conditions are normal for an office environment. Work may require occasional weekend and/or evening work.

    Our Technical Stack
    Languages: Python, SQL, R (Analytics/ML)
    Orchestration & Compute: Prefect, AWS ECS (Fargate), AWS Batch
    Storage & Databases: PostgreSQL, MySQL, MongoDB, AWS S3
    Modern Data Architecture: Apache Iceberg, Trino, DuckDB
    Infrastructure & DevOps: Terraform, Ansible, GitLab CI/CD, Ubuntu Linux

    Compensation
    The projected annual salary range for this position is $130,000 - $150,000 per year. Actual pay is based on several factors, including but not limited to the applicant's qualifications, skills, expertise, education/training, certifications, and other organization requirements. Starting salaries for new employees are frequently not at the top of the applicable salary range.

    Benefits
    The Nationals offer a competitive and comprehensive benefits package that presently includes: medical, dental, vision, life, and AD&D insurance; short- and long-term disability insurance; flexible spending accounts; 401(k) and pension plan; access to complimentary tickets to Nationals home games; employee discounts; and a free onsite fitness center.

    Equal Opportunity Employer
    The Nationals are dedicated to offering equal employment and advancement opportunities to all individuals regardless of their race, color, religion, national origin, sex, age, marital status, personal appearance, sexual orientation, gender identity or expression, family responsibilities, matriculation, political affiliation, genetic information, disability, or any other protected characteristic under applicable law.
    $130k-150k yearly 8d ago
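    The "validation processes to monitor data quality" duty in the Nationals listing can be illustrated with a small sketch. The field names and the plausible-speed bounds below are assumptions invented for the example, not the Nationals' actual rules; a real pipeline would load such rules from configuration rather than hard-code them.

```python
def validate_pitch_rows(rows):
    """Split rows into (clean, flagged) lists.

    Each row is a dict with hypothetical keys 'pitch_id' and
    'release_speed' (mph). Rows with a missing or implausible speed are
    flagged with a reason instead of silently passing downstream.
    """
    clean, flagged = [], []
    for row in rows:
        speed = row.get("release_speed")
        if speed is None:
            flagged.append((row, "missing release_speed"))
        elif not (40.0 <= speed <= 110.0):  # assumed plausible mph range
            flagged.append((row, f"release_speed out of range: {speed}"))
        else:
            clean.append(row)
    return clean, flagged

clean, flagged = validate_pitch_rows([
    {"pitch_id": 1, "release_speed": 94.2},
    {"pitch_id": 2, "release_speed": 300.0},  # e.g., a sensor glitch
    {"pitch_id": 3, "release_speed": None},
])
```

    In an orchestrated pipeline, a check like this would run as its own task, with the flagged list routed to monitoring or alerting rather than discarded.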
  • Data Engineer

    Laura Mercier Cosmetics and Revive Skincare - 4.4 company rating

    Columbus, OH

    About Us
    Orveon is a new kind of beauty company launched in December 2021 when we acquired our three iconic brands - bareMinerals, BUXOM, and Laura Mercier. With more than 600 associates operating in 40+ countries, we're truly a global business. Our headquarters are in New York, with additional locations in major cities worldwide. We love our brands and are embarking on a powerful shift: to change how the world thinks about beauty. We are a collective of premium and prestige beauty brands committed to making beauty better and creating consumer love. People here are passionate, innovative, and thoughtful. This is an inspirational group of talented people working together to build something better. We are looking for the best talent to join us on that journey. We believe we can accomplish more when we move as one.

    About The Role
    We are seeking a skilled and motivated Data Engineer to join our team. As a key contributor to our data architecture, you will play a central role in designing, building, and maintaining scalable data pipelines and solutions using Microsoft Fabric. You will collaborate closely with Power BI developers and business analysts to ensure data is accessible, reliable, and optimized for analytics and decision-making.

    Primary Duties & Responsibilities
    * Design, develop, and maintain robust data pipelines using Microsoft Fabric, including Data Factory, OneLake, and Lakehouse.
    * Integrate data from various sources (structured and unstructured) into centralized data platforms.
    * Collaborate with Data Architects to implement scalable and secure data models.
    * Optimize data workflows for performance, reliability, and cost-efficiency.
    * Ensure data quality, governance, and compliance with internal and external standards.
    * Support Power BI developers and business analysts with curated datasets and semantic models.
    * Monitor and troubleshoot data pipeline issues and implement proactive solutions.
    * Document data processes, architecture, and best practices.

    Qualifications
    * Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field.
    * 3+ years of hands-on experience in data engineering.
    * Proficiency in Apache Spark.
    * Strong programming skills in Python and SQL, with experience in CI/CD.
    * Experience in data modeling.
    * Best practices in managing lakehouses and warehouses.
    * Strong problem-solving skills and attention to detail.
    * Excellent communication and collaboration abilities.
    * Familiarity with Microsoft Fabric technologies and tools.
    * Familiarity with version control (Git/Azure DevOps).
    * Microsoft data technologies, especially Power BI.
    * Experience with Azure data services such as Data Factory, Synapse, Purview, and Logic/Function Apps.

    What Orveon Offers You
    You are a creator of Orveon's success and your own. This is a rare opportunity to share your voice, accelerate your career, drive innovation, and foster growth. We're a human-sized company, so your work will have a big impact on the organization. We invest in the well-being of our Orveoners - both personally and professionally - and provide tailored benefits to support all of you, such as:
    * "Hybrid First" Model - 2-3 days per week in office, balancing virtual and face-to-face interactions.
    * "Work From Anywhere" - Freedom to work three (3) weeks annually from the location of your choice.
    * Complimentary Products - Free and discounted products on new releases and fan favorites.
    * Professional Development - Exposure to senior leadership, learning and development programs, and career advancement opportunities.
    * Community Engagement - Volunteer opportunities in the communities in which we live and work.

    Other things to know!
    Pay Transparency - One of our values is Stark Honesty, and the following represents a good-faith estimate of the compensation range for this position. At Orveon Global, we carefully consider a wide range of non-discriminatory factors when determining salary. Actual salaries will vary depending on factors including but not limited to location, education, experience, and qualifications. The pay range for this position is $80,500-$100,500, supplemented with all the amazing benefits above for full-time employees!

    Opportunities and Accommodations - Orveon is deeply committed to building a workplace and global community where inclusion is not only valued but prioritized. Find out more on our careers page.

    BE AWARE OF FRAUD! Please be aware of potentially fraudulent job postings or suspicious recruiter activity by persons posing as Orveon Global recruiters/HR. Please confirm that the person you are working with has a ******************** email address. Additionally, Orveon Global does NOT request financial information or payments from candidates at any point during the hiring process. If you suspect fraudulent activity, please visit the Orveon Global Careers Site at *********************************** to verify the posting and apply through our secure online portal.
    $80.5k-100.5k yearly 41d ago
  • Senior Data Engineer, Insights

    Decagon - 3.9 company rating

    San Francisco, CA

    Decagon is the leading conversational AI platform empowering every brand to deliver concierge customer experience. Our AI agents provide intelligent, human-like responses across chat, email, and voice, resolving millions of customer inquiries across every language and at any time. Since coming out of stealth, Decagon has experienced rapid growth. We partner with industry leaders like Hertz, Eventbrite, Duolingo, Oura, Bilt, Curology, and Samsara to redefine customer experience at scale. We've raised over $200M from Bain Capital Ventures, Accel, a16z, BOND Capital, A*, Elad Gil, and notable angels such as the founders of Box, Airtable, Rippling, Okta, Lattice, and Klaviyo. We're an in-office company, driven by a shared commitment to excellence and velocity. Our values - customers are everything, relentless momentum, winner's mindset, and stronger together - shape how we work and grow as a team.

    About the Team
    The Insights team builds the product surfaces that help companies understand the conversations their AI agents have with customers. We create tools that uncover customer intents, highlight gaps in agent behavior, analyze voice-of-customer patterns, and guide teams toward writing better AOPs and improving overall agent quality. The team owns products like Insights, Watchtower, Suggestions, and AskAI. We focus on clear explanations, intuitive visualizations, and workflows that allow users to understand and act on information quickly. This product area has a significant opportunity to shape how customers learn from their data and improve their agents.

    About the Role
    As a Senior Data Engineer on the Insights team, you will design and build the critical data pipelines and feature stores that enable our suite of intelligent products to "learn" from every interaction. You will work closely with ML engineers and backend developers to architect systems that handle massive scale while maintaining the low latency required for real-time AI decision-making. This role is ideal for engineers who look beyond simple ETL and want to build the foundational data architecture for an AI-native product. You will solve complex problems around designing scalable feature stores, complex data aggregation, and data quality, defining how data flows through Decagon to power deep insights.

    In this role, you will:
    Design and build scalable data pipelines that ingest, process, and structure millions of customer conversations (text and audio) in near real time. Architect and maintain feature stores that power workflows and products like Watchtower, AskAI, and the rest of our analytics suite. Develop the data foundation for our analytics platform, enabling deep queries into customer intent, sentiment trends, and agent performance. Collaborate with Product and Engineering teams to translate complex product requirements into efficient, long-term data architecture solutions.

    Your background looks something like this:
    5+ years of experience building production-grade data pipelines and distributed data systems. Expert proficiency in Python, SQL, and data orchestration tools (Airflow, Dagster, Prefect, and similar tooling). Strong understanding of data modeling, specifically for analytics and complex querying needs. Experience working with unstructured or semi-structured data at scale. Ability to own large technical projects from architectural design to production deployment.

    Even better:
    Experience making data-driven and user-driven decisions to shape product direction. Experience designing actionable insights that help users understand complex behaviors. Experience working with LLM applications or agentic systems. Experience designing feature stores for machine learning applications. Experience working with OLAP databases like ClickHouse.

    Benefits: medical, dental, and vision benefits; take-what-you-need vacation policy; daily lunches, dinners, and snacks in the office to keep you at your best.

    Compensation: $250K - $330K, plus equity.
    $250k-330k yearly Auto-Apply 36d ago
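As an illustrative sketch of the feature-store work the Decagon role describes (aggregating conversation data into per-intent features), here is a minimal, hypothetical example. The record fields `intent`, `resolved`, and `turns` are assumptions for illustration, not Decagon's actual schema:

```python
from collections import defaultdict

def build_intent_features(conversations):
    """Aggregate raw conversation records into per-intent feature rows.

    Each record is a dict with 'intent', 'resolved' (bool), and 'turns'
    (int). Returns {intent: {...}} with a count, resolution rate, and
    average turn length -- the kind of aggregate a feature store would
    serve to downstream analytics.
    """
    stats = defaultdict(lambda: {"count": 0, "resolved": 0, "turns": 0})
    for c in conversations:
        s = stats[c["intent"]]
        s["count"] += 1
        s["resolved"] += int(c["resolved"])
        s["turns"] += c["turns"]
    return {
        intent: {
            "count": s["count"],
            "resolution_rate": s["resolved"] / s["count"],
            "avg_turns": s["turns"] / s["count"],
        }
        for intent, s in stats.items()
    }

convos = [
    {"intent": "refund", "resolved": True, "turns": 4},
    {"intent": "refund", "resolved": False, "turns": 9},
    {"intent": "login", "resolved": True, "turns": 2},
]
features = build_intent_features(convos)
print(features["refund"]["resolution_rate"])  # 0.5 (1 resolved of 2)
```

In a production system these aggregates would be computed incrementally over streaming data rather than in one batch pass, but the shape of the output is the same.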
  • Senior Data Engineer, Data Platform

    Otter 4.4company rating

    Mountain View, CA jobs

    The Opportunity
    We are looking for a Senior Data Engineer to join our Data Platform team and build the core data foundations that power analytics, experimentation, and decision-making across the company. In this role, you will design and own foundational data models, pipelines, and platforms that enable self-serve analytics and trustworthy insights at scale. You will partner closely with Growth, Product, Sales, Marketing, Data Insights, and Engineering teams as a technical thought leader who helps teams extract meaningful value from data. This is a high-ownership role in a growing company, and you will have a direct impact on how data is collected, modeled, and used to drive the business forward.

    Your Impact
    * Build and own foundational data models that enable self-serve analytics and key business metrics
    * Design, operate, and scale reliable data pipelines and platforms
    * Partner with stakeholders to translate business needs into trusted data products
    * Influence data collection and logging standards across production systems
    * Establish best practices for data modeling, quality, and governance
    * Implement data quality checks, statistical validation, and anomaly detection
    * Provide technical leadership that improves data reliability and data literacy

    We're Looking for Someone Who
    * Has 5+ years of experience in data engineering
    * Has a Bachelor's degree in Computer Science or equivalent experience
    * Is highly hands-on, actively contributing code and driving projects to completion
    * Has strong programming skills in Python and SQL
    * Has expertise with cloud data warehouses such as Snowflake, BigQuery, or Databricks
    * Has experience building and maintaining ETL/ELT pipelines
    * Is experienced with workflow orchestration (e.g., Airflow, dbt)
    * Has strong data modeling skills and understands how to design for analytical workloads
    * Has experience with cloud platforms (preferably AWS)
    * Has worked in fast-paced tech, AI, or SaaS environments

    Nice to Haves
    * Experience with data governance, data catalogs, and data quality frameworks
    * Familiarity with BI or data visualization tools
    * Experience with experimentation or A/B testing frameworks
    * Exposure to ML pipelines, feature engineering, or applied data science
    * Experience with streaming or near-real-time data systems

    About Otter.ai
    We are in the business of shaping the future of work. Our mission is to make conversations more valuable. With over 1B meetings transcribed, Otter.ai is the world's leading tool for meeting transcription, summarization, and collaboration. Using artificial intelligence, Otter generates real-time automated meeting notes, summaries, and other insights from in-person and virtual meetings, turning meetings into accessible, collaborative, and actionable data that can be shared across teams and organizations. The company is backed by early investors in Google, DeepMind, Zoom, and Tesla. Otter.ai is an equal opportunity employer. We proudly celebrate diversity and are committed to building an inclusive and accessible workplace. We provide reasonable accommodations for qualified applicants throughout the hiring process.

    Accessibility & Accommodations
    Otter.ai is committed to providing reasonable accommodations for candidates with disabilities in our hiring process. If you need assistance or an accommodation during any stage of the recruitment process, please contact *********** at least 3 business days before your interview.
    * Otter.ai does not accept unsolicited resumes from 3rd party recruitment agencies without a written agreement in place for permanent placements. Any resume or other candidate information submitted outside of established candidate submission guidelines (including through our website or via email to any Otter.ai employee) and without a written agreement otherwise will be deemed to be our sole property, and no fee will be paid should we hire the candidate.

    Salary Range: $185,000 to $230,000 USD per year. This salary range represents the low and high end of the estimated salary range for this position. The actual base salary offered for the role is dependent on several factors. Our base salary is just one component of a comprehensive total rewards package.
    $185k-230k yearly 12d ago
  • Data Engineer - Clearance Required

    LMI 3.9company rating

    Washington, DC jobs

    We are seeking a Data Engineer to join our team. The Data Engineer will be responsible for designing, developing, and maintaining enterprise database systems in support of various government and defense projects. This role involves implementing ETL processes, managing low-latency application databases, and ensuring database performance and availability.

    LMI is a new breed of digital solutions provider dedicated to accelerating government impact with innovation and speed. Investing in technology and prototypes ahead of need, LMI brings commercial-grade platforms and mission-ready AI to federal agencies at commercial speed. Leveraging our mission-ready technology and solutions, proven expertise in federal deployment, and strategic relationships, we enhance outcomes for the government, efficiently and effectively. With a focus on agility and collaboration, LMI serves the defense, space, healthcare, and energy sectors, helping agencies navigate complexity and outpace change. Headquartered in Tysons, Virginia, LMI is committed to delivering impactful results that strengthen missions and drive lasting value.

    Responsibilities
    * Oversee data architecture for large-scale APIs and web application back-end data stores.
    * Implement ETL processes to supply application data for usage in web applications.
    * Manage critical low-latency application databases on various platforms.
    * Develop and maintain packages, scripts, and reusable components for system enhancements and interfaces.
    * Develop scripts to validate data on systems and improve database performance.
    * Troubleshoot and correct code problems identified during ETL and refresh processes.
    * Develop, implement, and execute quality assurance programs and quality control standards.
    * Establish a database backup/recovery strategy and implement automation of DBA utility functions.
    * Document database design, data definition language, and data migration strategy.
    * Engineer extensive database solutions and interfaces for enhanced data-request performance.

    Qualifications
    Minimum Qualifications:
    * 5+ years of experience designing, developing, and maintaining enterprise database systems.
    * Strong understanding of SQL (MariaDB/MySQL, Postgres, MS SQL, or Oracle).
    * Demonstrated experience with ETL pipeline design and implementation using third-party vendor solutions (NNCompass, Pentaho) or developing custom solutions.
    * Experience with cloud database development and implementation (AWS preferred).
    * Robust understanding of data migration procedures for backup, restore, and schema migration.
    * Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree preferred but not mandatory.
    * Ability to obtain and maintain a security clearance (Confidential/NAC required).

    Disclaimer: The salary range displayed represents the typical salary range for this position and is not a guarantee of compensation. Individual salaries are determined by various factors including, but not limited to, location, internal equity, business considerations, client contract requirements, and candidate qualifications, such as education, experience, skills, and security clearances. The salary range for this position is $148,776-$207,000.
    $148.8k-207k yearly Auto-Apply 60d+ ago
  • Data Engineer

    Kimball Midwest 4.4company rating

    Columbus, OH jobs

    Kimball Midwest, a national distributor of maintenance, repair, and operation products, is searching for a Data Engineer to join our IT team! In this role, you would be accountable for the development and maintenance of robust data management platforms, ensuring their efficient design, data accuracy, and secure accessibility. You would also be responsible for establishing and maintaining appropriate models while guaranteeing consistent naming conventions and calculations across all data repositories in alignment with the organization's Business Glossary.

    As a Kimball Midwest associate, you will experience why we have been recognized as one of the Top Workplaces in Columbus thirteen years in a row! Our sales revenue growth is dynamic, increasing from $1 million in 1983 to over $500 million today. Throughout all our growth we have kept the family-owned and operated culture alive. At Kimball Midwest, you are a name and not a number, and we pride ourselves on our unique culture.

    Responsibilities
    * Develop and maintain data management platforms, including on-premises and cloud data warehouses and data lakes.
    * Solution design and dimensional modeling of data warehouses and analytic models to support reporting, data science, AI, and other forms of D&A workloads.
    * ETL and ELT processing, source to target, including advanced SQL transformations.
    * Sourcing from internal and external data sources, including on-premises SQL Servers, Azure SQL Databases, APIs, and flat files.
    * Research and understanding of data structures in enterprise systems including Dynamics AX, Dynamics CRM, D365 F&SCM, D365 CE, Korber WMS, custom microservices, web development, and legacy systems.
    * Ensure consistency in naming and calculations across all data repositories, in partnership with data governance, development teams, and business stakeholders.
    * Implement and maintain security measures to protect organizational data, including access control, secure data transfer, auditing, and monitoring.
    * Participate in the on-call support rotation, providing prompt and effective system support and customer service outside of regular business hours.
    * Demonstrate a passion for technology through continued growth in data and cloud related skills, staying abreast of technology trends and sharing knowledge with others.

    Qualifications
    * 5+ years of experience in a data engineering, data science, or related role
    * Bachelor's degree in a computer-related field or equivalent experience
    * Experience with programming languages and tools related to data engineering such as Python, R, Azure Data Factory, Databricks, C#, SQL, Hadoop, Kafka, Spark, and Power BI
    * Microsoft Fabric experience is preferred

    Additional Information
    This is a fully on-site position reporting to the office Monday through Friday. We offer a benefits package that includes health, dental and vision insurance, company sponsored life, optional life and disability insurance, Health Savings Accounts and Flexible Spending Accounts, a 401(k) plus match, Tuition Assistance, Paid Parental Leave, Paid Time Off (PTO), a Dress for your Day dress code, and paid holidays. Kimball Midwest is an equal opportunity employer that is committed to a program of recruitment of females, minority group members, individuals with disabilities, qualifying veterans and any other classification that is protected by federal, state, or local law. We Participate in E-Verify. Participamos en E-Verify.
    $77k-99k yearly est. 12d ago
  • Data Engineer

    Kimball Midwest 4.4company rating

    Columbus, OH jobs

    Kimball Midwest, a national distributor of maintenance, repair, and operation products, is searching for a Data Engineer to join our IT team! We are looking for someone who is driven, passionate about data and technology, and excited about the future of Microsoft Fabric. This role requires a leader who stays up-to-date on technology trends, brings new ideas to the table, and thrives in a collaborative team environment within a Microsoft ecosystem.

    As a Kimball Midwest associate, you will experience why we have been recognized as one of the Top Workplaces in Columbus thirteen years in a row! Our sales revenue growth is dynamic, increasing from $1 million in 1983 to over $500 million today. Throughout all our growth we have kept the family-owned and operated culture alive. At Kimball Midwest, you are a name and not a number, and we pride ourselves on our unique culture.

    What We Value
    * Passion for data, technology, and continuous learning.
    * Leadership and initiative to drive innovation.
    * Collaboration and teamwork across diverse stakeholders.
    * Curiosity and excitement about emerging technologies, especially Microsoft Fabric.
    * Commitment to delivering high-quality, secure, and scalable data solutions.

    Key Responsibilities
    * Architect, develop, and maintain modern data platforms, including data warehouses and lakes.
    * Design dimensional models that power reporting, analytics, and advanced insights.
    * Build and optimize ETL/ELT pipelines using advanced SQL and PySpark transformations.
    * Integrate data from diverse sources: SQL Server, Azure SQL, APIs, and flat files.
    * Collaborate on systems such as Dynamics 365 F&SCM and CE, AX, CRM, Infios WMS, custom microservices, and legacy applications.
    * Enable advanced analytics and AI/ML workloads through efficient, scalable data pipelines.
    * Champion consistency in naming conventions and calculations in partnership with data governance and business teams.
    * Implement robust data security measures, including access control, encryption, and auditing.
    * Participate in the on-call support rotation for complex tier-2/3 incidents.
    * Stay ahead of industry trends, share knowledge, and proactively introduce innovative solutions.

    Qualifications
    * 5+ years of experience in data engineering or a related field.
    * Bachelor's degree in Computer Science or equivalent experience.
    * Proven passion for data and technology, with a record of continuous learning and leadership.
    * Expertise in tools and languages: Python, SQL, Azure Synapse Analytics, Microsoft Fabric (Lakehouse, Dataflows), Azure Data Factory, Spark, Databricks, and integration with Power BI and Copilot.
    * Strong understanding of the Microsoft ecosystem and enthusiasm for Microsoft Fabric advancements.
    * Microsoft Fabric experience highly preferred.

    Additional Information
    This is a fully on-site position reporting to the Columbus, Ohio office Monday through Friday. We offer a benefits package that includes health, dental and vision insurance, company sponsored life, optional life and disability insurance, Health Savings Accounts and Flexible Spending Accounts, a 401(k) plus match, Tuition Assistance, Paid Parental Leave, Paid Time Off (PTO), a Dress for your Day dress code, and paid holidays. Kimball Midwest is an equal opportunity employer that is committed to a program of recruitment of females, minority group members, individuals with disabilities, qualifying veterans and any other classification that is protected by federal, state, or local law. We Participate in E-Verify. Participamos en E-Verify.
    $77k-99k yearly est. Auto-Apply 13d ago
  • Data Engineer

    Tabs 4.5company rating

    New York, NY jobs

    Job Description
    Tabs is the leading AI-native revenue platform for modern finance and accounting teams. Tabs agents automate the entire contract-to-cash lifecycle, including billing, collections, revenue recognition, and reporting, to help teams eliminate manual work and accelerate cash flow. High-growth companies like Cursor and Statsig rely on Tabs to generate invoices directly from contracts, reconcile payments in real time, and automate ASC 606 compliance. Founded in 2023, Tabs has raised over $91 million from Lightspeed Venture Partners, General Catalyst, and Primary. The team is headquartered in New York and brings deep expertise in finance and AI.

    About the role
    You'll be the first Data Engineer at Tabs, building the core data infrastructure that powers our internal KPIs, customer insights, and our AI systems. Your initial mandate is to design and implement the foundational data platform around our core metrics and set us up for long-term analytical scalability. This role is ideal for an engineer who has built a data platform end to end and wants ownership of the full data stack. You're comfortable operating in ambiguity and focused on driving real business impact.

    What you'll do
    * Design and implement our first scalable data warehouse/lakehouse to support KPIs.
    * Build and maintain reliable data pipelines (batch and/or streaming) from application databases and third-party tools.
    * Work with leadership to translate business metrics into concrete data models and schemas.
    * Define and own data modeling standards and best practices.
    * Implement monitoring, data quality checks, and observability around pipelines and core tables.
    * Enable BI and data analysts through well-structured models and self-serve-friendly datasets.
    * Document the data platform (lineage, definitions, contracts) and help establish a shared source of truth for metrics.

    Experience
    * 3-5+ years of experience as a Data Engineer, ideally in a mid-stage startup.
    * Solid programming skills in Python.
    * Strong SQL skills and experience with data modeling for analytics.
    * Hands-on experience with a modern cloud data stack, such as: warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), orchestration/transformation (dbt, Airflow, Dagster, or similar), and ingestion (Fivetran, Stitch, custom ingestion pipelines, etc.).
    * Experience building and operating production-grade data pipelines (performance, reliability, cost-awareness).
    * Strong understanding of data quality, testing, and monitoring practices.
    * Comfort starting from a relatively greenfield environment and making pragmatic build-vs.-buy decisions.

    Nice to have
    * Experience supporting BI tools (Looker, Mode, Tableau, Metabase, etc.) and designing semantic layers.
    * Experience in B2B SaaS, especially around revenue, usage, and customer health metrics.
    * Prior experience as an early data hire or at a small or medium, fast-growing startup.

    Perks and Benefits (Full-time Employees)
    * Competitive compensation and equity
    * Up to 100% employer-covered monthly healthcare premium (medical, dental, vision)
    * Daily meal stipend for in-office days
    * Tax-free commuter and parking benefits
    * Parental leave up to 12 weeks
    * Voluntary insurances (Life, Hospital, Critical Illness, Accident)
    * Employee Assistance Program (Rightway)
    * Unlimited PTO
    * 401k

    Tabs is an equal opportunity employer. We welcome teammates of all identities and do not discriminate on the basis of race, ethnicity, religion, gender identity, sexual orientation, age, disability, veteran status, or any other protected characteristic. We're committed to creating an environment where everyone can grow, contribute, and feel comfortable being themselves.

    Compensation Range: $140K - $195K
    $140k-195k yearly 17d ago
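The Tabs role above calls for monitoring and data quality checks around pipelines and core tables. A minimal sketch of that kind of batch validation, with all column names and rules hypothetical:

```python
def check_table(rows, required_cols, non_null=(), unique_key=None):
    """Run basic data-quality assertions on a batch of rows (list of
    dicts): required columns present, no nulls in key fields, and an
    optional uniqueness check. Returns a list of human-readable
    failure messages; an empty list means the batch passes.
    """
    failures = []
    for i, row in enumerate(rows):
        missing = [c for c in required_cols if c not in row]
        if missing:
            failures.append(f"row {i}: missing columns {missing}")
        for c in non_null:
            if row.get(c) is None:
                failures.append(f"row {i}: null in {c}")
    if unique_key:
        seen = set()
        for i, row in enumerate(rows):
            k = row.get(unique_key)
            if k in seen:
                failures.append(f"row {i}: duplicate {unique_key}={k!r}")
            seen.add(k)
    return failures

# Hypothetical batch with one null and one duplicate key.
rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 1, "amount": None},
]
problems = check_table(
    rows,
    required_cols=("order_id", "amount"),
    non_null=("amount",),
    unique_key="order_id",
)
print(problems)
```

In practice these checks would run as a pipeline step (e.g., a dbt test or an orchestrator task) and page someone or halt the load when `problems` is non-empty.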
  • Data Engineer

    Ingenio 4.2company rating

    San Francisco, CA jobs

    Before we get started: Here at Ingenio, we'd love to talk with you regardless of your qualifications or years of experience. If you believe you'd be a great fit for this role, we invite you to apply even if you do not meet all points on the job description.

    Who we are: Ingenio is a global media and technology company developing products that provide guidance on love, relationships, careers, and all aspects of life. We are passionate about connecting people with the world's best advisors and content to empower everyone to live happier lives. Ingenio offers the world's largest portfolio of over 20 marketplace and media brands in the spiritual and emotional wellness space. Our flagship brands include Keen, Horoscope.com, Astrology.com, Purple Garden, Kasamba, and Kang.

    How you'll be impactful: As a Data Engineer, you will build and maintain data processing pipelines and systems that are critical for data operations and business efficiency. This includes designing scalable data workflows, building and maintaining API integrations with marketing and affiliate partners (e.g., Google Ads, GA, LiveRamp), and ensuring accurate, reliable data for analytics and machine learning applications. Please note: This role will require being in our SF office at least 3 days per week (Tuesday-Thursday).

    What you'll be doing:
    * Assist in the design, construction, and maintenance of large-scale data processing systems.
    * Design and build scalable data pipelines: architect and implement data workflows for high-volume, high-complexity datasets, ensuring data is reliable, accurate, and accessible for analytics and machine learning applications.
    * Work closely with data scientists to deploy models into production using modern MLOps practices (CI/CD for ML, automated retraining, monitoring, and rollout strategies).
    * Operationalize feature stores, model registries, and model versioning workflows.
    * Design data workflows optimized for AI/ML use cases, including real-time streaming data, vector data pipelines, or embeddings-based retrieval systems where applicable.
    * Build and maintain data applications in the cloud using FastAPI and other related technologies.
    * Implement data flow processes to integrate, transform, and summarize data from disparate sources.
    * Develop ETL scripts and SQL queries to manage data across multiple platforms.
    * Collaborate with team members to improve data efficiency and quality.

    What you'll need to be successful:
    * Bachelor's degree in Computer Science, Engineering, or a related field.
    * 4+ years of experience building data pipelines, data lakes, and data warehouses.
    * Familiarity with data warehousing solutions like Snowflake.
    * Strong knowledge of ETL/ELT concepts, frameworks, or tools (e.g., Apache Airflow, dbt, or similar).
    * Experience with ML workflow orchestration tools such as MLflow, Kubeflow, Airflow, SageMaker, or similar.
    * Familiarity with model deployment, monitoring, and lifecycle management.
    * Strong experience with SQL and familiarity with programming languages such as Python or Java.
    * Good understanding of data warehousing and data modeling concepts.
    * Strong problem-solving skills and attention to detail.

    Perks & Benefits:
    * Opportunity to work alongside a friendly, talented, and highly collaborative team
    * Premium medical, dental, and vision insurance
    * Generous holiday and PTO policies (including Birthday PTO!)
    * Summer Fridays
    * Technology stipend
    * 401k matching program
    * Lunch
    * Wellness allowance
    * Training and development opportunities and allowance
    * Fun and inclusive in-person and virtual events

    Pay Transparency: The US base salary range for this full-time position is $125,000-$155,000. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations.
Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Candidates must have valid work authorization. This role does not sponsor or support new or transferred H-1B, F-1, OPT, STEM OPT, or any other visas. Why Ingenio? We are humble. We believe the best result is achieved by leveraging others' perspectives We think like owners. We make decisions that optimize for the greater good of the organization We challenge limiting beliefs. We are at our best when we identify and shatter status quo expectations Ingenio is an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
    $125k-155k yearly Auto-Apply 20d ago
  • Data Engineer

    Profound 3.7company rating

    New York, NY jobs

    Profound is looking for an experienced Data Engineer to own and scale our data platform. You will maintain and optimize our pipelines, design new ingestion processes, and support the data needs of our product, data science, and go-to-market teams. This is a great opportunity for someone who enjoys working with modern data tooling, improving performance and reliability, and building the foundation for ML-powered features.

    What You'll Do
    * Maintain and improve our data pipelines to keep data flowing reliably from ingestion to delivery
    * Optimize performance and costs across Snowflake, ClickHouse, AWS, dbt, and Dagster
    * Own data quality by implementing monitoring, alerting, and validation
    * Manage and extend our infrastructure, including orchestration and CI/CD for data workflows
    * Support MLOps as we develop and productionize machine learning models
    * Be on call for critical pipelines and respond quickly to issues
    * Collaborate with data scientists, product managers, and engineers to launch new data products

    What We Expect from You
    * Proven experience designing, building, and maintaining production data pipelines
    * Proficiency in Python and SQL
    * Hands-on experience with dbt, orchestration frameworks (such as Dagster, Airflow, or Prefect), and AWS
    * Familiarity with Snowflake, ClickHouse, or other modern data warehouses
    * Strong problem-solving skills and a proactive mindset
    * Experience with or interest in MLOps and supporting ML workflows in production
    * A sense of ownership and accountability for data quality and reliability

    What You'll Get Out of It
    * Access to some of the most interesting and rare data assets in AI, based on real human interactions with large language models
    * The chance to own and shape Profound's data platform at an early stage
    * The opportunity to work on meaningful problems at the intersection of AI and marketing
    * A fast-moving culture with autonomy, trust, and room to grow
    * Competitive compensation and meaningful equity

    This is an on-site role in our Union Square office, designed for builders who thrive on speed, iteration, and impact. We're happy to support visa sponsorship for qualified international candidates.
    $100k-145k yearly est. Auto-Apply 60d+ ago
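Several of these roles name workflow orchestration frameworks (Dagster, Airflow, Prefect). At their core, such tools execute tasks in dependency order. A toy sketch of that idea in plain Python, not any framework's actual API:

```python
def run_dag(tasks, deps):
    """Execute named tasks in dependency order via depth-first
    traversal, the core idea behind orchestrators like Airflow or
    Dagster. `tasks` maps name -> zero-arg callable; `deps` maps
    name -> list of upstream task names. Returns the execution
    order and raises ValueError on a dependency cycle.
    """
    order, done, in_progress = [], set(), set()

    def visit(name):
        if name in done:
            return
        if name in in_progress:
            raise ValueError(f"cycle involving {name!r}")
        in_progress.add(name)
        for upstream in deps.get(name, []):
            visit(upstream)  # run all upstream tasks first
        in_progress.discard(name)
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        visit(name)
    return order

# Hypothetical extract -> transform -> load pipeline.
log = []
tasks = {
    "extract": lambda: log.append("E"),
    "transform": lambda: log.append("T"),
    "load": lambda: log.append("L"),
}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_dag(tasks, deps)
print(order)  # ['extract', 'transform', 'load']
```

Real orchestrators add scheduling, retries, and observability on top of this ordering, which is why the postings pair them with monitoring and alerting requirements.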
  • Data Engineer

    Laura Mercier Cosmetics and Revive Skincare 4.4company rating

    New York, NY jobs

    About Us
    Orveon is a new kind of beauty company launched in December 2021 when we acquired our three iconic brands: bareMinerals, BUXOM, and Laura Mercier. With more than 600 associates, operating in 40+ countries, we're truly a global business. Our headquarters are in New York, with additional locations in major cities worldwide. We love our brands and are embarking on a powerful shift: to change how the world thinks about beauty. We are a collective of premium and prestige beauty brands committed to making beauty better and creating consumer love. People here are passionate, innovative, and thoughtful. This is an inspirational group of talented people, working together to build something better. We are looking for the best talent to join us on that journey. We believe we can accomplish more when we move as one.

    About The Role
    We are seeking a skilled and motivated Data Engineer to join our team. As a key contributor to our data architecture, you will play a central role in designing, building, and maintaining scalable data pipelines and solutions using Microsoft Fabric. You will collaborate closely with Power BI developers and business analysts to ensure data is accessible, reliable, and optimized for analytics and decision-making.

    Primary Duties & Responsibilities
    * Design, develop, and maintain robust data pipelines using Microsoft Fabric, including Data Factory, OneLake, and Lakehouse.
    * Integrate data from various sources (structured and unstructured) into centralized data platforms.
    * Collaborate with Data Architects to implement scalable and secure data models.
    * Optimize data workflows for performance, reliability, and cost-efficiency.
    * Ensure data quality, governance, and compliance with internal and external standards.
    * Support Power BI developers and business analysts with curated datasets and semantic models.
    * Monitor and troubleshoot data pipeline issues and implement proactive solutions.
    * Document data processes, architecture, and best practices.

    Qualifications
    * Bachelor's degree in Computer Science, Data Science, Information Technology, or a related field.
    * 3+ years of hands-on experience in data engineering.
    * Proficiency in Apache Spark.
    * Strong programming skills in Python and SQL, with experience in CI/CD.
    * Experience in data modeling.
    * Best practices in managing lakehouses and warehouses.
    * Strong problem-solving skills and attention to detail.
    * Excellent communication and collaboration abilities.
    * Familiarity with Microsoft Fabric technologies and tools.
    * Familiarity with version control (Git/Azure DevOps).
    * Microsoft data technologies, especially Power BI.
    * Experience with Azure data services such as Data Factory, Synapse, Purview, and Logic/Function Apps.

    What Orveon Offers You
    You are a creator of Orveon's success and your own. This is a rare opportunity to share your voice, accelerate your career, drive innovation, and foster growth. We're a human-sized company, so your work will have a big impact on the organization. We invest in the well-being of our Orveoners, both personally and professionally, and provide tailored benefits to support all of you, such as:
    * "Hybrid First" Model - 2-3 days per week in office, balancing virtual and face-to-face interactions.
    * "Work From Anywhere" - Freedom to work three (3) weeks annually from the location of your choice.
    * Complimentary Products - Free and discounted products on new releases and fan-favorites.
    * Professional Development - Exposure to senior leadership, learning and development programs, and career advancement opportunities.
    * Community Engagement - Volunteer opportunities in the communities in which we live and work.

    Other things to know!
    Pay Transparency - One of our values is Stark Honesty, and the following represents a good faith estimate of the compensation range for this position. At Orveon Global, we carefully consider a wide range of non-discriminatory factors when determining salary. Actual salaries will vary depending on factors including but not limited to location, education, experience, and qualifications. The pay range for this position is $80,500-$100,500. Supplemented with all the amazing benefits above for full-time employees!

    Opportunities and Accommodations - Orveon is deeply committed to building a workplace and global community where inclusion is not only valued but prioritized. Find out more on our careers page.

    BE AWARE OF FRAUD! Please be aware of potentially fraudulent job postings or suspicious recruiter activity by persons that are posing as Orveon Global Recruiters/HR. Please confirm that the person you are working with has ******************** email address. Additionally, Orveon Global does NOT request financial information or payments from candidates at any point during the hiring process. If you suspect fraudulent activity, please visit the Orveon Global Careers Site at *********************************** to verify the posting and apply through our secure online portal.
    $80.5k-100.5k yearly 29d ago
  • Data Engineer

    Tabs 4.5company rating

    New York, NY jobs

    Tabs is the leading AI-native revenue platform for modern finance and accounting teams. Tabs agents automate the entire contract-to-cash lifecycle, including billing, collections, revenue recognition, and reporting, to help teams eliminate manual work and accelerate cash flow. High-growth companies like Cursor and Statsig rely on Tabs to generate invoices directly from contracts, reconcile payments in real time, and automate ASC 606 compliance. Founded in 2023, Tabs has raised over $91 million from Lightspeed Venture Partners, General Catalyst, and Primary. The team is headquartered in New York and brings deep expertise in finance and AI.

    About the role
    You'll be the first Data Engineer at Tabs, building the core data infrastructure that powers our internal KPIs, customer insights, and our AI systems. Your initial mandate is to design and implement the foundational data platform around our core metrics and set us up for long-term analytical scalability. This role is ideal for an engineer who has built a data platform end to end and wants ownership of the full data stack. You're comfortable operating in ambiguity and focused on driving real business impact.

    What you'll do
    * Design and implement our first scalable data warehouse/lakehouse to support KPIs.
    * Build and maintain reliable data pipelines (batch and/or streaming) from application databases and third-party tools.
    * Work with leadership to translate business metrics into concrete data models and schemas.
    * Define and own data modeling standards and best practices.
    * Implement monitoring, data quality checks, and observability around pipelines and core tables.
    * Enable BI and data analysts through well-structured models and self-serve-friendly datasets.
    * Document the data platform (lineage, definitions, contracts) and help establish a shared source of truth for metrics.

    Experience
    * 3-5+ years of experience as a Data Engineer, ideally in a mid-stage startup.
    * Solid programming skills in Python.
    * Strong SQL skills and experience with data modeling for analytics.
    * Hands-on experience with a modern cloud data stack, such as: warehouses (Snowflake, BigQuery, Redshift, Databricks, etc.), orchestration/transformation (dbt, Airflow, Dagster, or similar), and ingestion (Fivetran, Stitch, custom ingestion pipelines, etc.).
    * Experience building and operating production-grade data pipelines (performance, reliability, cost-awareness).
    * Strong understanding of data quality, testing, and monitoring practices.
    * Comfort starting from a relatively greenfield environment and making pragmatic build-vs.-buy decisions.

    Nice to have
    * Experience supporting BI tools (Looker, Mode, Tableau, Metabase, etc.) and designing semantic layers.
    * Experience in B2B SaaS, especially around revenue, usage, and customer health metrics.
    * Prior experience as an early data hire or at a small or medium, fast-growing startup.

    Perks and Benefits (Full-time Employees)
    * Competitive compensation and equity
    * Up to 100% employer-covered monthly healthcare premium (medical, dental, vision)
    * Daily meal stipend for in-office days
    * Tax-free commuter and parking benefits
    * Parental leave up to 12 weeks
    * Voluntary insurances (Life, Hospital, Critical Illness, Accident)
    * Employee Assistance Program (Rightway)
    * Unlimited PTO
    * 401k

    Tabs is an equal opportunity employer. We welcome teammates of all identities and do not discriminate on the basis of race, ethnicity, religion, gender identity, sexual orientation, age, disability, veteran status, or any other protected characteristic. We're committed to creating an environment where everyone can grow, contribute, and feel comfortable being themselves.
    $88k-116k yearly est. 16d ago
  • Staff Data Engineer

    Imprint 3.9 company rating

    San Francisco, CA jobs

    Who We Are
    Imprint is reimagining co-branded credit cards & financial products to be smarter, more rewarding, and truly brand-first. We partner with companies like Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to launch modern credit programs that deepen loyalty, unlock savings, and drive growth. Our platform combines advanced payments infrastructure, intelligent underwriting, and seamless UX to help brands offer powerful financial products without becoming a bank. Co-branded cards account for over $300 billion in U.S. annual spend, but most are still powered by legacy banks. Imprint is the modern alternative: flexible, tech-forward, and built for today's consumer. Backed by Kleiner Perkins, Thrive Capital, and Khosla Ventures, we're building a world-class team to redefine how people pay and how brands grow. If you want to work fast, solve hard problems, and make a real impact, we'd love to meet you.

    The Team
    The Data Engineering team at Imprint is responsible for building and scaling the data infrastructure that supports product development, analytics, operations, and machine learning across the company. We own the pipelines, platforms, and processes that empower our stakeholders to trust and act on our data.

    The Role
    As our Staff Data Engineer, you'll architect our data platform while solving our most complex technical challenges. You'll build the foundation for Imprint's next decade of growth: scaling infrastructure for explosive expansion, delivering insights that drive million-dollar decisions, enabling bulletproof partner data sharing, and transforming how every team leverages data. Join us in creating a platform that doesn't just meet today's needs but anticipates tomorrow's possibilities.

    What You'll Do
    Build & Scale Infrastructure
    * Architect our next-generation data platform, optimizing Snowflake, dbt Cloud, and real-time CDC pipelines for enterprise scale
    * Design secure, compliant partner data delivery systems via Snowflake shares, S3/SFTP integrations, and Marketplace listings
    * Create mission-critical financial reporting pipelines with exceptional accuracy and reliability

    Drive Technical Excellence
    * Establish company-wide data standards for modeling, lineage, contracts, and orchestration
    * Champion data reliability, observability, and trust across all systems
    * Make strategic technology decisions that balance innovation with pragmatism

    Lead & Mentor
    * Elevate engineering practices across Analytics, Data, and Engineering teams
    * Conduct architecture reviews and build reusable frameworks
    * Share expertise through documentation, templates, and hands-on guidance

    What We Look For
    * 10+ years of experience in data engineering or related fields, with proven experience owning platform-level architecture and strategy.
    * Deep expertise in Snowflake, dbt Cloud, Change Data Capture frameworks, orchestration tools (Airflow, dbt Cloud), and reverse ETL.
    * Strong background in external data sharing and partner integrations, including Snowflake data shares, S3/SFTP pipelines, and Marketplace listings.
    * Proven ability to design and implement data governance and observability systems, including data contracts, lineage tracking, anomaly detection, and automated monitoring.
    * Strong engineering skills in SQL and Python, emphasizing testing, CI/CD, and maintainability in complex data systems.
    * A reputation as a mentor and technical authority who elevates the technical quality and rigor of peers and teams.
    * Exceptional communication skills and the ability to influence technical and business leadership across multiple departments.

    Perks & Benefits
    * Competitive compensation and equity packages
    * Leading configured work computers of your choice
    * Flexible paid time off
    * Fully covered, high-quality healthcare, including fully covered dependent coverage
    * Additional health coverage, including access to One Medical and the option to enroll in an FSA
    * 16 weeks of paid parental leave for the primary caregiver and 8 weeks for all new parents
    * Access to industry-leading technology across all of our business units, stemming from our philosophy that we should invest in resources for our team that foster innovation, optimization, and productivity

    Imprint is committed to a diverse and inclusive workplace. Imprint is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. Imprint welcomes talented individuals from all backgrounds who want to build the future of payments and rewards. If you are passionate about FinTech and eager to grow, let's move the world forward, together.
    $124k-176k yearly est. 59d ago
  • Data Engineer

    Imprint 3.9 company rating

    San Francisco, CA jobs

    Who We Are
    Imprint is reimagining co-branded credit cards & financial products to be smarter, more rewarding, and truly brand-first. We partner with companies like Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to launch modern credit programs that deepen loyalty, unlock savings, and drive growth. Our platform combines advanced payments infrastructure, intelligent underwriting, and seamless UX to help brands offer powerful financial products without becoming a bank. Co-branded cards account for over $300 billion in U.S. annual spend, but most are still powered by legacy banks. Imprint is the modern alternative: flexible, tech-forward, and built for today's consumer. Backed by Kleiner Perkins, Thrive Capital, and Khosla Ventures, we're building a world-class team to redefine how people pay and how brands grow. If you want to work fast, solve hard problems, and make a real impact, we'd love to meet you.

    The Team
    The Data Engineering team at Imprint is responsible for building and scaling the data infrastructure that supports product development, analytics, operations, and machine learning across the company. We own the pipelines, platforms, and processes that empower our stakeholders to trust and act on our data. We're looking for a Data Engineer to help evolve our modern data stack and deliver reliable, scalable data solutions. Your work will directly power decision-making and innovation across the business, from financial operations to real-time personalization.

    What You'll Do
    * Build and maintain scalable data pipelines and infrastructure across batch and streaming systems.
    * Support and improve core components of Imprint's data stack, including Snowflake, dbt Cloud, Change Data Capture frameworks, and reverse ETL integrations.
    * Implement data modeling best practices and contribute to testing, observability, and governance initiatives.
    * Collaborate with stakeholders across Product, Analytics, Finance, and Engineering to ensure timely and accurate data delivery.
    * Assist with external data integrations, such as partner-facing data shares (e.g., S3, SFTP, Snowflake) and financial reporting pipelines (e.g., with Netsuite).
    * Participate in design discussions for scaling data infrastructure, including schema design, orchestration, and data lineage.
    * Create clear documentation and ensure reproducibility for datasets and workflows you develop.
    * Learn about modern data tools and trends and suggest improvements to existing processes.

    What We Look For
    * 3+ years of experience in data engineering, analytics engineering, or related roles.
    * Solid experience with Snowflake and dbt, with an understanding of dimensional modeling and data warehouse concepts.
    * Hands-on experience with ETL/ELT pipelines and familiarity with orchestration frameworks (e.g., dbt Cloud, Airflow).
    * Working knowledge of data integration tools like Fivetran or similar Change Data Capture solutions.
    * Strong SQL skills and proficiency in Python or a similar programming language.
    * Experience building and maintaining production data systems with guidance from senior team members.
    * A detail-oriented mindset and enthusiasm for building clean, maintainable data systems.
    * Strong communication skills and the ability to work effectively with cross-functional partners.

    Nice to Have
    * Experience in fintech, high-growth startups, or customer-facing data products.
    * Familiarity with event streaming technologies like Kafka or Kinesis.
    * Exposure to data governance, security, or compliance practices.
    * Interest in ML pipelines or experimentation frameworks.

    Perks & Benefits
    * Competitive compensation and equity packages
    * Leading configured work computers of your choice
    * Flexible paid time off
    * Fully covered, high-quality healthcare, including fully covered dependent coverage
    * Additional health coverage, including access to One Medical and the option to enroll in an FSA
    * 16 weeks of paid parental leave for the primary caregiver and 8 weeks for all new parents
    * Access to industry-leading technology across all of our business units, stemming from our philosophy that we should invest in resources for our team that foster innovation, optimization, and productivity

    Imprint is committed to a diverse and inclusive workplace. Imprint is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. Imprint welcomes talented individuals from all backgrounds who want to build the future of payments and rewards. If you are passionate about FinTech and eager to grow, let's move the world forward, together.
    $124k-176k yearly est. 60d+ ago
  • Senior Data Engineer

    Imprint 3.9 company rating

    New York jobs

    Who We Are
    Imprint is reimagining co-branded credit cards & financial products to be smarter, more rewarding, and truly brand-first. We partner with companies like Crate & Barrel, Rakuten, Booking.com, H-E-B, Fetch, and Brooks Brothers to launch modern credit programs that deepen loyalty, unlock savings, and drive growth. Our platform combines advanced payments infrastructure, intelligent underwriting, and seamless UX to help brands offer powerful financial products without becoming a bank. Co-branded cards account for over $300 billion in U.S. annual spend, but most are still powered by legacy banks. Imprint is the modern alternative: flexible, tech-forward, and built for today's consumer. Backed by Kleiner Perkins, Thrive Capital, and Khosla Ventures, we're building a world-class team to redefine how people pay and how brands grow. If you want to work fast, solve hard problems, and make a real impact, we'd love to meet you.

    The Team
    The Data Engineering team at Imprint is responsible for building and scaling the data infrastructure that supports product development, analytics, operations, and machine learning across the company. We own the pipelines, platforms, and processes that empower our stakeholders to trust and act on our data. We're looking for a Senior Data Engineer to help evolve our modern data stack and deliver reliable, scalable data solutions. Your work will directly power decision-making and innovation across the business, from financial operations to real-time personalization.

    What You'll Do
    * Design, build, and maintain scalable data pipelines and infrastructure across batch and streaming systems.
    * Own core components of Imprint's data stack, including Snowflake, dbt Cloud, Change Data Capture frameworks, and reverse ETL integrations.
    * Develop and enforce best practices in data modeling, testing, observability, and governance.
    * Partner with stakeholders across Product, Analytics, Finance, and Engineering to ensure timely and accurate data delivery.
    * Work on external data integrations, such as partner-facing data shares (e.g., S3, SFTP, Snowflake) and financial reporting pipelines (e.g., with Netsuite).
    * Contribute to architectural decisions for how we scale data infrastructure, including schema design, orchestration, and data lineage.
    * Champion clear documentation, reproducibility, and reliability for critical datasets and workflows.
    * Stay informed about modern data tools and trends and help drive their adoption when appropriate.

    What We Look For
    * 6+ years of experience in data engineering, analytics engineering, or related roles.
    * Expertise in Snowflake and dbt Cloud, with a strong understanding of dimensional modeling and data warehouse best practices.
    * Experience working with Change Data Capture (e.g., Fivetran, Hevo), ETL/ELT pipelines, and orchestration frameworks (e.g., dbt Cloud, Airflow).
    * Familiarity with reverse ETL tools like Hightouch or Segment, and operational analytics use cases.
    * Strong SQL skills and proficiency in Python or a similar programming language.
    * A track record of technical ownership and shipping production-grade data systems.
    * A detail-oriented mindset and a passion for building clean, maintainable, and observable data systems.
    * Strong communication skills and the ability to collaborate effectively with cross-functional partners.

    Nice to Have
    * Experience in fintech, high-growth startups, or customer-facing data products.
    * Familiarity with event streaming technologies like Kafka or Kinesis.
    * Exposure to data governance, security, or compliance practices.
    * Previous work on ML pipelines or experimentation frameworks.

    Perks & Benefits
    * Competitive compensation and equity packages
    * Leading configured work computers of your choice
    * Flexible paid time off
    * Fully covered, high-quality healthcare, including fully covered dependent coverage
    * Additional health coverage, including access to One Medical and the option to enroll in an FSA
    * 16 weeks of paid parental leave for the primary caregiver and 8 weeks for all new parents
    * Access to industry-leading technology across all of our business units, stemming from our philosophy that we should invest in resources for our team that foster innovation, optimization, and productivity

    Imprint is committed to a diverse and inclusive workplace. Imprint is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. Imprint welcomes talented individuals from all backgrounds who want to build the future of payments and rewards. If you are passionate about FinTech and eager to grow, let's move the world forward, together.
    $97k-139k yearly est. 60d+ ago
  • Senior Game Engineer

    Rumble Entertainment 4.1 company rating

    San Francisco, CA jobs

    Engineering | Remote

    Rumble Games was founded in 2011 and is headquartered in San Mateo, California. Our fully remote development studio is home to a tight-knit team of professionals whose mission is to create the most engaging game experiences on the planet. We combine the best of AAA games, free-to-play accessibility, and blockchain technology. We are passionate about collaboration and iteration to create games that will surprise and delight our players. We emphasize a positive work-life balance to allow our team to develop their best work. Join us!

    Your Mission
    We are looking for a talented Game Engineer to develop gameplay systems for online video games with large-scale deployments. You will work directly with our design and production teams, using highly collaborative processes to create amazing products. You will write highly flexible code for prototyping game features, then robust, scalable code once the fun has been found, and you understand the trade-offs between both approaches.

    How You Will Contribute
    * You will collaborate with production, game, and engineering teams to devise optimal engineering solutions to gameplay requirements.
    * You will architect and code sophisticated client/server gameplay systems.
    * You will implement software systems with attention to security, reliability, scalability, maintainability, and performance.
    * You will innovate and iterate on processes, systems, and technology to deliver a world-class gaming experience.
    * You will be a team player: identify and articulate technical and production risks and obstacles, and generate and implement solutions in collaboration with the team.
    * You will help mentor other engineers to develop their skill sets.

    We'd Love To Hear From You, If
    * You have a Bachelor's degree in Computer Science or a related field, or equivalent experience.
    * You have 5+ years of development experience with at least one shipped product.
    * You are fluent in C#, C++, or Java; experience with other languages is a plus.
    * You have Unity experience.
    * You have proven your effectiveness delivering production-quality code for client/server topologies and synchronous multiplayer gameplay.
    * You have a passion for games, DApps, and Web3.
    * You have experience working on and playing RPGs, strategy, and action games.

    Benefits
    Having a happy team that collaborates well is our top priority. We offer exceptional benefits and invest in our team's happiness, wellbeing, and growth.
    * Generous salary, 401k matching, and paid time off.
    * Healthcare, Vision, Dental, & Disability Insurance.
    * Quarterly contribution & discounts for wellness-related activities and programs.
    * Exceptional culture and dedication to our team.

    Send a resume to [email protected]

    California residents, please click here for our CCPA Employee and Applicant Privacy Notice.
    $105k-157k yearly est. 60d+ ago
