Software Engineer II - North American Sports
At Hudl, we build great teams. We hire the best of the best to ensure you're working with people you can constantly learn from. You're trusted to get your work done your way while testing the limits of what's possible and what's next. We work hard to provide a culture where everyone feels supported, and our employees feel it: their votes helped us become one of Newsweek's Top 100 Global Most Loved Workplaces.
We think of ourselves as the team behind the team, supporting the lifelong impact sports can have: the lessons in teamwork and dedication; the influence of inspiring coaches; and the opportunities to reach new heights. That's why we help teams from all over the world see their game differently. Our products make it easier for coaches and athletes at any level to capture video, analyze data, share highlights and more.
Ready to join us?
Your Role
We're looking for a Software Engineer II to join our team and work on products for our Elite North American market, which includes American football, basketball, ice hockey and soccer. You'll have the chance to work on a new initiative that's going to make a significant impact on the top teams in these sports.
As a Software Engineer II, you'll:
Collaborate. By working closely with a cross-functional team across Engineering, Quality, Product, Design and Scrum, you'll help build products for the best of the best in sports.
Deliver full-stack features. We iterate rapidly and deploy changes to the product hundreds of times a day across our Engineering team.
Propose new solutions. You'll have the opportunity to solve technical challenges and provide guidance to less experienced Engineers.
We'd like to hire someone for this role who lives near our offices in Lincoln, Omaha or Lexington, but we're also open to remote candidates in Kansas City, Chicago, Austin or Dallas.
Must-Haves
Strong technical proficiency. You've spent 2+ years in full-stack engineering, and you've spent time with cloud-based systems/services. You're also an advocate of TDD and CI/CD, and you can drive engineering practices across any team.
A collaborative, team-first mindset. You know building excellent software is a team effort and you're willing to collaborate with others to get to the best outcome-whether that means providing input in technical discussions, pitching in when a teammate needs a hand, or providing quality feedback in code review.
Experience independently navigating uncertainty. You've been given ambiguous work with many possible implementation options and identified the one that pragmatically balances quality, consistency and immediate customer value.
Curiosity. You've picked up new technologies and domains on the job and know what form of learning helps you most. The idea of working across myriad layers of the stack and multiple products energizes you.
Nice-to-Haves
Professional background in TypeScript, React, GraphQL, C#, MongoDB and AWS. Adjacent languages, frameworks and services used at scale are also relevant experience.
Sports industry knowledge. You've worked in sports technology for a high-level college or professional American football team, or you've played the sport at those levels.
Familiarity with hybrid teams. Our Engineering team is spread across the U.S. with people working both in office and remotely. If you've worked with hybrid or remote teams before, that would help you adapt quickly to Hudl's working environment.
Our Role
Champion work-life harmony. We'll give you the flexibility you need in your work life (e.g., flexible vacation time, company-wide holidays and timeout (meeting-free) days, remote work options and more) so you can enjoy your personal life too.
Guarantee autonomy. We have an open, honest culture and we trust our people from day one. Your team will support you, but you'll own your work and have the agency to try new ideas.
Encourage career growth. We're lifelong learners who encourage professional development. We'll give you tons of resources and opportunities to keep growing.
Provide an environment to help you succeed. We've invested in our offices, designing incredible spaces with our employees in mind. But whether you're at the office or working remotely, we'll provide you the tech stack and hardware to do your best work.
Support your mental and physical health. We care about our employees' wellbeing. Our Employee Assistance Program, employee resource groups and fitness partner Peerfit have you covered.
Cover your medical insurance. We have multiple plans to pick from to ensure you'll have the coverage you (and your dependents) want, including vision, dental, fertility healthcare and family forming benefits.
Contribute to your 401(K). Yep, that's free money. We'll match up to 4% of your own contribution.
Compensation
The base salary range for this role is displayed below; starting salaries will typically fall near the middle of this range.
We make compensation decisions based on an individual's experience, skills and education in line with our internal pay equity practices.
Base Salary Range: $90,000-$150,000 USD
Inclusion at Hudl
Hudl is an equal opportunity employer. Through our actions, behaviors and attitude, we'll create an environment where everyone, no matter their differences, feels like they belong.
We offer resources to ensure our employees feel safe bringing their authentic selves to work, including employee resource groups and communities. But we recognize there's ongoing work to be done, which is why we track our efforts and commitments in annual inclusion reports.
We also know imposter syndrome is real and the confidence gap can get in the way of meeting spectacular candidates. Please don't hesitate to apply-we'd love to hear from you.
Privacy Policy
Hudl Applicant and Candidate Privacy Policy
Senior Data Engineer II
Remote
About Us
dbt Labs is the pioneer of analytics engineering, helping data teams transform raw data into reliable, actionable insights. Since 2016, we've grown from an open source project into the leading analytics engineering platform, now used by over 50,000 teams every week.
As of February 2025, we've surpassed $100 million in annual recurring revenue (ARR) and serve more than 5,400 dbt Cloud customers, including JetBlue, HubSpot, Vodafone New Zealand, and Dunelm. We're backed by top-tier investors including Andreessen Horowitz, Sequoia Capital, and Altimeter. At our core, we believe in empowering data practitioners:
Code-based data transformations unlock transparency, flexibility, and collaboration
Analysts should adopt software engineering best practices to build trusted data products
Core analytics infrastructure should be open source and user-controlled
Analytic code-not just tools-should be shared and community-driven
dbt is now synonymous with analytics engineering, defining the modern data stack and serving as the data control plane for enterprise teams around the world. And we're just getting started. We're growing fast and building a team of passionate, curious people across the globe. Learn more about what makes us special by checking out our values.
As a Senior Data Engineer II at dbt Labs, you'll take the lead in designing, building, and owning core components of our data ecosystem-from infrastructure to pipelines to data products. This data foundation is essential for enabling analytics, accelerating growth, and improving operational efficiency across the business. You'll be part of a tight-knit, strategic team that combines strong technical execution with a bias for impact and cross-functional influence.
This is a unique opportunity to join a team involved in using dbt Labs products daily to build the company's internal data capabilities. This group is responsible for building and maintaining foundational data infrastructure that powers critical business decisions across the company. With executive visibility and deep cross-functional impact, the work you do here will directly influence the trajectory of our growth. If you're excited by the challenge of building from the ground up, solving complex technical problems while utilizing cutting edge technology, and driving business strategy through data, this is your chance to make a lasting mark.
In this role, you can expect to:
Design, build, and manage scalable, reliable data pipelines that ingest product and event data into our data stores
Develop and maintain canonical datasets to track key product and business metrics-user growth, engagement, revenue, and more
Architect robust, reliable systems for large volume batch data processing
Drive decisions on data architecture, tooling, and engineering best practices
Enhance observability and monitoring of existing workflows and processes
Partner cross-functionally with teams across Infrastructure, Product, Marketing, Finance, and GTM to understand data needs and deliver impactful solutions
Provide product feedback by “dogfooding” new data infrastructure and AI technology
You're a great fit if you have:
Expertise in SQL and Python
8+ years of experience as a data engineer, and 10+ years of total experience in software engineering (including data engineering roles)
Strong knowledge of data infrastructure and architecture design
Hands-on experience with modern orchestration tools like Airflow, Dagster, or Prefect
Hands-on experience with handling and ingesting large amounts of Parquet files
A bias for action: able to stay focused and prioritize effectively
You'll stand out if you have:
Experience developing and scaling dbt projects
Experience working in a SaaS or high-growth tech environment
Experience working with Product telemetry data in open file formats
Experience using OLAP engines (e.g. DuckDB, DataFusion) for data transformation within pipelines
#LI-Remote
Compensation:
We offer competitive compensation packages commensurate with experience, including salary, RSUs, and where applicable, performance-based pay. Our Talent Acquisition Team can answer questions around dbt Labs' total rewards during your interview process. In select locations (including Boston, Chicago, Denver, Los Angeles, Philadelphia, New York City, San Francisco, Washington, DC, and Seattle), an alternate range may apply, as specified below.
The typical starting salary range for this role is:
$172,000 - $207,900
The typical starting salary range for this role in the select locations listed is:
$191,000 - $231,000
Benefits:
Unlimited vacation time with a culture that actively encourages time off
401k plan with 3% guaranteed company contribution
Comprehensive healthcare coverage
Generous paid parental leave
Flexible stipends for:
Health & Wellness
Home Office Setup
Cell Phone & Internet
Learning & Development
Office Space
Remote Hiring Process:
Interview with a Talent Acquisition Partner
Interview with Hiring Manager
Interview with our VP of Data
Whiteboarding session with a member of the data team
Whiteboarding session with a member of the software engineering team
Whiteboarding session with a cross-functional stakeholder
dbt Labs is an equal opportunity employer, committed to building an inclusive team that welcomes diverse perspectives, backgrounds, and experiences. Even if your experience doesn't perfectly align with the job description, we encourage you to apply-we value potential just as much as a perfect resume.
Want to learn more about our focus on Diversity, Equity and Inclusion at dbt Labs? Check out our DEI page.
dbt Labs reserves the right to amend or withdraw the posting at any time. For employees outside the United States, dbt Labs offers a competitive benefits package. RSUs or comparable benefits may be offered depending on the legal or country limitations.
Privacy Notice Supplement to Privacy Notice - Californians Supplement to Privacy Notice - EEA/UK
Senior Data Engineer
Remote
Ready to be a Titan?
Play a pivotal role as our Senior Data Engineer! In this role, you will be part of the engineering team at ServiceTitan, helping improve our data products and build new ones. This is an exciting opportunity for an engineer to come in and lead major feature development at a rapidly growing startup. We build for perfection, use the most modern tools, have an amazing culture, and love to solve complex problems. If you share the same values, you might find this the perfect company for you.
What you'll do:
Implement high-performance solutions to support data and analytical products.
Engineer highly available, scalable, and fault-tolerant solutions
Develop modern data solutions that enable performant and reliable data processing.
Evaluate and implement efficient distributed storage and query techniques.
Partner with teams and systems to extract, transform, and load data from a wide variety of sources and destinations.
Implement robust and maintainable code.
Identify ways to improve data reliability, efficiency, and quality.
Automate data availability and quality monitoring and respond to alerts when data delivery SLAs are not being met.
What you'll bring:
B.S. degree in Computer Science or a related field
5+ years of experience in Software Engineering / Data Engineering roles working in high traffic, fault tolerant, and highly available environments
Experience with Python, Spark, Java, Scala or a similar programming language
Experience with Big Data technologies (Snowflake, Redshift, Hive/Hadoop, etc.)
Strong SQL skills; Snowflake experience is desirable.
Experience with ETL tools like dbt is desirable.
Experience with streaming tools like Kafka or Kinesis is desirable.
Be Human With Us:
Being human isn't about checking every box on a list. It's about the experiences we have, people we meet, and the perspectives we share. So, if you have the skills but are hesitant to apply because of your background, apply anyway. We need amazing people like you to help us challenge the conventional and think differently about the problems that we're solving. We're in this together. Come be human, with us.
What We Offer:
When you join our team, you're not just accepting a job. You're making a career move. Here's how we'll support you in doing some of the most impactful work of your career:
Flextime, recognition, and support for autonomous work: Flexible time off with ample learning and development opportunities to continue growing your career. We offer a comprehensive onboarding program, leadership training for Titans at all levels, and other programs and events. Great work is rewarded through Bonusly, peer-nominated awards, and more.
Holistic health and wellness benefits: Company-paid medical, dental, and vision (with 100% employer paid options and 90% coverage for dependents), FSA and HSA, 401k match, and telehealth options including memberships to One Medical.
Support for Titans at all stages of life: Parental leave and support, up to $20k in fertility services (i.e. IUI and IVF), surrogacy, and adoption reimbursement, on demand maternity support through Maven Maternity, free breast milk shipping through Maven Milk, pet insurance, legal advisory services, financial planning tools, and more.
At ServiceTitan, we celebrate individuality and uniqueness. We believe that the convergence of fresh perspectives and experiences from all walks of life is what makes our product and culture so great. We strongly encourage people from underrepresented groups to apply. We do not discriminate against employees based on race, color, religion, sex, national origin, gender identity or expression, age, disability, pregnancy (including childbirth, breastfeeding, or related medical condition), genetic information, protected military or veteran status, sexual orientation, or any other characteristic protected by applicable federal, state or local laws.
ServiceTitan is committed to fair and equitable compensation for all of our employees. We thoughtfully consider a wide range of factors when determining individual compensation. The expected salary range for this role for candidates residing in the United States is between $151,100 USD - $202,100 USD. Compensation for candidates residing outside the United States will vary by location and the specific salary range will be discussed during the hiring process. Actual compensation for an individual may vary depending on skills, performance over time, qualifications, experience, and location. In addition to the base salary, the total compensation package also includes an annual bonus, equity and a holistic suite of benefits.
Senior Data Engineer
Remote
Apollo.io is the leading go-to-market solution for revenue teams, trusted by over 500,000 companies and millions of users globally, from rapidly growing startups to some of the world's largest enterprises. Founded in 2015, the company is one of the fastest growing companies in SaaS, raising approximately $250 million to date and valued at $1.6 billion. Apollo.io provides sales and marketing teams with easy access to verified contact data for over 210 million B2B contacts and 35 million companies worldwide, along with tools to engage and convert these contacts in one unified platform. By helping revenue professionals find the most accurate contact information and automating the outreach process, Apollo.io turns prospects into customers. Apollo raised a series D in 2023 and is backed by top-tier investors, including Sequoia Capital, Bain Capital Ventures, and more, and counts the former President and COO of Hubspot, JD Sherman, among its board members.
As a Senior Data Engineer, you will play a key role in designing and building the foundational data infrastructure and APIs that power our analytics, machine learning, and product features. You'll be responsible for developing scalable data pipelines, managing cloud-native data platforms, and creating high-performance APIs using FastAPI to enable secure, real-time access to data services. This is a hands-on engineering role with opportunities to influence architecture, tooling, and best practices across our data ecosystem.
Daily Adventures and Responsibilities
Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
Monitor and ensure the health, quality, and reliability of data flows and platform services
Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
Continuously evaluate and integrate new tools and technologies to improve platform capabilities
Contribute to architectural discussions, code reviews, and cross-functional projects
Document your work, champion best practices, and help level up the team through knowledge sharing
Competencies
Excellent communication skills to work with engineering, product, and business owners to develop and define key business questions and build data sets that answer those questions.
Self-motivated and self-directed
Inquisitive, able to ask questions and dig deeper
Organized and diligent, with great attention to detail
Acts with the utmost integrity
Genuinely curious and open; loves learning
Critical thinking and proven problem-solving skills required
Proven experience leveraging AI tools to enhance software development processes, including code generation, debugging, and productivity optimization.
Candidates should demonstrate fluency in integrating AI-driven solutions into their workflows and a willingness to stay current with emerging AI technologies
Skills & Relevant Experience
Required:
5+ years of experience in platform engineering, data engineering, or a data-facing role
Experience in building data applications
Deep knowledge of the data ecosystem and an ability to collaborate cross-functionally
Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
Preferred:
Experience using the Python data stack
Experience deploying and managing data pipelines in the cloud
Experience working with technologies like Airflow, Hadoop and Spark
Understanding of streaming technologies like Kafka, Spark Streaming
The listed Pay Range reflects base salary range, except for sales roles, the range provided is the role's On Target Earnings ("OTE") range, meaning that the range includes both the sales commission/sales bonus targets and annual base salary for the role. This pay range may be inclusive of several career levels at Apollo and will be narrowed during the interview process based on a number of factors, including the candidate's experience, qualifications, and location. Applicants interested in this role and who are not located in the US may request the annual salary range for their location during the interview process.
Additional benefits for this role may include equity; company bonus or sales commissions/bonuses; 401(k) plan; at least 10 paid holidays per year, flex PTO, and parental leave; employee assistance program and wellbeing benefits; global travel coverage; life/AD&D/STD/LTD insurance; FSA/HSA and medical, dental, and vision benefits.
Annual Pay Range: $156,000-$195,000 USD
We are AI Native
Apollo.io is an AI-native company built on a culture of continuous improvement. We're on the front lines of driving productivity for our customers-and we expect the same mindset from our team. If you're energized by finding smarter, faster ways to get things done using AI and automation, you'll thrive here.
Why You'll Love Working at Apollo
At Apollo, we're driven by a shared mission: to help our customers unlock their full revenue potential. That's why we take extreme ownership of our work, move with focus and urgency, and learn voraciously to stay ahead.
We invest deeply in your growth, ensuring you have the resources, support, and autonomy to own your role and make a real impact. Collaboration is at our core-we're all for one, meaning you'll have a team across departments ready to help you succeed. We encourage bold ideas and courageous action, giving you the freedom to experiment, take smart risks, and drive big wins.
If you're looking for a place where your work matters, where you can push boundaries, and where your career can thrive-Apollo is the place for you.
Learn more here!
Senior Data Engineer
Remote (US)
PerformLine is a category-leading SaaS company that empowers leaders with end-to-end marketing compliance technology, from automated review of documents to discovery and live monitoring across consumer-facing channels including the web, calls, messaging, emails, and social media. PerformLine powers compliance at some of the world's largest companies by proactively finding and remediating potential regulatory risks while scaling coverage and gaining efficiencies through automation.
Come as you are. We are an equal opportunity workplace celebrating diversity and committed to creating an inclusive and equitable experience for all.
MISSION
Our mission is to empower compliance leaders with the technology and knowledge to ensure their organization and partners provide transparent and accurate information to consumers across any channel.
TL;DR
We're looking for a Senior Data Engineer to take the lead on rebuilding our data platform from the ground up.
As the most senior member of the data team, you'll architect our Snowflake-based stack, design robust pipelines, and collaborate with teams across the business to drive data quality, scalability, and real-time insights. Ultimately, you'll shape how data is used across the company! You won't be programmer #300 working on a niche back-office tool; we act like owners to make a real impact at a scale-up company. On our team, you can expect deep technical work with company-wide visibility.
WHAT YOU'LL DO
Lead the rebuild of our data stack with a modern Snowflake data lakehouse, architected for scale and performance on AWS
Design and implement best-in-class AWS data infrastructure using Terraform for provisioning, configuration, and automation
Influence data architecture, tooling choices, and long-term strategy, ensuring alignment with business and technology needs and growth plans
Build and optimize scalable ETL/ELT pipelines with AWS services, Python, and Airflow
Establish and enforce rigorous standards for data quality, observability, and governance, including access control, lineage, and compliance requirements
Prepare and evolve the data platform to support advanced analytics, AI, and machine learning use cases
Collaborate closely with Product, Engineering, and Customer Success to deliver reliable, trusted data for analytics and reporting
WHAT YOU BRING
Hands-on Snowflake experience in production environments
Proven experience designing and maintaining large-scale data pipelines
Strong SQL and Python skills for data transformation and orchestration
Experience with ETL/ELT tools like Airflow, dbt, or similar
Familiarity with AWS cloud infrastructure
Think: S3, CloudTrail, Lambda, Step Functions, EventBridge, and Glue
Deep understanding of data modeling, performance optimization, and query tuning
Experience designing data workflows, ensuring data quality, reliability, and performance
WHO YOU ARE
Ability to balance business concerns with technical goals
Possesses intellectual humility and seeks out constructive criticism of their work
Generates positivity and lift in the skills of those around them
Takes ownership of their codebase, even if they didn't write that code
Recognizes the value of documentation and boring code (and the dangers of unnecessary complexity)
Seeks out coaching and takes critical feedback well
Positive can-do attitude with the ability to thrive amidst ambiguity as needed
WHAT WE OFFER
Estimated base salary range: $95,000-$160,000, plus annual discretionary bonus.
Exact compensation for this role depends on a variety of factors, including experience, skills, internal equity, market data, and location.
To view our benefits package, visit the PerformLine Careers page!
PerformLine participates in E-Verify
Data Engineer - Data Platform
Remote
Building the Future of Crypto
Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.
What makes us different?
Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you'll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken's focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.
Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.
As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.
Become a Krakenite and build the future of crypto!
Proof of work
The team
Join our Data Engineering Team at Kraken!
Are you passionate about designing and building scalable data systems that power one of the fastest-growing companies in cryptocurrency? We're seeking a skilled Data Engineer to join our Data Platform team and help us architect the future of Kraken's data ecosystem.
As a Data Engineer at Kraken, you'll be responsible for building and maintaining high-performance data pipelines, ensuring the reliability and scalability of our data infrastructure, and enabling teams across the company to access clean, consistent, and timely data. You'll work with modern technologies and large-scale datasets, playing a key role in making data accessible for analytics, machine learning, and product innovation.
The opportunity
Build scalable and reliable data pipelines that collect, transform, load and curate data from internal systems
Augment the data platform with data pipelines from external systems.
Ensure high data quality for pipelines you build and make them auditable
Drive data systems to be as near real-time as possible
Support the design and deployment of a distributed data store that will be the central source of truth across the organization
Build data connections to the company's internal IT systems
Develop, customize, and configure self-service tools that help our data consumers extract and analyze data from our massive internal data store
Evaluate new technologies and build prototypes for continuous improvements in data engineering.
Skills you should HODL
5+ years of work experience in a relevant field (Data Engineer, DWH Engineer, Software Engineer, etc.)
Experience with data-lake and data-warehousing technologies and relevant data modeling best practices (Presto, Athena, Glue, etc.)
Proficiency in at least one of our main programming languages: Python or Scala. Additional programming language expertise is a big plus!
Experience building data pipelines/ETL in Airflow, and familiarity with software design principles.
Excellent SQL and data manipulation skills using common frameworks like Spark/PySpark, or similar.
Expertise in Apache Spark, or similar Big Data technologies, with a proven record of processing high volumes and velocity of datasets.
Experience with business requirements gathering for data sourcing.
Bonus - Kafka and other streaming technologies like Apache Flink.
#LI-Remote
This job is accepting ongoing applications and there is no application deadline.
Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.
We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!
As an equal opportunity employer, we don't tolerate discrimination or harassment of any kind. Whether that's based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other protected characteristic as outlined by federal, state or local laws.
Stay in the know
Follow us on Twitter
Learn on the Kraken Blog
Connect on LinkedIn
Candidate Privacy Notice
Data Engineer - Staked
Remote
Building the Future of Crypto
Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology.
What makes us different?
Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you'll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion. For over a decade, Kraken's focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world.
Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarize themselves with the Kraken app. Learn how to create a Kraken account here.
As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.
Become a Krakenite and build the future of crypto!
Proof of work
The team
At Kraken Digital Asset Exchange, the Staked team, part of Kraken Institutional since Kraken's acquisition of Staked in December 2021, specializes in enabling non-custodial staking solutions tailored for institutional clients. Staked offers staking services across 40+ proof-of-stake (PoS) blockchains, managing billions in delegated assets. Trusted by top investors and cryptocurrency exchanges, Staked ensures optimal staking rewards.
As a member of our team, you will play a crucial role in extracting and organizing data from various blockchains, preparing it for downstream analysis. Your work will maintain our high standards for data integrity and support our mission to deliver top-notch staking solutions to institutional clients. Join us to drive innovation in the cryptocurrency industry with a collaborative and curious mindset.
The opportunity
Extract and organize data from multiple blockchains using RPC, API, or smart contract calls
Develop and maintain the software stack for collecting and indexing transaction data for efficient consumption
Maintain a reliable API to provide comprehensive account histories and ensure data is auditable and meets SLAs
Track and monitor public blockchain sources and find necessary data for analysis
Document data extraction processes and methodologies
Collaborate with internal teams to ensure seamless integration and data flow
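To make the extraction work above concrete, here's a minimal sketch of building an `eth_getBlockByNumber` JSON-RPC request and flattening the returned block into indexable rows. The helper names and the stubbed response are illustrative, not part of Staked's actual stack.

```python
def build_get_block_request(block_number: int, request_id: int = 1) -> dict:
    """Build an Ethereum JSON-RPC payload for eth_getBlockByNumber.

    The second param (True) asks the node for full transaction objects
    rather than just transaction hashes.
    """
    return {
        "jsonrpc": "2.0",
        "method": "eth_getBlockByNumber",
        "params": [hex(block_number), True],
        "id": request_id,
    }


def extract_transactions(block: dict) -> list[dict]:
    """Flatten a block's transactions into rows ready for indexing."""
    return [
        {
            "block": int(block["number"], 16),
            "tx_hash": tx["hash"],
            "sender": tx["from"],
            "recipient": tx.get("to"),  # None for contract creations
            "value_wei": int(tx["value"], 16),
        }
        for tx in block.get("transactions", [])
    ]


# Stubbed node response -- a live pipeline would POST the payload to an
# RPC endpoint and parse the JSON reply instead.
sample_block = {
    "number": "0x10",
    "transactions": [
        {"hash": "0xabc", "from": "0x1", "to": "0x2",
         "value": "0xde0b6b3a7640000"},  # 1 ETH in wei
    ],
}
rows = extract_transactions(sample_block)
```

The same shape generalizes across chains: build a request per the chain's RPC spec, normalize the reply into flat rows, and hand those to the indexing layer.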
Skills you should HODL
3-5+ years of experience with Python and SQL in data-intensive environments
Familiarity with Docker, Kubernetes, and other elements of our tech stack such as TypeScript, Python, Rust, SQL, AWS Athena, and Apache Airflow
Strong experience with software engineering principles, code readability, and testing
Experience with SQL databases and query performance
Knowledge of basic web protocols and experience with API development is a plus
Ability to track down public source code and identify necessary data sources
Experience with blockchain nodes and eagerness to analyze blockchain data, passionate about decentralized tech and eager to drive the industry forward
B.S. in Computer Science or a related field, or equivalent experience
MLOps/Data Engineer
Remote
Nift is disrupting performance marketing, delivering millions of new customers to brands every month. We're actively looking for a hands-on Engineer to focus on MLOps/Data Engineering to build the data and ML platform that powers product decisions and production models.
As an MLOps/Data Engineer, you'll report to the Data Science Manager and work closely with both our Data Science and Product teams. You'll architect storage and compute, harden training/inference pipelines, and make our ML code, data workflows, and services reliable, reproducible, observable, and cost-efficient. You'll also set best practices and help scale our platform as Nift grows.
Our Mission:
Nift's mission is to reshape how people discover and try new brands by introducing them to new products and services through thoughtful "thank-you" gifts. Our customer-first approach ensures businesses acquire new customers efficiently while making customers feel valued and rewarded.
We are a data-driven, cash-flow-positive company that has experienced 1,111% growth over the last three years. Now, we're scaling to become one of the largest sources for new customer acquisition worldwide. Backed by investors who supported Fitbit, Warby Parker, and Twitter, we are poised for exponential growth and ready to demonstrate impact on a global scale. Read more about our growth here.
What you will do:
Architecture & storage: Design and implement our data storage strategy (warehouse, lake, transactional stores) with scalability, reliability, security, and cost in mind
Pipelines & ETL: Build and maintain robust data pipelines (batch/stream), including orchestration, testing, documentation, and SLAs
ML platform: Productionize training and inference (batch/real-time), establish CI/CD for models, data/versioning practices, and model governance
Feature & model lifecycle: Centralize feature generation (e.g., feature store patterns), manage model registry/metadata, and streamline deployment workflows
Observability & quality: Implement monitoring for data quality, drift, model performance/latency, and pipeline health with clear alerting and dashboards
Reliability & cost control: Optimize compute/storage (e.g., spot, autoscaling, lifecycle policies) and reduce pipeline fragility
Engineering excellence: Refactor research code into reusable components, enforce repo structure, testing, logging, and reproducibility
Cross-functional collaboration: Work with DS/Analytics/Engineers to turn prototypes into production systems, provide mentorship and technical guidance
Roadmap & standards: Drive the technical vision for ML/data platform capabilities and establish architectural patterns that become team standards
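One way to ground the drift-monitoring item above: a Population Stability Index check in pure Python. No platform dependencies are assumed, and the thresholds are the usual industry rules of thumb, not Nift-specific values.

```python
import math


def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Both samples are bucketed on the baseline's range; PSI sums
    (actual% - expected%) * ln(actual% / expected%) over buckets.
    A small epsilon keeps empty buckets out of log(0).
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        return [c / len(sample) + eps for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


baseline = [i / 100 for i in range(100)]        # roughly uniform on [0, 1)
same = [i / 100 for i in range(100)]
shifted = [0.9 + i / 1000 for i in range(100)]  # mass piled into one bucket

stable = psi(baseline, same)
drifted = psi(baseline, shifted)
# Rule of thumb: PSI < 0.1 is stable; > 0.25 signals significant drift.
```

In a production monitor, the baseline would come from training data and the alert threshold would be tuned per feature.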
What you need:
Experience: 5+ years in data engineering/MLOps or related fields, including ownership of data/ML infrastructure for large-scale systems
Software engineering strength: Strong coding, debugging, performance analysis, testing, and CI/CD discipline; reproducible builds
Cloud & containers: Production experience on AWS, Docker + Kubernetes (EKS/ECS or equivalent)
IaC: Terraform or CloudFormation for managed, reviewable environments
Data engineering: Expert SQL, data modeling, schema design, modern orchestration (Airflow/Step Functions) and ETL tools
ML tooling: MLflow/SageMaker (or similar) with a track record of production ML pipelines
Warehouses & lakes: Databricks, Redshift and lake formats (Parquet)
Monitoring/observability: Data/ML monitoring (quality, drift, performance) and pipeline alerting
Collaboration: Excellent communication, comfortable working with data scientists, analysts, and engineers in a fast-paced startup
PySpark/Glue/Dask/Kafka: Experience with large-scale batch/stream processing
Analytics platforms: Experience integrating 3rd party data
Model serving patterns: Familiarity with real-time endpoints, batch scoring, and feature stores
Governance & security: Exposure to model governance/compliance and secure ML operations
Mission-oriented: Proactive and self-driven with a strong sense of initiative; takes ownership, goes beyond expectations, and does what's needed to get the job done
What you get:
Competitive compensation, comprehensive benefits (401K, Medical/Dental/Vision), and we offer all full-time employees the potential to hold company equity
Flexible remote work
Unlimited Responsible PTO
Great opportunity to join a growing, cash-flow-positive company while having a direct impact on Nift's revenue, growth, scale, and future success
Data Engineer, DPD Team (Remote, International) - Various Levels
Remote
Description
A bit about us: PulsePoint is a leading healthcare ad technology company that uses real-world data in real time to optimize campaign performance and revolutionize health decision-making. Leveraging proprietary datasets and methodology, PulsePoint targets healthcare professionals and patients with an unprecedented level of accuracy, delivering unparalleled results to the clients we serve. The company is now a part of Internet Brands, a KKR portfolio company and owner of WebMD Health Corp.
Sr. Data Engineer
PulsePoint's Data Engineering team plays a key role in our technology company, which is experiencing exponential growth. Our data pipeline processes over 80 billion impressions a day (> 20 TB of data, 200 TB uncompressed). This data is used to generate reports, update budgets, and drive our optimization engines. We do all this while running against tight SLAs and providing stats and reports as close to real-time as possible.
The most exciting part about working at PulsePoint is the enormous potential for personal and professional growth. We are always seeking new and better tools to help us meet challenges, such as adopting proven open-source technologies to make our data infrastructure more nimble, scalable and robust. Some of the cutting-edge technologies we have recently implemented are Kafka, Spark Streaming, Presto, Airflow, and Kubernetes.
What you'll be doing:
Design, build, and maintain reliable and scalable enterprise-level distributed transactional data processing systems for scaling the existing business and supporting new business initiatives
Optimize jobs to utilize Kafka, Hadoop, Presto, Spark, and Kubernetes resources in the most efficient way
Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc)
Increase accessibility and effectiveness of data (work with analysts, data scientists, and developers to build/deploy tools and datasets that fit their use cases)
Collaborate within a small team with diverse technology backgrounds
Provide mentorship and guidance to junior team members
Team Responsibilities:
Ingest, validate and process internal & third party data
Create, maintain and monitor data flows in Python, Spark, Hive, SQL and Presto for consistency, accuracy and lag time
Maintain and enhance the framework for jobs (primarily aggregate jobs in Spark and Hive)
Create different consumers for data in Kafka using Spark Streaming for near-real-time aggregation
Tools evaluation
Backups/Retention/High Availability/Capacity Planning
Review and approve DDL for databases, Hive framework jobs, and Spark Streaming jobs to make sure they meet our standards
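The Spark Streaming consumers described above boil down to windowed aggregation over a Kafka topic. Here's the core idea as an in-memory, pure-Python stand-in; the real jobs would use Spark's streaming APIs, and the event shapes here are illustrative only.

```python
from collections import defaultdict


def tumbling_window_counts(events, window_secs=60):
    """Aggregate (timestamp, key) events into per-window counts.

    Mirrors what a streaming consumer does with impression events:
    each event lands in the tumbling window containing its timestamp,
    and counts are grouped by (window_start, key).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)
        counts[(window_start, key)] += 1
    return dict(counts)


impressions = [
    (5, "ad_1"), (42, "ad_1"), (59, "ad_2"),  # window starting at t=0
    (61, "ad_1"), (118, "ad_2"),              # window starting at t=60
]
agg = tumbling_window_counts(impressions, window_secs=60)
# agg maps (window_start, ad) -> count, e.g. (0, "ad_1") -> 2
```

A Spark Streaming job adds what this sketch omits: partition-parallel consumption, checkpointing, and late-data handling via watermarks.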
Technologies We Use:
Python - primary repo language
Airflow/Luigi - for job scheduling
Docker - Packaged container image with all dependencies
Graphite - for monitoring data flows
Hive - SQL data warehouse layer for data in HDFS
Kafka - distributed commit log storage
Kubernetes - Distributed cluster resource manager
Presto/Trino - fast parallel data warehouse and data federation layer
Spark Streaming - near-real-time aggregation
SQL Server - Reliable OLTP RDBMS
Apache Iceberg
GCP - BigQuery for performance, Looker for dashboards
Requirements
8+ years of data engineering experience
Strong skills in and current experience with SQL and Python
Strong recent Spark experience (3+ years)
Experience working in on-prem environments
Hadoop and Hive experience
Experience in Scala/Java is a plus (Polyglot programmer preferred!)
Proficiency in Linux
Strong understanding of RDBMS and query optimization
Passion for engineering and computer science around data
East Coast U.S. hours 9am-6pm EST; you can work fully remotely
Notice period must be 2 months or less
Knowledge of and exposure to distributed production systems, e.g., Hadoop
Knowledge and exposure to Cloud migration (AWS/GCP/Azure) is a plus
Location:
We can hire as FTE in the U.S., UK and Netherlands
We can hire as long-term contractor (independent or B2B) in most other countries
Selection Process:
1) CodeSignal Online Assessment
2) Initial Screen (30 mins)
3) Hiring Manager Interview (45 mins)
4) Tech Challenge
5) Interview with Sr. Data Engineer (60 mins)
6) Team Interviews (90 mins + 3 x 45 mins) + SVP of Engineering (30 mins)
7) WebMD Sr. Director, DBA (30 mins)
Note that leetcode-style live coding challenges will be involved in the process.
WebMD and its affiliates are an Equal Opportunity/Affirmative Action employer and do not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veteran status, or any other basis protected by law.
Principal Data Platform Engineer
Remote
Ready to be a Titan?
Play a pivotal role as the platform expert on our data team! As a Principal Platform Engineer, you will join the engineering team at ServiceTitan to improve our data products and build new ones. This is an exciting role for an engineer to come in and lead major feature development at a rapidly growing startup. We build for perfection, use the most modern tools, have an amazing culture, and love to solve complex problems. If you share the same values, you might find this the perfect company.
What You'll Do
Assess and recommend architecture frameworks, design and implement high-performance solutions to support data and analytical products.
Architect high availability, scalable and fault tolerant solutions
Lead implementation of modern data curation solutions to allow developers to quickly onboard new data sources and enhance existing data integrations.
Partner with teams and systems to develop tools to extract, transform, and load data from a wide variety of sources and destinations.
Evaluate and implement efficient distributed storage and query techniques.
Champion high-quality code with corresponding test coverage
Participate in regular code reviews and engage in constructive discussions
Participate in Design sessions across different teams
Design automation tools for monitoring and measuring data quality, with associated user interfaces.
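The data-quality automation mentioned above can be sketched as a small rule engine: columns, predicates, and a list of violations that would feed dashboards or alerts. All field and rule names below are hypothetical.

```python
def check_rows(rows, rules):
    """Run simple data-quality rules over rows, returning violations.

    Each rule is a (column, predicate, description) triple; a violation
    records the offending row index and the failed rule, which downstream
    tooling can surface in a dashboard or alert.
    """
    violations = []
    for i, row in enumerate(rows):
        for col, pred, desc in rules:
            if not pred(row.get(col)):
                violations.append({"row": i, "column": col, "rule": desc})
    return violations


rules = [
    ("job_id", lambda v: v is not None, "job_id must be present"),
    ("amount", lambda v: v is not None and v >= 0, "amount must be non-negative"),
]
rows = [
    {"job_id": 1, "amount": 99.5},
    {"job_id": None, "amount": 10.0},
    {"job_id": 3, "amount": -5.0},
]
bad = check_rows(rows, rules)
# bad flags row 1 (missing job_id) and row 2 (negative amount)
```

A production version would run rules inside the pipeline itself and emit metrics per rule rather than raw row lists.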
What You'll Bring:
10+ years of experience in Software Engineering / Data Engineering roles working in high traffic, fault tolerant, and highly available environments
Experience contributing to the architecture and design (architecture, design patterns, reliability and scaling) of systems
Experience building real-time data pipelines
Experience with Spark, Python, DBT, C#, SQL
Experience with Big Data Technologies (Snowflake, Athena, Pinot, Clickhouse, Flink etc.)
Experience with Streaming platforms like Kafka or Kinesis
Experience with latest Generative AI technologies for data engineering development (Cursor, Copilot etc)
B.S., M.S. or PhD degree in Computer Science or a related field
Be Human With Us:
Being human isn't about checking every box on a list. It's about the experiences we have, people we meet, and the perspectives we share. So, if you have the skills but are hesitant to apply because of your background, apply anyway. We need amazing people like you to help us challenge the conventional and think differently about the problems that we're solving. We're in this together. Come be human, with us.
What We Offer:
When you join our team, you're not just accepting a job. You're making a career move. Here's how we'll support you in doing some of the most impactful work of your career:
Flextime, recognition, and support for autonomous work: Flexible time off with ample learning and development opportunities to continue growing your career. We offer a comprehensive onboarding program, leadership training for Titans at all levels, and other programs and events. Great work is rewarded through Bonusly, peer-nominated awards, and more.
Holistic health and wellness benefits: Company-paid medical, dental, and vision (with 100% employer paid options and 90% coverage for dependents), FSA and HSA, 401k match, and telehealth options including memberships to One Medical.
Support for Titans at all stages of life: Parental leave and support, up to $20k in fertility services (i.e. IUI and IVF), surrogacy, and adoption reimbursement, on demand maternity support through Maven Maternity, free breast milk shipping through Maven Milk, pet insurance, legal advisory services, financial planning tools, and more.
At ServiceTitan, we celebrate individuality and uniqueness. We believe that the convergence of fresh perspectives and experiences from all walks of life is what makes our product and culture so great. We strongly encourage people from underrepresented groups to apply. We do not discriminate against employees based on race, color, religion, sex, national origin, gender identity or expression, age, disability, pregnancy (including childbirth, breastfeeding, or related medical condition), genetic information, protected military or veteran status, sexual orientation, or any other characteristic protected by applicable federal, state or local laws.
ServiceTitan is committed to fair and equitable compensation for all of our employees. We thoughtfully consider a wide range of factors when determining individual compensation.The expected salary range for this role for candidates residing in the United States is between $244,000 USD - $326,400 USD. Compensation for candidates residing outside the United States will vary by location and the specific salary range will be discussed during the hiring process. Actual compensation for an individual may vary depending on skills, performance over time, qualifications, experience, and location. In addition to the base salary, the total compensation package also includes an annual bonus, equity and a holistic suite of benefits.
Staff Data Engineer - Data Architect
Remote
About the Staff Data Engineer, Data Architect at Headspace:
At Headspace, our mission is to transform mental healthcare to improve the health and happiness of the world. Core to this mission is our ability to responsibly and ethically leverage data to provide personalized care to each of our members, meeting them where they are on the mental health continuum. We're looking for an experienced Data Architect who can also operate hands-on as a Staff Data Engineer. You will design and evolve our domain-based Enterprise Data Model (EDM), lead Master Data Management (MDM) initiatives, and build production-grade data pipelines in Python / PySpark. The ideal candidate is equally comfortable whiteboarding conceptual models, building and reviewing ETL jobs, and coaching engineering teams on data architecture best practices.
Location: We are currently hiring this role in San Francisco (hybrid), Los Angeles (remote), New York City (remote) and Seattle (remote). Candidates must permanently reside in the US full-time and be based in one of these cities.
What you will do:
Lead the Development of Scalable Data Infrastructure: Drive the architecture and implementation of cutting-edge PySpark data pipelines to ingest and transform diverse datasets into the organization's data lake within a fault-tolerant, robust system.
Set Design Patterns: Drive the creation and enforcement of standard conventions in code, architecture, schema design, and table design.
Architect World-Class Data Platforms: Design and lead the evolution of secure, compliant, and privacy-forward data warehousing platforms to support the unique demands of the healthcare industry.
Strategic Collaboration for Business Insights: Partner with analytics, product, and engineering leaders to ensure the data ecosystem provides actionable and reliable insights into critical business metrics.
Champion Data-Driven Leadership: Mentor other members of the DE and broader data team, particularly around dbt architecture and query performance. Foster a data-first culture that prioritizes excellence, innovation, and collaboration across teams.
Influence Organizational Strategy: Act as a technical thought leader, shaping the company's data strategy and influencing cross-functional roadmaps with data-centric solutions.
What you will bring:
7+ years in data engineering / architecture, with 2+ years leading EDM/MDM programs and a proven track record of leading high-impact initiatives at scale.
Proven ability to create and maintain domain-based enterprise data models (canonical, hub-and-spoke, data-product-oriented).
Deep expertise with data-modeling tools (erwin, ER/Studio, PowerDesigner, or equivalent) and modeling techniques (3NF, Dimensional, Data Vault, Anchor).
Production experience writing performant Python and PySpark code on distributed compute (Spark 3+, Delta Lake).
Strong SQL skills across columnar and relational engines (e.g., Snowflake, Redshift, Databricks SQL, Postgres).
Solid grasp of data-governance practices: lineage, glossaries, PII/PHI controls, and data-quality frameworks.
Ability to articulate architecture choices to both executive stakeholders and hands-on engineers.
Deep experience designing and optimizing real-time and batch ETL pipelines (preferably within dbt), employing best practices for scalability and reliability.
Systems thinker who can balance near-term delivery with long-term architecture vision.
Comfortable in highly collaborative, agile environments; able to mentor cross-functional teams.
Excellent written and verbal communication; able to translate complex data topics into plain language.
Bias for automation, documentation, and continuous improvement.
Nice-To-Haves:
Hands-on with Databricks platform (Unity Catalog, Delta Live Tables, MLflow).
dbt Core for transformation, tests, and metadata; dbt Semantic Layer experience is a plus.
Exposure to event streaming (Kafka, EventHub) and CDC tools.
Experience integrating with commercial MDM suites or building custom match-merge solutions.
Familiarity with cloud data-platform services on AWS (Terraform).
Background in data-privacy standards (GDPR, CCPA, HIPAA) and differential-privacy or tokenization techniques.
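For the custom match-merge item above, here's a minimal sketch of the core matching step using only the stdlib's `difflib`. A real MDM build would add blocking, multi-attribute scoring, and survivorship rules; the threshold and field names are illustrative.

```python
from difflib import SequenceMatcher


def name_similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1], case/whitespace-insensitive."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()


def match_merge(records, threshold=0.85):
    """Greedy match-merge: fold each record into the first golden record
    whose name is similar enough, otherwise start a new golden record.
    """
    golden = []
    for rec in records:
        for g in golden:
            if name_similarity(rec["name"], g["name"]) >= threshold:
                g["sources"].append(rec["source"])
                break
        else:
            golden.append({"name": rec["name"], "sources": [rec["source"]]})
    return golden


records = [
    {"name": "Jane Doe", "source": "crm"},
    {"name": "jane doe ", "source": "app"},
    {"name": "John Smith", "source": "crm"},
]
masters = match_merge(records)
# Two golden records; "Jane Doe" carries both source systems
```

Commercial MDM suites perform the same match step with probabilistic scoring across many attributes; the golden-record structure is what downstream domains consume.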
Pay & Benefits:
The anticipated new hire base salary range for this full-time position is $140,400-$224,250 + equity + benefits.
Our salary ranges are based on the job, level, and location, and reflect the lowest to highest geographic markets where we are hiring for this role within the United States. Within this range, individual compensation is determined by a candidate's location as well as a range of factors including but not limited to: unique relevant experience, job-related skills, and education or training.
Your recruiter will provide more details on the specific salary range for your location during the hiring process.
At Headspace, base salary is but one component of our Total Rewards package. We're proud of our robust package inclusive of: base salary, stock awards, comprehensive healthcare coverage, monthly wellness stipend, retirement savings match, lifetime Headspace membership, generous parental leave, and more. Additional details about our Total Rewards package will be provided during the recruitment process.
About Headspace
Headspace exists to provide every person access to lifelong mental health support. We combine evidence-based content, clinical care, and innovative technology to help millions of members around the world get support that's effective, personalized, and truly accessible whenever and wherever they need it.
At Headspace, our values aren't just what we believe, they're how we work, grow, and make an impact together. We live them daily: Make the Mission Matter, Iterate to Great, Own the Outcome, and Connect with Courage. These values shape our decisions, guide our collaborations, and define our culture. They're our shared commitment to building a more connected, human-centered team-one that's redefining how mental health care supports people today and for generations to come.
Why You'll Love Working Here:
A mission that matters-with impact you can see and feel
A culture that's collaborative, inclusive, and grounded in our values
The chance to shape what mental health care looks like next
Competitive pay and benefits that support your whole self
How we feel about Diversity, Equity, Inclusion and Belonging:
Headspace is committed to bringing together humans from different backgrounds and perspectives, providing employees with a safe and welcoming work environment free of discrimination and harassment. We strive to create a diverse & inclusive environment where everyone can thrive, feel a sense of belonging, and do impactful work together.
As an equal opportunity employer, we prohibit any unlawful discrimination against a job applicant on the basis of their race, color, religion, gender, gender identity, gender expression, sexual orientation, national origin, family or parental status, disability*, age, veteran status, or any other status protected by the laws or regulations in the locations where we operate. We respect the laws enforced by the EEOC and are dedicated to going above and beyond in fostering diversity across our workplace.
*Applicants with disabilities may be entitled to reasonable accommodation under the terms of the Americans with Disabilities Act and certain state or local laws. A reasonable accommodation is a change in the way things are normally done which will ensure an equal employment opportunity without imposing undue hardship on Headspace.
Please inform our Talent team by filling out this form if you need any assistance completing any forms or to otherwise participate in the application or interview process.
Headspace participates in the E-Verify Program.
Privacy Statement
All member records are protected according to our Privacy Policy. Further, while employees of Headspace (formerly Ginger) cannot access Headspace products/services, they will be offered benefits according to the company's benefit plan. To ensure we are adhering to best practice and ethical guidelines in the field of mental health, we take care to avoid dual relationships. A dual relationship occurs when a mental health care provider has a second, significantly different relationship with their client in addition to the traditional client-therapist relationship-including, for example, a managerial relationship.
As such, Headspace requests that individuals who have received coaching or clinical services at Headspace wait until their care with Headspace is complete before applying for a position. If someone with a Headspace account is hired for a position, please note their account will be deactivated and they will not be able to use Headspace services for the duration of their employment.
Further, if Headspace cannot find a way to resolve an ethical issue associated with a dual relationship, Headspace may need to take steps to ensure ethical obligations are being adhered to, including a delayed start date or a potential leave of absence. Such steps would be taken to protect both the former member, as well as any relevant individuals from their care team, from impairment, risk of exploitation, or harm.
For how we will use the personal information you provide as part of the application process, please see: ******************************************
Lead Data Scientist
Remote
May Mobility is transforming cities through autonomous technology to create a safer, greener, more accessible world. Based in Ann Arbor, Michigan, May develops and deploys autonomous vehicles (AVs) powered by our innovative Multi-Policy Decision Making (MPDM) technology that literally reimagines the way AVs think.
Our vehicles do more than just drive themselves - they provide value to communities, bridge public transit gaps and move people where they need to go safely, easily and with a lot more fun. We're building the world's best autonomy system to reimagine transit by minimizing congestion, expanding access and encouraging better land use in order to foster more green, vibrant and livable spaces. Since our founding in 2017, we've given more than 300,000 autonomy-enabled rides to real people around the globe. And we're just getting started. We're hiring people who share our passion for building the future, today, solving real-world problems and seeing the impact of their work. Join us.
Lead Data Scientist
May Mobility is experiencing a period of significant growth as we expand our autonomous shuttle and mobility services nationwide. We are seeking talented data scientists and machine learning engineers to develop automated methods for tagging data collected by our autonomous vehicles. This will enable us to generate valuable insights from our data, making it easily searchable for triaging issues, creating test sets, and building datasets for autonomy improvements. Join us and make a crucial impact on our development and business decisions!
Responsibilities
Work independently with cross-functional teams to develop software and system requirements.
Design, implement, and deploy state-of-the-art machine learning models.
Monitor the performance of the auto-tagging system and drive continuous improvement.
Lead team code quality activities including design and code reviews.
Communicate complex analytical findings and model performance metrics to both technical and non-technical stakeholders through clear visualizations and presentations.
Provide technical guidance to team members.
Skills
Expertise in deep learning, with hands-on experience in the design, training, and evaluation of a wide range of algorithms.
Ability to build and productionize machine learning models and large-scale systems.
Awareness of the latest advancements in the field, with the ability to translate innovative concepts into practical solutions for May.
Excellent problem-solving skills with a meticulous approach to model architecture and optimization.
Ability to provide individual and team mentorship, including technical leadership for complex projects.
Strong understanding of data labeling best practices, label consistency, and performance metrics specifically relevant to large-scale auto-tagging accuracy and dataset curation.
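Evaluating an auto-tagging system like the one described usually starts with per-tag precision and recall over labeled scenes. A minimal version, with hypothetical tag names standing in for real driving-scenario labels:

```python
def per_tag_precision_recall(predicted, actual):
    """Per-tag precision/recall for a multi-label auto-tagger.

    `predicted` and `actual` are parallel lists of tag sets, one pair
    per logged scene or clip.
    """
    tags = set().union(*predicted, *actual)
    scores = {}
    for tag in tags:
        tp = sum(1 for p, a in zip(predicted, actual) if tag in p and tag in a)
        fp = sum(1 for p, a in zip(predicted, actual) if tag in p and tag not in a)
        fn = sum(1 for p, a in zip(predicted, actual) if tag not in p and tag in a)
        scores[tag] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    return scores


predicted = [{"pedestrian", "left_turn"}, {"pedestrian"}, {"cut_in"}]
actual = [{"pedestrian"}, {"pedestrian"}, {"left_turn"}]
metrics = per_tag_precision_recall(predicted, actual)
```

Tracking these per-tag numbers over time is one concrete way to "monitor the performance of the auto-tagging system" as the responsibilities above describe.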
Qualifications and Experience
Required
B.S, M.S. or Ph.D. Degree in Engineering, Data Science, Computer Science, Math, or a related quantitative field.
10+ years of hands-on experience as a Data Scientist or ML Engineer with a strong focus on algorithmic design and deep learning.
Expert-level programming skills in Python with extensive use of modern deep learning frameworks like TensorFlow or PyTorch.
Demonstrated experience in building and deploying production-level machine learning systems from conception to delivery.
Experience working with multimodal data like visual data (images/video), structured perception and behavior outputs (e.g., agent tracks, vehicle state estimation, motion planner outputs).
Demonstrated expertise in databases for data extraction, transformation, and analysis.
Prior experience in mentoring and supporting junior engineers.
Desirable
Background in robotics or autonomous systems.
Experience with multi-modal deep learning models, transformers, vision-language models (VLMs), etc.
Experience with classifying driving maneuvers and traffic interactions using machine learning methods.
Solid understanding of ML deployment lifecycle, MLOps practices, and cloud computing platforms (e.g., AWS, GCP).
Expertise in PySpark/Apache Spark for handling large-scale data processing.
Physical Requirements
Standard office working conditions, which include but are not limited to:
Prolonged sitting
Prolonged standing
Prolonged computer use
Lifting up to 50 pounds
Remote role based out of Ann Arbor, MI.
Remote employees work primarily from home or an alternative work space.
Travel requirements - 0%
The salary range provided is based on a position located in the state of Michigan. Our salary ranges can vary across different locations in the United States.
Benefits and Perks
Comprehensive healthcare suite including medical, dental, vision, life, and disability plans. Domestic partners who have been residing together at least one year are also eligible to participate.
Health Savings and Flexible Spending Healthcare and Dependent Care Accounts available.
Rich retirement benefits, including an immediately vested employer safe harbor match.
Generous paid parental leave as well as a phased return to work.
Flexible vacation policy in addition to paid company holidays.
Total Wellness Program providing numerous resources for overall wellbeing.
Don't meet every single requirement? Studies have shown that women and/or people of color are less likely to apply to a job unless they meet every qualification. At May Mobility, we're committed to building a diverse, inclusive, and authentic workforce, so if you're excited about this role but your previous experience doesn't align perfectly with every qualification, we encourage you to apply anyway! You may be the perfect candidate for this or another role at May.
Want to learn more about our culture & benefits? Check out our website!
May Mobility is an equal opportunity employer. All applicants for employment will be considered without regard to race, color, religion, sex, national origin, age, disability, sexual orientation, gender identity or expression, veteran status, genetics or any other legally protected basis. Below, you have the opportunity to share your preferred gender pronouns, gender, ethnicity, and veteran status with May Mobility to help us identify areas of improvement in our hiring and recruitment processes. Completion of these questions is entirely voluntary. Any information you choose to provide will be kept confidential, and will not impact the hiring decision in any way. If you believe that you will need any type of accommodation, please let us know.
Note to Recruitment Agencies:
May Mobility does not accept unsolicited agency resumes. Furthermore, May Mobility does not pay placement fees for candidates submitted by any agency other than its approved partners.
Salary Range: $167,000-$190,000 USD
ETL Architect
Wisconsin jobs
Come Find Your Spark at Quartz!
The ETL Architect will be responsible for the architecture, design, and implementation of data integration solutions and pipelines for the organization. This position will partner with multiple areas in the Enterprise Data Management team and the business to translate business requirements into efficient and effective ETL implementations. This role will perform functional analysis, determine the appropriate data acquisition and ingestion methods, and design processes to populate various data platform layers. The ETL Architect will work with implementation stakeholders throughout the business to evaluate the state of data and construct solutions that reliably deliver data to enable analytics and reporting capabilities.
Skills this position will utilize on a regular basis:
Informatica PowerCenter
Expert knowledge of SQL development
Python
Benefits:
Opportunity to work with leading technology in the ever-changing, fast paced healthcare industry.
Opportunity to work across the organization interacting with business stakeholders.
Starting salary range based upon skills and experience: $107,500 - $134,400 - plus robust benefits package.
Responsibilities
Architects, designs, enhances, and supports delivery of ETL solutions.
Architects and designs data acquisition, ingestion, transformation, and load solutions.
Identifies, develops, and documents ETL solution requirements to meet business needs.
Facilitates group discussions and joins solution design sessions with technical subject matter experts.
Develops, implements, and maintains standards and ETL design procedures.
Contributes to the design of the data models, data flows, transformation specifications, and processing schedules.
Coordinates ETL solution delivery and supports data analysis and information delivery staff in the design, development, and maintenance of data implementations.
Consults and provides direction on ETL architecture and the implementation of ETL solutions.
Queries, analyzes, and interprets complex data stored in the systems of record, enterprise data warehouse, and data marts.
Ensures work includes necessary audit, HIPAA compliance, and security controls.
Data Management
Collaborates with infrastructure and platform administrators to establish and maintain a scalable and reliable data processing environment for the organization.
Identifies and triages data quality and performance issues from the ETL perspective and sees them through to resolution.
Tests and validates components of the ETL solutions to ensure successful end-to-end delivery.
Participates in support rotation.
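The acquisition-transform-load pattern in the responsibilities above can be sketched in a few lines. This is an illustrative toy only, using Python's built-in sqlite3; the table and column names are invented for the example, not Quartz's actual schema:

```python
import sqlite3

# Extract: load raw rows into a source table (in-memory DB for the sketch)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims_raw (member_id TEXT, amount_cents INTEGER)")
conn.executemany("INSERT INTO claims_raw VALUES (?, ?)",
                 [("M1", 1250), ("M1", 500), ("M2", 9900)])

# Transform + load: aggregate cents to dollars per member into a target table
conn.execute("""
    CREATE TABLE claims_summary AS
    SELECT member_id, SUM(amount_cents) / 100.0 AS total_dollars
    FROM claims_raw
    GROUP BY member_id
""")
for row in conn.execute("SELECT * FROM claims_summary ORDER BY member_id"):
    print(row)  # → ('M1', 17.5) then ('M2', 99.0)
```

A production pipeline would add the audit, HIPAA-compliance, and security controls called out above; the SQL shape of the transform stays the same.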
Qualifications
Bachelor's degree with 8+ years of experience translating business requirements into business intelligence solutions, including data visualization and analytics solution design and development in data warehouse and OLTP (Online Transaction Processing) environments, semantic layer modeling, and SQL programming.
OR associate degree with 11+ years of the experience described above.
OR high school equivalency with 14+ years of the experience described above.
Expert understanding of ETL concepts and commercially available enterprise data integration platforms (Informatica PowerCenter, Python)
Expert knowledge of SQL development
Expert knowledge of data warehousing concepts, design principles, associated data management and delivery requirements, and best practices
Expert problem solving and analytical skills
Ability to understand and communicate data management and integration concepts within IT and to the business, and to effectively interact with all internal and external parties, including vendors and contractors
Ability to manage multiple projects simultaneously
Ability to work independently, under pressure, and be adaptable to change
Inquisitive, seeking answers to questions without being asked
Hardware and equipment will be provided by the company, but candidates must have access to high-speed, non-satellite Internet to successfully work from home.
We offer an excellent benefit and compensation package, opportunity for career advancement and a professional culture built on the foundations of Respect, Responsibility, Resourcefulness and Relationships. To support a safe work environment, all employment offers are contingent upon successful completion of a pre-employment criminal background check.
Quartz values and embraces diversity and is proud to be an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, gender identity or expression, sexual orientation, age, status as a protected veteran, among other things, or status as a qualified person with disability.
Data & Evaluation Applied AI Scientist
Woburn, MA jobs
SES AI Corp. (NYSE: SES) is dedicated to accelerating the world's energy transition through groundbreaking material discovery and advanced battery management. We are at the forefront of revolutionizing battery creation, pioneering the integration of cutting-edge machine learning into our research and development. Our AI-enhanced, high-energy-density and high-power-density Li-Metal and Li-ion batteries are unique; they are the first in the world to utilize electrolyte materials discovered by AI. This powerful combination of "AI for science" and material engineering enables batteries that can be used across various applications, including transportation (land and air), energy storage, robotics, and drones.
To learn more about us, please visit: **********
What We Offer:
* A highly competitive salary and robust benefits package, including comprehensive health coverage and an attractive equity/stock options program within our NYSE-listed company.
* The opportunity to contribute directly to a meaningful scientific project-accelerating the global energy transition-with a clear and broad public impact.
* Work in a dynamic, collaborative, and innovative environment at the intersection of AI and material science, driving the next generation of battery technology.
* Significant opportunities for professional growth and career development as you work alongside leading experts in AI, R&D, and engineering.
* Access to state-of-the-art facilities and proprietary technologies used to discover and deploy AI-enhanced battery solutions.
What We Need:
The SES AI Prometheus team is seeking an exceptional Data & Evaluation Applied AI Scientist to serve as the domain expert ensuring that SES AI's complex battery-domain knowledge is correctly represented and validated within advanced AI systems, including LLM pipelines and multi-agent workflows. This role is vital for bridging the gap between raw battery materials knowledge and structured, AI-trainable data. As the Data & Model Quality Manager, you will focus on the integrity, structure, and fidelity of the knowledge embedded within our AI systems.
Essential Duties and Responsibilities:
* Data Curation & Validation
* Translate deep Battery Materials Knowledge and next-generation battery concepts into correctly structured, high-quality, AI-trainable data.
* Lead processes for rigorous data validation, cleaning, and annotation to ensure consistency and correctness across all datasets.
* Oversee the creation and management of benchmark datasets and design domain-specific multimodal evaluations to test model accuracy.
* AI System Quality & Correctness
* Partner closely with AI architecture and engineering teams to ensure the correctness, reliability, and scientific reasoning quality of models, including LLM creation and multi-agent orchestration.
* Implement techniques, including those inspired by reinforcement learning (RLHF), to tune and validate model behavior against established scientific principles.
* Ensure that resulting models accurately understand molecular chemistry, materials data, and complex scientific reasoning in the battery domain.
* Strategy & Collaboration
* Drive the application of Battery Informatics principles across all data pipelines and modeling efforts.
Education and/or Experience:
* Education: Ph.D. in Chemical Engineering with a focus on Lithium battery systems, Materials Science, or a closely related computational/domain field.
* Domain Expertise: Deep expertise in battery materials, particularly knowledge required to convert complex, real-world data into AI-trainable formats.
* Data Quality & Validation: Proven experience in data validation, annotation, and benchmark creation for complex scientific or engineering datasets.
* AI Exposure: Experience working with advanced AI systems, including familiarity with LLM pipelines and the principles of multi-agent orchestration.
* Applicable Background: Experience in roles such as Applied Scientist in Molecular/Materials AI or similar specialist roles focused on AI system quality in a scientific domain.
Preferred Qualifications:
* Advanced AI Techniques: Experience with specialized techniques used for model tuning and alignment, such as Reinforcement Learning from Human Feedback (RLHF).
* Industry Precedent: Previous experience in specialized environments like battery focus labs, materials data science groups, or AI4Science teams with a focus on agent pipeline building and model tuning (e.g., drawing from precedents like DeepMind or FAIR).
* Evaluation Design: Direct experience designing and executing domain-specific multimodal evaluations for complex AI models.
* Computational Focus: Experience as a Computational battery AI specialist.
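The evaluation-design work described above comes down to scoring a model against a curated benchmark. A minimal sketch, assuming a stand-in lookup "model" and invented benchmark items rather than SES AI's actual pipeline:

```python
def evaluate(model, benchmark):
    """Return the accuracy of `model` over (prompt, expected) pairs."""
    correct = sum(model(prompt) == expected for prompt, expected in benchmark)
    return correct / len(benchmark)

# Stand-in "model": a lookup table playing the role of an LLM pipeline
knowledge = {"LiFePO4 cathode?": "yes", "NaCl electrolyte?": "no"}

def model(question):
    return knowledge.get(question, "unknown")

benchmark = [("LiFePO4 cathode?", "yes"),
             ("NaCl electrolyte?", "no"),
             ("Li metal anode?", "yes")]
print(evaluate(model, benchmark))  # 2 of 3 items answered correctly
```

Real benchmark curation adds held-out splits, multimodal inputs, and domain-expert review of the expected answers; the scoring loop itself stays this simple.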
Lead Data Science Engineer
Boston, MA jobs
At DraftKings, AI is becoming an integral part of both our present and future, powering how work gets done today, guiding smarter decisions, and sparking bold ideas. It's transforming how we enhance customer experiences, streamline operations, and unlock new possibilities. Our teams are energized by innovation and readily embrace emerging technology. We're not waiting for the future to arrive. We're shaping it, one bold step at a time. To those who see AI as a driver of progress, come build the future together.
The Crown Is Yours
We are looking for a Lead Data Science Engineer to join our Daily Fantasy Sports reinvestment team, where we focus on understanding and optimizing how players engage with our DFS products over time. As a Lead Data Science Engineer, you will be responsible for building advanced models and algorithms, analyzing large-scale behavioral datasets, and driving measurable impact through experimentation and productionized solutions.
What you'll do as a Lead Data Science Engineer
Lead end-to-end modeling projects to improve customer engagement and retention, from ideation to production deployment.
Build, test, and optimize machine learning models to forecast user behavior, personalize promotions, and enhance Sportsbook product engagement.
Partner with engineers, analysts, product managers, and marketers to translate insights into scalable solutions embedded within customer-facing systems.
Mentor junior data scientists and share modeling and engineering best practices across the team.
Clearly communicate findings and the impact of your models to stakeholders to influence product and marketing strategy.
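The behavior-forecasting work above can be illustrated with a toy propensity score. The features and weights here are invented for illustration only; a real model would be trained on behavioral data, not hand-tuned:

```python
import math

def engagement_score(days_since_last_entry, contests_last_week):
    """Toy logistic score for how likely a user is to re-engage."""
    # Hand-picked illustrative weights; a trained model would learn these
    z = 1.5 - 0.2 * days_since_last_entry + 0.4 * contests_last_week
    return 1 / (1 + math.exp(-z))  # probability-like score in (0, 1)

# Recent, active user scores higher than a lapsed, inactive one
print(round(engagement_score(days_since_last_entry=2, contests_last_week=3), 3))
print(round(engagement_score(days_since_last_entry=30, contests_last_week=0), 3))
```

Scores like these feed the experimentation loop: segment users by predicted engagement, target reinvestment offers, and measure lift against a holdout.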
What you'll bring
Proven experience applying machine learning and statistical modeling to solve real-world business problems, ideally in marketing or customer lifecycle contexts.
Experience leading and coaching other data scientists
Strong proficiency in Python (or R) and experience working with large datasets using SQL and distributed computing platforms.
Ability to structure and execute data science projects and deliver business value through production-ready models.
Excellent communication and collaboration skills to work effectively across technical and non-technical teams.
A Bachelor's degree in a relevant field such as Computer Science, Statistics, Mathematics, or a related discipline.
Join Our Team
We're a publicly traded (NASDAQ: DKNG) technology company headquartered in Boston. As a regulated gaming company, you may be required to obtain a gaming license issued by the appropriate state agency as a condition of employment. Don't worry, we'll guide you through the process if this is relevant to your role.
The US base salary range for this full-time position is 140,800.00 USD - 176,000.00 USD, plus bonus, equity, and benefits as applicable. Our ranges are determined by role, level, and location. The compensation information displayed on each job posting reflects the range for new hire pay rates for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific pay range and how that was determined during the hiring process. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Fintech Data Science Engineer
Boston, MA jobs
At DraftKings, AI is becoming an integral part of both our present and future, powering how work gets done today, guiding smarter decisions, and sparking bold ideas. It's transforming how we enhance customer experiences, streamline operations, and unlock new possibilities. Our teams are energized by innovation and readily embrace emerging technology. We're not waiting for the future to arrive. We're shaping it, one bold step at a time. To those who see AI as a driver of progress, come build the future together.
The Crown Is Yours
About the Team
The Fintech Data Science team builds intelligent systems that power DK's financial trust and risk decisioning ecosystem. Our mission is to deliver real-time, explainable, and automated risk decisions that protect our customers and safeguard the business from fraud and payment risk. You'll contribute to DK's first agentic AI system, built to help Risk Operations analysts act faster on alerts by summarizing behaviors, recommending next steps, and ultimately taking autonomous actions under human supervision. As an L10 Data Science Engineer, you'll help design and deliver scalable ML systems that turn cutting-edge research prototypes into robust, production-grade solutions used daily by our operations teams.
What You'll Do
Design, implement, and test AI agents that enable intelligent decisioning and automation.
Build and maintain data pipelines and model services using Python and Databricks for both batch and real-time decisioning.
Develop core components for data preprocessing, feature engineering, and model monitoring.
Partner with senior engineers to design feedback loops that let systems continuously learn from analyst ratings and real-world outcomes.
Write clean, well-documented, and testable Python code aligned with DK's engineering standards.
Collaborate cross-functionally with Risk Operations, Product, and Engineering teams to understand workflows, improve usability, and ensure high system adoption.
Enhance system reliability and performance by identifying bottlenecks, optimizing pipelines, and scaling key components.
What You'll Bring
Academic or internship experience in machine learning, data science, and/or software engineering.
Strong proficiency in Python, with experience building ML and data pipelines.
Proficiency in SQL and experience with complex data manipulation.
Understanding of the model lifecycle, including validation, deployment, retraining, and monitoring.
Familiarity with ML Ops tools such as MLflow, Airflow, or Databricks Workflows (a plus).
Experience or exposure to Databricks, Spark, or other distributed data processing frameworks (a plus).
Exposure to agentic or LLM-based systems (e.g., LangChain, OpenAI APIs, or vector databases); understanding of prompt design, evaluation, and safety mechanisms for LLMs is a plus.
Eagerness to work in a fast-paced, experimental environment exploring next-generation agentic and generative AI applications.
A growth mindset and strong ability to collaborate in a diverse, cross-functional team environment.
Bachelor's degree in Computer Science, Data Science, Statistics, or a related field (Master's degree a plus).
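The vector-database exposure mentioned above rests on a simple idea: rank stored embeddings by similarity to a query vector. A minimal sketch with tiny invented vectors (production systems use learned embeddings and approximate nearest-neighbor indexes, not brute force):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 3-d "embeddings" of past risk alerts (illustrative only)
store = {
    "chargeback alert": [0.9, 0.1, 0.0],
    "bonus abuse alert": [0.1, 0.8, 0.3],
    "login anomaly": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of a new alert to triage
best = max(store, key=lambda k: cosine(query, store[k]))
print(best)  # → chargeback alert
```

Retrieving the most similar past alerts is how an agentic system grounds its summaries and next-step recommendations in prior analyst decisions.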
Join Our Team
We're a publicly traded (NASDAQ: DKNG) technology company headquartered in Boston. As a regulated gaming company, you may be required to obtain a gaming license issued by the appropriate state agency as a condition of employment. Don't worry, we'll guide you through the process if this is relevant to your role.
The US base salary range for this full-time position is 104,000.00 USD - 130,000.00 USD, plus bonus, equity, and benefits as applicable. Our ranges are determined by role, level, and location. The compensation information displayed on each job posting reflects the range for new hire pay rates for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific pay range and how that was determined during the hiring process. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Lead Data Science Engineer, Personalization
Boston, MA jobs
At DraftKings, AI is becoming an integral part of both our present and future, powering how work gets done today, guiding smarter decisions, and sparking bold ideas. It's transforming how we enhance customer experiences, streamline operations, and unlock new possibilities. Our teams are energized by innovation and readily embrace emerging technology. We're not waiting for the future to arrive. We're shaping it, one bold step at a time. To those who see AI as a driver of progress, come build the future together.
The Crown Is Yours
As a Lead Data Science Engineer, Personalization, you will play a pivotal role in driving the strategic direction and technical execution of creating data-driven, predictive, and personalized customer experiences. You will lead initiatives and execute alongside a team of talented data scientists and engineers, leveraging your technical expertise and leadership skills to deliver impactful solutions. You'll work across the full stack of machine learning development, including model training, evaluation, deployment, and serving.
What You'll Do
Lead initiatives related to Casino & Sportsbook Personalization and mentor a team of data scientists and engineers to achieve high-impact business goals.
Develop and implement advanced machine learning models and algorithms to solve complex business problems, leveraging techniques such as supervised learning, reinforcement learning, and generative AI.
Work with stakeholders to understand business and customer-facing problems and translate these into machine learning problems.
Collaborate with cross-functional teams to integrate data science solutions into production systems.
Drive the data strategy by identifying key data needs, ensuring data quality, and developing data infrastructure.
Communicate findings and recommendations to senior leadership to influence strategic decision-making.
What You'll Bring
A Master's degree or PhD in a relevant field, such as Computer Science, Statistics, Mathematics, or Data Science, is preferred.
At least 5 years of experience in machine learning and statistical modeling, with a proven track record of leading successful projects.
Proficiency in programming languages such as Python, and experience with data manipulation and visualization tools.
Experience with personalization algorithms and techniques, and a strong understanding of their application in content delivery.
Familiarity with user behavior analysis and its integration into content and product strategies.
Strong leadership skills with the ability to mentor and coach individual contributors.
Excellent communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Join Our Team
The US base salary range for this full-time position is 140,800.00 USD - 176,000.00 USD, plus bonus, equity, and benefits as applicable. Our ranges are determined by role, level, and location. The compensation information displayed on each job posting reflects the range for new hire pay rates for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific pay range and how that was determined during the hiring process. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Software Engineer - May Graduates
Your Role
We're looking for a Software Engineer I to join our product teams, which are focused on building software to support teams, athletes, and fans across sports through products such as video analytics, livestreaming, ticketing, and organizational management tools.
In this role, you will:
Collaborate. By working closely with a cross-functional team across Engineering, Quality, Product, Design, and Scrum, you'll help improve our web and mobile products.
Deliver full-stack web and mobile features. We iterate rapidly and deploy changes to the product hundreds of times a day across our engineering team.
Propose new solutions. You'll have the opportunity to solve technical challenges and provide guidance to less experienced engineers.
For this role, we're currently considering candidates who live within a commuting distance of our offices in Lincoln, NE or Lexington, KY or are willing to relocate.
Must-Haves
Exposure to mature, full-stack web application code. You've completed at least one internship building across many levels of a web application, from client-side code down to the database.
A collaborative, team-first mindset. You know building excellent software is a team effort, and you're willing to collaborate with others to get to the best outcome, whether that means providing input in technical discussions, pitching in when a teammate needs a hand, or giving quality feedback in code review.
Experience independently navigating uncertainty. You're used to working with many possible implementation options and know how to identify the one that pragmatically balances quality, consistency, and delivering immediate customer value.
Curiosity. You've picked up new technologies and domains on the job and know what form of learning helps you most. Working across myriad layers of the stack and multiple products energizes you.
Nice-to-Haves
Professional background in C#, React, MongoDB, and/or AWS. Adjacent languages, frameworks, and services used at scale are relevant experiences, but a direct background in our core technologies is a plus.
Experience working with hybrid teams. Our engineering team is spread across the U.S., with a combination of people working in the office and remotely. A background working with global teams isn't a must, but it will help you adapt more quickly to Hudl's culture.
Our Role
Champion work-life harmony. We'll give you the flexibility you need in your work life (e.g., flexible vacation time, company-wide holidays and timeout (meeting-free) days, remote work options and more) so you can enjoy your personal life too.
Guarantee autonomy. We have an open, honest culture and we trust our people from day one. Your team will support you, but you'll own your work and have the agency to try new ideas.
Encourage career growth. We're lifelong learners who encourage professional development. We'll give you tons of resources and opportunities to keep growing.
Provide an environment to help you succeed. We've invested in our offices, designing incredible spaces with our employees in mind. But whether you're at the office or working remotely, we'll provide you the tech stack and hardware to do your best work.
Support your mental and physical health. We care about our employees' wellbeing. Our Employee Assistance Program, employee resource groups and fitness partner Peerfit have you covered.
Cover your medical insurance. We have multiple plans to pick from to ensure you'll have the coverage you (and your dependents) want, including vision, dental, fertility healthcare and family forming benefits.
Contribute to your 401(k). Yep, that's free money. We'll match up to 4% of your own contribution.
Compensation
The base salary range for this role is displayed below; starting salaries will typically fall near the middle of this range.
We make compensation decisions based on an individual's experience, skills and education in line with our internal pay equity practices.
Base Salary Range: $76,000-$127,000 USD
Inclusion at Hudl
Hudl is an equal opportunity employer. Through our actions, behaviors and attitude, we'll create an environment where everyone, no matter their differences, feels like they belong.
We offer resources to ensure our employees feel safe bringing their authentic selves to work, including employee resource groups and communities. But we recognize there's ongoing work to be done, which is why we track our efforts and commitments in annual inclusion reports.
We also know imposter syndrome is real and the confidence gap can get in the way of meeting spectacular candidates. Please don't hesitate to apply-we'd love to hear from you.
Privacy Policy
Hudl Applicant and Candidate Privacy Policy