
Data Engineer jobs at Experfy

- 1172 jobs
  • Cloud Data Engineer

    GHR Healthcare 3.7 company rating

    Columbus, OH

    This is a 6-month contract and could be two different positions, or one if someone has experience in both. This is a hybrid on-site role, so candidates need to be local. Cloud Data Engineer at the S4 level. · Minimum 5 years of hands-on cloud data engineering experience (especially Azure, Databricks, and MS Fabric) and 10 to 15 years overall. · Hands-on experience with ELT & ETL pipeline development, data modeling, AI/ML pipeline development, and Unity Catalog & Purview engineering. · Azure cloud certifications preferred.
    $93k-135k yearly est. 3d ago
  • Platform Engineer (Contract/Hybrid in St. Louis)

    Optomi 4.5 company rating

    Saint Louis, MO

    Platform Engineer (Full Time, Contract; Hybrid in St. Louis, Missouri). Optomi, in partnership with one of our clients, is seeking a highly skilled Platform Engineer Lead to support and enhance the Azure and Databricks platforms for a major enterprise client. In this role, you'll oversee day-to-day platform operations, lead both onshore and offshore engineers, and ensure the reliability, scalability, and security of cloud-native environments. You'll build and maintain Terraform-based infrastructure, manage Kubernetes/Helm deployments, optimize CI/CD pipelines in Azure DevOps, and support data engineering tools including Databricks, ADF, Airflow, and Data Lake. You'll manage IAM, AD groups, VMs, and Key Vault, while partnering with architecture and business teams to translate requirements into stable, production-ready solutions. This role offers a blend of hands-on engineering and technical leadership within a fast-growing cloud environment.

    Responsibilities:
    - Lead day-to-day operations of the Azure and Databricks platforms, ensuring reliability, performance, and security.
    - Build, maintain, and improve cloud infrastructure using Terraform, Kubernetes/Helm, Docker, and cloud-native tooling.
    - Manage and optimize CI/CD pipelines in Azure DevOps, supporting deployments, automation, and release processes.
    - Oversee IAM, Azure AD, RBAC, AD groups, service principals, Key Vault, and platform security governance.
    - Partner with architecture, data engineering, and business teams to translate requirements into scalable, production-ready solutions.
    - Provide technical leadership to onshore and offshore engineers: assigning work, reviewing code, mentoring, and ensuring best practices.
    - Support additional platform initiatives such as Dynatrace setup, monitoring enhancements, or environment expansions.

    Apply today if your background includes:
    - 8+ years of hands-on platform engineering experience supporting large-scale Azure environments and core cloud services.
    - Strong production experience with Terraform, including building reusable modules and managing IaC for complex cloud environments.
    - Deep experience with Kubernetes, Helm, and Docker, with the ability to build, expand, and maintain containerized platforms.
    - Solid platform administration background with Databricks, Azure Data Factory, Airflow, and Data Lake technologies.
    - Proven ability to design, manage, and optimize CI/CD pipelines in Azure DevOps, with scripting skills in Python, PowerShell, or Bash.
    - Hands-on experience with IAM, Azure AD, RBAC, AD groups, service principals, Key Vault, and cloud security best practices.
    - Bonus: experience with Dynatrace, Unity Catalog, or Active Directory group management, plus prior leadership supporting onshore/offshore engineering teams.
    $62k-87k yearly est. 2d ago
  • Senior Data Engineer - Platform Enablement

    SoundCloud 4.1 company rating

    Remote

    SoundCloud empowers artists and fans to connect and share through music. Founded in 2007, SoundCloud is an artist-first platform empowering artists to build and grow their careers by providing them with the most progressive tools, services, and resources. With over 400 million tracks from 40 million artists, the future of music is SoundCloud. We're looking for a Senior Data Engineer to join our growing Platform Enablement team. This is a central engineering team that supports various initiatives around SoundCloud, including marketing technology, privacy & consent, and observability, across web and mobile devices. As an engineer on this team, you can expect a fast-paced environment and a variety of project types. Great communication skills are a must!

    Key Responsibilities:
    - Develop and optimize SQL data models and queries for analytics, reporting, and operational use cases.
    - Design and maintain ETL/ELT workflows using Apache Airflow, ensuring reliability, scalability, and data integrity.
    - Collaborate with analysts and business teams to translate data needs into efficient, automated data pipelines and datasets.
    - Own and enhance data quality and validation processes, ensuring accuracy and completeness of business-critical metrics.
    - Build and maintain reporting layers, supporting dashboards and analytics tools (e.g. Looker or similar).
    - Troubleshoot and tune SQL performance, optimizing queries and data structures for speed and scalability.
    - Contribute to data architecture decisions, including schema design, partitioning strategies, and workflow scheduling.
    - Mentor junior engineers, advocate for best practices, and promote a positive team culture.

    Experience and Background:
    - 7+ years of experience in data engineering, analytics engineering, or similar roles.
    - Expert-level SQL skills, including performance tuning, advanced joins, CTEs, window functions, and analytical query design.
    - Proven experience with Apache Airflow (designing DAGs, scheduling, task dependencies, monitoring, Python).
    - Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.).
    - Knowledge of data governance, schema management, and versioning best practices.
    - Understanding of observability practices: logging, metrics, tracing, and incident response.
    - Experience deploying and managing services in cloud environments, preferably GCP or AWS.
    - Excellent communication skills and a collaborative mindset.

    The salary range for this role is $160,000 - $210,000 annually. The final salary offered will be determined based on relative experience, skills, internal equity, and location. We also offer a generous total rewards program - read more about additional benefits and perks below!

    About us:
    - We are a multinational company with offices in the US (New York and Los Angeles), Germany (Berlin), and the UK (London).
    - We provide a flexible work culture that offers the opportunity to collaborate and connect in person at our offices, as well as accommodating work from home.
    - We are deeply committed to ensuring diversity, equity, and inclusion at all levels of our organization and fostering a community where everyone's voice, perspective, and experience is respected and heard.
    - We believe a strong team is made by investing in employees through mentorship, workshops, and enrichment opportunities.

    Benefits:
    - Comprehensive health benefits including medical, dental, and vision plans, as well as mental health resources.
    - Robust 401k program.
    - Employee Equity Plan.
    - Generous professional development allowance. Interested in a gym membership, photography course, or book? We have a Creativity and Wellness benefit!
    - Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually.
    - 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted, and foster children.
    - Various snacks, goodies, and 2 free lunches weekly when at the office.

    Diversity, Equity and Inclusion at SoundCloud: SoundCloud is for everyone. Diversity and open expression are fundamental to our organization; they help us lead what's next in music by understanding and empowering our creators and fans, no matter their identity. We acknowledge the challenges in the music industry, and strive to foster an inclusive culture where everyone can contribute respectfully and thrive, especially the historically marginalized communities that many of our creators, fans, and SoundClouders identify with. We are dedicated to creating an inclusive environment at SoundCloud for everyone, regardless of gender identity, sexual orientation, race, ethnicity, migration background, national origin, age, disability status, or caregiver status. At SoundCloud you can find your community or elevate your allyship by joining a Diversity Resource Group. Diversity Resource Groups are employee-organized groups focused on supporting and promoting the interests of a particular underrepresented community in order to build a more inclusive culture at SoundCloud. Anyone can join, whether you share the identity or strive to be an ally.
    $160k-210k yearly Auto-Apply 31d ago
  • Staff Data Scientist / Machine Learning Engineer - Listing Quality

    Faire 3.8 company rating

    San Francisco, CA

    Faire is an online wholesale marketplace built on the belief that the future is local - independent retailers around the globe are doing more revenue than Walmart and Amazon combined, but individually, they are small compared to these massive entities. At Faire, we're using the power of tech, data, and machine learning to connect this thriving community of entrepreneurs across the globe. Picture your favorite boutique in town - we help them discover the best products from around the world to sell in their stores. With the right tools and insights, we believe that we can level the playing field so that small businesses everywhere can compete with these big box and e-commerce giants. By supporting the growth of independent businesses, Faire is driving positive economic impact in local communities, globally. We're looking for smart, resourceful, and passionate people to join us as we power the shop local movement. If you believe in community, come join ours.

    About this role: Faire leverages the power of machine learning and data insights to revolutionize the wholesale industry, enabling local retailers to compete against giants like Amazon and big box stores. At Faire, the Data Science team is responsible for creating and maintaining a diverse range of algorithms and models that power our marketplace. We are dedicated to building machine learning models that help our customers thrive. As the Listing Quality lead within the Brand Data Science team, you will be responsible for improving the quality of product listings to help retailers find and evaluate products on Faire. You will use ML and AI to tackle critical challenges, such as enhancing image and text quality, extracting structured product attributes, and accurately identifying duplicates and product variants. You will leverage deep learning, multi-modal LLMs, and human-in-the-loop training to create high-performance solutions. This space has been evolving rapidly with advancements in AI, and you will be at the forefront of applying the latest technology to drive real-world impact. You will mentor and guide the work of more junior Data Scientists, while also executing as an individual contributor. You will define the ML roadmap and be the Data Science lead within the cross-functional Listing Quality pod, working with product, design, engineering, analytics, and operations to solve problems end-to-end.

    What you'll do:
    * Drive data science vision, strategy, and execution on Listing Quality, using ML and AI solutions to improve the quality of Faire's product listings.
    * Use deep learning, LLM fine-tuning, and human-in-the-loop training to automatically detect and address issues with high accuracy.
    * Mentor and guide the work of more junior Data Scientists.
    * Act as a lead on the cross-functional Listing Quality pod, thinking end-to-end about brand and retailer experiences.

    Qualifications:
    * 5+ years of industry experience using machine learning to solve real-world problems
    * Experience with relevant business problems (e.g. e-commerce)
    * Experience with relevant technical methods (e.g. LLM fine-tuning, deep learning, or human-in-the-loop machine learning)
    * Strong programming skills
    * An excitement and willingness to learn new tools and techniques
    * Demonstrated ability to mentor Senior Data Scientists, develop team strategy, and independently lead model development
    * Strong communication skills and the ability to work in a highly cross-functional team

    Great to Haves:
    * Master's or PhD in Computer Science, Statistics, or related STEM fields is highly recommended
    * Previous experience in listing quality for e-commerce
    * Previous experience in supervised fine-tuning of multi-modal LLMs
    * Experience deploying and optimizing LLM inference systems at scale (10B+ tokens), with a focus on cost efficiency and product impact

    Salary Range (San Francisco): The pay range for this role is $224,000 to $308,000 per year. This role will also be eligible for equity and benefits. Actual base pay will be determined based on permissible factors such as transferable skills, work experience, market demands, and primary work location. The base pay range provided is subject to change and may be modified in the future.

    Hybrid: Faire employees currently go into the office 2 days per week on Tuesdays and Thursdays. Effective starting in January 2026, employees will be expected to go into the office on a third flex day of their choosing (Monday, Wednesday, or Friday). Additionally, hybrid in-office roles will have the flexibility to work remotely up to 4 weeks per year. Specific Workplace and Information Technology positions may require onsite attendance 5 days per week, as indicated in the job posting. Applications for this position will be accepted for a minimum of 30 days from the posting date.

    Why you'll love working at Faire:
    * We are entrepreneurs: Faire is being built for entrepreneurs, by entrepreneurs. We believe entrepreneurship is a calling and our mission is to empower entrepreneurs to chase their dreams. Every member of our team is taking part in the founding process.
    * We are using technology and data to level the playing field: We are leveraging the power of product innovation and machine learning to connect brands and boutiques from all over the world, building a growing community of more than 350,000 small business owners.
    * We build products our customers love: Everything we do is ultimately in the service of helping our customers grow their business because our goal is to grow the pie - not steal a piece from it. Running a small business is hard work, but using Faire makes it easy.
    * We are curious and resourceful: Inquisitive by default, we explore every possibility, test every assumption, and develop creative solutions to the challenges at hand. We lead with curiosity and data in our decision making, and reason from a first principles mentality.

    Faire was founded in 2017 by a team of early product and engineering leads from Square. We're backed by some of the top investors in retail and tech, including Y Combinator, Lightspeed Venture Partners, Forerunner Ventures, Khosla Ventures, Sequoia Capital, Founders Fund, and DST Global. We have headquarters in San Francisco and Kitchener-Waterloo, and a global employee presence across offices in Toronto, London, and New York. To learn more about Faire and our customers, you can read more on our blog. Faire provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, genetics, sexual orientation, gender identity, or gender expression. Faire is committed to providing access, equal opportunity, and reasonable accommodation for individuals with disabilities in employment, its services, programs, and activities. Accommodations are available throughout the recruitment process, and applicants with a disability may request to be accommodated throughout the recruitment process. We will work with all applicants to accommodate their individual accessibility needs. To request reasonable accommodation, please fill out our Accommodation Request Form (**************************

    Privacy: For information about the type of personal data Faire collects from applicants, as well as your choices regarding the data collected about you, please visit Faire's Privacy Notice (******************************
    $224k-308k yearly Auto-Apply 56d ago
  • Data Scientist

    Nextdoor 4.1 company rating

    San Francisco, CA

    #TeamNextdoor Nextdoor (NYSE: NXDR) is the essential neighborhood network. Neighbors, public agencies, and businesses use Nextdoor to connect around local information that matters in more than 340,000 neighborhoods across 11 countries. Nextdoor builds innovative technology to foster local community, share important news, and create neighborhood connections at scale. Download the app and join the neighborhood at nextdoor.com.

    Meet Your Future Neighbors: As a Data Scientist at Nextdoor, you'll collect, organize, interpret, and analyze statistical and behavioral data to support the design, improvement, and scaling of the Company's core products. In this role, you'll develop and execute A/B tests and other experimentation frameworks to evaluate the impact of product changes on key metrics such as engagement and retention. At Nextdoor, we offer a warm and inclusive work environment that embraces a hybrid employment model, blending an in-office presence and work-from-home experience for our valued employees.

    The Impact You'll Make: If you want the challenge of fast-paced growth, the satisfaction of seeing your work come to life, and the pride in helping grow a world-class team, this is the place for you. Your responsibilities will include:
    - Communicate insights through advanced data visualization to influence strategic decision-making and product roadmaps
    - Conduct causal inference studies to deeply understand the Company's user behavior, engagement patterns, dynamics of local communities, and acquisition patterns
    - Partner with cross-functional teams (including Product, Engineering, and User Experience Research teams) to identify opportunities, solve complex problems, and inform the development of features that foster meaningful neighbor-to-neighbor connections
    - Apply statistical methods, machine learning, and quantitative analysis to conduct exploratory analysis, develop predictive models, and inform operational strategies
    - Apply data mining and data modeling to extract and analyze information from large structured and unstructured datasets
    - Build scalable data solutions, including Extract, Transform, Load (ETL) pipelines and distributed data systems, to process and analyze large and complex datasets efficiently

    What You'll Bring To The Team: Master's degree or foreign equivalent in Computer Science, Information Systems, Business Analytics, Engineering, or a closely related quantitative discipline, plus 3 years of experience in the position offered, as a Data Scientist, or in a closely related position in data sciences. In the alternative, will accept a Bachelor's degree in the fields above followed by five (5) years of progressive, post-bachelor's experience in the positions specified above. Must have experience in the following:
    - Designing and assessing A/B experiments accurately to evaluate the impact of product and feature updates, providing data-driven insights to guide strategic decision-making;
    - Conducting deep analysis, utilizing scripting languages such as Python, into user behavior, product features, and the content ecosystem, generating actionable business insights for strategic improvement initiatives;
    - Developing a metrics framework to assess product health, monitoring core metrics, and analyzing the underlying causes of metric fluctuations;
    - Designing, building, and deploying scalable, reliable ETL pipelines and dashboards, along with processing frameworks to efficiently analyze large and complex datasets using SQL;
    - Leveraging statistical methods, quantitative analysis, and machine learning to conduct in-depth analyses and deliver key strategic insights; and
    - Partnering with engineers and product stakeholders to solve ambiguous business problems with a structured analytics framework and deliver product insights and strategy

    Rewards: Compensation, benefits, perks, and recognition programs at Nextdoor come together to create our total rewards package. Compensation will vary depending on your relevant skills, experience, and qualifications. Compensation may also vary by geography. The starting salary for this role is expected to range from $194,834 - $236,000/year on an annualized basis, or potentially greater in the event that your 'level' of proficiency exceeds the level expected for the role. We expect to award a meaningful equity grant for this role. With quarterly vesting, your first vest date will take place within 3 months of your start date. When it comes to benefits, we have you covered! Nextdoor employees can choose between a variety of health plans, including a 100% covered employee-only plan option, and we also provide a OneMedical membership for concierge care. At Nextdoor, we empower our employees to build stronger local communities. To create a platform where all feel welcome, we want our workforce to reflect the diversity of the neighbors we serve. We encourage everyone interested in our mission to apply. We do not discriminate on the basis of race, gender, religion, sexual orientation, age, or any other trait that unfairly targets a group of people. In accordance with the San Francisco Fair Chance Ordinance, we always consider qualified applicants with arrest and conviction records.
For information about our collection and use of applicants' personal information, please see Nextdoor's Personnel Privacy Notice, found here. #LI-DNI
    $194.8k-236k yearly Auto-Apply 60d+ ago
  • Data Scientist

    Nextdoor 4.1 company rating

    San Francisco, CA

    #TeamNextdoor Nextdoor (NYSE: NXDR) is the essential neighborhood network. Neighbors, public agencies, and businesses use Nextdoor to connect around local information that matters in more than 340,000 neighborhoods across 11 countries. Nextdoor builds innovative technology to foster local community, share important news, and create neighborhood connections at scale. Download the app and join the neighborhood at nextdoor.com.

    Meet Your Future Neighbors: As a Data Scientist 4 at Nextdoor, you'll apply statistical methods, machine learning, and quantitative analysis to conduct exploratory analysis, develop predictive models, and inform operational strategies. In this role, you'll apply data mining and data modeling to extract and analyze information from large structured and unstructured datasets. At Nextdoor, we offer a warm and inclusive work environment that embraces a hybrid employment model, blending an in-office presence and work-from-home experience for our valued employees.

    The Impact You'll Make: If you want the challenge of fast-paced growth, the satisfaction of seeing your work come to life, and the pride in helping grow a world-class team, this is the place for you. Your responsibilities will include:
    - Collect, organize, interpret, and analyze statistical and behavioral data to support the design, improvement, and scaling of the Company's core products
    - Develop and execute A/B tests and other experimentation frameworks to evaluate the impact of product changes on key metrics such as engagement and retention
    - Communicate insights through advanced data visualization to influence strategic decision-making and product roadmaps
    - Conduct causal inference studies to deeply understand the Company's user behavior, engagement patterns, dynamics of local communities, and acquisition patterns
    - Build scalable data solutions, including Extract, Transform, Load (ETL) pipelines and distributed data systems, to process and analyze large and complex datasets efficiently
    - Partner with cross-functional teams (including Product, Engineering, and User Experience Research teams) to identify opportunities, solve complex problems, and inform the development of features that foster meaningful neighbor-to-neighbor connections

    What You'll Bring To The Team: Master's degree or foreign equivalent in Computer Science, Information Systems, Engineering, or a closely related quantitative discipline, plus 3 years of experience in the position offered, as a Data Scientist, or in a closely related position in data sciences. In the alternative, will accept a Bachelor's degree in the fields above followed by five (5) years of progressive, post-bachelor's experience in the positions specified above. Must have experience in the following:
    - Utilizing advanced statistical methods and quantitative techniques such as clustering, regression, pattern recognition, and inferential statistics to conduct exploratory analysis, develop and improve predictive models, and provide actionable recommendations to optimize product initiatives and generate data-driven business strategies;
    - Leveraging Python, SQL, and statistical tools such as MATLAB, R, and SAS to analyze large, complex datasets through data mining, manipulation, analysis, and modeling, ensuring data reliability to drive business growth;
    - Designing trustworthy experimentation and analyzing complex product A/B testing results to evaluate the effectiveness of product strategies and provide data-driven insights to guide decision-making;
    - Designing, constructing, and deploying scalable ETL (extraction, transformation, and loading) pipelines and data processing frameworks to develop metrics and dashboards that drive data-informed product strategy;
    - Leading the execution of key ecosystem strategies, including developing new capabilities like customer segment frameworks, to drive growth and optimize product performance;
    - Utilizing advanced data analysis to generate strategic insights, assessing product team performance and impact to inform decision-making and guide improvements;
    - Collaborating with cross-functional teams (product, design, engineering, marketing, and operations) to apply advanced analytics in support of product development, ensuring alignment with business goals; and
    - Utilizing strong communication skills to simplify complex problems and present clear, compelling narratives to diverse audiences, including executives

    Rewards: Compensation, benefits, perks, and recognition programs at Nextdoor come together to create our total rewards package. Compensation will vary depending on your relevant skills, experience, and qualifications. Compensation may also vary by geography. The starting salary for this role is expected to range from $214,000 - $225,000/year on an annualized basis, or potentially greater in the event that your 'level' of proficiency exceeds the level expected for the role. We expect to award a meaningful equity grant for this role. With quarterly vesting, your first vest date will take place within 3 months of your start date. When it comes to benefits, we have you covered!
Nextdoor employees can choose between a variety of health plans, including a 100% covered employee only plan option, and we also provide a OneMedical membership for concierge care. At Nextdoor, we empower our employees to build stronger local communities. To create a platform where all feel welcome, we want our workforce to reflect the diversity of the neighbors we serve. We encourage everyone interested in our mission to apply. We do not discriminate on the basis of race, gender, religion, sexual orientation, age, or any other trait that unfairly targets a group of people. In accordance with the San Francisco Fair Chance Ordinance, we always consider qualified applicants with arrest and conviction records. For information about our collection and use of applicants' personal information, please see Nextdoor's Personnel Privacy Notice, found here. #LI-DNI
    $214k-225k yearly Auto-Apply 46d ago
  • Big Data Engineer

    Throtle 4.1 company rating

    Red Bank, NJ

    Replies within 24 hours
    Benefits: 401(k), 401(k) matching, company parties, competitive salary, dental insurance, flexible schedule, free food & snacks, health insurance, opportunity for advancement, paid time off, parental leave, training & development, vision insurance

    Big Data Engineer (Hybrid position)
    Job Summary: As a Big Data Engineer at Throtle, you will be responsible for designing and developing complex software systems that integrate with Throtle's big data processing solutions. You will work closely with cross-functional teams to architect scalable, secure, and maintainable software systems that meet the needs of our business.

    Duties/Responsibilities:
    · Design and develop software architectures for large-scale data processing and analytics platforms
    · Collaborate with data engineers to design and implement data pipelines and transformations using big data technologies such as Hadoop, Spark, Kafka, and related ecosystems
    · Develop and maintain complex SQL queries for data extraction and reporting, and optimize query performance and scalability
    · Design and develop software components that integrate with our identity graphs, client data, and keying processes
    · Work with cross-functional teams to ensure software systems meet business requirements and are scalable for future growth
    · Develop and maintain technical documentation for software architectures and designs
    · Ensure data integrity and quality in the development process
    · The position requires that the individual understands all regulations and laws applicable to their assigned roles and responsibilities. Additionally, the individual will be responsible for the development, implementation, and regular maintenance of policies and procedures that govern the work of assigned roles and responsibilities, including compliance with the security requirements of ePHI.

    Required Skills and Abilities:
    · Strong understanding of software design patterns, architecture principles, and big data technologies
    · Proficiency in programming languages such as Java, Python, Scala, or similar
    · Experience with large-scale data processing and analytics platforms, including Hadoop, Spark, Kafka, and related ecosystems
    · Knowledge of distributed computing concepts and experience with cloud-based infrastructure (AWS)
    · Strong analytical and problem-solving skills to address complex software architecture challenges
    · Ability to work effectively with cross-functional teams and communicate technical designs and solutions to non-technical stakeholders

    Education and Experience:
    · Bachelor's Degree in Computer Science, Engineering, or a related field
    · Master's Degree or certification in Software Architecture or a related field preferred but not required
    · 5+ years of professional experience in software development, architecture, or a related field
    · Experience in data modeling, relational and NoSQL databases, SQL, and programming languages for building big data pipelines and transformations

    About Throtle: Throtle is a leading identity company trusted by the world's top brands and agencies, located in Red Bank, NJ. At Throtle, we empower brands at scale with true individual-based marketing using a data-centric identity and onboarding approach. Throtle is a company that truly values its employees and their work-life balance. We offer a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being:
    · Competitive compensation
    · Comprehensive benefits including Medical, Dental, and Vision
    · Life insurance
    · Long-Term Disability
    · A generous PTO program
    · A 401k plan supported by a company match
    · Half Day Summer Fridays (close at 1 p.m. Memorial Day to Labor Day)
    · Early Fridays (office closes at 3 p.m.)
    · Hybrid Schedule (Mondays and Fridays WFH)
    · The office is closed between Christmas and New Year
    · Company-sponsored lunch at least 1x a month
    · Professional Development Policy
    · And much MORE!

    Throtle is an equal-opportunity employer that is committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. Flexible work from home options available. Compensation: $150,000.00 - $200,000.00 per year

    WE HIRE AND DEVELOP GREAT PEOPLE: At Throtle, we focus on deterministic matching and identity resolution, empowering brands with true individual-based marketing. Our data-centric onboarding approach guarantees the highest level of accuracy, scale, and responsiveness for our clients. Throtle works on the belief that its best employees should be given opportunities to grow and thrive in an energetic and technology-driven culture. We empower employees to always think ahead and to keep attaining new levels of success for themselves and our clients. We hire and develop great people, which means that each and every one of our employees is not only talented, they genuinely care about the success of our customers and stand behind our company.
    $150k-200k yearly Auto-Apply 60d+ ago
  • Senior Data Engineer

    Codepath 3.9company rating

    Remote

CodePath is reprogramming higher education to create the first generation of AI-native engineers, CTOs, and founders. We deliver industry-vetted courses and career support centered on the needs of first-generation and low-income students. Our students train with senior engineers, intern at top companies, and rise together to become the tech leaders of tomorrow. With 30,000 students and alumni from 700 colleges now working at 2,000 companies, we are reshaping the tech workforce and the industries of the future. About the Role Location: Remote, United States Role Type: Full-Time Reporting to: Lead Data Scientist Compensation: $110,000 to $150,000 per year We're looking for a highly capable and pragmatic Senior Data Engineer who thrives in environments where they can take initiative, wear multiple hats, and turn ambiguity into action. This is a deeply hands-on role for someone eager to help shape our next-generation infrastructure. As the sole data engineer on our team, you will have true ownership over designing the architecture, pipelines, and models that power every critical function at CodePath, from student success analytics to business intelligence to AI-driven personalization. Check out our platform overview. 
As the first Senior Data Engineer at CodePath, you will: Lead the design and development of a modern, scalable data platform that fuels analytics, data science, and ML Work cross-functionally with engineers, data scientists, and business leaders on high-impact projects Help build the foundation for data systems that directly enable student success, program outcomes, and organizational growth This is a unique opportunity to: Build something new: Be the architect of CodePath's first modern data platform Drive visibility and impact: Partner closely with leadership to deliver insights that shape our strategy Scale CodePath's mission: Ensure our 30,000+ students and alumni are supported with the infrastructure to thrive in the AI-native workforce Every pipeline you build and every model you optimize will ripple across an organization supporting tens of thousands of students, alumni, and partners, making this one of the highest-impact roles you'll find. This is your chance to take full ownership of a greenfield data engineering function: designing, building, and scaling systems that will serve as the backbone of CodePath's future. 
Key Activities Data Platform & Architecture: Design and maintain a scalable, modern data infrastructure (data warehouses, pipelines, data lakes, analytic products) Pipelines & ETL/ELT: Build and optimize robust data workflows using SQL, dbt, Fivetran, and other tools to deliver clean, reliable, and timely data Modeling & Analytics Enablement: Create effective data models to power self-serve analytics, performance reporting, and strategic insights across teams Business Intelligence: Develop and maintain dashboards and reporting tools (Tableau or equivalent) that drive decision-making and clarity on key KPIs Governance & Quality: Establish data quality and governance standards, ensuring stakeholders can trust and act on data Documentation & Maintainability: Document systems, models, and pipelines to support transparency, onboarding, and long-term scalability Cross-Functional Collaboration: Work closely with engineers, data scientists, and business stakeholders to scope, prioritize, and deliver data solutions Strategic Data Leadership: Partner with leadership to define and evolve CodePath's data strategy, ensuring that infrastructure investments align with long-term organizational goals Mentorship & Standards: Establish engineering best practices for data workflows, mentoring future hires and empowering non-technical stakeholders to work confidently with data Key Success Metrics Review and improve existing data models and schemas to support analytical and reporting needs Take ownership of CodePath's ETL pipeline and (BigQuery) data warehouse; develop new data models (using dbt) to support evolving data needs across the organization Manage CodePath's Google Cloud Platform infrastructure (Cloud Run, BigQuery, App Engine) to support data and dashboarding needs, including owning security/permissions, scalability, and uptime Define a roadmap for data infrastructure that aligns with organizational growth and impact measurement goals Propose and implement innovative solutions to 
improve data ingestion, processing, and storage Develop and implement metrics to demonstrate that data engineering solutions align with business objectives and strategic goals Qualifications 5+ years of experience in data engineering or similar roles Strong SQL skills and experience building efficient pipelines, including performance tuning Proven ability to work autonomously and deliver outcomes in a fast-changing environment Expertise in building data pipelines end-to-end, from ingestion through transformation to final visualization Deep understanding of data modeling principles (dimensional modeling, data normalization) Hands-on experience managing cloud data warehouses (BigQuery, Snowflake, Databricks, or Redshift) Proficiency with CodePath's primary data stack, including Postgres, Airbyte, dbt, and Tableau Strong communicator who can make complex analysis clear and actionable for diverse stakeholders Preferred Qualifications Experience developing custom data connectors and integrations with platforms like SurveyMonkey, HubSpot, or Airtable Familiarity with data quality frameworks and governance best practices Background in a fast-paced startup environment with the adaptability to thrive in ambiguity Compensation CodePath has standardized salaries based on the position's level, no matter where you live. For this role, we're hiring for a Senior-level position at an annual salary of $110,000 to $150,000. Salary is determined based on your relevant experience and skills as evaluated through our interview process. Full-Time Employee Benefits This is a 100% remote position: work from anywhere in the U.S.! CodePath prioritizes employee well-being with a competitive benefits package to support your health, financial security, and work-life balance. 
Health & Wellness: Medical, dental, and vision insurance (90% employer-covered for employees and dependents), employer-funded healthcare reimbursement, FSAs, and Employee Assistance Program Financial Security: 401(k), employer-paid life & disability insurance, and identity theft protection Work-Life Balance: Generous PTO, paid holidays, 10 weeks of fully paid parental leave, and an annual year-end company closure (Dec 24 - Jan 2) Professional Growth: $1,000 annual professional development stipend and home office setup support Student Loan Forgiveness: CodePath is a qualifying employer for Public Service Loan Forgiveness (PSLF), helping employees manage student loan debt Additional Perks: Pet wellness plans, legal services, home/auto insurance discounts, and exclusive marketplace savings Pay range $110,000 - $150,000 USD
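As a sketch of the dimensional modeling this role's qualifications call for, here is a minimal stdlib-only Python illustration of splitting denormalized records into a dimension and a fact table. The field names and data are hypothetical, and in practice this split would be expressed as dbt models over BigQuery rather than Python:

```python
def build_star_schema(raw_records):
    """Split denormalized enrollment records into a student dimension
    (one row per student) and an enrollment fact table (one row per
    enrollment event), joined on student_id."""
    dim_student = {}      # student_id -> dimension row (deduplicated)
    fact_enrollment = []  # one fact row per source record
    for rec in raw_records:
        dim_student[rec["student_id"]] = {
            "student_id": rec["student_id"],
            "college": rec["college"],
        }
        fact_enrollment.append({
            "student_id": rec["student_id"],  # foreign key into dim_student
            "course": rec["course"],
            "completed": rec["completed"],
        })
    return dim_student, fact_enrollment

# Hypothetical denormalized source rows.
raw = [
    {"student_id": 1, "college": "State U", "course": "iOS101", "completed": True},
    {"student_id": 1, "college": "State U", "course": "Web102", "completed": False},
    {"student_id": 2, "college": "City College", "course": "iOS101", "completed": True},
]
dims, facts = build_star_schema(raw)
```

The dimension deduplicates descriptive attributes while the fact table keeps one row per event; that separation is what makes self-serve analytics queries cheap and consistent across teams.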
    $110k-150k yearly Auto-Apply 60d+ ago
  • MLOps/Data Engineer

    Nift 3.3company rating

    Remote

    Nift is disrupting performance marketing, delivering millions of new customers to brands every month. We're actively looking for a hands-on Engineer to focus on MLOps/Data Engineering to build the data and ML platform that powers product decisions and production models. As an MLOps/Data Engineer, you'll report to the Data Science Manager and work closely with both our Data Science and Product teams. You'll architect storage and compute, harden training/inference pipelines, and make our ML code, data workflows, and services reliable, reproducible, observable, and cost-efficient. You'll also set best practices and help scale our platform as Nift grows. Our Mission: Nift's mission is to reshape how people discover and try new brands by introducing them to new products and services through thoughtful "thank-you" gifts. Our customer-first approach ensures businesses acquire new customers efficiently while making customers feel valued and rewarded. We are a data-driven, cash-flow-positive company that has experienced 1,111% growth over the last three years. Now, we're scaling to become one of the largest sources for new customer acquisition worldwide. Backed by investors who supported Fitbit, Warby Parker, and Twitter, we are poised for exponential growth and ready to demonstrate impact on a global scale. Read more about our growth here. 
What you will do: Architecture & storage: Design and implement our data storage strategy (warehouse, lake, transactional stores) with scalability, reliability, security, and cost in mind Pipelines & ETL: Build and maintain robust data pipelines (batch/stream), including orchestration, testing, documentation, and SLAs ML platform: Productionize training and inference (batch/real-time), establish CI/CD for models, data/versioning practices, and model governance Feature & model lifecycle: Centralize feature generation (e.g., feature store patterns), manage model registry/metadata, and streamline deployment workflows Observability & quality: Implement monitoring for data quality, drift, model performance/latency, and pipeline health with clear alerting and dashboards Reliability & cost control: Optimize compute/storage (e.g., spot, autoscaling, lifecycle policies) and reduce pipeline fragility Engineering excellence: Refactor research code into reusable components, enforce repo structure, testing, logging, and reproducibility Cross-functional collaboration: Work with DS/Analytics/Engineers to turn prototypes into production systems, provide mentorship and technical guidance Roadmap & standards: Drive the technical vision for ML/data platform capabilities and establish architectural patterns that become team standards What you need: Experience: 5+ years in data engineering/MLOps or related fields, including ownership of data/ML infrastructure for large-scale systems Software engineering strength: Strong coding, debugging, performance analysis, testing, and CI/CD discipline; reproducible builds Cloud & containers: Production experience on AWS, Docker + Kubernetes (EKS/ECS or equivalent) IaC: Terraform or CloudFormation for managed, reviewable environments Data engineering: Expert SQL, data modeling, schema design, modern orchestration (Airflow/Step Functions) and ETL tools ML tooling: MLflow/SageMaker (or similar) with a track record of production ML pipelines Warehouses 
& lakes: Databricks, Redshift and lake formats (Parquet) Monitoring/observability: Data/ML monitoring (quality, drift, performance) and pipeline alerting Collaboration: Excellent communication, comfortable working with data scientists, analysts, and engineers in a fast-paced startup PySpark/Glue/Dask/Kafka: Experience with large-scale batch/stream processing Analytics platforms: Experience integrating 3rd party data Model serving patterns: Familiarity with real-time endpoints, batch scoring, and feature stores Governance & security: Exposure to model governance/compliance and secure ML operations Mission-oriented: Proactive and self-driven with a strong sense of initiative; takes ownership, goes beyond expectations, and does what's needed to get the job done What you get: Competitive compensation, comprehensive benefits (401K, Medical/Dental/Vision), and we offer all full-time employees the potential to hold company equity Flexible remote work Unlimited Responsible PTO Great opportunity to join a growing, cash-flow-positive company while having a direct impact on Nift's revenue, growth, scale, and future success
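One common concrete form of the data/model drift monitoring described above is the Population Stability Index (PSI), which compares a live feature distribution against a training-time baseline. Below is a stdlib-only sketch with illustrative thresholds and toy data, not Nift's actual monitoring stack:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant baselines

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Floor empty buckets at half a count so log() stays defined.
        return [max(c, 0.5) / len(xs) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(1000)]  # training-time feature values
live_ok = list(baseline)                   # serving distribution unchanged
live_shifted = [x + 5 for x in baseline]   # serving distribution shifted
```

In a production pipeline this check would run on a schedule over recent serving data, with the PSI value exported to dashboards and alerting when it crosses the chosen threshold.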
    $95k-138k yearly est. Auto-Apply 60d+ ago
  • Senior Data Engineer, DPD Team (Remote, International)

    Pulsepoint 4.2company rating

    Remote

Description A bit about us: PulsePoint is a leading healthcare ad technology company that uses real-world data in real-time to optimize campaign performance and revolutionize health decision-making. Leveraging proprietary datasets and methodology, PulsePoint targets healthcare professionals and patients with an unprecedented level of accuracy, delivering unparalleled results to the clients we serve. The company is now a part of Internet Brands, a KKR portfolio company and owner of WebMD Health Corp. Sr. Data Engineer The PulsePoint Data Engineering team plays a key role in our technology company that's experiencing exponential growth. Our data pipeline processes over 80 billion impressions a day (> 20 TB of data, 200 TB uncompressed). This data is used to generate reports, update budgets, and drive our optimization engines. We do all this while running against tight SLAs and providing stats and reports as close to real-time as possible. The most exciting part about working at PulsePoint is the enormous potential for personal and professional growth. We are always seeking new and better tools to help us meet challenges, such as adopting proven open-source technologies to make our data infrastructure more nimble, scalable, and robust. 
Some of the cutting-edge technologies we have recently implemented are Kafka, Spark Streaming, Presto, Airflow, and Kubernetes. What you'll be doing: Design, build, and maintain reliable and scalable enterprise-level distributed transactional data processing systems for scaling the existing business and supporting new business initiatives Optimize jobs to utilize Kafka, Hadoop, Presto, Spark, and Kubernetes resources in the most efficient way Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc.) Increase accessibility and effectiveness of data (work with analysts, data scientists, and developers to build/deploy tools and datasets that fit their use cases) Collaborate within a small team with diverse technology backgrounds Provide mentorship and guidance to junior team members Team Responsibilities: Ingest, validate, and process internal & third-party data Create, maintain, and monitor data flows in Python, Spark, Hive, SQL, and Presto for consistency, accuracy, and lag time Maintain and enhance the framework for jobs (primarily aggregate jobs in Spark and Hive) Create different consumers for data in Kafka using Spark Streaming for near-real-time aggregation Tools evaluation Backups/Retention/High Availability/Capacity Planning Review/Approval - DDL for database, Hive framework jobs, and Spark Streaming to make sure they meet our standards Technologies We Use: Python - primary repo language Airflow/Luigi - for job scheduling Docker - packaged container images with all dependencies Graphite - for monitoring data flows Hive - SQL data warehouse layer for data in HDFS Kafka - distributed commit log storage Kubernetes - distributed cluster resource manager Presto/Trino - fast parallel data warehouse and data federation layer Spark Streaming - near-real-time aggregation SQL Server - reliable OLTP RDBMS Apache Iceberg GCP - BigQuery for performance, Looker for dashboards Requirements 8+ years of data engineering experience Strong skills in 
and current experience with SQL and Python Strong recent Spark experience (3+ years) Experience working in on-prem environments Hadoop and Hive experience Experience in Scala/Java is a plus (polyglot programmers preferred!) Proficiency in Linux Strong understanding of RDBMS and query optimization Passion for engineering and computer science around data East Coast U.S. hours 9am-6pm EST; you can work fully remotely Notice period needs to be less than 2 months (or 2 months max) Knowledge of and exposure to distributed production systems, e.g., Hadoop Knowledge of and exposure to cloud migration (AWS/GCP/Azure) is a plus Location: We can hire as FTE in the U.S., UK, and Netherlands We can hire as a long-term contractor (independent or B2B) in most other countries Selection Process: 1) CodeSignal Online Assessment 2) Initial Screen (30 mins) 3) Hiring Manager Interview (45 mins) 4) Tech Challenge 5) Interview with Sr. Data Engineer (60 mins) 6) Team Interviews (90 mins + 3 x 45 mins) + SVP of Engineering (30 mins) 7) WebMD Sr. Director, DBA (30 mins) Note that leetcode-style live coding challenges will be involved in the process. WebMD and its affiliates is an Equal Opportunity/Affirmative Action employer and does not discriminate on the basis of race, ancestry, color, religion, sex, gender, age, marital status, sexual orientation, gender identity, national origin, medical condition, disability, veterans status, or any other basis protected by law.
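The near-real-time aggregation this posting describes (Spark Streaming consumers reading from Kafka) reduces, at its core, to windowed aggregation. Here is a toy stdlib-only model of a tumbling-window impression rollup with made-up event fields; a real job would express the same logic as a Spark Streaming query over a Kafka topic:

```python
from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    """Aggregate (timestamp, advertiser_id, impressions) events into
    per-advertiser totals per 60-second tumbling window -- a toy model of
    what a streaming consumer would compute incrementally."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, advertiser, impressions in events:
        window_start = ts - (ts % WINDOW_SECONDS)  # align to window boundary
        windows[window_start][advertiser] += impressions
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical impression events: (unix_seconds, advertiser_id, impressions).
events = [
    (0, "adv1", 3), (10, "adv2", 1), (59, "adv1", 2),   # window starting at 0
    (60, "adv1", 5), (119, "adv2", 4),                   # window starting at 60
]
agg = tumbling_window_counts(events)
```

A streaming engine differs mainly in that it emits each window's totals as the window closes (plus handling late data via watermarks) instead of batching everything at once.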
    $103k-141k yearly est. Auto-Apply 60d+ ago
  • Senior Data Engineer, Graph Analytics

    TRM Labs 4.3company rating

    Remote

    TRM Labs is a blockchain intelligence company committed to fighting crime and creating a safer world. By leveraging blockchain data, threat intelligence, and advanced analytics, our products empower governments, financial institutions, and crypto businesses to combat illicit activity and global security threats. At TRM, you'll join a mission-driven, fast-paced team made up of experts in law enforcement, data science, engineering, and financial intelligence, tackling complex global challenges daily. Whether analyzing blockchain data, developing cutting-edge tools, or collaborating with global organizations, you'll have the opportunity to make a meaningful and lasting impact. As a Senior Data Engineer on the Graph Analytics team, you will build scalable graph systems that analyze large networks of cryptocurrency transactions. You will collaborate closely with engineers, data scientists, and investigators to design mission-critical graph algorithms that analyze flows of funds. You will leverage distributed databases and graph processors to implement real-time graph algorithms at the multi-blockchain scale. You will collaborate with data science teams to identify opportunities to apply tools and techniques from graph theory to a variety of predictive learning problems. 
The impact you will have: Designing and implementing graph algorithms that analyze large cryptocurrency transaction networks at multi-blockchain scale Researching new graph-native technology to evaluate benefit to data science and data engineering teams at TRM Working on a highly cross-functional team that collaborates with cryptocurrency investigators to identify key user stories and requirements for new graph algorithms and features Understanding and refining TRM's risk models which analyze large networks of cryptocurrency transactions to assign risk scores to addresses Communicating complex implementation details to a variety of audiences from investigators and customer success stakeholders to data engineers and data scientists Integrating with a diverse set of data inputs ranging from raw blockchain data to complex model outputs What we're looking for: Your academic background is in a quantitative field such as Computer Science, Mathematics, Engineering, or Physics. You have strong knowledge of algorithm design and data structures, and have experience applying this knowledge towards real-world problems. You have experience optimizing large-scale distributed data processing systems such as Apache Spark, Apache Hadoop, Dask, and distributed graph databases. You have experience converting academic research into products and have worked with research teams that regularly ship new features. You have strong programming experience with Python and SQL. You are an excellent communicator who is skilled at tailoring explanations of complex topics to both technical and non-technical audiences. You are self-motivated. You propose and validate solutions with minimal guidance. You're comfortable working with ambiguity and shaping your own research direction while being accountable for outcomes. You are knowledgeable of basic graph theory concepts. About the Team: Our team is spread out over several countries and time zones. 
To leverage the opportunities and address challenges that come with this working model, we've built a culture grounded in trust, transparency, and adaptability. Even though we're scattered across different time zones, we stay closely connected through open communication and regular check-ins. Our use of digital tools helps us collaborate seamlessly and work at an impressive pace, delivering results at TRM speed while staying in sync. The diversity of perspectives and expertise within our team sparks creative exchanges and enhances our problem-solving capabilities, allowing us to leverage our global synergy to drive innovation and efficiency. We're driven by a deep sense of ownership, constant pursuit of improvement, and an obsession with delivering magical experiences to our customers; these are a core part of our culture. We're driven by the desire to continuously hone our craft and to this end, we regularly assess our processes, seek feedback, and embrace iterative development to refine what we do. We structure problems and use the 80/20 rule (for many outcomes, roughly 80% of consequences come from 20% of causes) to identify where to direct our razor-sharp focus. Each of us is passionate about excellence and growth, and we're encouraged to challenge the status quo and experiment with new methods. This mindset creates an environment where innovation is part of our daily routine and every team member is an impact-oriented trailblazer. By celebrating our successes and learning from our setbacks, we stay agile and forward-thinking, always striving to elevate our performance and impact. Learn about TRM Speed in this position: The team received a request from customers to show the running balance for assets tied to an address within the product. They brainstormed to determine the minimal viable solution that could be delivered within two weeks. The team decided to develop a framework that would facilitate enabling this feature for each chain (such as BTC, ETH, etc.) 
individually. They successfully implemented and tested this solution for the BTC chain and delivered it to customers within the two-week timeframe. This approach allowed the team to quickly provide value to customers and establish a repeatable process. The team was tasked with implementing a new feature that would enable TRM to gather feedback from the user community swiftly. Due to the company's goal of showcasing the feature at an upcoming event, the timeline was tight. The team quickly adjusted to the request, assessed the requirements, and developed a solution that addressed 80% of the needs and could be executed within a very short timeframe. They successfully delivered the feature on schedule, impressing customers with both the functionality and the speed of execution. The team was tasked with developing a solution to reveal Indirect Exposure through cross-chain swaps in response to a competitor's similar feature. Before choosing the technical approach, the team consulted with stakeholders to establish a feasible timeline for the implementation. With this input, the team devised a solution that fit the timeline, communicated the tradeoffs to stakeholders, and successfully delivered the solution as promised. About TRM's Engineering Levels: Engineer: Responsible for helping to define project milestones and executing small decisions independently with the appropriate tradeoffs between simplicity, readability, and performance. Provides mentorship to junior engineers, and enhances operational excellence through tech debt reduction and knowledge sharing. Senior Engineer: Successfully designs and documents system improvements and features for an OKR/project from the ground up. Consistently delivers efficient and reusable systems, optimizes team throughput with appropriate tradeoffs, mentors team members, and enhances cross-team collaboration through documentation and knowledge sharing. 
Staff Engineer: Drives scoping and execution of one or more OKRs/projects that impact multiple teams. Partners with stakeholders to set the team vision and technical roadmaps for one or more products. Is a role model and mentor to the entire engineering organization. Ensures system health and quality with operational reviews, testing strategies, and monitoring rigor. Life at TRM Labs Leadership Principles Our Leadership Principles shape everything we do: how we make decisions, collaborate, and operate day to day. Impact-Oriented Trailblazer - We put customers first, driving for speed, focus, and adaptability. Master Craftsperson - We prioritize speed, high standards, and distributed ownership. Inspiring Colleague - We value humility, candor, and a one-team mindset. Accelerate your Career At TRM, you'll do work that matters: disrupting terrorist networks, recovering stolen funds, and protecting people around the world. You will: Work alongside top experts and learn every day. Embrace a growth mindset with development opportunities tailored to your role. Take on high-impact challenges in a fast-paced, collaborative environment. Thrive in a culture of coaching, where feedback is fast, direct, and built to help you level up. What to Expect at TRM TRM moves fast, really fast. We know a lot of startups say that, but we mean it. We operate with urgency, ownership, and high standards. As a result, you'll be joining a team that's highly engaged, mission-driven, and constantly evolving. To support this intensity, we're also intentional about rest and recharge. We offer generous benefits, including PTO, Holidays, and Parental Leave for full-time employees. That said, TRM may not be the right fit for everyone. If you're optimizing for work-life balance, we encourage you to: Ask your interviewers how they personally approach balance within their teams, and Reflect on whether this is the right season in your life to join a high-velocity environment. 
Be honest with yourself about what energizes you and what drains you. We're upfront about this because we want every new team member to thrive, not just survive. The Stakes Are Real Our work has direct, real-world impact: Jumping online after hours to support urgent government requests tracing ransomware payments. Delivering actionable insights during terrorist financing investigations. Collaborating across time zones in real time during a major global hack. Building new processes in days, not weeks, to stop criminals before they cash out. Analyzing blockchain data to recover stolen savings and dismantle trafficking networks. Thrive as a Global Team As a remote-first company, TRM Labs is built for global collaboration. We cultivate a strong remote culture through clear communication, thorough documentation, and meaningful relationships. We invest in offsites, regional meetups, virtual coffee chats, and onboarding buddies to foster collaboration. By prioritizing trust and belonging, we harness the strengths of a global team while staying aligned with our mission and values. Join our mission! We're looking for team members who thrive in fast-paced, high-impact environments and love building from the ground up. TRM is remote-first, with an exceptionally talented global team. If you enjoy solving tough problems and seeing your work make a difference for billions of people, we want you here. Don't worry if your experience doesn't perfectly match a job description; we value passion, problem-solving, and unique career paths. If you're excited about TRM's mission, we want to hear from you. Recruitment agencies TRM Labs does not accept unsolicited agency resumes. Please do not forward resumes to TRM employees. TRM Labs is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company without a signed agreement. 
Privacy Policy By submitting your application, you are agreeing to allow TRM to process your personal information in accordance with the TRM Privacy Policy Learn More: Company Values | Interviewing | FAQs
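A first approximation of the flow-of-funds graph analysis this role describes is a hop-bounded breadth-first traversal over a transfer edge list. The sketch below uses a hypothetical graph and plain Python; TRM's production algorithms run on distributed graph systems and are far richer:

```python
from collections import defaultdict, deque

def reachable_addresses(edges, source, max_hops):
    """Forward-trace funds: return all addresses reachable from `source`
    within `max_hops` transfer edges (excluding the source itself)."""
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
    hops = {source: 0}  # address -> hop distance at first discovery
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if hops[node] == max_hops:
            continue  # hop budget exhausted; do not expand further
        for nxt in adj[node]:
            if nxt not in hops:
                hops[nxt] = hops[node] + 1
                queue.append(nxt)
    return {addr for addr in hops if addr != source}

# Hypothetical transfers: A -> B -> C -> D, plus an unrelated X -> Y.
transfers = [("A", "B"), ("B", "C"), ("C", "D"), ("X", "Y")]
```

Risk scoring of the kind the posting mentions typically builds on traversals like this, weighting each reachable counterparty by hop distance and transferred value.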
    $93k-136k yearly est. Auto-Apply 38d ago
  • Data Engineer

    Throtle 4.1company rating

    Red Bank, NJ jobs

Replies within 24 hours Big Data Engineer (Hybrid position: in office Tuesday, Wednesday, and Thursday) As a Big Data Engineer at Throtle, you will be responsible for designing and developing complex software systems that integrate with Throtle's big data processing solutions. You will work closely with cross-functional teams to architect scalable, secure, and maintainable software systems that meet the needs of our business. Duties/Responsibilities · Design and develop software architectures for large-scale data processing and analytics platforms · Collaborate with data engineers to design and implement data pipelines and transformations using big data technologies such as Hadoop, Spark, Kafka, and related ecosystems · Develop and maintain complex SQL queries for data extraction and reporting, and optimize query performance and scalability · Design and develop software components that integrate with our identity graphs, client data, and keying processes · Work with cross-functional teams to ensure software systems meet business requirements and are scalable for future growth · Develop and maintain technical documentation for software architectures and designs · Ensure data integrity and quality in the development process · The position will require that the individual understands all regulations and laws applicable to their assigned roles and responsibilities. Additionally, the individual will be responsible for the development, implementation, and regular maintenance of policies and procedures that govern the work of assigned roles and responsibilities, including compliance with the security requirements of ePHI. 
Required Skills and Abilities · Strong understanding of software design patterns, architecture principles, and big data technologies · Proficiency in programming languages such as Java, Python, Scala, or similar languages · Experience with large-scale data processing and analytics platforms, including Hadoop, Spark, Kafka, and related ecosystems · Knowledge of distributed computing concepts and experience with cloud-based infrastructure (AWS) · Strong analytical and problem-solving skills to address complex software architecture challenges · Ability to work effectively with cross-functional teams and communicate technical designs and solutions to non-technical stakeholders Education and Experience · Bachelor's Degree in Computer Science, Engineering, or a related field · Master's Degree or certification in Software Architecture or a related field preferred but not required · 5+ years of professional experience in software development, architecture, or a related field · Experience in data modeling, relational and NoSQL databases, SQL, and programming languages for building big data pipelines and transformations. About Throtle: Throtle is a leading identity company trusted by the world's top brands and agencies, located in Red Bank, NJ. At Throtle, we empower brands at scale with true individual-based marketing using a data-centric identity and onboarding approach. Throtle is a company that truly values its employees and their work-life balance. We offer a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being: Competitive compensation. Comprehensive benefits including Medical, Dental, and Vision. Life insurance. Long-Term Disability. A generous PTO program. A 401k plan supported by a company match. Half Day Summer Fridays (close at 1 p.m. Memorial Day to Labor Day). Early Fridays (office closes at 3 p.m.). 
Hybrid Schedule (Mondays and Fridays WFH). The office is closed between Christmas and New Year. Company-sponsored lunch at least 1x a month. And much MORE! Throtle is an equal-opportunity employer committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. Flexible work-from-home options available. Compensation: $150,000.00 - $160,000.00 per year WE HIRE AND DEVELOP GREAT PEOPLE At Throtle, we focus on deterministic matching and identity resolution, empowering brands with true individual-based marketing. Our data-centric onboarding approach guarantees the highest level of accuracy, scale, and responsiveness for our clients. Throtle operates on the belief that its best employees should be given opportunities to grow and thrive in an energetic, technology-driven culture. We empower employees to always think ahead and to keep attaining new levels of success for themselves and our clients. We hire and develop great people, which means that each and every one of our employees is not only talented but also genuinely cares about the success of our customers and stands behind our company.
    $150k-160k yearly 60d+ ago
  • Big Data Engineer

    Throtle 4.1company rating

    Red Bank, NJ jobs

    Replies within 24 hours
    Benefits: 401(k), 401(k) matching, company parties, competitive salary, dental insurance, flexible schedule, free food & snacks, health insurance, paid time off, parental leave, training & development, vision insurance
    Job Summary
    Please consider joining our technology team at Throtle as a Big Data Developer. In this role, you will be part of the team responsible for transforming and maintaining the several billion records that make Throtle's data onboarding solution work. You'll need to be able to evaluate incoming data, perform specialized hygiene, and standardize it for consumption by multiple processes. You'll be working in a fast-paced, high-volume processing environment where quality and attention to detail are paramount.
    Duties/Responsibilities
    Design and implement data pipelines and transformations using big data technologies such as Spark, Hadoop, and related ecosystems
    Develop and maintain complex SQL queries for data extraction and reporting, and optimize query performance and scalability
    Design and develop software components that integrate with our identity graphs, client data, and keying processes
    Participate in capacity monitoring and planning
    Develop and maintain technical documentation
    Ensure data integrity and quality in the development process, including conducting data research and analysis to identify trends and patterns
    Work with cross-functional teams to ensure software systems meet business requirements and are scalable for future growth, including developing predictive models and enhancing data quality
    The position requires that the individual understands all regulations and laws applicable to their assigned roles and responsibilities. Additionally, the individual will be responsible for the development, implementation, and regular maintenance of policies and procedures that govern the work of assigned roles and responsibilities, including compliance with the security requirements of ePHI.
    Required Skills and Abilities
    Experience with large-scale data processing and analytics platforms, including Hadoop, Spark, or related ecosystems
    Knowledge of distributed computing concepts and experience with cloud-based infrastructure
    Experience with fuzzy logic matching and tools
    Experience with AWS infrastructure
    Education and Experience
    4+ years of experience working with Big Data technologies, including Spark and SQL databases
    Proficiency in programming languages such as Scala, Java, Python, Shell, or a combination thereof
    Expertise in data modeling, database design, and development
    Experience in building and maintaining data transformations (ETL) using SQL, Python, Scala, or Java
    Ability to analyze, troubleshoot, and performance-tune queries
    Ability to identify problems and effectively communicate solutions to peers and management
    Strong analytical and problem-solving skills to address complex software and data challenges
    Ability to work effectively with cross-functional teams and communicate technical designs and solutions to non-technical stakeholders
    About Throtle: Throtle is a leading identity company trusted by the world's top brands and agencies, located in Red Bank, NJ. At Throtle, we empower brands at scale with true individual-based marketing using a data-centric identity and onboarding approach. Throtle is a company that truly values its employees and their work-life balance. We offer a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being:
    Competitive compensation.
    Comprehensive benefits including Medical, Dental, and Vision.
    Life insurance.
    Long-Term Disability.
    A generous PTO program.
    A 401k plan supported by a company match.
    Half Day Summer Fridays (close at 1 p.m. Memorial Day to Labor Day).
    Early Fridays (office closes at 3 p.m.).
    Hybrid Schedule (Mondays and Fridays WFH).
    The office is closed between Christmas and New Year.
    Company-sponsored lunch at least 1x a month.
    Professional Development Policy!
    And much MORE!
    Throtle is an equal-opportunity employer committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. Flexible work-from-home options available. Compensation: $130,000.00 - $146,000.00 per year
    WE HIRE AND DEVELOP GREAT PEOPLE
    At Throtle, we focus on deterministic matching and identity resolution, empowering brands with true individual-based marketing. Our data-centric onboarding approach guarantees the highest level of accuracy, scale, and responsiveness for our clients. Throtle works on the belief that its best employees should be given opportunities to grow and thrive in an energetic and technology-driven culture. We empower employees to always think ahead and keep attaining new levels of success for themselves and our clients. We hire and develop great people, which means that each and every one of our employees is not only talented, they genuinely care about the success of our customers and stand behind our company.
    $130k-146k yearly 60d+ ago
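The Throtle postings above center on evaluating incoming records, performing specialized hygiene, and standardizing them for downstream processes. As an illustrative sketch only (the field names, hygiene rules, and sample records are hypothetical, not Throtle's actual pipeline), that kind of record standardization might look like:

```python
import re

def standardize_record(rec: dict) -> dict:
    """Apply simple hygiene rules to one incoming record.

    Hypothetical rules for illustration: trim and lowercase emails,
    keep only the last 10 digits of phone numbers, title-case names.
    """
    out = dict(rec)
    if rec.get("email"):
        out["email"] = rec["email"].strip().lower()
    if rec.get("phone"):
        digits = re.sub(r"\D", "", rec["phone"])  # strip non-digits
        out["phone"] = digits[-10:] if len(digits) >= 10 else None
    if rec.get("name"):
        out["name"] = " ".join(rec["name"].split()).title()
    return out

records = [
    {"name": "  jane   DOE ", "email": " Jane.Doe@Example.COM ", "phone": "+1 (732) 555-0199"},
    {"name": "john smith", "email": None, "phone": "555-0134"},
]
clean = [standardize_record(r) for r in records]
```

At real scale this logic would run inside a Spark job rather than a Python loop, but the per-record transformation is the same shape.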
  • Big Data Engineer

    Throtle, Inc. 4.1company rating

    Red Bank, NJ jobs

    Benefits:
    * 401(k)
    * 401(k) matching
    * Company parties
    * Competitive salary
    * Dental insurance
    * Flexible schedule
    * Free food & snacks
    * Health insurance
    * Paid time off
    * Parental leave
    * Training & development
    * Vision insurance
    Please consider joining our technology team at Throtle as a Big Data Developer. In this role, you will be part of the team responsible for transforming and maintaining the several billion records that make Throtle's data onboarding solution work. You'll need to be able to evaluate incoming data, perform specialized hygiene, and standardize it for consumption by multiple processes. You'll be working in a fast-paced, high-volume processing environment where quality and attention to detail are paramount.
    Duties/Responsibilities
    * Design and implement data pipelines and transformations using big data technologies such as Spark, Hadoop, and related ecosystems
    * Develop and maintain complex SQL queries for data extraction and reporting, and optimize query performance and scalability
    * Design and develop software components that integrate with our identity graphs, client data, and keying processes
    * Participate in capacity monitoring and planning
    * Develop and maintain technical documentation
    * Ensure data integrity and quality in the development process, including conducting data research and analysis to identify trends and patterns
    * Work with cross-functional teams to ensure software systems meet business requirements and are scalable for future growth, including developing predictive models and enhancing data quality
    * The position requires that the individual understands all regulations and laws applicable to their assigned roles and responsibilities. Additionally, the individual will be responsible for the development, implementation, and regular maintenance of policies and procedures that govern the work of assigned roles and responsibilities, including compliance with the security requirements of ePHI.
    Required Skills and Abilities
    * Experience with large-scale data processing and analytics platforms, including Hadoop, Spark, or related ecosystems
    * Knowledge of distributed computing concepts and experience with cloud-based infrastructure
    * Experience with fuzzy logic matching and tools
    * Experience with AWS infrastructure
    Education and Experience
    * 4+ years of experience working with Big Data technologies, including Spark and SQL databases
    * Proficiency in programming languages such as Scala, Java, Python, Shell, or a combination thereof
    * Expertise in data modeling, database design, and development
    * Experience in building and maintaining data transformations (ETL) using SQL, Python, Scala, or Java
    * Ability to analyze, troubleshoot, and performance-tune queries
    * Ability to identify problems and effectively communicate solutions to peers and management
    * Strong analytical and problem-solving skills to address complex software and data challenges
    * Ability to work effectively with cross-functional teams and communicate technical designs and solutions to non-technical stakeholders
    About Throtle: Throtle is a leading identity company trusted by the world's top brands and agencies, located in Red Bank, NJ. At Throtle, we empower brands at scale with true individual-based marketing using a data-centric identity and onboarding approach. Throtle is a company that truly values its employees and their work-life balance. We offer a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being:
    * Competitive compensation.
    * Comprehensive benefits including Medical, Dental, and Vision.
    * Life insurance.
    * Long-Term Disability.
    * A generous PTO program.
    * A 401k plan supported by a company match.
    * Half Day Summer Fridays (close at 1 p.m. Memorial Day to Labor Day).
    * Early Fridays (office closes at 3 p.m.).
    * Hybrid Schedule (Mondays and Fridays WFH)
    * The office is closed between Christmas and New Year.
    * Company-sponsored lunch at least 1x a month.
    * Professional Development Policy!
    And much MORE! Throtle is an equal-opportunity employer committed to diversity and inclusion in the workplace. We prohibit discrimination and harassment of any kind based on race, color, sex, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other protected characteristic as outlined by federal, state, or local laws. Flexible work-from-home options available.
    $92k-132k yearly est. 60d+ ago
  • Data Scientist (TS/SCI w/ Poly)

    Sixgen, Inc. 4.1company rating

    Maryland jobs

    SIXGEN's mission is to deliver agile, mission-ready cybersecurity solutions that empower government and critical infrastructure organizations to stay ahead of advanced cyber threats. We combine innovation, deep expertise, and cutting-edge capabilities to uncover vulnerabilities, protect vital systems, and ensure operational superiority in an ever-evolving digital landscape.
    POSITION OVERVIEW
    Position: Data Scientist
    Job Type: Full Time
    Location: Annapolis Junction or Fort Meade, MD
    Clearance Requirements: TS/SCI with Full Scope Poly
    Travel: Up to 10%
    ABOUT THE TEAM
    SIXGEN supports missions by serving government and commercial organizations as they overcome global cybersecurity challenges. In this position, you'll provide meaningful support to our customer's missions and operations. You'll work alongside a team of highly skilled operators who are dedicated to using innovative processes, tools, and techniques to get things done.
    WHAT YOU'LL DO
    Support missions by applying advanced data science, machine learning, and statistical analysis to large-scale, mission-critical datasets.
    Design, develop, and maintain modular software solutions that integrate with distributed data processing frameworks in Linux-based environments.
    Rapidly prototype and iterate on data-driven tools and models to meet evolving operational and mission requirements.
    Utilize high-level programming languages (e.g., Python) and mid-level languages (e.g., C) to build scalable data solutions and custom analytics.
    Perform data mining, modeling, and assessment to extract actionable insights and enhance mission outcomes.
    REQUIRED QUALIFICATIONS
    US citizen with the ability to obtain a TS/SCI w/ Full Scope Poly
    Proficiency in a high-level programming language such as Python, and experience with a mid-level language such as C.
    Strong foundation in machine learning, data mining, statistical analysis, and data modeling.
    Experience developing and maintaining modular software for use with large-scale data frameworks.
    Familiarity with Linux-based environments and shell scripting.
    Demonstrated ability to support missions through advanced data analysis and rapid prototyping.
    COMPENSATION & BENEFITS
    $120,000 - $200,000 USD
    The salary range for this role is an estimate based on a wide range of compensation factors, inclusive of base salary only. The actual salary offer may vary based on (but not limited to) work experience, education and/or training, critical skills, and/or business considerations. The final salary offer will be determined after a thorough review of the candidate's background and alignment with the role. Additionally, SIXGEN offers top-tier benefits for full-time employees, including:
    Employer-paid health insurance premiums (medical, dental, vision) for you and your family
    Employer-paid short/long-term disability insurance and basic life/AD&D insurance
    401K with a 4% employer contribution
    Professional development reimbursement options (training, certification, education, etc.)
    Flexible and remote work policies for most positions
    Flexible PTO and holiday schedule
    OUR COMMITMENT
    SIXGEN is an Equal Opportunity Employer. We ensure that all applicants are considered for employment without regard to race, color, religion, sexual orientation, gender identity, national origin, disability, age, marital status, ancestry, projected veteran status, or any other protected group or class. We are committed to fostering an inclusive culture that values diversity in our people, reflecting the communities we serve and our customer base. We strive to attract and retain a diverse talent pool and create an environment where everyone is empowered to be their authentic selves at work.
    $120k-200k yearly 60d+ ago
  • Data Scientist

    Houzz 4.8company rating

    Remote

    About the Role
    Houzz is looking for a world-class Data Scientist to join our Pro Data Science team. This team is focused on uncovering critical insights and delivering models that leverage those findings, powering intelligent business decisions. As a Data Scientist at Houzz, you are on the front line of uncovering insights that directly impact the product roadmap and revenue.
    What You'll Do
    Critical projects will focus on data-driven insights to guide product development and business direction
    Collaborate with leadership, product, data, and engineering teams to identify, define, and launch strategic and operational data science initiatives to influence the product roadmap, set strategic directions, and drive user engagement
    Define and manage the key metrics of the business and product efficiently
    Design product experiments, interpret results to draw detailed and impactful conclusions, and conduct root cause analysis
    Own the end-to-end execution of projects by working with a broad cross-functional team
    Work with data infrastructure/product engineering teams to define data collection needs
    Provide recommendations to assist quick product ideation and feature launch decisions
    Develop and maintain data products to serve real-time business insights or to be used in production environments
    At a Minimum, We'd Like You to Have
    Bachelor's/Master's degree with 2+ years of experience in an analytical role at a high-growth tech company
    Proficiency in writing complex SQL queries, such as using aggregate functions and all types of joins, using Hadoop, Hive, and Presto
    Strong programming skills in Python or R
    Strong problem-solving and analytical thinking to take an open business question from exploratory data analysis to actionable insights and guided execution recommendations
    Proficiency in advanced experimental design, testing, and analysis
    Strong communication, presentation, and interpersonal skills
    A passion for analytics and product development
    Self-motivated, independent, and eager to learn
    Ideally, You'll Also Have
    An advanced degree in a technical field such as Statistics, Operations Research, or Engineering
    Experience with subscription-based analytics, web/app user engagement, and funnel conversion
    Ability to learn and act quickly, making decisions and drawing conclusions in the face of ambiguity
    Experience with statistical modeling
    Proficiency in using Tableau for visualization
    Experience working with remote teams across time zones
    Compensation, Benefits and Perks
    This role has an annual starting salary range of $120,000 - $135,000. In addition to salary, you're eligible for competitive benefits that support you and your family as part of your total rewards package at Houzz. Also, depending on the role, you could be eligible for an equity award. Actual compensation is influenced by a wide array of factors, including, but not limited to, skills, experience, and specific work location. Benefits and perks include:
    - Flexible Paid Time Off (PTO)
    - Home internet stipend
    - Medical, dental, and vision benefits
    - Maternity/paternity leave program
    - Employee Assistance Program (EAP)
    - Professional Development Reimbursement Program
    - 401(k) retirement savings plans (Pre-Tax and Roth)
    - Flexible Spending Accounts (FSA) - Medical & Dependent Care
    - Health Savings Account (HSA) with company contribution
    - Healthy at Houzz program
    Houzz is an Equal Employment Opportunity employer. When applying for a role at Houzz, we guarantee your application will be considered regardless of your sex; race; color; gender; national origin; height or weight; ancestry; physical or mental disability; medical condition; genetic information; marital status; registered domestic partner status; age; sexual orientation; military and veteran status; or any other basis protected by federal, state or local law or ordinance or regulation. We embrace and celebrate the value that diversity brings to an organization. Diverse backgrounds and different points of view help Houzz provide the best experience for our community. Houzz is committed to fostering an inclusive environment through projects and initiatives, such as employee resource groups, that support Houzzers' efforts to be themselves and share their lives at work. If you would like assistance or an accommodation due to a disability, please email us at accommodations@houzz.com. This information will be treated as confidential and used only for determining an appropriate accommodation for the interview process. Houzz is an Equal Opportunity Employer. M/F/Disability/Veterans
    Be Who You Are and Do What You Love at Houzz
    About Houzz
    When founders Adi and Alon remodeled their home, they were frustrated by the lack of resources and inspiration to help them articulate a vision and select the right pro to make it a reality. So they built Houzz. Houzz is now the leading platform for home remodeling and design, providing an all-in-one software solution for industry professionals and tools for homeowners to update their homes from start to finish. Using Houzz, people can find ideas and inspiration, hire professionals, and shop for products. Houzz Pro (houzz.com/pro) provides home industry professionals with a business management and marketing SaaS solution that helps them win projects, collaborate with clients and teams, and run their business efficiently and profitably.
    Our Mission and Core Values
    We're proud to say there's no one quite like us. Houzz is a community-centric, innovative tech company that continues to disrupt the home renovation and design industry. Our mission-driven culture is rooted in our core values, and we're all here for one purpose: make the home remodeling and design process more fun and productive for everyone.
    Our Mission
    To create the best experience for home renovation and design.
    Our Core Values
    We're a Community: We put our community of Houzzers, industry professionals, and homeowners first. We approach our work with care, humility, and respect. We deliver value to our community through our products and services.
    We Build the Future: We are visionaries who challenge the status quo. We are creative, innovative, and curious. We embrace change and different ideas to drive our industry forward.
    We Make Things Happen: We are solution-seekers and self-starters. We listen, move fast, and empower our teams to deliver extraordinary results and products. We play to win.
    By applying for a job with us, you acknowledge and agree to the terms of our Job Applicant Privacy Notice.
    *Roles listing 'Remote - US' as a location are not currently available in the following states: Alaska, Hawaii, Louisiana, and Montana.
    #LI-Remote
    $120k-135k yearly 60d+ ago
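The Houzz role above emphasizes designing product experiments and interpreting their results. As a hedged illustration (the conversion counts below are invented, and real analysis would also consider power and multiple testing), a two-proportion z-test is one standard way to read out an A/B experiment:

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_* are converting users, n_* are total users per variant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.5% conversion on 2,400 users each.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

With these made-up numbers the lift is statistically significant at the usual 5% level; in practice the decision would also weigh effect size and guardrail metrics.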
  • Data Scientist (Remote)

    Entefy 3.6company rating

    Remote

    Entefy's Senior Data Scientist is a highly visible position both internally and externally. Join the intelligence revolution, where we push the state-of-the-art boundaries of data science. This is where deep experience and multi-dimensional insights intersect with an amazing opportunity to shape the future of productivity for people everywhere.
    Skills and Experience: We're not looking for "good." Entefy is on a mission to find exceptional talent. The success of our mission depends on our team's ability to be creatively analytical, insatiably curious, and absolutely fearless in tackling big challenges.
    Requirements
    Master's degree in Computer Science, Mathematics, Statistics, Engineering, or a compatible field. PhD is preferred.
    3+ years of applicable work experience.
    Demonstrable proficiency in Python, Scala, SQL, or R.
    Strong data visualization and reporting skills.
    Proven track record delivering high-value, advanced analytics.
    Experience leading the work of other data scientists or analysts is preferred.
    World-class ability to extract and communicate insights from real-world datasets.
    Startup agility and versatility.
    Visit ************** and **************/blog
    $92k-135k yearly est. 60d+ ago
  • Data Scientist

    Codepath.org 3.9company rating

    Remote

    CodePath is reprogramming higher education to create the first generation of AI-native engineers, CTOs, and founders. We deliver industry-vetted courses and career support centered on the needs of first-generation and low-income students. Our students train with senior engineers, intern at top companies, and rise together to become the tech leaders of tomorrow. With 30,000 students and alumni from 700 colleges now working at 2,000 companies, we are reshaping the tech workforce and the industries of the future.
    About the Role
    Location: Remote, United States
    Role Type: Full-Time
    Reporting to: Lead Data Scientist
    Compensation: $90,000 to $130,000 per year
    CodePath is entering a pivotal stage of growth, building the data, analytics, and AI infrastructure that powers our learning platform and supports tens of thousands of students nationwide. We are looking for a highly capable and mission-driven Data Scientist to join our growing Measurement, Evaluation, and Learning (MEL) team and help shape this future.
    As a Data Scientist at CodePath, you will work across the full analytics and modeling lifecycle: supporting data pipelines, conducting exploratory analysis, building statistical and machine learning models, and developing the insights that guide organizational strategy. You will collaborate closely with data engineering and cross-functional partners to ensure our data systems, analyses, and models enable student success, operational efficiency, and accurate impact measurement. This role is ideal for someone who enjoys solving problems in a fast-paced environment, applies clear analytical thinking, and wants to grow by supporting high-impact projects that advance CodePath's data and AI work.
    Key Activities
    Impact Measurement: Define, track, and analyze organizational impact, student outcomes, and program performance with MEL leadership
    Data Modeling & ML: Build and refine statistical and machine learning models for outcomes analyses, forecasting, and decision support
    Exploratory Analysis: Conduct deep exploratory analyses to surface trends, anomalies, and insights across large datasets
    Dashboards & BI Tools: Develop and maintain dashboards, reports, and visualizations (Tableau, Streamlit, or similar) that translate complex results into clear, actionable insights
    Feature Engineering & Data Preparation: Partner with data engineering to develop reliable datasets and features that power modeling and analytics workflows
    Pipeline Support: Support and validate data pipelines to ensure analytical datasets remain accurate, consistent, and well-structured
    Cross-Functional Work: Collaborate with program, product, and operations teams to understand their data needs and translate them into well-defined analytical questions
    Documentation: Document analytical processes, models, and methodologies to ensure clarity, scaling, and reproducibility
    Continuous Improvement: Identify opportunities to enhance data quality, improve modeling processes, and expand MEL's modeling and reporting capabilities
    Key Success Metrics
    High-quality Analytical Assets: Produces dashboards, Quarto reports, and reusable modules that meet MEL standards, with 90%+ requiring no major revision
    Stronger Impact Measurement: Improves outcomes reporting and forecasting through validated datasets, models, and analyses
    Improved Data Quality & Reliability: Identifies and resolves data issues across pipelines and transformations in partnership with Data Engineering
    Meaningful Modeling Contributions: Builds statistical or ML models that improve decision-making through greater accuracy, clarity, or adoption
    Qualifications
    3+ years of professional experience in data science, machine learning, analytics, or a related field
    Strong foundation in statistics, probability, and machine learning algorithms
    Proficiency with Python (pandas/polars, numpy, scikit-learn, TensorFlow or PyTorch)
    Strong SQL skills and experience working with large-scale datasets
    Experience with cloud platforms (GCP, AWS, or Azure) and deploying or maintaining data-driven applications
    Familiarity with data modeling concepts, data engineering workflows, and data pipelines
    Strong communicator able to present complex analyses clearly and accessibly
    Preferred Qualifications
    Experience with impact evaluation, education data, or social science research methods
    Exposure to dbt, Airbyte, Fivetran, or similar tooling
    Experience with experimental or quasi-experimental methods, causal inference, or A/B testing
    Ability to turn ambiguous problems into structured analytical approaches
    Proactive, collaborative mindset with enthusiasm for continuous learning
    Compensation
    CodePath has standardized salaries based on the position's level, no matter where you live. For this role, we're hiring for an Individual Contributor level position at an annual salary of $90,000 to $130,000. Salary is determined based on your relevant experience and skills as evaluated through our interview process.
    Comprehensive Benefits & Remote Work Perks
    This is a 100% remote position; you can work from anywhere in the U.S. CodePath prioritizes employee well-being with a competitive benefits package to support your health, financial security, and work-life balance.
    Health & Wellness: Medical, dental, and vision insurance (90% employer-covered), employer-funded healthcare reimbursement, FSAs, and Employee Assistance Program
    Financial Security: 401(k), employer-paid life & disability insurance, and identity theft protection
    Work-Life Balance: Generous PTO, paid holidays, 10 weeks of fully paid parental leave, and an annual year-end company closure (Dec 24 - Jan 2)
    Professional Growth: $1,000 annual professional development stipend and home office setup support
    Student Loan Forgiveness: CodePath is a qualifying employer for Public Service Loan Forgiveness (PSLF)
    Additional Perks: Pet wellness plans, legal services, home/auto insurance discounts, and exclusive marketplace savings
    Pay range: $90,000-$130,000 USD
    $90k-130k yearly 3d ago
  • ETL Architect

    Quartz 4.5company rating

    Wisconsin jobs

    Come Find Your Spark at Quartz! The ETL Architect will be responsible for the architecture, design, and implementation of data integration solutions and pipelines for the organization. This position will partner with multiple areas in the Enterprise Data Management team and the business to successfully translate business requirements into efficient and effective ETL implementations. This role will perform functional analysis, determine the appropriate data acquisition and ingestion methods, and design processes to populate the various data platform layers. The ETL Architect will work with implementation stakeholders throughout the business to evaluate the state of data and construct solutions that deliver data to enable analytics reporting capabilities in a reliable manner.
    Skills this position will utilize on a regular basis:
    Informatica PowerCenter
    Expert knowledge of SQL development
    Python
    Benefits:
    Opportunity to work with leading technology in the ever-changing, fast-paced healthcare industry.
    Opportunity to work across the organization interacting with business stakeholders.
    Starting salary range based upon skills and experience: $107,500 - $134,400, plus a robust benefits package.
    Responsibilities
    Architects, designs, enhances, and supports delivery of ETL solutions.
    Architects and designs data acquisition, ingestion, transformation, and load solutions.
    Identifies, develops, and documents ETL solution requirements to meet business needs.
    Facilitates group discussions and joins solution design sessions with technical subject matter experts.
    Develops, implements, and maintains standards and ETL design procedures.
    Contributes to the design of the data models, data flows, transformation specifications, and processing schedules.
    Coordinates ETL solution delivery and supports data analysis and information delivery staff in the design, development, and maintenance of data implementations.
    Consults and provides direction on ETL architecture and the implementation of ETL solutions.
    Queries, analyzes, and interprets complex data stored in the systems of record, enterprise data warehouse, and data marts.
    Ensures work includes necessary audit, HIPAA compliance, and security controls.
    Data Management
    Collaborates with infrastructure and platform administrators to establish and maintain a scalable and reliable data processing environment for the organization.
    Identifies and triages data quality and performance issues from the ETL perspective and sees them through to resolution.
    Tests and validates components of the ETL solutions to ensure successful end-to-end delivery.
    Participates in support rotation.
    Qualifications
    Bachelor's degree with 8+ years of experience translating business requirements into business intelligence solutions; data visualization and analytics solution design and development experience in data warehouse and OLTP (Online Transaction Processing) environments; semantic layer modeling experience; and SQL programming experience.
    OR associate degree with 11+ years of the experience described above.
    OR high school equivalency with 14+ years of the experience described above.
    Expert understanding of ETL concepts and commercially available enterprise data integration platforms (Informatica PowerCenter, Python)
    Expert knowledge of SQL development
    Expert knowledge of data warehousing concepts, design principles, associated data management and delivery requirements, and best practices
    Expert problem-solving and analytical skills
    Ability to understand and communicate data management and integration concepts within IT and to the business, and effectively interact with all internal and external parties, including vendors and contractors
    Ability to manage multiple projects simultaneously
    Ability to work independently, under pressure, and be adaptable to change
    Inquisitive; seeks answers to questions without being asked
    Hardware and equipment will be provided by the company, but candidates must have access to high-speed, non-satellite Internet to successfully work from home. We offer an excellent benefits and compensation package, opportunity for career advancement, and a professional culture built on the foundations of Respect, Responsibility, Resourcefulness, and Relationships. To support a safe work environment, all employment offers are contingent upon successful completion of a pre-employment criminal background check. Quartz values and embraces diversity and is proud to be an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, gender identity or expression, sexual orientation, age, status as a protected veteran, or status as a qualified person with disability, among other things.
    $107.5k-134.4k yearly Auto-Apply 48d ago
  • ETL Architect

    Quartz 4.5company rating

    Wisconsin jobs

    Come Find Your Spark at Quartz! The ETL Architect will be responsible for the architecture, design, and implementation of data integration solutions and pipelines for the organization. This position will partner with multiple areas of the Enterprise Data Management team and the business to translate business requirements into efficient and effective ETL implementations. The role performs functional analysis, determines the appropriate data acquisition and ingestion methods, and designs processes to populate the various data platform layers. The ETL Architect will work with implementation stakeholders throughout the business to evaluate the state of data and construct solutions that deliver data reliably to enable analytics and reporting capabilities. Skills this position will use on a regular basis: Informatica PowerCenter, expert-level SQL development, and Python. Benefits: Opportunity to work with leading technology in the ever-changing, fast-paced healthcare industry. Opportunity to work across the organization, interacting with business stakeholders. Starting salary range based upon skills and experience: $107,500 - $134,400, plus a robust benefits package. Responsibilities: Architects, designs, enhances, and supports delivery of ETL solutions. Architects and designs data acquisition, ingestion, transformation, and load solutions. Identifies, develops, and documents ETL solution requirements to meet business needs. Facilitates group discussions and joins solution design sessions with technical subject matter experts. Develops, implements, and maintains standards and ETL design procedures. Contributes to the design of data models, data flows, transformation specifications, and processing schedules. Coordinates ETL solution delivery and supports data analysis and information delivery staff in the design, development, and maintenance of data implementations.
    Consults and provides direction on ETL architecture and the implementation of ETL solutions. Queries, analyzes, and interprets complex data stored in the systems of record, the enterprise data warehouse, and data marts. Ensures work includes the necessary audit, HIPAA compliance, and security controls. Data Management: Collaborates with infrastructure and platform administrators to establish and maintain a scalable, reliable data processing environment for the organization. Identifies and triages data quality and performance issues from the ETL perspective and sees them through to resolution. Tests and validates components of ETL solutions to ensure successful end-to-end delivery. Participates in the support rotation. Qualifications: Bachelor's degree with 8+ years of experience, associate degree with 11+ years, or high school equivalency with 14+ years of experience translating business requirements into business intelligence solutions, including data visualization and analytics solution design and development in data warehouse and OLTP (Online Transaction Processing) environments, semantic layer modeling, and SQL programming.
    Expert understanding of ETL concepts and commercially available enterprise data integration platforms (Informatica PowerCenter, Python). Expert knowledge of SQL development. Expert knowledge of data warehousing concepts, design principles, associated data management and delivery requirements, and best practices. Expert problem-solving and analytical skills. Ability to understand and communicate data management and integration concepts within IT and to the business, and to interact effectively with all internal and external parties, including vendors and contractors. Ability to manage multiple projects simultaneously. Ability to work independently and under pressure, and to adapt to change. Inquisitive; seeks answers to questions without being asked. Hardware and equipment will be provided by the company, but candidates must have access to high-speed, non-satellite internet to work from home successfully. We offer an excellent benefits and compensation package, opportunities for career advancement, and a professional culture built on the foundations of Respect, Responsibility, Resourcefulness, and Relationships. To support a safe work environment, all employment offers are contingent upon successful completion of a pre-employment criminal background check. Quartz values and embraces diversity and is proud to be an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex, gender identity or expression, sexual orientation, age, status as a protected veteran, or status as a qualified person with a disability.
    $107.5k-134.4k yearly Auto-Apply 49d ago

Learn more about Experfy jobs