
Data Engineer jobs at ALTA IT Services - 1263 jobs

  • SDET - Playwright

    Kellymitchell Group (4.5 company rating)

    Columbus, OH

    Our client is seeking an SDET - Playwright to join their team! This position is located in Columbus, Ohio.

    Responsibilities:
    * Develop, maintain, and execute automated tests using Playwright (TypeScript/JavaScript)
    * Build reusable test libraries and utilities, including authentication, pagination, idempotency, rate limiting, and error handling
    * Define and execute test strategies across unit, integration, contract, and end-to-end test layers
    * Create robust negative, edge-case, and resilience tests, applying mocking strategies where appropriate
    * Manage test data and environments, including fixtures, seeding, and synthetic data, to ensure deterministic and reliable test runs
    * Integrate automated test suites into CI/CD pipelines (GitHub Actions, Azure DevOps), ensuring fast, stable, and gated deployments
    * Participate in design and code reviews, advocating for testability, automation best practices, and overall quality
    * Document test frameworks, patterns, and runbooks; clearly communicate testing outcomes and recommendations to engineering teams
    * Collaborate cross-functionally with QA, engineering, and product teams to support successful delivery

    Desired Skills/Experience:
    * 3+ years of experience as an SDET or QA Automation Engineer with a strong focus on Playwright
    * Hands-on experience with Playwright using TypeScript/JavaScript, or similar automation frameworks
    * Experience testing POS systems or complex transactional platforms is preferred
    * Proven experience configuring CI/CD pipelines, test reporting, and gating on failures or coverage thresholds
    * Familiarity with mocking frameworks and test data management strategies
    * Strong debugging skills across logs, traces, and network traffic; comfort using CLI tools such as curl
    * Excellent written and verbal communication skills with a collaborative, team-first mindset

    Benefits:
    * Medical, Dental, & Vision Insurance Plans
    * Employee-Owned Profit Sharing (ESOP)
    * 401K offered

    The approximate pay for this position starts at $150,000. Please note that this is a good-faith estimate; final compensation may vary based on factors including but not limited to background, knowledge, skills, and location. We comply with local wage minimums.
    $150k yearly 3d ago
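The posting above calls out fixtures, seeding, and synthetic data as the way to get deterministic test runs. As a hypothetical sketch only (the generator name and record fields are invented for illustration, not taken from the client's codebase), seeding an instance-local RNG is the usual way to make synthetic test data reproducible:

```python
import random

def make_synthetic_orders(seed: int, count: int) -> list[dict]:
    """Generate a reproducible batch of synthetic order records.

    Seeding the RNG means every test run sees identical data,
    so assertions never flake on randomly varying fixtures.
    """
    rng = random.Random(seed)  # instance-local RNG: no global-state leakage between tests
    return [
        {
            "order_id": f"ord-{i:04d}",
            "amount_cents": rng.randint(100, 99_999),
            "status": rng.choice(["paid", "refunded", "pending"]),
        }
        for i in range(count)
    ]

# Two runs with the same seed produce identical fixtures.
batch_a = make_synthetic_orders(seed=42, count=5)
batch_b = make_synthetic_orders(seed=42, count=5)
```

The same idea carries over to Playwright suites in TypeScript: a seeded data factory wired into test fixtures keeps end-to-end runs stable across CI retries.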

  • W2 Only - Principal Software Engineer in Test (UI Playwright & API focused) - HYBRID ONSITE

    Yoh, A Day & Zimmermann Company (4.7 company rating)

    Dallas, NC

    Please send current resumes directly to ************************* Bhagyashree Yewle, Principal Lead Recruiter - YOH SPG *********************************************

    W2 Only - Principal Software Engineer in Test (UI Playwright & API focused) - HYBRID ONSITE
    Location: Hybrid onsite in the Durham, NC office Monday through Friday every alternate week is a MUST.
    W2 only - 1099 or CTC candidates will not be considered. Candidates requiring visa sponsorship are welcome to apply!

    ***TOP MUST HAVES***
    * Playwright or Cypress experience
    * Strong REST Assured/API testing
    * CI/CD pipeline integration (Jenkins)
    * Database (Oracle, Postgres, DynamoDB): simple to complex querying in at least one
    * AWS a plus (need to understand on-prem and cloud deployments/DBs)
    * Knowledge of Batch

    Note: Any pay ranges displayed are estimations. Actual pay is determined by an applicant's experience, technical expertise, and other qualifications as listed in the job description. All qualified applicants are welcome to apply.
    Estimated Min Rate: $65.00
    Estimated Max Rate: $75.00

    What's In It for You? We welcome you to be a part of one of the largest and most legendary global staffing companies and to meet your career aspirations. Yoh's network of client companies has been employing professionals like you for over 65 years in the U.S., UK, and Canada. Join Yoh's extensive talent community to gain access to Yoh's vast network of opportunities, including this exclusive opportunity available to you. Benefit eligibility is in accordance with applicable laws and client requirements.

    Benefits include:
    * Medical, Prescription, Dental & Vision Benefits (for employees working 20+ hours per week)
    * Health Savings Account (HSA) (for employees working 20+ hours per week)
    * Life & Disability Insurance (for employees working 20+ hours per week)
    * MetLife Voluntary Benefits
    * Employee Assistance Program (EAP)
    * 401K Retirement Savings Plan
    * Direct Deposit & weekly epayroll
    * Referral Bonus Programs
    * Certification and training opportunities

    Yoh, a Day & Zimmermann company, is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Visit ************************************************ to contact us if you are an individual with a disability and require accommodation in the application process. For California applicants, qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. All of the material job duties described in this posting are job duties for which a criminal history may have a direct, adverse, and negative relationship, potentially resulting in the withdrawal of a conditional offer of employment. It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. By applying and submitting your resume, you authorize Yoh to review and reformat your resume to meet Yoh's hiring clients' preferences.
To learn more about Yoh's privacy practices, please see our Candidate Privacy Notice: **********************************
    $65 hourly 2d ago
  • Principal Data Engineer

    Wilson Sonsini Goodrich & Rosati, Professional Corporation (4.9 company rating)

    Palo Alto, CA

    Wilson Sonsini is the premier legal advisor to technology, life sciences, and other growth enterprises worldwide. We represent companies at every stage of development, from entrepreneurial start-ups to multibillion-dollar global corporations, as well as the venture firms, private equity firms, and investment banks that finance and advise them. The firm has approximately 1,100 attorneys in 17 offices: 13 in the U.S., two in China, and two in Europe. Our broad spectrum of practices and entrepreneurial spirit allow exceptional opportunities for professional achievement and career growth. Wilson Sonsini is actively seeking an experienced Principal Data Engineer to join our Data Science and Operations team. The Principal Data Engineer (PDE) will serve as a senior technical authority for designing and delivering an Azure-centric data ecosystem that powers the firm's data management system and its application to business processes, workflows, AI/ML solutions, enterprise search, analytics, automation, and reporting. The PDE will serve as a key driver in developing, guiding, and executing the technology strategy for the firm's central data management function, helping to advance data- and cloud-driven strategies and data security, spearhead data automation and AI/ML initiatives, and deliver high-impact data solutions and application development, all anchored in modern engineering practices and DevOps methodology. The PDE will collaborate closely with internal business units, vendors, and business stakeholders to further the firm's data foundation to accelerate digital transformation and next-generation legal services while maintaining rigorous governance, security, and client-data protections. 
    By helping to architect a secure, governed Azure data backbone that unifies authoritative information and powers search, analytics, and AI firm-wide, the Principal Data Engineer will help lead a team of data professionals committed to developing key, actionable insights and business workflows, and create a decisive competitive advantage for the firm and its clients. This position is available as a hybrid or fully remote work schedule.

    Essential Duties, Responsibilities:

    Azure-Based Data Warehouse & Lakehouse Development
    * Architect, develop, and optimize a scalable Azure-centric Lakehouse that ingests data from firm databases, such as SQL Server, and firm-managed data platforms, such as NetDocuments, SharePoint, Aderant, Workday, and SaaS practice and transactional platforms.
    * Establish robust ELT/CDC pipelines with optimized Azure workflows for near-real-time ingestion and transformation.
    * Implement semantic models that support enterprise search, knowledge-graph entities, AI stores and applications, and BI/dashboard datasets.

    Data Governance & Security
    * Embed firm-wide data-classification, retention, and ethical-wall rules into pipelines using Azure/Microsoft-based data controls, such as Purview, Defender for Cloud, and attribute-based access control (ABAC).
    * Ensure compliance with firm data security and governance policies and guidelines.
    * Champion data-governance reviews; define lineage, cataloging, and stewardship workflows.

    Master Data Management (MDM)
    * Lead the selection and rollout of an MDM hub (such as Profisee, Informatica 360, or equivalent) and its integration with Microsoft Master Data Services to unify firm data taxonomies and hierarchies.
    * Define master record selection, match/merge logic, and data-quality SLAs.
    * Integrate MDM outputs into downstream search indexes and analytics models.

    Enterprise API Management
    * Work on a team to deploy API Management controls and workflows as the firm's secure gateway for internal micro-services and external client/data-provider integrations.
    * Help enforce OAuth 2.0/OpenID Connect, policy-based throttling, and schema versioning.
    * Establish data pipelines that integrate with the firm's DevOps CI/CD workflows for automated API lifecycle management.

    Data Pipeline and Workflow Automation
    * Extend existing and new platforms (such as Power BI or Litera Foundation) with event-driven Azure Functions, Logic Apps, and Power Automate flows that push authoritative data to and from the appropriate data pipelines and transaction-management platforms.
    * Automate document-metadata enrichment, classifications, and clause-library updates to firm knowledge repositories, RAG databases, taxonomies, and knowledge graphs.

    AI & Advanced Analytics Enablement
    * Provision vector stores, databases, and embeddings pipelines for generative-AI knowledge assistants/agents; co-develop retrieval-augmented generation (RAG) patterns with legal-AI teams.
    * Help develop ML-Ops infrastructure and DevOps pipelines to support ML and AI-related initiatives.
    * Partner with data scientists and analysts to help with problem framing and scoping, data discovery and access, feature engineering and experimentation, code methodology and review, and model development and deployment.

    Strategic Leadership & Mentoring
    * Define the data-engineering roadmap, standards, and reference architectures; advocate cloud-native (Terraform, Kubernetes, container management) and DevSecOps best practices.
    * Mentor data professionals and analysts; assist with code reviews, participate in pair programming/system development, and conduct knowledge-sharing sessions.

    General Data Engineering Duties
    * Participate in defining and standardizing firm data to develop workable and practical data dictionaries, controlled vocabularies, and business taxonomies that align with practice-area and client-matter needs.
    * Assist with the buildout of knowledge-graph capabilities: design schemas and ontologies that surface relationships among key data objects to power advanced search and generative-AI solutions.
    * Prepare authoritative datasets via robust ETL/ELT pipelines: orchestrate Azure-based ingestion, transformation, and data-quality checks to ensure data is analysis-ready and trusted.
    * Establish practices and procedures toward a "single source of truth" that help reduce redundant or less-authoritative repositories, and enforce governance policies to prevent data silos and duplication.
    * Engineer scalable data pipelines: implement automated, version-controlled DataOps workflows that support continuous delivery, monitoring, and lineage tracking across the firm's analytics and AI technology stacks.
    * Lead efforts to integrate various data systems and platforms to create a unified data ecosystem where information can be found using AI-based guided enterprise search.
    * Serve as a technical data owner for all the firm's data assets and coordinate with responsible parties to establish a one-firm data approach.
    * Develop and execute the data strategy, aligning it with the firm's overall business objectives.
    * Build and scale the firm's data infrastructure into a modern, robust, best-in-industry capability.
    * Ensure that outputs and solutions (models, dashboards, insights, reports, subscriptions) are actionable and integrate seamlessly into daily business operations.
    * Stay current with industry trends and advancements in technology, including AI, data science, and engineering.

    Qualifications:
    * Ability to communicate clearly and effectively with people from both technical and non-technical backgrounds.
    * Excellent writing and oral presentation skills.
    * Experience performing root cause analysis on internal and external data, data integrations, and processes to solve specific business problems and identify opportunities for improvement.
    * Strong analytic skills related to working with structured and unstructured datasets and data models.
    * Experience developing processes that support data transformation, integrations, data structures, metadata, dependency, and workflow management.
    * A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
    * Working knowledge of creating and maintaining large data stores in SQL and cloud platforms, such as Azure or AWS.
    * Experience supporting and working with cross-functional teams in a dynamic environment.
    * Ability to collaborate with team members and interact with others throughout the firm.
    * Ability to deal responsibly with sensitive and confidential information in a discreet and secure manner.
    * Law firm experience a plus.

    Required Qualifications:
    * Proven experience in leading data-driven projects and teams to successful completion.
    * 10+ years of experience in senior data management and engineering positions, including 3+ years as a lead/architect in Azure; prior work in highly regulated industries (legal, finance, healthcare) strongly preferred. This position is expected to remain technical in scope and daily practice.
    * BA/BS/MA/MS and/or graduate degree in Computer Science, Data Science, Data Analytics, Information Systems, or an equivalent discipline.
    * Experience with advanced data technologies, Microsoft SQL Server, and related Microsoft data management and integration technologies.
    * Excellent verbal and written communication and interpersonal skills.
    Technical Expertise:
    * Data warehouse experience, such as Azure Synapse/Fabric, Data Factory, Databricks, Delta Lake, Snowflake, Cosmos DB, and Event Hub/Kafka
    * SQL Server, T-SQL, SSIS & SSRS, stored procedures
    * Python or Scala, Spark, and modern ELT patterns
    * Programming and scripting languages: Python, R, C++, Julia, JavaScript, SQL
    * Power Platform, Power BI, Data Analysis Expressions (DAX), Excel/Power Query
    * Azure Purview/Defender, RBAC/ABAC, encryption at rest/in transit, key-vault management
    * API design (REST/GraphQL), Swagger/OpenAPI, Azure APIM or Kong, OAuth 2.0
    * CI/CD with Azure DevOps or GitHub Actions; Infrastructure-as-Code (Bicep/Terraform)
    * MDM and Master Data Services implementations and data-quality frameworks
    * Agile/Scrum methodology
    * Extensive experience working with a variety of data file formats, such as JSON, XML, SQL

    Additional skills that would be highly advantageous include:
    * PowerShell
    * Regular expressions (regex)
    * VBA, MS Access & Excel
    * Documentation & process mapping
    * Dynamic visualization tools, such as Microsoft Power BI, Tableau, Domo, etc.
    * Experience developing and applying machine learning models using Python, R, SQL, and Azure Machine Learning
    * Experience integrating legal-industry, line-of-business applications, such as Litera Foundation, Intapp Open, SharePoint/OneDrive, Aderant, Salesforce.com/CRM, and NetDocs/DMS

    The primary location for this job posting is in Palo Alto, but other locations may be listed. The actual base pay offered will depend upon a variety of factors, including but not limited to the selected candidate's qualifications, years of relevant experience, level of education, professional certifications and licenses, and work location. The anticipated pay range for this position is as follows: Palo Alto, New York, San Francisco: $163,200 - $220,800 per year. Austin, Boston, Boulder, Century City, Delaware, Los Angeles, Salt Lake City, San Diego, Seattle, Washington, D.C.: $147,050 - $198,950 per year. 
The compensation for this position may include a discretionary year-end merit bonus based on performance. We offer a highly competitive salary and benefits package. Benefits information can be found here. Equal Opportunity Employer (EOE).
    $163.2k-220.8k yearly Auto-Apply 20d ago
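The MDM duties in the posting above mention master record selection and match/merge logic. As a loose, hypothetical sketch of what that means (the match key and survivorship rule here are invented for illustration; real MDM hubs such as Profisee or Informatica 360 use far richer matching), records are grouped by a normalized key and a single "golden" survivor is picked per group:

```python
from collections import defaultdict

def merge_records(records: list[dict]) -> list[dict]:
    """Toy match/merge: group candidate records by a normalized match key,
    then pick one survivor per group by completeness (most non-empty fields)."""

    def match_key(rec: dict) -> tuple:
        # Simplistic matching rule: normalized name plus email domain.
        return (rec["name"].strip().lower(),
                rec.get("email", "").split("@")[-1].lower())

    groups: dict[tuple, list[dict]] = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec)

    def completeness(rec: dict) -> int:
        # Survivorship rule: the record with the most populated fields wins.
        return sum(1 for v in rec.values() if v)

    return [max(grp, key=completeness) for grp in groups.values()]

people = [
    {"name": "Ada Lovelace", "email": "ada@firm.com", "phone": ""},
    {"name": "ada lovelace ", "email": "a.lovelace@firm.com", "phone": "555-0100"},
    {"name": "Grace Hopper", "email": "grace@firm.com", "phone": "555-0101"},
]
golden = merge_records(people)  # the two Ada records collapse into one survivor
```

In production, the match rules, merge precedence, and data-quality SLAs the posting describes would be configuration in the MDM hub rather than hand-written code.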
  • Data Engineer III - (Data Platforms)

    Cedar (4.3 company rating)

    Remote

    Our healthcare system is the leading cause of personal bankruptcy in the U.S. Every year, over 50 million Americans suffer adverse financial consequences as a result of seeking care, from lower credit scores to garnished wages. The challenge is only getting worse, as high-deductible health plans are the fastest-growing plan design in the U.S. Cedar's mission is to leverage data science, smart product design, and personalization to make healthcare more affordable and accessible. Today, healthcare providers still engage with consumers in a "one-size-fits-all" approach, and Cedar is excited to leverage consumer best practices to deliver a superior experience.

    The Role
    Cedar's Data & Integration Platforms organization builds the data infrastructure, pipelines, and tooling that power our products, analytics, and financial operations. We are looking for a Data Engineer to help evolve our data ecosystem from homegrown ETL scripts to a modern, scalable stack built on tools like dbt, Airflow, and Snowflake. You will contribute to the design and implementation of critical data pipelines (e.g., client billing, product analytics, and platform data services), improve data quality and observability, and help implement patterns and standards for how Cedar builds and operates data products. This is a hands-on individual contributor role with meaningful ownership, technical depth, and cross-functional exposure.

    What You'll Do
    * Design, build, and maintain scalable ELT/ETL pipelines that power core use cases including client billing, financial reporting, product analytics, and data services for downstream teams (Finance, Data Science, Commercial Analytics, Product).
    * Modernize legacy data flows by migrating SQL- and Liquibase-based transformations into dbt, with solid testing, documentation, and data contracts.
    * Improve reliability and observability of our data platform by applying best practices in testing, monitoring, alerting, and runbook-driven operations for pipelines orchestrated via Airflow (and/or similar tools).
    * Model data for usability and performance in Snowflake and other systems, applying sound data modeling patterns (e.g., dimensional models, entity-centric designs) for analytics and operational use cases.
    * Collaborate closely with product, finance, analytics, and integrations teams to understand requirements, define interfaces, and ensure data is accurate, well-documented, and delivered in the right form and cadence for consumers.
    * Contribute to Cedar's data platform vision by implementing standards for governance, metadata, and access, and by helping pilot tools like OpenMetadata and data quality frameworks within your projects.
    * Participate in code reviews and design discussions, helping to raise the bar on code quality, reliability, and operational excellence across the team.

    About You
    * 3+ years of hands-on data engineering (or closely related software engineering) experience, including building and supporting production data pipelines.
    * Strong SQL and Python proficiency, with experience implementing data transformations, utilities, and tooling (e.g., dbt models, Airflow DAGs, internal scripts).
    * Experience with modern data stack tools, including some combination of: Snowflake (or similar cloud data warehouse), dbt, Airflow/Dagster (or similar orchestrator).
    * Comfort designing and operating reliable pipelines, including applying testing strategies (unit/integration/dbt tests), basic monitoring and alerting, and contributing to incident/root-cause analysis.
    * Experience with data modeling and schema design for analytics and reporting use cases (e.g., star/snowflake schemas, event- or entity-centric designs).
    * Familiarity with cloud platforms, ideally AWS (e.g., S3, IAM, containerized workloads, or related infrastructure supporting data workloads).
    * Strong collaboration and communication skills, with the ability to break down ambiguous business problems into clear technical tasks and work effectively with partners across engineering, product, and business teams.
    * Bias to learn and take ownership in a complex, evolving environment; comfortable asking questions, making trade-offs explicit, and driving your work to completion.

    Bonus Points
    * Experience with metadata and data governance tools, such as OpenMetadata, DataHub, or similar catalogs, and implementing data contracts or quality frameworks (e.g., Great Expectations, dbt tests).
    * Exposure to streaming and event-driven data pipelines (e.g., Kafka, CDC tools) and integrating those into warehouse-centric architectures.
    * Prior experience in healthcare, fintech, or other highly regulated domains, particularly with standards like HL7 or FHIR, or with complex billing/financial data flows.
    * Familiarity with analytics and visualization tools (e.g., Looker, Hex) and enabling self-serve analytics through well-designed semantic layers and models.
    * Experience contributing to team-level standards, patterns, and roadmaps for data engineering or platform teams (even if not as primary owner).

    Compensation Range and Benefits
    * Salary/Hourly Rate Range*: $170,000 - $215,000
    * This role is equity eligible
    * This role offers a competitive benefits and wellness package
    *Subject to location, experience, and education

    What do we offer to the ideal candidate?
    * A chance to improve the U.S. healthcare system at a high-growth company!
    * Our leading healthcare financial platform is scaling rapidly, helping millions of patients per year
    * Unless stated otherwise, most roles have flexibility to work from home or in the office, depending on what works best for you
    * For exempt employees: unlimited PTO for vacation, sick, and mental health days; we encourage everyone to take at least 20 days of vacation per year to ensure dedicated time to spend with loved ones, explore, rest, and recharge
    * 16 weeks paid parental leave with health benefits for all parents, plus flexible re-entry schedules for returning to work
    * Diversity initiatives that encourage Cedarians to bring their whole selves to work, including three employee resource groups: be@cedar (for BIPOC-identifying Cedarians and their allies), Pridecones (for LGBTQIA+ Cedarians and their allies), and Cedar Women+ (for female-identifying Cedarians)
    * Competitive pay, equity (for qualifying roles), and health benefits, including fertility & adoption assistance, that start on the first of the month following your start date (or on your start date if it coincides with the first of the month)
    * Cedar matches 100% of your 401(k) contributions, up to 3% of your annual compensation
    * Access to hands-on mentorship, employee and management coaching, and a team discretionary budget for learning and development resources to help you grow both professionally and personally

    About us
    Cedar was co-founded by Florian Otto and Arel Lidow in 2016 after a negative medical billing experience inspired them to help improve our healthcare system. With a commitment to solving billing and patient experience issues, Cedar has become a leading healthcare technology company fueled by remarkable growth. Over the past several years, we've raised more than $350 million in funding and have the active support of Thrive and Andreessen Horowitz (a16z). As of November 2024, Cedar is engaging with 26 million patients annually and is on target to process $3.5 billion in patient payments annually. Cedar partners with more than 55 leading healthcare providers and payers, including Highmark Inc., Allegheny Health Network, Novant Health, Allina Health, and Providence.
    $170k-215k yearly Auto-Apply 6d ago
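The Cedar role above leans on dbt tests and data quality frameworks. As a loose illustration only (this is not dbt's actual API; the table and column names are invented), dbt's built-in `not_null` and `unique` checks amount to simple predicates over rows:

```python
def check_not_null(rows: list[dict], column: str) -> list[int]:
    """Return indexes of rows where `column` is missing or null,
    i.e. dbt's `not_null` test restated over in-memory rows."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows: list[dict], column: str) -> set:
    """Return the set of duplicated values in `column`,
    i.e. dbt's `unique` test restated over in-memory rows."""
    seen, dupes = set(), set()
    for row in rows:
        val = row.get(column)
        if val in seen:
            dupes.add(val)
        seen.add(val)
    return dupes

invoices = [
    {"invoice_id": "inv-1", "patient_id": "p-9"},
    {"invoice_id": "inv-2", "patient_id": None},
    {"invoice_id": "inv-2", "patient_id": "p-7"},  # duplicate key
]
null_rows = check_not_null(invoices, "patient_id")  # -> [1]
dupe_keys = check_unique(invoices, "invoice_id")    # -> {"inv-2"}
```

In dbt itself these checks are declared in a model's YAML schema file and compiled to SQL against the warehouse, so failures can gate the pipeline in CI rather than surface downstream.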
  • Senior Data Engineer

    Ingenium.Agency (4.1 company rating)

    New York, NY

    Job Description
    Senior Data Engineer | $150k - $170k

    We are a leading cloud-based mobile patient intake and registration system. Our platform allows patients to complete their paperwork from the comfort of their own homes using their smartphones or computers, minimizing in-person contact and streamlining the check-in process. With our fully customizable patient scheduling, intake, and payment platform, you can maximize efficiency and minimize waiting room activity. Our patient engagement solution also syncs seamlessly with your EMR system to keep records updated in real time.

    Role Description
    This is a full-time position for a Senior Data Engineer. You will be responsible for the day-to-day tasks associated with data engineering, including data modeling, ETL (Extract, Transform, Load), data warehousing, and data analytics. This is a hybrid role, with the majority of work located in the New York City office but with flexibility for some remote work.

    Qualifications
    * Data engineering, data modeling, and ETL (Extract, Transform, Load) skills
    * Data warehousing and data analytics skills
    * Experience with cloud-based data solutions
    * Strong problem-solving and analytical skills
    * Proficiency in programming languages such as Python, PySpark, or Java
    * Experience with SQL and database management systems
    * Knowledge of healthcare data requirements and regulations is a plus
    * Bachelor's degree in Computer Science, Engineering, or a related field

    If interested, please send your resume to: rick@ingenium.agency
    $150k-170k yearly 16d ago
  • Senior Data Engineer

    Peerspace (4.2 company rating)

    Remote

    Senior Data Engineer

    About Peerspace
    Peerspace is the leading and category-defining online marketplace for venue rentals for meetings, productions, and events. We open doors to the most inspiring spaces around the world, from lofts and mansions to storefronts and studios. Over $500M has been transacted through the platform, and our investors include GV (Google Ventures) and Foundation Capital.

    We are looking for a Senior Data Engineer who approaches data with the mindset of a Software Engineer. You won't just be moving data; you will be building the robust infrastructure and services that power our product. In this role, you will take ownership of decomposing and modernizing our legacy business data pipelines. You will be a key architect in building data services that expose critical information directly within our product, as well as for internal analytics. We need a builder who values clean code, testability, and scalability alongside data quality.

    What you will do:
    * Modernize & Decompose: Take the lead on breaking down large, monolithic data initiatives into manageable, modern services and pipelines that allow us to ship fast and scale efficiently.
    * Architecture & Design: Drive the strategy for new data storage patterns and the services required to serve data back to the core product, as well as for internal analytics consumption.
    * Engineering Excellence: Apply software engineering best practices, such as CI/CD, unit testing, and version control, to our data ecosystem.
    * End-to-End Data Ownership: Act with a strong ownership mindset, digging into the "why" and "how" to find answers in a startup-style environment.
    * Collaborate & Mentor: We're a small and nimble team. You will be a primary collaborator across the company, from Product and Engineering to Marketing and Finance, serving as a partner and mentor to bring the whole team along in building production-grade systems.

    What we are looking for:
    * Software-Driven Data Engineering: You have a deep understanding of software design patterns and a strong foundation in data engineering basics. You are opinionated on best practices regarding data orchestration and storage, and you know how to apply engineering rigor (modularity, testing, and maintainability) to data workflows.
    * Strategic Pragmatism: You have a track record of balancing short-term business requirements with long-term technical health. You know when to build for the future and when a tactical solution is the right call for the moment.
    * Python & SQL Mastery: You are highly proficient in Python and can write complex, performant SQL.
    * Architectural Grit: You have a track record of breaking down high-level business requirements into technical roadmaps and executing on them.
    * Collaborative Ownership: You are a proactive communicator who takes pride in owning a problem from discovery to resolution. You enjoy digging in yourself to find answers and partnering with others to implement the right solution.
    * The "Startup" Spirit: You are comfortable with ambiguity. You don't wait for a manual; you enjoy digging into the code to figure out how things work.

    Bonus Points:
    * Experience or a strong interest in deploying ML pipelines and working with LLM tooling.
    * Experience with dbt.
    * Experience with real-time or near-real-time data processing (streaming).

    Our Tech Stack: Knowing these specific tools is not a requirement for application, but we like to give you a heads-up on what you would be working with daily: Google Cloud Platform (GCP), BigQuery, Airflow, dbt, Metaplane, Segment, Postgres, Tableau.

    Why Peerspace?
    Peerspace is proudly certified as a Great Place to Work™ and we're a remote-first company with team members located in cities around the globe.
    Beyond competitive salary and equity compensation, we provide:
    * 100% employee coverage of medical, dental, and vision insurance
    * $500 annual professional development allowance
    * Discount on all Peerspace bookings
    * Laptop, high-res display, and stipend to set up a home office
    * Monthly cell phone and internet credit
    * Coworking membership if needed (in lieu of home office)
    * Flexible take-it-as-you-need-it time off policy
    * Wellness Fridays observed company-wide
    * Annual in-person, all-company offsites and team-building events

    The annual salary range for this role is $160,000 to $175,000. The actual salary will vary depending on experience, skills, and abilities, as well as internal equity and market data.

    Diversity
    At Peerspace, we're dedicated to creating a team that's diverse, equitable, and inclusive. We believe bringing people together from different backgrounds and identities makes us stronger and better serves the Peerspace community. We'd especially like to encourage applicants from different backgrounds, locations, and experiences.

    At Peerspace, we are committed to maintaining a secure and transparent hiring experience. Please be aware that scammers sometimes impersonate companies like ours by posting fake job listings or reaching out to job seekers in misleading ways. All legitimate communication from Peerspace will come from an *********************** email address. We do not contact candidates through personal email accounts, third-party messaging apps such as WhatsApp or Microsoft Teams, or through the Peerspace platform's internal messaging system. All interviews are conducted only by phone or Zoom, and we will never ask for sensitive personal or financial information such as your Social Security number or bank details during the early stages of the hiring process. If you receive any message that seems suspicious or does not align with these practices, please contact us immediately at ****************** so we can investigate.
We only post our job openings and process applications through our official careers page at ********************************
    $160k-175k yearly Auto-Apply 21d ago
  • Junior Data Engineer

    Havas (3.8 company rating)

    Lima, OH jobs

    As a Jr. Data Engineer at Havas you will serve as a technical backbone for our analytics capabilities. You will focus on the maintenance, enhancement, and expansion of our proprietary marketing data platform built on Google Cloud Platform (GCP) using BigQuery and Google Dataform. In this role, you will work in a cross-functional environment, bridging the gap between raw data and actionable insights. While your primary focus will be building SQL pipelines in Dataform, you will also have the opportunity to support the wider engineering team on larger-scale ingestion pipelines using Cloud Composer (Apache Airflow). This is a hands-on opportunity perfect for someone looking to grow their expertise in the modern data stack, cloud infrastructure, and marketing analytics.
Key Responsibilities
* Develop and maintain SQL pipelines in Google Dataform.
* Manage datasets in BigQuery and optimize data models for dashboards in Power BI and Looker Studio.
* Support workflow orchestration with Cloud Composer (Apache Airflow).
* Ensure data accuracy and troubleshoot discrepancies.
Requirements
* Solid foundation in SQL and basic Python skills.
* Interest in working with Google Cloud Platform (GCP).
* Strong problem-solving mindset and clear communication skills.
* Advanced English skills are a must.
Why join us?
* Global Exposure: Work with international teams and global brands.
* Long-Term Contract: Stability and the opportunity to grow within a global network.
* Remote Work Model: Virtual work environment with a healthy work-life balance.
* Health & Wellness: EPS 100% health insurance and wellness initiatives.
* Culture & Community: Inclusive, collaborative, and purpose-driven workplace. 
* Competitive Compensation: Full time contract monthly salary + food benefit card
If you're passionate about working on exciting projects, looking to grow in the modern data stack, and eager to be part of a collaborative team that values innovation and learning, we would love to hear from you! Apply now to join the Noise Digital team within Havas Media Group and take your career to new heights.
Contract Type: Permanent
Here at Havas, across the group, we pride ourselves on our commitment to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual's ability to perform their job.
    $74k-100k yearly est. Auto-Apply 6d ago
  • Data Engineer, Enterprise Data, Analytics and Innovation

    Vaniam Group (4.0 company rating)

    Remote

    at Vaniam Group
Data Engineer, Enterprise Data, Analytics and Innovation, Digital Innovation
What You'll Do
Are you passionate about building robust data infrastructure and enabling innovation through engineering excellence? As our Data Engineer, your goal is to own and evolve the foundation of our data infrastructure. You will be central in ensuring data reliability, scalability, and accessibility across our lakehouse and transactional systems. This role is ideal for someone who thrives at the intersection of engineering and innovation, ensuring our data platforms are robust today while enabling the products of tomorrow.
A Day in the Life
Lakehouse and Pipelines
* Design, build, and operate reliable ETL and ELT pipelines in Python and SQL
* Manage ingestion into Bronze, standardization and quality in Silver, and curated serving in Gold layers of our Medallion architecture
* Maintain ingestion from transactional MySQL systems into Vaniam Core to keep production data flows seamless
* Implement observability, data quality checks, and lineage tracking to ensure trust in all downstream datasets
Data Modeling and Governance
* Develop schemas, tables, and views optimized for analytics, APIs, and product use cases
* Apply and enforce best practices for security, privacy, compliance, and access control, ensuring data integrity across sensitive healthcare domains
* Maintain clear and consistent documentation for datasets, pipelines, and operating procedures
Integration of New Data Sources
* Lead the integration of third-party datasets, client-provided sources, and new product-generated data into Vaniam Core
* Partner with product and innovation teams to build repeatable processes for onboarding new data streams
* Ensure harmonization, normalization, and governance across varied data types (scientific, engagement, operational)
Analytics and Predictive Tools
* Collaborate with the innovation team to prototype and productionize analytics, predictive features, and decision-support tools
* Support dashboards, APIs, and services that activate insights for internal stakeholders and clients
* Work closely with Data Science and AI colleagues to ensure engineered pipelines meet modeling and deployment requirements
Reliability and Optimization
* Monitor job execution, storage, and cluster performance, ensuring cost efficiency and uptime
* Troubleshoot and resolve data issues, proactively addressing bottlenecks
* Conduct code reviews, enforce standards, and contribute to CI/CD practices for data pipelines
What You Must Have
Education and Experience
* 5+ years of professional experience in data engineering, ETL, or related roles
* Strong proficiency in Python and SQL for data engineering
* Hands-on experience building and maintaining pipelines in a lakehouse or modern data platform
* Practical understanding of Medallion architectures and layered data design
Skills and Competencies
* Familiarity with modern data stack tools, including Spark or PySpark, workflow orchestration (Airflow, dbt, or similar), testing and observability frameworks, and containers (Docker) with Git-based version control
* Excellent communication skills, problem-solving mindset, and a collaborative approach
What You Might Have, but Isn't Required
* Experience with Databricks and the Microsoft Azure ecosystem
* Expertise with Delta Lake formats, metadata management, and data catalogs
* Familiarity with healthcare, scientific, or engagement data domains
* Experience exposing analytics through APIs or lightweight microservices
The Team You'll Work Closest With
You will collaborate closely with the innovation team to prototype and productionize analytics solutions. Your main contacts will be Data Science and AI colleagues, product and innovation leaders, and internal stakeholders who rely on data-driven insights. 
You will work remotely with flexibility, growth opportunities, and the ability to influence how data shapes the future of medical communications, helping to turn raw data into client-ready insights that enable measurable healthcare impact.
Why You'll Love Us:
* 100% remote environment with opportunities for local meet-ups
* Positive, diverse, and supportive culture
* Passionate about serving clients focused on Cancer and Blood diseases
* Investment in you with opportunities for professional growth and personal development through Vaniam Group University
* Health benefits - medical, dental, vision
* Generous parental leave benefit
* Focused on your financial future with a 401(k) Plan and company match
* Work-Life Balance and Flexibility: Flexible Time Off policy for rest and relaxation; Volunteer Time Off for community involvement
* Emphasis on Personal Wellness: virtual workout classes; discounts on tickets, events, hotels, child care, groceries, etc.; Employee Assistance Programs
Salary offers are based upon several factors including experience, education, skills, training, demonstrated qualifications, location, and organizational need. The range for this role is $110,000 - $125,000. Salary is one component of the total earnings and rewards package offered.
About Us:
Vaniam Group is a people-first, purpose-driven, independent network of healthcare and scientific communications agencies committed to helping biopharmaceutical companies realize the full potential of their compounds in the oncology and hematology marketplace. Founded in 2007 as a virtual-by-design organization, Vaniam Group harnesses the talents and expertise of team members around the world. 
For more information, visit ******************** Applicants have rights under Federal Employment Laws to the following resources: Family & Medical Leave Act (FMLA) poster - ********************************************* EEOC Know Your Rights poster - *************************** Employee Polygraph Protection Act (EPPA) poster - **************************************************************************
    $110k-125k yearly Auto-Apply 60d+ ago
  • Staff Data Engineer

    Nerdwallet (4.6 company rating)

    Remote

    At NerdWallet, we're on a mission to bring clarity to all of life's financial decisions, and every great mission needs a team of exceptional Nerds. We've built an inclusive, flexible, and candid culture where you're empowered to grow, take smart risks, and be unapologetically yourself (cape optional). Whether remote or in-office, we support how you thrive best. We invest in your well-being, development, and ability to make an impact because when one Nerd levels up, we all do. Data engineers are the builders behind the insights that drive smarter decisions. They design and scale reliable data pipelines and models that power analytics, experimentation, and strategic decision-making across the company. As a Staff Data Engineer, you'll tackle complex, cross-functional data challenges, partnering closely with stakeholders across product, engineering, and business teams. You'll combine strong technical expertise with clear communication and thoughtful collaboration to ensure our data systems are not only technically sound but also deeply aligned with NerdWallet's strategic goals. As part of our embedded data model, you'll work directly within a product vertical, shaping the data that drives business decisions, product innovation, and user experiences. This is a unique opportunity to see your work translate into real-world outcomes, accelerating NerdWallet's mission through data that's closer than ever to the business. You'll design, develop, and maintain data systems and pipelines that serve as the foundation for analytics and product innovation in a fast-paced, ever-evolving environment. The right candidate thrives in ambiguity: comfortable toggling between projects, adapting to shifting priorities, and leading through change. You'll elevate the team's impact by leveraging both your technical depth and your ability to influence, mentor, and foster a culture of innovation, reliability, and continuous improvement. 
This role sits within Core Engineering and reports to a Senior Manager of Data Engineering. You'll join a passionate team of Nerds who believe clean, scalable data is at the heart of helping consumers make smarter financial decisions.
Where you can make an impact:
* Lead the design, development, and maintenance of business-critical data assets, ensuring they are accurate, reliable, and aligned with evolving business priorities
* Drive technical innovation and process excellence, evaluating emerging technologies and implementing scalable, efficient solutions that improve data pipeline performance and reliability
* Tackle complex technical challenges, balancing scalability, security, and performance, while providing clear rationale for architectural decisions and aligning outcomes across teams
* Ensure data pipeline reliability and observability, proactively identifying and resolving issues, investigating anomalies, and improving monitoring to safeguard data integrity
* Build trust and alignment across cross-functional teams through transparent communication, collaborative problem-solving, and a deep understanding of partner needs
* Bring clarity and direction to ambiguity, taking ownership of initiatives that span multiple domains or teams, and providing technical leadership to ensure successful delivery
* Prioritize work strategically, balancing business impact, risk, and execution to drive measurable outcomes that support organizational goals
* Act as a trusted technical advisor and thought leader, shaping the team's long-term architecture and influencing best practices
* Foster a culture of technical excellence and continuous learning, mentoring engineers and championing modern data engineering practices, including AI and automation-enabled solutions
Your experience:
* 7+ years of relevant professional experience in data engineering
* 5+ years of experience with AWS, Snowflake, dbt, and Airflow
* Advanced proficiency in Python and SQL
* Working knowledge of relational databases and query performance tuning (SQL)
* Working knowledge of streaming technologies such as Storm, Kafka, Kinesis, and Flume
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent professional experience)
* Advanced proficiency in applying principles of logical thinking to define problems, collect data, establish facts, and draw valid conclusions
* Experience designing, building, and operating robust data systems with reliable monitoring and logging practices
* Strong communication skills, both written and verbal, with the ability to articulate information to team members of all levels and varying degrees of applicable knowledge throughout the organization
Where: This role will be remote (based in the U.S.). We believe great work can be done anywhere. No matter where you are based, NerdWallet offers benefits and perks to support the physical, financial, and emotional well-being of you and your family.
What we offer:
Work Hard, Stay Balanced (Life's a series of balancing acts, eh?)
* Industry-leading medical, dental, and vision health care plans for employees and their dependents
* Rejuvenation Policy - Flexible Vacation Time Off + 11 holidays + holiday company shutdown
* New Parent Leave for employees with a newborn child or a child placed with them for adoption or foster care
* Mental health support
* Paid sabbatical after 5 years for Nerds to recharge, gain knowledge, and pursue their interests
* Health and Dependent Care FSA and HSA Plan with monthly NerdWallet contribution
* Monthly Wellness Stipend, Cell Phone Stipend, and Wifi Stipend (only remote Nerds are eligible for the Wifi Stipend)
* Work from home equipment stipend and co-working space subsidy (only remote Nerds are eligible for these stipends)
Have Some Fun! 
(Nerds are fun, too)
* Nerd-led group initiatives - Employee Resource Groups for Parents, Diversity and Inclusion, Women, LGBTQIA, and other communities
* Hackathons and team events across all teams and departments
* Company-wide events like NerdLove (employee appreciation) and our annual Charity Auction
* Our Nerds love to make an impact by paying it forward - take 8 hours of volunteer time off per quarter and donate to your favorite causes with a company match
Plan for your future (And when you retire on your island, remember the little people)
* 401K with 4% company match
* Be the first to test and benefit from our new financial products and tools
* Financial wellness, guidance, and unlimited access to a Certified Financial Planner (CFP) through Northstar
* Disability and Life Insurance with employer-paid premiums
If you are based in California, we encourage you to read this important information for California residents linked here. NerdWallet is committed to pursuing and hiring a diverse workforce and is proud to be an equal opportunity employer. We prohibit discrimination and harassment on the basis of any characteristic protected by applicable federal, state, or local law, so all qualified applicants will receive consideration for employment. NerdWallet will consider qualified applicants with a criminal history pursuant to the California Fair Chance Act and the San Francisco Fair Chance Act, which requires this notice, as well as the Los Angeles Fair Chance Act, which requires this notice. NerdWallet participates in the Department of Homeland Security U.S. Citizenship and Immigration Services E-Verify program for all US locations. For more information, please see: E-Verify Participation Poster (English+Spanish/Español) Right to Work Poster (English) / (Spanish/Español) #LI-4 #LI-MPLX
    $129k-170k yearly est. Auto-Apply 60d+ ago
  • Sr. Data Engineer (Data Platforms)

    Cedar (4.3 company rating)

    Remote

    Our healthcare system is the leading cause of personal bankruptcy in the U.S. Every year, over 50 million Americans suffer adverse financial consequences as a result of seeking care, from lower credit scores to garnished wages. The challenge is only getting worse, as high deductible health plans are the fastest growing plan design in the U.S. Cedar's mission is to leverage data science, smart product design and personalization to make healthcare more affordable and accessible. Today, healthcare providers still engage with their consumers in a “one-size-fits-all” approach, and Cedar is excited to leverage consumer best practices to deliver a superior experience.
The Role
Cedar's Data & Integration Platforms organization builds the data infrastructure, pipelines and tooling that power our products, analytics, and financial operations. We are looking for a Senior Data Engineer to help evolve our data ecosystem from homegrown ETL scripts to a modern, scalable stack built on tools like dbt, Airflow and Snowflake. You will design and own critical data pipelines (e.g., client billing, product analytics, and platform data services), improve data quality and observability, and help define patterns and standards for how Cedar builds and operates data products. This is a high-impact individual contributor role with significant autonomy, technical ownership, and cross-functional exposure.
What You'll Do
Design, build, and own scalable ELT/ETL pipelines that power core use cases including client billing, financial reporting, product analytics and data services for downstream teams (Finance, Data Science, Commercial Analytics, Product). Modernize legacy data flows by migrating SQL- and Liquibase-based transformations into dbt, with robust testing, documentation and data contracts. 
Improve reliability and observability of our data platform by implementing best practices in testing, monitoring, alerting and runbook-driven operations for pipelines orchestrated via Airflow (and/or similar tools). Model data for usability and performance in Snowflake and other systems, applying dimensional and domain-driven design patterns where appropriate (e.g., for analytics core models and financial engineering services). Partner closely with product, finance, analytics and integrations teams to understand requirements, define interfaces, and ensure data is accurate, well-documented, and delivered in the right form and cadence for consumers. Contribute to Cedar's data platform vision by helping decouple data infrastructure from data services, establishing standards for governance, metadata, and access, and piloting tools like OpenMetadata and data quality frameworks. Provide technical mentorship to other engineers, upleveling our data engineering practices in areas like code quality, reviews, architecture, and operational excellence. Balance short-term delivery with long-term architecture, making pragmatic trade-offs while moving us toward a clear “North Star” data platform that supports emerging use cases like AI/ML, personalization and experimentation. About You 5+ years of hands-on data engineering (or closely related software engineering) experience, including ownership of production data pipelines and systems at scale. Strong SQL and Python proficiency, with experience building data transformations, utilities and tooling (e.g., dbt models, Airflow DAGs, internal libraries). Deep experience with modern data stack tools, including several of: Snowflake (or similar cloud data warehouse), dbt, Airflow/Dagster (or similar orchestrator). Proven track record designing and operating reliable pipelines, including testing strategies (unit/integration/dbt tests), monitoring, alerting, and incident/root-cause analysis for data issues. 
Experience with data modeling and schema design for analytics, reporting and operational use cases (e.g., dimensional models, entity-centric designs, or medallion-style architectures). Familiarity with cloud platforms, ideally AWS (e.g., use of S3, IAM, containerized workloads, or related infrastructure supporting data workloads). Strong collaboration and communication skills, with the ability to translate ambiguous business problems into clear technical requirements and to work effectively with partners across engineering, product and business teams. High ownership and bias to action in complex, evolving environments-comfortable operating with partial information, making trade-offs explicit, and driving work to completion. Bonus Points Experience with metadata and data governance tools, such as OpenMetadata, DataHub or similar catalogs, and implementing data contracts or quality frameworks (e.g., Great Expectations, dbt tests). Exposure to streaming and event-driven data pipelines (e.g., Kafka, CDC tools) and integrating those into warehouse-centric architectures. Prior experience in healthcare, fintech, or other highly regulated domains, particularly with standards like HL7 or FHIR, or with complex billing/financial data flows. Familiarity with analytics and visualization tools (e.g., Looker, Hex) and enabling self-serve analytics through well-designed semantic layers and models. Experience helping define team-level standards, patterns, and roadmaps for data engineering or platform teams. Compensation Range and Benefits: Salary/Hourly Rate Range*: $195,500 - $247,250 This role is equity eligible This role offers a competitive benefits and wellness package *Subject to location, experience, and education #LI-REMOTE #LI-DS1 What do we offer to the ideal candidate? A chance to improve the U.S. healthcare system at a high-growth company! 
Our leading healthcare financial platform is scaling rapidly, helping millions of patients per year. Unless stated otherwise, most roles have flexibility to work from home or in the office, depending on what works best for you.
For exempt employees:
* Unlimited PTO for vacation, sick and mental health days; we encourage everyone to take at least 20 days of vacation per year to ensure dedicated time to spend with loved ones, explore, rest and recharge
* 16 weeks paid parental leave with health benefits for all parents, plus flexible re-entry schedules for returning to work
* Diversity initiatives that encourage Cedarians to bring their whole selves to work, including three employee resource groups: be@cedar (for BIPOC-identifying Cedarians and their allies), Pridecones (for LGBTQIA+ Cedarians and their allies) and Cedar Women+ (for female-identifying Cedarians)
* Competitive pay, equity (for qualifying roles), and health benefits, including fertility & adoption assistance, that start on the first of the month following your start date (or on your start date if your start date coincides with the first of the month)
* Cedar matches 100% of your 401(k) contributions, up to 3% of your annual compensation
* Access to hands-on mentorship, employee and management coaching, and a team discretionary budget for learning and development resources to help you grow both professionally and personally
About us
Cedar was co-founded by Florian Otto and Arel Lidow in 2016 after a negative medical billing experience inspired them to help improve our healthcare system. With a commitment to solving billing and patient experience issues, Cedar has become a leading healthcare technology company fueled by remarkable growth. Over the past several years, we've raised more than $350 million in funding & have the active support of Thrive and Andreessen Horowitz (a16z). 
As of November 2024, Cedar is engaging with 26 million patients annually and is on target to process $3.5 billion in patient payments annually. Cedar partners with more than 55 leading healthcare providers and payers including Highmark Inc., Allegheny Health Network, Novant Health, Allina Health and Providence.
    $81k-113k yearly est. Auto-Apply 5d ago
  • Data Engineer (AI Enablement)

    Octagon (4.0 company rating)

    Remote

    THE JOB / Data Engineer (AI Enablement)
STRATEGY / Responsible for building and operating the data foundations that power Octagon's AI solutions and enterprise search.
***Our headquarters are in Stamford, CT, but the location of this position can be 100% remote for qualified candidates.
You're a systems-minded builder who turns messy, multi-source data into reliable, searchable, and governed knowledge. Your mission is to stand up the pipelines, vector search, and metadata standards that make AI tools accurate, fast, and safe. You'll partner closely with the Solutions Engineer (peer role) to take prototypes and ship durable infrastructure (ingestion, embeddings, indexing, and APIs) so teams can find and use what they need. You'll report to the Director, Data Strategy and work across departments to reduce manual effort, improve data quality, and enable AI-powered workflows at scale.
THE WORK YOU'LL DO
* Data foundations: Design and operate the vector database/search layer (e.g., FAISS/pgvector/Milvus) and document-chunking/embedding pipelines that make Octagon's content discoverable and auditable.
* Scalable pipelines for AI/ML/LLM: Implement and maintain ELT/ETL to support downstream workflows such as data labeling, classification, and document parsing; build robust validations, lineage, and observability.
* Retrieval APIs: Expose governed retrieval endpoints that respect permissions (ACLs), support metadata filters, and return source snippets/IDs for grounding and citations.
* Data structuring & manipulation: Normalize, transform, and move JSON and other structured payloads cleanly through workflows to ensure reliable handoffs and automation outputs.
* Align & collaborate: Align product peers, design, data science, engineering, and commercial teams around a unified roadmap and shared data contracts.
* Operationalize prototypes: Take MVPs from the Solutions Engineer and productionize with CI/CD, telemetry, cost/usage guardrails, and pilot → rollout gating.
* Reliability & security: Build monitoring (freshness, re-index SLAs, retrieval quality), secrets management, access controls, and audit logging aligned with enterprise governance.
Flexibility and willingness to travel and work weekends or holidays as needed. Anticipated travel level: Low (0-15%).
THE BIGGER TEAM YOU'LL JOIN
Recognized as one of the “Best Places to Work in Sports”, Octagon is the global sports, entertainment, and experiential marketing arm of the Interpublic Group. We take pride in being Playmakers - finding insightful, bold ways to create play in our work, our lives, and in the world. We believe in the power of play to create big ideas and unlock potential for our clients and talent. We can put ourselves in the shoes of fans because we ARE fans - of sports, entertainment, and culture at large. This expertise allows us to continually evolve the fan experience across sports and entertainment alongside some of the biggest brands and talent in the world. The world needs play more than ever. Are you a Playmaker?
WHO WE'RE LOOKING FOR
* 3+ years (or equivalent portfolio) building data systems: data modeling, ELT/ETL, Python + SQL; experience with cloud object storage and relational databases.
* Hands-on with embeddings and vector databases (e.g., FAISS/pgvector/Milvus) and document processing pipelines for RAG-style retrieval.
* Scalable pipeline experience supporting AI/ML/LLM use cases (labeling, classification, doc parsing) and partnering closely with Data Science and Data Labeling teams.
* Data structuring & manipulation expertise: cleanly normalizing and transforming JSON/Parquet/CSV payloads; designing resilient data contracts and schemas.
* Orchestration/ops: Airflow/Prefect (or similar), CI/CD, structured logging/monitoring, cost/usage guardrails; secure secrets management.
* Strong collaboration and communication skills; proven ability to align product/design/engineering/commercial stakeholders around a unified roadmap. 
Nice-To-Haves
* Enterprise connectors and productivity stacks (e.g., Microsoft 365/SharePoint/Teams/Graph, Copilot or Copilot Studio/Power Automate; Google Workspace; Salesforce; DAMs).
* Experience implementing LLM inference patterns, similarity search, guardrails, and memory; familiarity with agent frameworks or custom orchestration.
* Additional languages for systems work (e.g., C++, C#, Java, or Go).
* Containers (Docker), GitHub Actions, IaC; lightweight internal UIs (Streamlit or R Shiny) to expose services.
* Familiarity with marketing/media-measurement datasets and associated normalization/quality checks.
The base range for this position is $90,000 - $100,000. Where an employee or prospective employee is paid within this range will depend on, among other factors, actual ranges for current/former employees in the subject position; market considerations; budgetary considerations; tenure and standing with the company (applicable to current employees); as well as the employee's/applicant's background, pertinent experience, and qualifications. We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, age, disability, gender identity, marital or veteran status, or any other protected class.
    $90k-100k yearly 50d ago
  • Junior Data Engineer

    Havas (3.8 company rating)

    Bogota, NJ jobs

    Ready to take your data career to the next level? Let's make it happen! As a Jr. Data Engineer at Havas you will serve as a technical backbone for our analytics capabilities. You will focus on the maintenance, enhancement, and expansion of our proprietary marketing data platform built on Google Cloud Platform (GCP) using BigQuery and Google Dataform. In this role, you will work in a cross-functional environment, bridging the gap between raw data and actionable insights. While your primary focus will be building SQL pipelines in Dataform, you will also have the opportunity to support the wider engineering team on larger-scale ingestion pipelines using Cloud Composer (Apache Airflow). This is a hands-on opportunity perfect for someone looking to grow their expertise in the modern data stack, cloud infrastructure, and marketing analytics.
Key Responsibilities
* Develop and maintain SQL pipelines in Google Dataform.
* Manage datasets in BigQuery and optimize data models for dashboards in Power BI and Looker Studio.
* Support workflow orchestration with Cloud Composer (Apache Airflow).
* Ensure data accuracy and troubleshoot discrepancies.
Requirements
* Solid foundation in SQL and basic Python skills.
* Interest in working with Google Cloud Platform (GCP).
* Strong problem-solving mindset and clear communication skills.
* Advanced English skills are a must.
Why join us?
* Global Exposure: Work with international teams and global brands.
* Long-Term Contract: Stability and the opportunity to grow within a global network.
* Remote Work Model: Virtual work environment with a healthy work-life balance.
* Health & Wellness: EPS 100% health insurance and wellness initiatives.
* Culture & Community: Inclusive, collaborative, and purpose-driven workplace. 
* Competitive Compensation: Full time contract monthly salary + food benefit card
If you're passionate about working on exciting projects, looking to grow in the modern data stack, and eager to be part of a collaborative team that values innovation and learning, we would love to hear from you! Apply now to join the Noise Digital team within Havas Media Group and take your career to new heights.
Contract Type: Permanent
Here at Havas, across the group, we pride ourselves on our commitment to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual's ability to perform their job.
    $77k-105k yearly est. Auto-Apply 37d ago
  • Data Scientist (or AI Engineer) - Hybrid

    Elder Research 3.9company rating

    Tampa, FL jobs

    Job Title: Data Scientist (or AI Engineer) Workplace: Hybrid; 2-3 days per week onsite at MacDill AFB, FL Clearance: Top Secret (TS) Elder Research is seeking mid to senior-level Data Scientists (AI Engineers) to support a U.S. national security client at MacDill AFB in Tampa, FL. In this mission-focused role, you will apply advanced data science and AI/ML techniques to enable intelligence analysts to uncover hidden patterns, enhance decision-making, and drive intelligence innovation in support of national security. This hybrid role offers the opportunity to work at the cutting edge of analytics and defense, directly impacting military operations across the U.S. Intelligence and Defense community. Our team integrates expertise in data science, AI/ML, and intelligence operations to deliver data-driven solutions for the U.S. National Security enterprise. The work directly contributes to decision-making, mission readiness, and the ability of operators to succeed in a complex global battlespace. Key Responsibilities: As a Data Scientist on this program, you will: Lead and conduct multifaceted analytic studies on large, diverse data sets. Develop and deploy AI/ML models to enrich data and provide utility to intelligence analysts. Perform complex data assessments to determine the operational relevance of proposed data sets for answering priority intelligence requirements. Build and maintain data pipelines to ingest, transform, and structure both structured and unstructured data. Discover links, patterns, and connections in disparate datasets, providing analysts with actionable context. Experiment with exploratory mathematical, statistical, and computational techniques to identify new insights. Provide solutions to command-level data challenges through rigorous analysis and innovative applications. Support the development of tailored data environments and tools for intelligence analysts. 
Requirements: Education: Bachelor's degree in a technical field (e.g., Engineering, Mathematics, Statistics, Physics, Computer Science, IT, or related discipline). Years of Experience: 3-6 years for mid-level data scientists; 6+ years for senior-level data scientists. Clearance: active Top Secret (TS). AI/ML Expertise: Demonstrated experience applying Artificial Intelligence (AI) or Machine Learning (ML) to real-world problems. Hands-on experience training, fine-tuning, and configuring AI/ML models and deploying them into production environments. Proficiency in at least one AI/ML branch (e.g., Natural Language Processing, computer vision, generative AI, agentic AI). Programming & Tools: Strong programming skills in Python, R, Java, Rust, or similar languages. Experience with AI/ML Ops, SQL/NoSQL databases, and frameworks such as SQLAlchemy, Flask, Streamlit, Dash, React, and Spark. Familiarity with APIs, CI/CD pipelines, and web technologies (JavaScript, HTML/CSS). Analytical & Research Skills: Ability to interpret and analyze structured and unstructured data using exploratory mathematical and statistical techniques. Experience cleaning, transforming, and organizing data to support advanced analytics. Ability to experiment with datasets, derive insights, and provide innovative solutions to complex mission challenges. Collaboration & Communication: Ability to work independently as well as within cross-functional teams. Strong communication, problem-solving, and critical-thinking skills. Capable of coordinating research and analytic activities with diverse stakeholders. Preferred Qualifications: Active TS/SCI clearance. Experience supporting the intelligence domain, particularly Intelligence, Surveillance, and Reconnaissance (ISR). Previous work supporting Special Operations Forces (SOF) missions or U.S. national security customers. Demonstrated expertise across multiple AI/ML disciplines (e.g., NLP, computer vision, generative AI, agentic AI).
Why apply to this position at Elder Research? Competitive Salary and Benefits. Important Work / Make a Difference: supporting U.S. national security. Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract. People-Focused Culture: we prioritize work-life balance and provide a supportive, positive, and collaborative work environment, as well as opportunities for professional growth and advancement. Company Stock Ownership: all employees are provided with shares of the company each year based on company value and profits. About Elder Research, Inc.: People Centered. Data Driven. Elder Research is a fast-growing consulting firm specializing in predictive analytics. Having been in the data mining business for almost 30 years, we pride ourselves on our ability to find creative, cutting-edge solutions to real-world problems. We work hard to provide the best value to our clients and allow each person to contribute their ideas and put their skills to use immediately. Our team members are passionate, curious, life-long learners. We value humility, servant-leadership, teamwork, and integrity. We seek to serve our clients and our teammates to the best of our abilities. In keeping with our entrepreneurial spirit, we want candidates who are self-motivated, with an innate curiosity and strong teamwork. Elder Research believes in continuous learning and community: each week the entire company attends a Tech Talk, and each office location provides lunch. Elder Research provides a supportive work environment with established parental, bereavement, and PTO policies. By prioritizing a healthy work-life balance, with reasonable hours, solid pay, low travel, and extremely flexible time off, Elder Research enables and encourages its employees to serve others and enjoy their lives. Elder Research, Inc. is an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability. Elder Research is a Government contractor and many of our positions require US Citizenship.
    $66k-97k yearly est. 60d+ ago
  • Data Scientist (Remote)

    Elder Research 3.9company rating

    Arlington, VA jobs

    Job Title: Data Scientist Workplace: Remote (preference for candidates located in the National Capital Region - DMV) Clearance Required: a TS, CBP BI, or DHS Suitability clearance adjudicated within the past 4 years. Supports U.S. Customs and Border Protection. As a Data Scientist, with little or no supervision, apply expert knowledge in statistical analysis, complex data mining, and artificial intelligence to extract value from data. Provide consulting relating to the data mining and analysis of data from a range of sources to transform raw data into concise and actionable insights. Design and implement data-driven solutions, with specific focus on advanced analytical methods, data models, and visualizations. Develop quantitative simulations and models to provide descriptive and predictive analytics solution recommendations. Identify trends and problems through complex big data analysis. Stay current with emerging tools and techniques in machine learning, statistical modeling, and analytics. Requirements: Six (6) years of relevant experience in applied research, big data analytics, statistics, applied mathematics, data science, computer science, operations research, or another closely related quantitative or mathematical discipline. At least three (3) years of direct experience in machine learning. Advanced Degree (Masters or PhD) in Statistics, Applied Mathematics, Data Science, Computer Science, Operations Research, or another closely related quantitative or mathematical discipline. A PhD degree may be substituted for up to three (3) years of relevant experience. Demonstrated knowledge of data mining methods, databases, data visualization, and machine learning. Ability to communicate analysis techniques, concepts, and products.
Ability to develop data-driven solutions, data models, and visualizations. Preferred Experience and Skills: Expertise using Qlik to embed visualizations into webpages. Familiarity with Databricks or similar cloud-based distributed database technologies. Familiarity with PySpark and Python. Comfortable developing complex SQL queries to extract, transform, and load data. Experience with analytic techniques such as anomaly detection, clustering, and time-series modeling (e.g., ARIMA). Experience implementing NLP concepts including preprocessing (stemming, etc.), TF-IDF, Named Entity Recognition, and LLMs. Why apply to this position at Elder Research? Competitive Salary and Benefits. Important Work / Make a Difference: supporting Customs and Border Protection in their efforts to protect the United States. Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract. Remote Work: in an industry of declining remote work opportunities. People-Focused Culture: we prioritize work-life balance and provide a positive, supportive work environment, as well as opportunities for professional growth and advancement. Company Stock Ownership: all employees are provided with shares of the company each year based on company value and profits. About Elder Research, Inc.: People Centered. Data Driven. Elder Research is a fast-growing consulting firm specializing in predictive analytics. Having been in the data mining business for almost 30 years, we pride ourselves on our ability to find creative, cutting-edge solutions to real-world problems. We work hard to provide the best value to our clients and allow each person to contribute their ideas and put their skills to use immediately. Our team members are passionate, curious, life-long learners. We value humility, servant-leadership, teamwork, and integrity. We seek to serve our clients and our teammates to the best of our abilities.
In keeping with our entrepreneurial spirit, we want candidates who are self-motivated, with an innate curiosity and strong teamwork. Elder Research believes in continuous learning and community: each week the entire company attends a Tech Talk, and each office location provides lunch. Elder Research provides a supportive work environment with established parental, bereavement, and PTO policies. By prioritizing a healthy work-life balance, with reasonable hours, solid pay, low travel, and extremely flexible time off, Elder Research enables and encourages its employees to serve others and enjoy their lives. Elder Research, Inc. is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability. Elder Research is a Government contractor and many of our positions require US Citizenship.
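The preferred skills for this role mention TF-IDF among the NLP concepts. As a rough from-scratch sketch of what that statistic computes (real pipelines would use scikit-learn or Spark ML; the three-document corpus here is made up):

```python
# Illustrative only: TF-IDF weighs a term by how often it appears in a
# document, discounted by how many documents contain it. Corpus is made up.
import math

corpus = [
    ["border", "crossing", "data"],
    ["crossing", "delay", "report"],
    ["data", "quality", "report"],
]

def tf_idf(term: str, doc: list[str], docs: list[list[str]]) -> float:
    """Term frequency times inverse document frequency (unsmoothed variant)."""
    tf = doc.count(term) / len(doc)          # fraction of the doc that is `term`
    df = sum(term in d for d in docs)        # number of docs containing `term`
    idf = math.log(len(docs) / df)           # assumes term appears in >= 1 doc
    return tf * idf

rare = tf_idf("delay", corpus[1], corpus)    # appears in 1 of 3 docs
common = tf_idf("report", corpus[1], corpus) # appears in 2 of 3 docs
# A rarer term scores higher than a common one within the same document.
print(rare, common)
```

Libraries add smoothing and normalization on top of this core formula, but the rare-terms-score-higher behavior is the same.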
    $76k-111k yearly est. 4d ago
  • Senior Data Engineer

    Capital Technology Group 4.1company rating

    Washington, DC jobs

    Job Description Capital Technology Group provides expert consulting services in software development, digital transformation, human-centered design, data analytics and visualization, and cybersecurity. Our multidisciplinary teams use agile methodologies to rapidly and incrementally deliver value in close collaboration with our clients. For over a decade, we have been trusted by both federal and commercial clients to solve complex, mission-critical business challenges. The quality of our work has been recognized by our partners and peers through our inclusion in the Digital Services Coalition, a group of forward-thinking firms recognized for excellence in delivering IT services. Description Capital Technology Group (CTG) is on a mission to modernize and innovate the way the federal government delivers software. We are passionate about our work, dedicated to our clients, and committed to a culture of continuous learning and growth. For this role specifically, we are seeking an individual to help support high-impact civic tech within the federal government. As an integral part of the program, the Data Engineer leads the team in evaluating new or emerging technologies using prototypes and/or proofs of concept, analyzes and communicates the benefits and risks of implementing solutions using the new technologies, leads teams to support the adoption of new technologies across the enterprise, and provides technical leadership by supervising and mentoring junior members of the team.
Client Requirements: applicants MUST BE US Citizens and be able to obtain a Public Trust clearance. Responsibilities: Design, build, and maintain scalable, efficient data pipelines and systems following modern data engineering best practices. Evaluate and prototype new tools and technologies, assessing their risks and benefits. Build and maintain scalable data pipelines and models using Databricks, dbt, and SQL (PostgreSQL). Develop robust data solutions leveraging AWS services (Redshift, RDS, S3, Glue, Lambda, DMS, CloudWatch). Implement data transformations and integrations using Python and Java. Ensure reliability, performance, and maintainability of end-to-end data systems. Implement infrastructure-as-code using Terraform for managing cloud resources. Monitor and optimize system health and performance using CloudWatch and Splunk. Mentor junior engineers and conduct code reviews. Clearly communicate technical topics to both technical and non-technical audiences. Collaborate with cross-functional stakeholders to define data requirements and deliver solutions in agile environments. Requirements: Bachelor's degree in Computer Science, Engineering, or a related technical field. 7+ years of professional experience in data engineering or related domains. Strong hands-on experience with: Databricks, dbt, SQL (PostgreSQL), Python, Java, AWS (Redshift, RDS, S3, Glue, Lambda, DMS, CloudWatch), data pipelines, data modeling, system maintenance, data integration, and performance optimization. Proven leadership experience in technical teams. Strong analytical and problem-solving skills. Experience working in agile, iterative software development environments. Ability to quickly learn and apply new technologies and domain knowledge. Excellent written and verbal communication skills, with the ability to explain complex topics to diverse audiences. Nice to Have Skills: Experience with Docker and Kubernetes for containerization and orchestration. Proficiency with Splunk for log aggregation and system monitoring.
Experience using Terraform for infrastructure automation and management Strong SQL skills, including performance tuning and complex query design Full Time Employee Benefits Remote Work (Hybrid roles will be specified in the job post) Competitive Compensation Package Medical, Dental, and Vision Life Insurance, Short/Long Term Disability Employee Assistance Program 401(k) with 4% matching Liberal PTO vacation policy Generous Annual Continuing Education Annual Wellness Budget Bonus Incentive Programs (Employee referrals and performance-based rewards) Thanks for your interest in Capital Technology Group! Capital Technology Group is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
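The pipeline responsibilities in this listing follow the classic extract-transform-load pattern. As a minimal sketch of that pattern, using stdlib sqlite3 as a stand-in for PostgreSQL/Redshift (table and column names are hypothetical, not from the posting):

```python
# Illustrative only: extract raw rows, transform via SQL aggregation, and
# load into a modeled table. sqlite3 stands in for PostgreSQL/Redshift.
import sqlite3

conn = sqlite3.connect(":memory:")

# Extract: land raw events in a staging table.
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, 10.0), (1, 15.5), (2, 7.25)],
)

# Transform + load: aggregate raw events into a summary table -- the kind of
# derivation a dbt model would express declaratively as a SELECT.
conn.execute(
    """CREATE TABLE user_totals AS
       SELECT user_id, SUM(amount) AS total
       FROM raw_events GROUP BY user_id"""
)

totals = dict(conn.execute("SELECT user_id, total FROM user_totals"))
print(totals)  # {1: 25.5, 2: 7.25}
```

In dbt the transform step would live in a model file and be materialized as a table or view; the SQL itself is essentially the same.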
    $89k-126k yearly est. 4d ago
  • Data Engineer (Hybrid)

    Elder Research 3.9company rating

    Tampa, FL jobs

    Job Title: Data Engineer Workplace: Hybrid, 2-3 days per week onsite at MacDill AFB, FL Clearance: Top Secret (TS) Elder Research is seeking Data Engineers to support a U.S. national security client at MacDill AFB in Tampa, FL. In this mission-focused role, you will apply advanced data engineering techniques to enable intelligence analysts to uncover hidden patterns, enhance decision-making, and drive intelligence innovation in support of national security. This hybrid role offers the opportunity to work at the cutting edge of analytics and defense, directly impacting military operations across the U.S. Intelligence and Defense community. Our team integrates expertise in data science, AI/ML, and intelligence operations to deliver data-driven solutions for the U.S. National Security enterprise. The work directly contributes to decision-making, mission readiness, and the ability of operators to succeed in a complex global battlespace. Position Requirements: * Education: Bachelor's degree in a technical field (e.g., Engineering, Mathematics, Statistics, Physics, Computer Science, IT, or related discipline). * Clearance: active Top Secret (TS) * Years of Experience: 3+ years * Experience with: * Python, SQL, NoSQL, Cypher, PostgreSQL * SQLAlchemy, Swagger, Spark, Hadoop, Kafka, Hive, R * Apache Storm, Neo4j, MongoDB * Cloud platforms (AWS, Azure, GCP, or similar) * Ability to work independently as well as within cross-functional teams. * Strong communication, problem-solving, and critical-thinking skills. Preferred Skills and Qualifications: * Active TS/SCI clearance. * Experience supporting the intelligence domain, particularly Intelligence, Surveillance, and Reconnaissance (ISR). * Previous work supporting Special Operations Forces (SOF) missions or U.S. national security customers. Why apply to this position at Elder Research? * Competitive Salary and Benefits * Important Work - Make a Difference supporting U.S. national security.
* Job Stability: Elder Research is not a typical government contractor; we hire you for a career, not just a contract. * People-Focused Culture: we prioritize work-life balance and provide a supportive, positive, and collaborative work environment, as well as opportunities for professional growth and advancement. * Company Stock Ownership: all employees are provided with shares of the company each year based on company value and profits.
    $79k-112k yearly est. 60d+ ago
  • Google Cloud Data & AI Engineer

    Slalom 4.6company rating

    Columbus, OH jobs

    Who You'll Work With As a modern technology company, our Slalom Technologists are disrupting the market and bringing to life the art of the possible for our clients. We are passionate about building strategies, solutions, and creative products to help our clients solve their most complex and interesting business problems. We surround our technologists with interesting challenges, innovative minds, and emerging technologies. You will collaborate with cross-functional teams, including Google Cloud architects, data scientists, and business units, to design and implement Google Cloud data and AI solutions. As a Consultant, Senior Consultant, or Principal at Slalom, you will be part of a team of curious learners who lean into the latest technologies to innovate and build impactful solutions for our clients. What You'll Do * Design, build, and operationalize large-scale enterprise data and AI solutions using Google Cloud services such as BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, and more. * Implement cloud-based data solutions for data ingestion, transformation, and storage, and AI solutions for model development, deployment, and monitoring, ensuring both areas meet performance, scalability, and compliance needs. * Develop and maintain comprehensive architecture plans for data and AI solutions, ensuring they are optimized for both data processing and AI model training within the Google Cloud ecosystem. * Provide technical leadership and guidance on Google Cloud best practices for data engineering (e.g., ETL pipelines, data pipelines) and AI engineering (e.g., model deployment, MLOps). * Conduct assessments of current data architectures and AI workflows, and develop strategies for modernizing, migrating, or enhancing data systems and AI models within Google Cloud. * Stay current with emerging Google Cloud data and AI technologies, such as BigQuery ML, AutoML, and Vertex AI, and lead efforts to integrate new innovations into solutions for clients.
* Mentor and develop team members to enhance their skills in Google Cloud data and AI technologies, while providing leadership and training on both data pipeline optimization and AI/ML best practices. What You'll Bring * Proven experience as a Cloud Data and AI Engineer or similar role, with hands-on experience in Google Cloud tools and services (e.g., BigQuery, Vertex AI, Dataflow, Cloud Storage, Pub/Sub, etc.). * Strong knowledge of data engineering concepts, such as ETL processes, data warehousing, data modeling, and data governance. * Proficiency in AI engineering, including experience with machine learning models, model training, and MLOps pipelines using tools like Vertex AI, BigQuery ML, and AutoML. * Strong problem-solving and decision-making skills, particularly with large-scale data systems and AI model deployment. * Strong communication and collaboration skills to work with cross-functional teams, including data scientists, business stakeholders, and IT teams, bridging data engineering and AI efforts. * Experience with agile methodologies and project management tools in the context of Google Cloud data and AI projects. * Ability to work in a fast-paced environment, managing multiple Google Cloud data and AI engineering projects simultaneously. * Knowledge of security and compliance best practices as they relate to data and AI solutions on Google Cloud. * Google Cloud certifications (e.g., Professional Data Engineer, Professional Database Engineer, Professional Machine Learning Engineer) or willingness to obtain certification within a defined timeframe. About Us Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. 
We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all. Compensation and Benefits Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for any well-being-related expenses, as well as discounted home, auto, and pet insurance. Slalom is committed to fair and equitable compensation practices. For this position the target base salaries are listed below. In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The target salary range is subject to change and may be modified at any time. East Bay, San Francisco, Silicon Valley: * Consultant: $114,000-$171,000 * Senior Consultant: $131,000-$196,500 * Principal: $145,000-$217,500 San Diego, Los Angeles, Orange County, Seattle, Houston, New Jersey, New York City, Westchester, Boston, Washington DC: * Consultant: $105,000-$157,500 * Senior Consultant: $120,000-$180,000 * Principal: $133,000-$199,500 All other locations: * Consultant: $96,000-$144,000 * Senior Consultant: $110,000-$165,000 * Principal: $122,000-$183,000 We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.
EEO and Accommodations Slalom is an equal opportunity employer and is committed to inclusion, diversity, and equity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team if you require accommodations during the interview process. We are accepting applications until the role is filled.
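The Dataflow and Pub/Sub ingestion work described in the listing above often reduces to windowed aggregation over an event stream. As a toy fixed-window sketch in plain Python (timestamps and values are made up; a real pipeline would express this as Beam/Dataflow transforms):

```python
# Illustrative only: assign each event to a fixed one-minute window and sum
# values per window, the core of a streaming aggregation. Data is made up.
from collections import defaultdict

events = [(0, 2), (1, 3), (65, 5), (70, 1), (130, 4)]  # (seconds, value)
WINDOW = 60  # fixed one-minute windows

window_sums: dict[int, int] = defaultdict(int)
for ts, value in events:
    window_sums[ts // WINDOW] += value  # integer division picks the window

print(dict(window_sums))
```

Dataflow adds the hard parts this sketch omits: out-of-order events, watermarks, and triggers; the windowing logic itself is the same grouping idea.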
    $145k-217.5k yearly 5d ago
  • Senior Data Engineer

    Apidel Technologies

    Blue Ash, OH jobs

    Job Description

The Engineer is responsible for staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks. The Engineer has overall responsibility for the technical design process: leading and participating in the application technical design process and completing estimates and work plans for design, development, implementation, and rollout tasks. The Engineer also communicates with the appropriate teams to ensure that assignments are delivered with the highest quality and in accordance with standards, and strives to continuously improve the software delivery processes and practices. Role model and demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion, and safety of others.

Current tools and technologies include: Databricks and Netezza

Key Responsibilities

Lead and participate in the design and implementation of large and/or architecturally significant applications
Champion company standards and best practices
Work to continuously improve software delivery processes and practices
Build partnerships across the application, business, and infrastructure teams
Set up the new customer data platform, migrating from Netezza to Databricks
Complete estimates and work plans independently as appropriate for design, development, implementation, and rollout tasks
Communicate with the appropriate teams to ensure that assignments are managed appropriately and that completed assignments are of the highest quality
Support and maintain applications utilizing required tools and technologies
May direct the day-to-day work activities of other team members
Must be able to perform the essential functions of this position with or without reasonable accommodation
Work quickly with the team to implement the new platform
Be onsite with the development team when necessary
Behaviors/Skills:

Puts the Customer First: Anticipates customer needs, champions for the customer, acts with customers in mind, exceeds customers' expectations, gains customers' trust and respect
Communicates effectively and candidly: Communicates clearly and directly, is approachable, relates well to others, engages people and helps them understand change, provides and seeks feedback, articulates clearly, actively listens
Achieves results through teamwork: Is open to diverse ideas, works inclusively and collaboratively, holds self and others accountable, involves others to accomplish individual and team goals

Note to Vendors

Length of contract: 9 months
Top skills: Databricks, Netezza
Soft skills needed: Collaborating well with others, working in a team dynamic
Project the person will be supporting: Staying on track with key milestones in Customer Platform / Customer Data Acceleration; work will be on the new Customer Platform Analytics system in Databricks that will replace Netezza
Team details (size, dynamics, locations): Most of the team is located in Cincinnati, working onsite at the BTD
Work location (in office, hybrid, remote): Onsite at the BTD when necessary, approximately 2-3 days a week
Is travel required: No
Max rate, if applicable: Best market rate
Required working hours: 8-5 EST
Interview process and when it will start: Starting with one interview; process may change
Prescreening details: Standard questions; scores will carry over
When do you want this person to start: Looking to hire quickly; the team is looking to move fast
    $79k-114k yearly est. 12d ago
