
Data engineer jobs in North Hempstead, NY

- 4,637 jobs
  • Data Engineer

    Company (3.0 company rating)

    Data engineer job in Fort Lee, NJ

    The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.

    Responsibilities:
    - Analyze, structure, and interpret raw data.
    - Build and maintain datasets for business use.
    - Design and optimize database tables, schemas, and data structures.
    - Enhance data accuracy, consistency, and overall efficiency.
    - Develop views, functions, and stored procedures.
    - Write efficient SQL queries to support application integration.
    - Create database triggers to support automation processes.
    - Oversee data quality, integrity, and database security.
    - Translate complex data into clear, actionable insights.
    - Collaborate with cross-functional teams on multiple projects.
    - Present data through graphs, infographics, dashboards, and other visualization methods.
    - Define and track KPIs to measure the impact of business decisions.
    - Prepare reports and presentations for management based on analytical findings.
    - Conduct daily system maintenance and troubleshoot issues across all platforms.
    - Perform additional ad hoc analysis and tasks as needed.

    Qualifications:
    - Bachelor's degree in Information Technology or a relevant field.
    - 4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
    - Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
    - Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
    - Excellent written, verbal, and interpersonal communication skills.
    - Ability to manage multiple tasks in a fast-paced and evolving environment.
    - Strong work ethic, professionalism, and integrity.
    - Advanced proficiency in Microsoft Office applications.
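The kind of SQL work this posting describes (table joins, SUM/AVG/COUNT aggregation, reusable views) can be sketched with an in-memory SQLite database; the schema and table names here are invented purely for illustration:

```python
import sqlite3

# Hypothetical schema for illustration: orders joined to customers,
# aggregated with SUM/AVG/COUNT as the posting describes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0);
    -- A reusable view, matching the posting's "creating ... views" requirement
    CREATE VIEW customer_totals AS
        SELECT c.name,
               COUNT(o.id)   AS order_count,
               SUM(o.amount) AS total,
               AVG(o.amount) AS avg_amount
        FROM customers c
        JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name;
""")
rows = conn.execute("SELECT * FROM customer_totals ORDER BY name").fetchall()
print(rows)  # [('Acme', 2, 150.0, 75.0), ('Globex', 1, 75.0, 75.0)]
```

The view keeps the join-plus-aggregation logic in one place so reports and application queries stay consistent.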
    $93k-132k yearly est. 3d ago
  • Senior Data Engineer

    Godel Terminal

    Data engineer job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    - Able to work out of our Manhattan office a minimum of 4 days a week
    - 5+ years of experience in a financial or startup environment
    - 5+ years of experience working on live data as well as historical data
    - 3+ years of experience in Java, Python, and SQL
    - Experience managing multiple production ETL pipelines that reliably store and validate financial data
    - Experience launching, scaling, and improving backend services in cloud environments
    - Experience migrating critical data across different databases
    - Experience owning and improving critical data infrastructure
    - Experience teaching best practices to junior developers

    Preferred qualifications:
    - 5+ years of experience in a fintech startup
    - 5+ years of experience in Java, Kafka, Python, PostgreSQL
    - 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io
    - 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    - 2+ years of experience shipping and optimizing Rust applications
    - Demonstrated experience keeping critical systems online
    - Demonstrated creativity and resourcefulness under pressure
    - Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health Insurance, Vision, Dental
    To try the product, go to *************************
    $150k yearly 23h ago
  • Lead Data Engineer

    APN Consulting, Inc. (4.5 company rating)

    Data engineer job in New York, NY

    Job title: Lead Software Engineer
    Duration: Full-time / Contract to hire

    Role description: The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.

    Key responsibilities:
    - Manage and support HR business applications, including problem resolution and issue ownership
    - Design and develop the ETL/ELT layer for HR data integration and ensure data quality and consistency
    - Provide architecture solutions for data modeling, data warehousing, and data governance
    - Develop and maintain data ingestion processes using Informatica, Python, and related technologies
    - Support data analytics and data science initiatives with optimized data structures and AI/ML tools
    - Manage vendor products and their integrations with internal/external applications
    - Gather requirements and translate functional needs into technical specifications
    - Perform QA testing and impact analysis across the BI ecosystem
    - Maintain system documentation and knowledge repositories
    - Provide technical guidance and manage stakeholder communications

    Required skills & experience:
    - Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance work experience in the data and analytics space.
    - Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming.
    - Exposure to AI & ML technologies and experience tuning models and building LLM integrations.
    - Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics.
    - Extensive database development experience in MS SQL Server/Oracle and SQL scripting.
    - Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins.
    - Proficiency with collaboration tools such as Confluence, SharePoint, and JIRA.
    - Analytical skills to model business functions, processes, and dataflow within or between systems.
    - Strong problem-solving skills to debug complex, time-critical production incidents.
    - Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams.
    - Experience with cloud data lake technologies such as Snowflake and knowledge of the HR data model would be a plus.
    $93k-133k yearly est. 3d ago
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data engineer job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications:
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience in designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 1d ago
  • Azure Data Engineer

    Programmers.Io (3.8 company rating)

    Data engineer job in Weehawken, NJ

    - Expert-level skills writing and optimizing complex SQL
    - Experience with complex data modelling, ETL design, and using large databases in a business environment
    - Experience building data pipelines and applications to stream and process datasets at low latencies
    - Fluent with Big Data technologies like Spark, Kafka, and Hive
    - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    - Designing and building data pipelines using API ingestion and streaming ingestion methods
    - Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential
    - Experience developing NoSQL solutions using Azure Cosmos DB is essential
    - Thorough understanding of Azure and AWS cloud infrastructure offerings
    - Working knowledge of Python is desirable
    - Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    - Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    - Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    - Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    - Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    - Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    - Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    - Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 23h ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data engineer job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.

    In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.

    To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 23h ago
  • Market Data Engineer

    Harrington Starr

    Data engineer job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do:
    - Own large-scale, time-sensitive market data capture and normalization pipelines
    - Improve internal data formats and downstream datasets used by research and quantitative teams
    - Partner closely with infrastructure to ensure reliability of packet-capture systems
    - Build robust validation, QA, and monitoring frameworks for new market data sources
    - Provide production support, troubleshoot issues, and drive quick, effective resolutions

    What You Bring:
    - Experience building or maintaining large-scale ETL pipelines
    - Strong proficiency in Python and Bash, with familiarity in C++
    - Solid understanding of networking fundamentals
    - Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
    - Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)

    Bonus Skills:
    - Experience working with binary market data protocols (ITCH, MDP3, etc.)
    - Understanding of high-performance filesystems and columnar storage formats
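For a flavor of the binary-protocol work this kind of role involves, here is a minimal sketch of decoding fixed-width binary records with Python's `struct` module. The record layout (8-byte ASCII symbol, price in ten-thousandths of a dollar, share count) is invented for illustration and is not the actual ITCH or MDP3 format:

```python
import struct

# Invented fixed-width record layout (NOT real ITCH): big-endian,
# 8-byte ASCII symbol, uint32 price in 1/10000 USD, uint32 shares.
RECORD = struct.Struct(">8sII")

def decode(buf):
    """Yield normalized (symbol, price, shares) tuples from a raw byte buffer."""
    for raw_sym, raw_px, shares in RECORD.iter_unpack(buf):
        yield (raw_sym.decode("ascii").rstrip(), raw_px / 10_000, shares)

# Two packed records standing in for a captured feed chunk.
feed = RECORD.pack(b"AAPL    ", 1_923_400, 300) + RECORD.pack(b"MSFT    ", 4_105_000, 150)
print(list(decode(feed)))  # [('AAPL', 192.34, 300), ('MSFT', 410.5, 150)]
```

Real feed handlers add framing, sequence-gap detection, and per-message-type dispatch on top of this unpack step.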
    $90k-123k yearly est. 1d ago
  • Data Engineer

    Mastech Digital (4.7 company rating)

    Data engineer job in Jersey City, NJ

    Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.

    Role: Data Engineer
    Location: Merrimack, NH / Smithfield, RI / Jersey City, NJ
    Duration: Full-Time/W2

    Must-haves:
    - Python for running ETL batch jobs
    - Heavy SQL for data analysis, validation, and querying
    - AWS and the ability to move data through the data stages and into the target databases. Postgres is the target database, so it is required.

    Nice-to-haves:
    - Snowflake
    - Java for API development (will teach this)
    - Experience in asset management for domain knowledge
    - Production support debugging and processing of vendor data

    The expertise and skills you bring:
    - A proven foundation in data engineering; bachelor's degree preferred, 10+ years' experience
    - Extensive experience with ETL technologies
    - Design and develop ETL reporting and analytics solutions
    - Knowledge of data warehousing methodologies and concepts - preferred
    - Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
    - RDBMS experience (Snowflake, PostgreSQL) - required
    - Knowledge of cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) - required
    - Designing and developing low to moderately complex data integration solutions - required
    - Experience with DevOps, Continuous Integration, and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) preferred
    - Expert in SQL and stored procedures on any relational database
    - Good at debugging, analysis, and production support
    - Application development based on JIRA stories (Agile environment)
    - Demonstrable experience with ETL tools (Informatica, SnapLogic)
    - Experience working with Python in an AWS environment
    - Create, update, and maintain technical documentation for software-based projects and products
    - Solving production issues
    - Interact effectively with business partners to understand business requirements and assist in generating technical requirements
    - Participate in architecture, technical design, and product implementation discussions
    - Working knowledge of Unix/Linux operating systems and shell scripting
    - Experience developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipelines, including software configuration management, test automation, version control, and static code analysis
    - Excellent interpersonal and communication skills
    - Ability to work with global Agile teams
    - Proven ability to deal with ambiguity and work in a fast-paced environment
    - Ability to mentor junior data engineers

    The value you deliver:
    - Help the team design and build best-in-class data solutions using a very diversified tech stack
    - Strong experience working in large teams and proven technical leadership capabilities
    - Knowledge of enterprise-level implementations like data warehouses and automated solutions
    - Ability to negotiate, influence, and work with business peers and management
    - Ability to develop and drive a strategy as per the needs of the team

    Good to have: full-stack programming knowledge; hands-on test case/plan preparation within JIRA
    $81k-105k yearly est. 4d ago
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data engineer job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.

    What You'll Do:
    - Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
    - Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
    - Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
    - Collaborate with researchers and engineers to align data formats for trading and simulation
    - Instrument and test systems for continuous performance improvements

    What We're Looking For:
    - 3+ years of C++ development experience (low-latency, high-throughput systems)
    - Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
    - Strong knowledge of concurrency, memory models, and compiler optimizations
    - Python scripting skills for testing and automation
    - Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
    $84k-114k yearly est. 5d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group (4.5 company rating)

    Data engineer job in New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: $125-190k plus bonus

    Responsibilities:
    - Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
    - Web scraping request management, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrapes, and validation and entitlement to users
    - Fielding questions from users about the scrapes and websites
    - Coordinating with Compliance on approvals and TOU reviews
    - Some experience building data pipelines on the AWS platform utilizing existing tools like cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
    - Normalizing/standardizing vendor data and firm data for firm consumption
    - Implement data quality checks to ensure reliability and accuracy of scraped data
    - Coordinate with internal teams on delivery, access, requests, and support
    - Promote data engineering best practices

    Required skills and qualifications:
    - Bachelor's degree in computer science, engineering, mathematics, or a related field
    - 2-5 years of experience in a similar role
    - Prior buy-side experience is strongly preferred (multi-strat/hedge funds)
    - Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems
    - AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
    - Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.)
    - Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools
    - Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.)
    - Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
    - Strong communication skills to work with stakeholders across technology, investment, and operations teams
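The scrape-then-validate workflow this posting describes can be sketched minimally. This uses only the Python standard library so it stays self-contained; the inline HTML snippet and field names are invented for illustration, and a production pipeline would use one of the listed frameworks (Scrapy, BeautifulSoup, etc.) against a live site:

```python
from html.parser import HTMLParser

# Hypothetical inline page standing in for a scraped site (illustration only).
PAGE = """
<table>
  <tr><td class="ticker">ACME</td><td class="px">101.5</td></tr>
  <tr><td class="ticker">GLBX</td><td class="px">not-a-number</td></tr>
</table>
"""

class CellExtractor(HTMLParser):
    """Collects (class, text) pairs for <td> cells."""
    def __init__(self):
        super().__init__()
        self._cls = None
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._cls = dict(attrs).get("class")

    def handle_data(self, data):
        if self._cls:
            self.cells.append((self._cls, data.strip()))
            self._cls = None

def scrape(page):
    parser = CellExtractor()
    parser.feed(page)
    # Pair ticker/price cells and apply a simple data-quality check.
    it = iter(parser.cells)
    rows, rejects = [], []
    for (_, ticker), (_, px) in zip(it, it):
        try:
            rows.append({"ticker": ticker, "price": float(px)})
        except ValueError:
            rejects.append(ticker)  # failed validation; routed for review
    return rows, rejects

rows, rejects = scrape(PAGE)
print(rows)     # [{'ticker': 'ACME', 'price': 101.5}]
print(rejects)  # ['GLBX']
```

Keeping a reject channel alongside the clean rows is what turns a raw scrape into the validated, entitled dataset the role is asked to deliver.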
    $86k-120k yearly est. 1d ago
  • Data Architect

    Pyramid Consulting, Inc. (4.1 company rating)

    Data engineer job in Ridgefield, NJ

    Immediate need for a talented Data Architect. This is a 12-month contract opportunity with long-term potential and is located in Basking Ridge, NJ (Hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-93859
    Pay range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key requirements and technology experience:
    - Key skills: ETL, LTMC, SaaS
    - 5 years as a Data Architect
    - 5 years in ETL
    - 3 years in LTMC

    Our client is a leading company in the telecom industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $110-120 hourly 23h ago
  • Data Engineer

    The Judge Group (4.7 company rating)

    Data engineer job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer
    Must-haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
    Nice-to-haves: Java, Spark, React.js
    Interview process: 2 rounds; the 2nd will be on site

    You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    - Supports review of controls to ensure sufficient protection of enterprise data.
    - Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    - Updates logical or physical data models based on new use cases.
    - Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    - Adds to team culture of diversity, opportunity, inclusion, and respect.
    - Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines (and infrastructure); leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility (to analysts and data scientists); ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.

    Required qualifications, capabilities, and skills:
    - Formal training or certification on data engineering concepts and 2+ years applied experience
    - Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
    - Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
    - Extensive experience in AWS; design, implementation, and maintenance of data pipelines using Python and PySpark
    - Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional)
    - Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks
    - Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
    - Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
    - Advanced proficiency in a cloud data warehouse such as Snowflake or AWS Redshift
    - Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar
    - Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or stream; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc.; Agile methodology; TDD or BDD; and CI/CD tools

    Preferred qualifications, capabilities, and skills:
    - Knowledge of data governance and security best practices
    - Experience in carrying out data analysis to support business insights
    - Strong Python and Spark
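As one illustration of the batch vs. micro-batching methodologies this posting lists, here is a small, generic Python sketch; the function and record names are invented, and a real pipeline at this scale would express the same idea in Spark/PySpark:

```python
from itertools import islice

def micro_batches(records, batch_size):
    """Group an iterable of records into lists of at most batch_size,
    so a downstream load step can commit one small batch at a time."""
    it = iter(records)
    while batch := list(islice(it, batch_size)):
        yield batch

# A stand-in record stream; each emitted batch could be written
# to the data lake as one atomic unit.
stream = ({"id": i} for i in range(7))
print([len(b) for b in micro_batches(stream, 3)])  # [3, 3, 1]
```

Smaller, frequent commits trade a little throughput for lower end-to-end latency and easier recovery, which is the usual reason to pick micro-batching over plain batch.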
    $79k-111k yearly est. 1d ago
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Data engineer job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese bank
    Experience level: Senior (10+ years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 23h ago
  • Sr. Azure Data Engineer

    Synechron (4.4 company rating)

    Data engineer job in New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets.

    Our challenge: We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the financial/banking domain.

    Additional information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below).

    The role - responsibilities:
    - Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
    - Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
    - Help voluntarily and proactively, and support team members and peers to deliver their tasks to ensure end-to-end delivery.
    - Evaluate technical performance challenges and recommend tuning solutions.
    - Hands-on design, development, and maintenance of our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python.

    Requirements:
    - Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    - Strong expertise in distributed data processing and streaming architectures.
    - Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management.
    - Proficiency in Python scripting and programming for data manipulation and automation.
    - Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
    - Knowledge of SQL, data modelling, and ETL/ELT processes.
    - Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
    - Domain knowledge in any of the below areas: trade processing, settlement, reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.); strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes; funding support, planning & analysis, regulatory reporting & compliance.
    - Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity & paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees, with free access to more than 5000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellence (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $130k-140k yearly 3d ago
  • Data Engineer

    Haptiq

    Data engineer job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage. The Opportunity As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations. Responsibilities and Duties Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms). Ensure consistent data hygiene, normalization, and enrichment across source systems. Develop and maintain data models and data warehouses optimized for analytics and operational reporting. Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights. 
Own the documentation of data schemas, definitions, lineage, and data quality controls. Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets. Monitor pipeline performance and proactively resolve data discrepancies or failures. Contribute to architectural decisions related to internal data infrastructure and tools. Requirements 3-5 years of experience as a data engineer, analytics engineer, or similar role. Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt). Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday. Proficiency in Python or another scripting language for data manipulation. Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment). Strong understanding of data governance, documentation, and schema management. Excellent communication skills and ability to work cross-functionally. Benefits Flexible work arrangements (including hybrid mode) Great Paid Time Off (PTO) policy Comprehensive benefits package (Medical / Dental / Vision / Disability / Life) Healthcare and Dependent Care Flexible Spending Accounts (FSAs) 401(k) retirement plan Access to HSA-compatible plans Pre-tax commuter benefits Employee Assistance Program (EAP) Opportunities for professional growth and development. A supportive, dynamic, and inclusive work environment. Why Join Us? We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it. The compensation range for this role is $75,000 to $80,000 USD
    $75k-80k yearly 2d ago
  • Data Architect

    Beaconfire Inc.

    Data engineer job in New York, NY

Hi, I hope you are doing well! We have an opportunity for a Data Architect with one of our clients in NYC, NY. Please see the job details below and let me know if you would be interested in this role. If interested, please send me a copy of your resume, contact details, availability, and a good time to connect with you. Title: Data Architect Location: New York, New York - Onsite Terms: Long-Term Contract Job Details: Primary Skills: SQL, Oracle, Snowflake. 12+ years of experience in data technology. At least 5 years as a Data Engineer with hands-on experience in cloud environments. 8+ years of Python programming focused on data processing and distributed systems. 8+ years working with relational databases, dimensional modeling, and DBT. 8+ years designing and administering cloud-based data warehousing solutions (e.g., Databricks). 8+ years' experience with Kafka or other streaming platforms. Exposure to AI-based advanced techniques and tools. Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes, and DBT.
Experience with Java, MS SQL Server, Druid, Qlik/Golden Gate CDC, and Power BI is a plus. Responsibilities: Architect streaming data ingestion and integration with downstream systems. Implement an AI-driven controller to orchestrate tens of millions of streams and micro-batches. Design AI-powered onboarding of new data sources. Develop an AI-powered compute engine and data-serving semantic layer. Deliver scalable cloud data services and APIs with sub-second response times over petabytes of data. Develop a unified alerting and monitoring framework supporting streaming transformations and compute across thousands of institutional clients and hundreds of external data sources. Build a self-service data management and operations platform. Implement a data quality monitoring framework. Qualifications: Bachelor's degree in Computer Science or a related field; advanced degree preferred. 12+ years of experience in data technology. At least 5 years as a Data Engineer with hands-on experience in cloud environments. 8+ years of Python programming focused on data processing and distributed systems. 8+ years working with relational databases, SQL, dimensional modeling, and DBT. 8+ years designing and administering cloud-based data warehousing solutions (e.g., Snowflake, Databricks). 8+ years' experience with Kafka or other streaming platforms. Exposure to AI-based advanced techniques and tools. Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes, and DBT. Experience with Java, Oracle, MS SQL Server, Druid, Qlik/Golden Gate CDC, and Power BI is a plus. Strong leadership abilities and excellent communication skills. Thanks, Amit Jha Senior Recruiter at BeaconFire Inc. Email: ***********************
    $93k-127k yearly est. 3d ago
  • Data Engineer

    Neenopal Inc.

    Data engineer job in Newark, NJ

NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises. Role Description This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role. Key Responsibilities Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools. Data Integration: Integrate and transform data using industry-standard tools. Experience required with: AWS Services: AWS Glue, Data Pipeline, Redshift, and S3. Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage. Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift. Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity. Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization. Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly. Required Skills and Experience Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar). Integration: Experience integrating data via RESTful / GraphQL APIs. Programming: Proficient in Python for ETL automation and SQL for database management. Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus). Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics. Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders. Authorization: Must have valid work authorization in the United States. Salary Range: $65,000-$80,000 per year Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company. Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
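The ETL pattern this role centers on (extract from a source table, transform, load into a reporting table) can be sketched with Python's built-in sqlite3 standing in for a warehouse such as Redshift or Snowflake; the table and column names are hypothetical:

```python
import sqlite3

# sqlite3 stands in for a cloud warehouse; raw_orders and orders are
# hypothetical source and reporting tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [(1, 1250), (2, 3499)])

# Transform during load: convert cents to dollars in the reporting table.
conn.execute("""
    CREATE TABLE orders AS
    SELECT id, amount_cents / 100.0 AS amount_usd
    FROM raw_orders
""")
rows = conn.execute("SELECT id, amount_usd FROM orders ORDER BY id").fetchall()
print(rows)  # [(1, 12.5), (2, 34.99)]
```

In production the same pattern would typically be orchestrated by a tool such as AWS Glue or Azure Data Factory, with the transform expressed in Python or SQL against the warehouse.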
    $65k-80k yearly 2d ago
  • Big Data Developer

    Infocepts

    Data engineer job in Jersey City, NJ

    Design Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing. Implement Spark-based ETL frameworks. Implement big data pipelines for data ingestion, storage, processing, and consumption. Modify the Informatica-Teradata and Unix-based data pipeline. Enhance the Talend-Hive/Spark and Unix-based data pipelines. Develop and deploy Scala/Python-based Spark jobs for ETL processing. Strong SQL and DWH concepts required.
    $75k-98k yearly est. 2d ago
  • SAP Data Migration Developer

    Numeric Technologies

    Data engineer job in Englewood, NJ

    SAP S4 Data Migration Developer Duration: 6 Months Rate: Competitive Market Rate This key role is responsible for the development and configuration of the SAP Data Services Platform within the client's corporate technology organization to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone. KEY RESPONSIBILITIES - Responsible for SAP Data Services development, design, job creation, and execution. Responsible for efficient design, performance tuning, and ensuring timely data processing, validation & verification. Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects). Responsible for data conversion using staging tables and for working with SAP teams on data loads in SAP S4 and MDG environments. Responsible for building validation rules, scorecards, and data for consumption in Information Steward pursuant to conversion rules per the Functional Specifications. Responsible for adhering to project timelines and deliverables and accounting for object delivery across the teams involved. Take part in meetings, execute plans, and design and develop custom solutions within the client's O&T Engineering scope. Work in all facets of SAP Data Migration projects with a focus on SAP S4 Data Migration using the SAP Data Services Platform. Hands-on development experience with ETL from a legacy SAP ECC environment, conversions, and jobs. Demonstrate capabilities with performance tuning and handling large data sets. Understand SAP tables, fields, and load processes into SAP S4 and MDG systems. Build validation rules, and customize and deploy Information Steward scorecards, data reconciliation, and validation. Be a problem solver and build robust conversion and validation per requirements.
SKILLS AND EXPERIENCE 6-8 years of experience as a developer on the SAP Data Services application. At least 2 SAP S4 conversion projects with DMC, staging tables, and updating SAP Master Data Governance. Good communication skills; ability to deliver key objects on time and support testing and mock cycles. 4-5 years of development experience in SAP Data Services 4.3 Designer and Information Steward. Taking ownership and ensuring high-quality results. Active in seeking feedback and making necessary changes. Specific previous experience - Proven experience in implementing SAP Data Services in a multinational environment. Experience in the design of large-volume data loads to SAP S4 from SAP ECC. Must have used HANA staging tables. Experience in developing Information Steward for data reconciliation & validation (not profiling). REQUIREMENTS Adhere to the work availability schedule as noted above and be on time for meetings. Written and verbal communication in English.
    $78k-98k yearly est. 3d ago
  • Senior Data Architect

    Mphasis

    Data engineer job in New York, NY

    About the Company Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive computing to provide a hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps ‘shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, and its speed and innovation with domain expertise and specialization, are key to building strong relationships with marquee clients. About the Role We are seeking a senior-level Data Architect with data analytics experience in Databricks, PySpark, Python, and ETL tools such as Informatica. This is a key role that requires a senior/lead engineer with strong communication skills who is very proactive with risk & issue management. Responsibilities Hands-on data analytics experience with Databricks on AWS, PySpark, and Python. Must have prior experience migrating a data asset to the cloud using a GenAI automation option. Experience in migrating data from on-premises to AWS. Expertise in developing data models and delivering data-driven insights for business solutions. Experience in pretraining, fine-tuning, augmenting, and optimizing large language models (LLMs). Experience in designing and implementing database solutions and developing PySpark applications to extract, transform, and aggregate data, generating insights. Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance. Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets.
Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks. Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work. Qualifications 15+ years of experience as a Data Analyst / Data Engineer, with Databricks-on-AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions on AWS. Required Skills Strong proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources. Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability. Preferred Skills Ability to manage under tight constraints; very proactive with risk & issue management. Requirement Clarification & Communication: Interact directly with colleagues to clarify objectives and challenge assumptions. Documentation & Best Practices: Maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability. Collaboration & Stakeholder Engagement: Work closely with colleagues who provide data, raising questions about data validity, sharing insights, and co-creating solutions that address evolving needs. Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner. Demonstrated critical thinking skills, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary. A self-directed, resourceful problem-solver who collaborates well with others while confidently managing tasks and priorities independently.
    $104k-141k yearly est. 4d ago

Learn more about data engineer jobs

How much does a data engineer earn in North Hempstead, NY?

The average data engineer in North Hempstead, NY earns between $79,000 and $141,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in North Hempstead, NY

$106,000

What are the biggest employers of Data Engineers in North Hempstead, NY?

The biggest employers of Data Engineers in North Hempstead, NY are:
  1. Ernst & Young
  2. Innovative Software Technologies Inc.