Data engineer jobs in Uniondale, NY

- 4,803 jobs
  • Data Scientist

    Strategic Employment Partners (4.5 company rating)

    Data engineer job in New York, NY

    Senior Data Scientist - Sports & Entertainment

    Our client, a premier Sports, Entertainment, and Hospitality organization, is hiring a Senior Data Scientist. In this position you will own high-impact analytics projects that redefine how predictive analytics influence business strategy. This is a pivotal role where you will build and deploy machine learning solutions, ranging from Bayesian engagement scoring to purchase-propensity and lifetime-value models, to drive fan acquisition and revenue growth.

    Requirements:
    - Experience: 8+ years of professional experience using data science to solve complex business problems, preferably as a solo contributor or team lead.
    - Education: Bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative field (Master's or PhD preferred).
    - Tech stack: Hands-on expertise in Python, SQL/PySpark, and ML frameworks (scikit-learn, XGBoost, TensorFlow, or PyTorch).
    - Infrastructure: Proficiency with cloud platforms (AWS preferred) and modern data stacks like Snowflake, Databricks, or Dataiku.
    - MLOps: Strong experience productionizing models, including version control (Git), CI/CD, and model monitoring/governance.

    Location: Brooklyn, NY (4 days onsite per week)
    Compensation: $100,000 - $150,000 + bonus
    Benefits: Comprehensive medical/dental/vision, 401k match, competitive PTO, and unique access to live entertainment and sports events.
    $89k-130k yearly est. 1d ago
  • Data Engineer

    Brooksource (4.1 company rating)

    Data engineer job in New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a "build from scratch" mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $101k-140k yearly est. 3d ago
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data engineer job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications:
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 3d ago
  • Senior Data Engineer

    Godel Terminal

    Data engineer job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.

    We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    - Able to work out of our Manhattan office a minimum of 4 days a week
    - 5+ years of experience in a financial or startup environment
    - 5+ years of experience working on live data as well as historical data
    - 3+ years of experience in Java, Python, and SQL
    - Experience managing multiple production ETL pipelines that reliably store and validate financial data
    - Experience launching, scaling, and improving backend services in cloud environments
    - Experience migrating critical data across different databases
    - Experience owning and improving critical data infrastructure
    - Experience teaching best practices to junior developers

    Preferred qualifications:
    - 5+ years of experience in a fintech startup
    - 5+ years of experience with Java, Kafka, Python, and PostgreSQL
    - 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io
    - 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    - 2+ years of experience shipping and optimizing Rust applications
    - Demonstrated experience keeping critical systems online
    - Demonstrated creativity and resourcefulness under pressure
    - Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health insurance, vision, dental.
    To try the product, go to *************************
    $150k yearly 2d ago
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data engineer job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.

    What You'll Do:
    - Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
    - Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
    - Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
    - Collaborate with researchers and engineers to align data formats for trading and simulation
    - Instrument and test systems for continuous performance improvements

    What We're Looking For:
    - 3+ years of C++ development experience (low-latency, high-throughput systems)
    - Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
    - Strong knowledge of concurrency, memory models, and compiler optimizations
    - Python scripting skills for testing and automation
    - Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
    $84k-114k yearly est. 2d ago
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    Data engineer job in New York, NY

    NYC, NY / Hybrid
    12+ Month Project

    The project leverages Llama to extract cybersecurity insights out of unstructured data from the client's ticketing system.

    Must have strong experience with:
    - Llama
    - Python
    - Hadoop
    - MCP
    - Machine Learning (ML)

    They need a strong developer using Llama and Hadoop (this is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets, but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines.

    Thanks and regards,
    Lavkesh Dwivedi ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY 10005
    ************ ********************
    $78k-104k yearly est. 5d ago
  • Azure Data Engineer

    Programmers.Io (3.8 company rating)

    Data engineer job in Weehawken, NJ

    - Expert-level skills writing and optimizing complex SQL
    - Experience with complex data modelling, ETL design, and using large databases in a business environment
    - Experience building data pipelines and applications to stream and process datasets at low latencies
    - Fluent with big data technologies like Spark, Kafka, and Hive
    - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    - Designing and building data pipelines using API ingestion and streaming ingestion methods
    - Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential
    - Experience developing NoSQL solutions using Azure Cosmos DB is essential
    - Thorough understanding of Azure and AWS cloud infrastructure offerings
    - Working knowledge of Python is desirable
    - Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    - Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    - Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    - Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    - Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    - Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    - Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    - Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best regards,
    Dipendra Gupta
    Technical Recruiter *****************************
    $92k-132k yearly est. 2d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data engineer job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.

    In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.

    To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 2d ago
  • Cloud Data Engineer

    Gotham Technology Group (4.5 company rating)

    Data engineer job in New York, NY

    Title: Enterprise Data Management - Data Cloud, Senior Developer I
    Duration: FTE/Permanent
    Salary: $130-165k

    The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.

    Responsibilities:
    - Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
    - Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications.
    - Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools.
    - Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
    - Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
    - Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
    - Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
    - Champion data engineering standards, best practices, and governance across the enterprise.
    - Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
    - Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.

    Qualifications:
    - Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
    - 10+ years in data engineering, cloud platform development, or analytics engineering.
    - Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
    - Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
    - Preferred: Experience containerizing data workloads with Docker and Kubernetes.
    - Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
    - Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
    - Bonus: Background in data mesh designs and automated data access systems.
    - Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow.
    - Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
    - Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
    - Bonus: Exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
    - Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
    $86k-120k yearly est. 3d ago
  • Lead Data Engineer with Banking

    Synechron (4.4 company rating)

    Data engineer job in New York, NY

    We are: At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge: We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform.

    Additional information: The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $140k/year plus benefits (see below).

    The Role - Responsibilities:
    - Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python.
    - Lead the implementation of distributed data processing and real-time streaming solutions.
    - Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency.
    - Develop and automate data workflows and transformations using Python scripting.
    - Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
    - Monitor, troubleshoot, and optimize data pipelines and platform performance.
    - Ensure data quality, governance, and security standards are upheld.
    - Guide and mentor junior team members and foster best practices in data engineering.

    Requirements:
    - Proven experience building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    - Strong expertise in distributed data processing frameworks and streaming architectures.
    - Hands-on experience with the Snowflake data warehouse platform, including data ingestion, performance tuning, and management.
    - Proficiency in Python for data manipulation, automation, and scripting.
    - Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams.
    - Solid understanding of SQL, data modeling, and ETL/ELT processes.
    - Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous.
    - Strong troubleshooting skills and ability to optimize data workflows.
    - Excellent communication and collaboration skills.

    Preferred, but not required:
    - Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
    - Experience with containerization (Docker, Kubernetes) is a plus.
    - Knowledge of data security best practices and GDPR compliance.
    - Certifications related to cloud platforms or data engineering preferred.

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity and paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT: Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-140k yearly 4d ago
  • Data Architect

    Pyramid Consulting, Inc. (4.1 company rating)

    Data engineer job in Ridgefield, NJ

    Immediate need for a talented Data Architect. This is a 12-month contract opportunity with long-term potential and is located in Basking Ridge, NJ (hybrid). Please review the job description below and contact me ASAP if you are interested.

    Job ID: 25-93859
    Pay Range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).

    Key Requirements and Technology Experience:
    - Key skills: ETL, LTMC, SaaS
    - 5 years as a Data Architect
    - 5 years in ETL
    - 3 years in LTMC

    Our client is a leader in the telecom industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.

    Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
    $110-120 hourly 2d ago
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Data engineer job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 2d ago
  • Sr Data Modeler with Capital Markets/Custody

    LTIMindtree

    Data engineer job in Jersey City, NJ

    LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of erstwhile Larsen & Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************

    Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
    Work Location: Jersey City, NJ (Onsite, 5 days/week)

    Job Description: We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain. The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model. A software development background is not required.

    Key Responsibilities:
    - Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
    - Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
    - Enhance or streamline the logical data model, ensuring it is fit for purpose, scalable, and aligned with business needs before conversion to a physical model.
    - Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
    - Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
    - Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
    - Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
    - Champion a detail-oriented and documentation-first culture within the data modeling team.

    Qualifications:
    - Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
    - Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
    - Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
    - Strong skills in data analysis using SQL and/or other data analysis tools.
    - Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
    - Experience enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
    - Ability to mentor and lead junior team members in data modeling best practices.
    - Passion for detail, documentation, and continuous improvement.
    - A software development background is not required.

    Preferred Skills:
    - Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
    - Familiarity with capital markets business processes and data flows.
    - Knowledge of regulatory and compliance requirements in financial data management.
    - Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).

    Benefits and Perks:
    - Comprehensive medical plan covering medical, dental, and vision
    - Short-term and long-term disability coverage
    - 401(k) plan with company match
    - Life insurance
    - Vacation time, sick leave, paid holidays
    - Paid paternity and maternity leave

    LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth, or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $79k-111k yearly est. 1d ago
  • Data Engineer

    Haptiq

    Data engineer job in New York, NY

Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS (Platform as a Service), the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS (Software as a Service), a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C (Solutions and Consulting Suite), modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.

The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.

Responsibilities and Duties
- Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
- Ensure consistent data hygiene, normalization, and enrichment across source systems.
- Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
- Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
- Own the documentation of data schemas, definitions, lineage, and data quality controls.
- Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
- Monitor pipeline performance and proactively resolve data discrepancies or failures.
- Contribute to architectural decisions related to internal data infrastructure and tools.

Requirements
- 3-5 years of experience as a data engineer, analytics engineer, or in a similar role.
- Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
- Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
- Proficiency in Python or another scripting language for data manipulation.
- Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
- Strong understanding of data governance, documentation, and schema management.
- Excellent communication skills and ability to work cross-functionally.

Benefits
- Flexible work arrangements (including hybrid mode)
- Generous paid time off (PTO) policy
- Comprehensive benefits package (medical / dental / vision / disability / life)
- Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
- 401(k) retirement plan
- Access to HSA-compatible plans
- Pre-tax commuter benefits
- Employee Assistance Program (EAP)
- Opportunities for professional growth and development
- A supportive, dynamic, and inclusive work environment

Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We work hard, but we also choose to have fun while doing it.

The compensation range for this role is $75,000 to $80,000 USD.
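The hygiene and normalization work described above usually means giving each source system its own adapter that maps raw exports into one canonical shape before loading. A minimal sketch; the payloads and field names below are invented, not the actual schemas of the systems named in the posting:

```python
from datetime import date

# Invented sample payloads standing in for exports from a CRM and an HRIS.
crm_rows = [{"AccountName": " Acme Corp ", "CreatedDate": "2024-03-01"}]
hris_rows = [{"company": "acme corp", "created": "01/03/2024"}]  # dd/mm/yyyy

def normalize_crm(row):
    # Trim/lowercase names and parse ISO dates into real date objects.
    return {"account": row["AccountName"].strip().lower(),
            "created": date.fromisoformat(row["CreatedDate"])}

def normalize_hris(row):
    # Same canonical shape, different source conventions.
    day, month, year = row["created"].split("/")
    return {"account": row["company"].strip().lower(),
            "created": date(int(year), int(month), int(day))}

# Transform step: one adapter per source, one canonical output ready to load.
canonical = [normalize_crm(r) for r in crm_rows] + [normalize_hris(r) for r in hris_rows]
print(canonical)
```

With every source funneled through the same canonical schema, the downstream warehouse models and quality checks only ever see one shape of record.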
    $75k-80k yearly 4d ago
  • Big Data Developer

    Infocepts 3.7company rating

    Data engineer job in Jersey City, NJ

- Design Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
- Implement Spark-based ETL frameworks
- Implement big data pipelines for data ingestion, storage, processing, and consumption
- Modify the Informatica-Teradata and Unix-based data pipeline
- Enhance the Talend-Hive/Spark and Unix-based data pipelines
- Develop and deploy Scala/Python-based Spark jobs for ETL processing
- Strong SQL and data warehousing (DWH) concepts
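The Hive data-model design this role calls for centers on partitioned table layouts: data is written under key=value directories so the engine can prune partitions at query time. A sketch of that directory convention, with invented records and plain Python standing in for Spark's partitioned writers:

```python
import csv
from collections import defaultdict
from pathlib import Path
from tempfile import TemporaryDirectory

# Invented sample records; "dt" is the partition column.
records = [
    {"event": "click", "dt": "2024-01-01"},
    {"event": "view",  "dt": "2024-01-01"},
    {"event": "click", "dt": "2024-01-02"},
]

def write_partitioned(rows, root: Path):
    # Group rows by partition key, then write each group under a
    # Hive-style key=value directory (e.g. .../dt=2024-01-01/).
    by_dt = defaultdict(list)
    for row in rows:
        by_dt[row["dt"]].append(row)
    for dt, part in by_dt.items():
        part_dir = root / f"dt={dt}"
        part_dir.mkdir(parents=True)
        with open(part_dir / "part-00000.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["event"])
            writer.writeheader()
            writer.writerows({"event": r["event"]} for r in part)
    return sorted(p.name for p in root.iterdir())

with TemporaryDirectory() as tmp:
    parts = write_partitioned(records, Path(tmp))
print(parts)  # ['dt=2024-01-01', 'dt=2024-01-02']
```

A query filtered on `dt` then only needs to read the matching directory, which is the core win of the partition design the posting describes.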
    $75k-98k yearly est. 4d ago
  • Lead Data Platform Architect

    Sibitalent Corp

    Data engineer job in Melville, NY

We are growing our data platform team and are seeking an experienced Data Platform Architect with deep cloud data platform expertise to drive the overall architecture and design of a modern, scalable data platform. This role is responsible for defining and advancing data platform architecture to support a data-driven organization, ensuring solutions are efficient, reusable, and aligned with long-term business and technology objectives. This position carries architectural and strategic responsibility for the design and implementation of the enterprise data platform. The role will support multiple initiatives across the data ecosystem, including data lake design, data engineering, analytics, data architecture, AI/ML, streaming and batch processing, metadata management, and service integrations.

DUTIES AND RESPONSIBILITIES:
• Lead technical assessments of the current data platform and define the architectural roadmap forward
• Collaborate on strategic direction and prioritize data platform architecture to support business and technical objectives
• Partner with enterprise and solution architects to ensure consistent standards and best practices across the data platform
• Architect and design end-to-end data platform solutions on cloud infrastructure, emphasizing scalability, performance, and reusable design patterns
• Design cloud-first, cost-effective data platform architectures
• Architect batch, real-time, and unstructured ingestion frameworks with scale and reliability
• Enable semantic interoperability of data across multiple sources and structures
• Implement automation for lineage, orchestration, and data flows to streamline platform operations
• Design and maintain metadata management frameworks to support current and future tools
• Continually enhance automation and CI/CD frameworks across the data platform
• Architect solutions with security-by-design principles
• Monitor industry trends and emerging technologies to continuously improve the data platform architecture
• Provide technical leadership and guidance to data platform engineers executing against the roadmap
• Own and maintain data platform architecture documentation
• Support a wide range of data platform use cases, including data engineering, business intelligence, real-time analytics, visualization, AI/ML, and service integrations
• Collaborate with third-party vendors and partners on data platform integrations

EDUCATION AND EXPERIENCE:
• Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field required
• Minimum of 10 years of experience designing high-availability data platform architectures
• Minimum of 8 years of experience implementing modern cloud-based data platforms
• Strong experience with Google Cloud Platform services, including BigQuery, Google Cloud Storage, and Cloud Composer
• Minimum of 5 years of experience designing data lake architectures
• Deep expertise across modern data platform, database, and streaming technologies (e.g., Kafka, Spark)
• Experience with source control and CI/CD pipelines
• Experience operationalizing AI/ML models preferred
• Experience working with unstructured data preferred
• Experience operating within Agile delivery models
• Minimum of 3 years of experience with infrastructure as code (Terraform preferred)

REQUIRED TECHNICAL EXPERIENCE:
• Hands-on experience designing and operating data platforms on Google Cloud Platform (GCP)
• Strong experience with Databricks for large-scale data processing and analytics
• Experience integrating data from IoT devices and machine monitoring systems is highly preferred
• Familiarity with industrial, sensor-based, or operational technology (OT) data pipelines is a plus

SKILLS:
• Strong cross-functional communication and collaboration skills
• Excellent organizational, time management, verbal, and written communication skills
• Expertise across modern data platform technologies and best practices (BigQuery, Kafka, Hadoop, Spark)
• Strong understanding of semantic layers and data interoperability (e.g., LookML, dbt)
• Proven ability to design reusable, automated data platform patterns
• Demonstrated leadership in distributed or remote environments
• Track record of delivering data platform solutions at enterprise scale
• Ability to write testable code and promote solutions into production environments
• Experience with Google Cloud Composer or Apache Airflow preferred
• Ability to quickly understand complex business systems and data flows
• Strong analytical judgment and decision-making capabilities

OTHER REQUIREMENTS:
• Ability to travel up to 10 percent as required
• This role may require access to regulated or controlled information
    $92k-124k yearly est. 1d ago
  • Senior Data Architect

    Mphasis

    Data engineer job in New York, NY

About the Company
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2TM=1) digital experiences to clients and their end customers. Mphasis' Service Transformation approach helps ‘shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, and its speed and innovation with domain expertise and specialization, are key to building strong relationships with marquee clients.

About the Role
Senior-level Data Architect with data analytics experience in Databricks, PySpark, Python, and ETL tools such as Informatica. This is a key role that requires a senior lead with strong communication skills who is proactive with risk and issue management.

Responsibilities
- Hands-on data analytics work with Databricks on AWS, PySpark, and Python; must have prior experience migrating a data asset to the cloud using a GenAI automation option.
- Migrate data from on-premises environments to AWS.
- Develop data models and deliver data-driven insights for business solutions.
- Pretrain, fine-tune, augment, and optimize large language models (LLMs).
- Design and implement database solutions; develop PySpark applications to extract, transform, and aggregate data, generating insights.
- Data collection & integration: identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance.
- Data cleaning & transformation: apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets.
- Automation & scalability: develop and maintain scripts that automate repetitive data preparation tasks.
- Autonomy & proactivity: operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work.

Qualifications
- 15+ years of experience as a Data Analyst / Data Engineer, with Databricks on AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions.

Required Skills
- Strong proficiency in Python (Pandas, scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources.
- Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability.

Preferred Skills
- Ability to manage under tight constraints; very proactive with risk and issue management.
- Requirement clarification & communication: interact directly with colleagues to clarify objectives and challenge assumptions.
- Documentation & best practices: maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability.
- Collaboration & stakeholder engagement: work closely with colleagues who provide data, raising questions about data validity, sharing insights, and co-creating solutions that address evolving needs.
- Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner.
- Demonstrated critical thinking, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary.
- A self-directed, resourceful problem solver who collaborates well with others while confidently managing tasks and priorities independently.
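The data cleaning and quality-check duties above follow a common pattern: reject rows that fail a check, de-duplicate on a business key, and normalize surviving values. A minimal sketch with invented fields, rules, and sample rows, in plain Python rather than the Pandas the role actually uses:

```python
# Hypothetical raw extract; field names and rules are for illustration only.
raw = [
    {"customer_id": "101", "email": "A@Example.com "},
    {"customer_id": "101", "email": "a@example.com"},     # duplicate key
    {"customer_id": "",    "email": "no-id@example.com"}, # fails key check
]

def clean(rows):
    seen, out, rejected = set(), [], []
    for row in rows:
        if not row["customer_id"]:       # quality check: business key present
            rejected.append(row)
            continue
        key = row["customer_id"]
        if key in seen:                  # de-duplicate on the business key
            continue
        seen.add(key)
        out.append({"customer_id": int(key),            # type normalization
                    "email": row["email"].strip().lower()})
    return out, rejected

clean_rows, rejected_rows = clean(raw)
print(clean_rows)          # one surviving, normalized record
print(len(rejected_rows))  # 1
```

Routing failures to a rejected set (rather than silently dropping them) is what makes the checks auditable, which matters once the same script runs unattended as an automated preparation task.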
    $104k-141k yearly est. 1d ago
  • SAP Data Migration Developer

    Numeric Technologies 4.5company rating

    Data engineer job in Englewood, NJ

SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate

This key role is responsible for the development and configuration of the SAP Data Services platform within the client's corporate technology organization, delivering a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.

KEY RESPONSIBILITIES
- Responsible for SAP Data Services development, design, job creation, and execution.
- Responsible for efficient design, performance tuning, and ensuring timely data processing, validation, and verification.
- Create content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects).
- Perform data conversion using staging tables and work with SAP teams on data loads into SAP S4 and MDG environments.
- Build validation rules, scorecards, and data for consumption in Information Steward, following the conversion rules in the functional specifications.
- Adhere to project timelines and deliverables, and account for object delivery across the teams involved.
- Take part in meetings and execute plans, designs, and custom solutions within the client's O&T Engineering scope.
- Work in all facets of SAP data migration projects, with a focus on SAP S4 data migration using the SAP Data Services platform.
- Bring hands-on development experience with ETL from a legacy SAP ECC environment, conversions, and jobs.
- Demonstrate capability with performance tuning and handling large data sets.
- Understand SAP tables, fields, and load processes into SAP S4 and MDG systems.
- Build, customize, and deploy validation rules and Information Steward scorecards for data reconciliation and validation.
- Be a problem solver and build robust conversion and validation per requirements.

SKILLS AND EXPERIENCE
- 6-8 years of experience as a developer with the SAP Data Services application
- At least 2 SAP S4 conversion projects involving DMC, staging tables, and updates to SAP Master Data Governance
- Good communication skills; ability to deliver key objects on time and support testing and mock cycles
- 4-5 years of development experience with SAP Data Services 4.3 Designer and Information Steward
- Takes ownership and ensures high-quality results
- Actively seeks feedback and makes necessary changes
- Proven experience implementing SAP Data Services in a multinational environment
- Experience designing large-volume data loads to SAP S4 from SAP ECC
- Must have used HANA staging tables
- Experience developing Information Steward for data reconciliation and validation (not profiling)

REQUIREMENTS
- Adhere to the work availability schedule noted above and be on time for meetings
- Written and verbal communication in English
    $78k-98k yearly est. 5d ago
  • Senior Data Engineer

    The Cypress Group 3.9company rating

    Data engineer job in New York, NY

Our client is a growing fintech software company headquartered in New York, NY. They have several hundred employees and are in growth mode. They are currently looking for a Senior Data Engineer with 6+ years of overall professional experience. Qualified candidates will have hands-on experience with Python (6 years), SQL (6 years), dbt (3 years), AWS (Lambda, Glue), Airflow, and Snowflake (3 years), along with a BS in Computer Science and solid CS fundamentals. The Senior Data Engineer will work in a collaborative team environment and will be responsible for building, optimizing, and scaling ETL data pipelines, dbt models, and data warehousing. Excellent communication and organizational skills are expected. This role features a competitive base salary, equity, a 401(k) with company match, and many other attractive perks. Please send your resume to ******************* for immediate consideration.
    $98k-129k yearly est. 1d ago
  • Associate Software Engineer

    JSR Tech Consulting 4.0company rating

    Data engineer job in Newark, NJ

Associate Software Engineer (Entry Level)

We're looking for an Associate Software Engineer to join our technology team and help build and improve modern applications. This is a great opportunity for recent graduates or engineers with 0-2 years of experience who want to grow their skills in a collaborative, fast-moving environment. You'll work closely with product managers, designers, and senior engineers to build, test, and enhance software using Java, Python, AWS, and React. Industry experience is not required; we value strong fundamentals, curiosity, and a willingness to learn. Candidates must have permanent work authorization in the United States.

What You'll Do
- Build, test, and maintain applications using Java, Python, JavaScript, and React
- Develop clean, well-documented code following best practices
- Work with AWS services for cloud-based development and deployment
- Collaborate with team members to understand requirements and deliver features
- Write unit and integration tests and help troubleshoot issues
- Learn new tools and technologies and apply them in real projects
- Participate in Agile development processes

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 0-2 years of software development experience (internships and projects count)
- Basic experience or coursework with Java and/or Python, JavaScript and React, and AWS (cloud fundamentals)
- Understanding of object-oriented programming concepts
- Strong problem-solving and communication skills
- Eagerness to learn and grow as a software engineer

Nice to Have (Not Required)
- Experience with frameworks such as Spring Boot, Node.js, Flask, or Django
- Exposure to APIs (REST/JSON)
- Familiarity with Git and basic DevOps concepts
- Knowledge of databases (SQL or NoSQL)
- Interest in or exposure to AI-assisted development tools (e.g., GitHub Copilot, Claude)
- Financial or insurance industry experience (a plus, not required)

Why This Role
- Entry-level friendly with strong mentorship
- Hands-on experience with modern tech stacks
- Opportunity to grow your skills in cloud, full-stack development, and software engineering best practices
- Inclusive, collaborative team environment
    $76k-99k yearly est. 1d ago

Learn more about data engineer jobs

How much does a data engineer earn in Uniondale, NY?

The average data engineer in Uniondale, NY earns between $79,000 and $141,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Uniondale, NY

$106,000

What are the biggest employers of Data Engineers in Uniondale, NY?

The biggest employers of Data Engineers in Uniondale, NY are:
  1. Ernst & Young
  2. D'Addario
  3. Innovative Software Technologies Inc.
Job type you want
Full Time
Part Time
Internship
Temporary