
Data engineer jobs in Islip, NY

- 3,755 jobs
  • Data Governance Lead - Data Architecture & Governance

    Addison Group 4.6 company rating

    Data engineer job in New York, NY

    Job Title: Data Governance Lead - Data Architecture & Governance
    Employment Type: Full-Time
    Base Salary: $220K to $250K (based on experience) + bonus; eligible for medical, dental, and vision benefits

    About the Role: We are seeking an experienced Data Governance Lead to join a dynamic data and analytics team in New York. This role will design and oversee the organization's data governance framework, stewardship model, and data quality approach across financial services business lines, ensuring trusted and well-defined data for reporting and analytics across the Databricks lakehouse, CRM, management reporting, data science teams, and GenAI initiatives.

    Primary Responsibilities:
    - Design, implement, and refine the enterprise-wide data governance framework, including policies, standards, and roles for data ownership and stewardship.
    - Lead the design of data quality monitoring, dashboards, reporting, and exception-handling processes, coordinating remediation with stewards and technology teams.
    - Drive communication and change management for governance policies and standards, making them practical and understandable for business stakeholders.
    - Define governance processes for critical data domains (e.g., companies, contacts, funds, deals, clients, sponsors) to ensure consistency, compliance, and business value.
    - Identify and onboard business data owners and stewards across business teams.
    - Partner with Data Solution Architects and business stakeholders to align definitions, semantics, and survivorship rules, including support for DealCloud implementations.
    - Define and prioritize data quality rules and metrics for key data domains.
    - Develop training and onboarding materials for stewards and users to reinforce governance practices and improve reporting, risk management, and analytics outcomes.

    Qualifications:
    - 6-8 years in data governance, data management, or related roles, preferably within financial services.
    - Strong understanding of data governance concepts, including stewardship models, data quality management, and issue-resolution processes.
    - Familiarity with CRM or deal management platforms (e.g., DealCloud, Salesforce) and modern data platforms (e.g., Databricks or similar).
    - Proficiency in SQL for data investigation, ad hoc analysis, and validation of data quality rules.
    - Comfortable working with Databricks, Jupyter notebooks, Excel, and BI tools.
    - Python skills for automation, data wrangling, profiling, and validation are strongly preferred.
    - Exposure to investment banking, equities, or private markets data is a plus.
    - Excellent written and verbal communication skills with the ability to lead cross-functional discussions and influence senior stakeholders.
    - Highly organized, proactive, and able to balance strategic governance framework design with hands-on execution.
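The posting pairs SQL for "validation of data quality rules" with Python for profiling. As a minimal sketch of that kind of check (the `contacts` table, columns, and rules here are invented for illustration, using SQLite in place of a real platform):

```python
import sqlite3

# Hypothetical data-quality rule validation: completeness and uniqueness
# checks on a critical "contacts" domain, with exceptions surfaced for
# steward remediation. Table and columns are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contacts (id INTEGER, email TEXT, firm TEXT);
INSERT INTO contacts VALUES
  (1, 'a@fund.com', 'Fund A'),
  (2, NULL,         'Fund B'),
  (3, 'c@fund.com', 'Fund C'),
  (3, 'c@fund.com', 'Fund C');  -- duplicate id
""")

# Rule 1: completeness -- every contact must carry an email.
missing_email = conn.execute(
    "SELECT COUNT(*) FROM contacts WHERE email IS NULL").fetchone()[0]

# Rule 2: uniqueness -- contact id must not repeat.
dup_ids = conn.execute("""
    SELECT COUNT(*) FROM (
        SELECT id FROM contacts GROUP BY id HAVING COUNT(*) > 1)
""").fetchone()[0]
```

In practice each rule would feed a dashboard metric and an exception queue rather than ad hoc variables, but the shape of the check is the same.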
    $220k-250k yearly 3d ago
  • Data Engineer

    Brooksource 4.1 company rating

    Data engineer job in New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a "build from scratch" mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty, or status as a covered veteran in accordance with applicable federal, state, and local laws.
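The "map Redshift sources to Databricks equivalents and flag unmigrated dependencies" step above can be sketched as a small bookkeeping routine. All table names below are invented; a real migration would drive this from catalog metadata rather than a hard-coded dict:

```python
# Hypothetical source-mapping sketch for a Redshift -> Databricks migration:
# resolve each workflow's inputs to their new location, and collect any
# dependency that has not been migrated yet for prioritization.
redshift_to_databricks = {
    "analytics.sf_opportunities": "lakehouse.salesforce.opportunities",
    "analytics.sf_accounts":      "lakehouse.salesforce.accounts",
}

def resolve_sources(workflow_sources):
    """Split a workflow's Redshift sources into migrated and pending sets."""
    migrated, pending = {}, []
    for src in workflow_sources:
        if src in redshift_to_databricks:
            migrated[src] = redshift_to_databricks[src]
        else:
            pending.append(src)  # raise with the engineering team
    return migrated, pending

migrated, pending = resolve_sources(
    ["analytics.sf_opportunities", "analytics.legacy_events"])
```

Keeping the mapping explicit makes structural differences between the two environments visible per table instead of surfacing as silent dashboard breakage.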
    $101k-140k yearly est. 3d ago
  • Data Engineer

    The Phoenix Group 4.8 company rating

    Data engineer job in Fairfield, CT

    Data Engineer - Vice President
    Greenwich, CT

    About the Firm: We are a global investment firm focused on applying financial theory to practical investment decisions. Our goal is to deliver long-term results by analyzing market data and identifying what truly matters. Technology is central to our approach, enabling insights across both traditional and alternative strategies.

    The Team: A new Data Engineering team is being established to work with large-scale datasets across the organization. This team partners directly with researchers and business teams to build and maintain infrastructure for ingesting, validating, and provisioning large volumes of structured and unstructured data.

    Your Role: As a Data Engineer, you will help design and build an enterprise data platform used by research teams to manage and analyze large datasets. You will also create tools to validate data, support back-testing, and extract actionable insights. You will work closely with researchers, portfolio managers, and other stakeholders to implement business requirements for new and ongoing projects. The role involves working with big data technologies and cloud platforms to create scalable, extensible solutions for data-intensive applications.

    What You'll Bring:
    - 6+ years of relevant experience in data engineering or software development
    - Bachelor's, Master's, or PhD in Computer Science, Engineering, or a related field
    - Strong coding, debugging, and analytical skills
    - Experience working directly with business stakeholders to design and implement solutions
    - Knowledge of distributed data systems and large-scale datasets
    - Familiarity with big data frameworks such as Spark or Hadoop
    - Interest in quantitative research (no prior finance or trading experience required)
    - Exposure to cloud platforms is a plus
    - Experience with Python, NumPy, pandas, or similar data analysis tools is a plus
    - Familiarity with AI/ML frameworks is a plus

    Who You Are:
    - Thoughtful, collaborative, and comfortable in a fast-paced environment
    - Hard-working, intellectually curious, and eager to learn
    - Committed to transparency, integrity, and innovation
    - Motivated by leveraging technology to solve complex problems and create impact

    Compensation & Benefits: Salary range: $190,000 - $260,000 (subject to experience, skills, and location). Eligible for annual discretionary bonus. Comprehensive benefits including paid time off, medical/dental/vision insurance, 401(k), and other applicable benefits.

    We are an Equal Opportunity Employer. EEO/VET/DISABILITY. The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
    $190k-260k yearly 4d ago
  • Data Engineer

    DL Software Inc. 3.3 company rating

    Data engineer job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications:
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience in designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 3d ago
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. 4.0 company rating

    Data engineer job in New York, NY

    NYC, NY / Hybrid | 12+ Months

    Project: Leveraging Llama to extract cybersecurity insights out of unstructured data from the client's ticketing system.

    Must have strong experience with:
    - Llama
    - Python
    - Hadoop
    - MCP
    - Machine Learning (ML)

    They need a strong developer using Llama and Hadoop (this is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines.

    Thanks and Regards!
    Lavkesh Dwivedi ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY - 10005
    ************ ********************
    $78k-104k yearly est. 5d ago
  • Lead Data Engineer

    APN Consulting, Inc. 4.5 company rating

    Data engineer job in New York, NY

    Job title: Lead Software Engineer
    Duration: Full-time / Contract-to-Hire

    Role description: The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.

    Key Responsibilities:
    - Manage and support HR business applications, including problem resolution and issue ownership
    - Design and develop the ETL/ELT layer for HR data integration and ensure data quality and consistency
    - Provide architecture solutions for data modeling, data warehousing, and data governance
    - Develop and maintain data ingestion processes using Informatica, Python, and related technologies
    - Support data analytics and data science initiatives with optimized data structures and AI/ML tools
    - Manage vendor products and their integrations with internal/external applications
    - Gather requirements and translate functional needs into technical specifications
    - Perform QA testing and impact analysis across the BI ecosystem
    - Maintain system documentation and knowledge repositories
    - Provide technical guidance and manage stakeholder communications

    Required Skills & Experience:
    - Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance work experience in the data and analytics space.
    - Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming.
    - Exposure to AI & ML technologies and experience tuning models and building LLM integrations.
    - Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics.
    - Extensive database development experience in MS SQL Server/Oracle and SQL scripting.
    - Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins.
    - Proficiency in using collaboration tools like Confluence, SharePoint, and JIRA.
    - Analytical skills to model business functions, processes, and dataflow within or between systems.
    - Strong problem-solving skills to debug complex, time-critical production incidents.
    - Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams.
    - Experience with cloud data lake technologies such as Snowflake and knowledge of the HR data model would be a plus.
    $93k-133k yearly est. 5d ago
  • Senior Data Engineer

    Godel Terminal

    Data engineer job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    - Able to work out of our Manhattan office a minimum of 4 days a week
    - 5+ years of experience in a financial or startup environment
    - 5+ years of experience working on live data as well as historical data
    - 3+ years of experience in Java, Python, and SQL
    - Experience managing multiple production ETL pipelines that reliably store and validate financial data
    - Experience launching, scaling, and improving backend services in cloud environments
    - Experience migrating critical data across different databases
    - Experience owning and improving critical data infrastructure
    - Experience teaching best practices to junior developers

    Preferred qualifications:
    - 5+ years of experience in a fintech startup
    - 5+ years of experience in Java, Kafka, Python, and PostgreSQL
    - 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io
    - 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    - 2+ years of experience shipping and optimizing Rust applications
    - Demonstrated experience keeping critical systems online
    - Demonstrated creativity and resourcefulness under pressure
    - Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health Insurance, Vision, Dental
    To try the product, go to *************************
    $150k yearly 2d ago
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data engineer job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.

    What You'll Do:
    - Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
    - Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
    - Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
    - Collaborate with researchers and engineers to align data formats for trading and simulation
    - Instrument and test systems for continuous performance improvements

    What We're Looking For:
    - 3+ years of C++ development experience (low-latency, high-throughput systems)
    - Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
    - Strong knowledge of concurrency, memory models, and compiler optimizations
    - Python scripting skills for testing and automation
    - Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
    $84k-114k yearly est. 2d ago
  • Data Architect

    Green Key Resources 4.6 company rating

    Data engineer job in New York, NY

    Data Solutions Architect

    The Data Solutions Architect will play a pivotal role in advancing organizational data and artificial intelligence (AI) initiatives. Leveraging statistical analysis, machine learning (ML), and large language models (LLMs), this role focuses on extracting insights and supporting decision-making across diverse business operations and professional service practices. The architect will collaborate with innovation teams, technical resources, and stakeholders to design and implement data-driven solutions that enhance service delivery and operational efficiency. Staying current with emerging technologies and best practices, the Data Solutions Architect will integrate cutting-edge techniques into projects, offering a unique opportunity to shape the future of data and AI within the professional services sector.

    Principal Duties and Responsibilities:
    - Partner with operational and practice teams to identify challenges and opportunities for workflow improvement.
    - Translate complex domain logic into actionable data requirements and AI use cases.
    - Design, build, and maintain scalable data pipelines and infrastructure to support AI and BI initiatives.
    - Utilize SQL, Python, R, and other analytics tools to analyze, model, and visualize data trends.
    - Collaborate with technology teams to refine and maintain data pipelines, warehouses, and databases.
    - Develop tools and processes to transform raw data into user-friendly formats for self-service analytics.
    - Apply advanced quantitative methods, including ML and NLP, to identify patterns and build predictive models.
    - Design and deploy systems for applications such as text analysis, trend analysis, and predictive modeling.
    - Craft, test, and refine prompts for LLMs to generate contextually accurate outputs tailored to research and drafting workflows.
    - Deliver AI-driven solutions from proof of concept through production, addressing cross-functional and practice-specific needs.
    - Continuously monitor advancements in AI, ML, and data science, integrating innovative technologies into organizational projects.

    Job Specifications:
    - Required Education: Bachelor's degree in Data Science, Computer Science, Engineering, or related fields.
    - Preferred Education: Master's degree in a relevant discipline; coursework in deep learning, NLP, or information retrieval is highly valued.
    - Required Experience: Minimum of 3 years of relevant experience, including at least 2 years in data engineering and data science roles.

    Competencies:
    - Demonstrated expertise in data analytics and engineering with a strong focus on data modeling.
    - Proficiency in statistical programming languages (Python, R) and database management (SQL).
    - Hands-on experience with ML, NLP, and data visualization tools.
    - Strong problem-solving and communication skills, with the ability to present complex data to non-technical audiences.
    - Experience in professional services or related environments preferred.
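The "craft, test, and refine prompts" duty above usually starts with a reusable template whose fields are iterated on against real documents. A minimal sketch (template wording, field names, and example values are all invented here):

```python
# Hypothetical prompt-templating sketch for an LLM-backed research/drafting
# workflow. Refining the template text and its fields against held-out
# examples is the craft/test/refine loop the role describes.
PROMPT_TEMPLATE = (
    "You are assisting a {practice} team.\n"
    "Task: {task}\n"
    "Use only the context below and cite it in your answer.\n"
    "Context:\n{context}"
)

def build_prompt(practice: str, task: str, context: str) -> str:
    """Fill the template deterministically so prompt versions are testable."""
    return PROMPT_TEMPLATE.format(practice=practice, task=task, context=context)

prompt = build_prompt("litigation", "summarize the key holdings", "Doc 1: sample text")
```

Keeping the template as data (rather than inline strings scattered through code) is what makes A/B testing of prompt revisions practical in production.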
    $106k-149k yearly est. 1d ago
  • Market Data Engineer

    Harrington Starr

    Data engineer job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do:
    - Own large-scale, time-sensitive market data capture and normalization pipelines
    - Improve internal data formats and downstream datasets used by research and quantitative teams
    - Partner closely with infrastructure to ensure reliability of packet-capture systems
    - Build robust validation, QA, and monitoring frameworks for new market data sources
    - Provide production support, troubleshoot issues, and drive quick, effective resolutions

    What You Bring:
    - Experience building or maintaining large-scale ETL pipelines
    - Strong proficiency in Python and Bash, with familiarity in C++
    - Solid understanding of networking fundamentals
    - Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
    - Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)

    Bonus Skills:
    - Experience working with binary market data protocols (ITCH, MDP3, etc.)
    - Understanding of high-performance filesystems and columnar storage formats
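The validation/QA frameworks this role describes typically include gap detection on exchange sequence numbers before a captured dataset is published. A minimal sketch of that idea (the sequence values are invented; real feeds carry per-channel sequence numbers in the protocol headers):

```python
# Hypothetical capture-QA sketch: detect gaps in exchange sequence numbers,
# the classic signal that a packet-capture system dropped messages.
def find_sequence_gaps(seq_numbers):
    """Return (expected, got) pairs wherever the sequence jumps forward."""
    gaps = []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        if cur != prev + 1:
            gaps.append((prev + 1, cur))  # expected prev+1, received cur
    return gaps

# Messages 103 and 104 are missing from this capture.
gaps = find_sequence_gaps([100, 101, 102, 105, 106])
```

A production framework would run this per channel and per session, route gaps to replay/recovery, and alert when gap counts exceed a threshold, but the core check is this comparison.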
    $90k-123k yearly est. 3d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data engineer job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.

    In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.

    To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 2d ago
  • Big Data Developer

    Capgemini 4.5 company rating

    Data engineer job in New York, NY

    We're looking for a seasoned Senior Data Engineer with strong Hadoop experience to design, build, and scale data pipelines and platforms powering analytics, AI/ML, and business operations. You'll own end-to-end data engineering, from ingestion and transformation to performance optimization, across large-scale distributed systems and modern cloud data platforms.

    Key Responsibilities:
    - Design & Build Data Pipelines: Architect, develop, and maintain robust ETL/ELT pipelines for batch and streaming data using the Hadoop ecosystem, Spark, and Airflow.
    - Big Data Architecture: Define and implement scalable big data architectures, ensuring reliability, fault tolerance, and cost efficiency.
    - Data Modeling: Develop and optimize data models for the Data Warehouse and Operational Data Store (ODS); ensure conformed dimensions and star/snowflake schemas where appropriate.
    - SQL Expertise: Write, optimize, and review complex SQL/HiveQL queries for large datasets; enforce query standards and patterns.
    - Performance Tuning: Optimize Spark jobs, SQL queries, storage formats (e.g., Parquet/ORC), partitioning, and indexing to improve latency and throughput.
    - Data Quality & Governance: Implement data validation, lineage, cataloging, and security controls across environments.
    - Workflow Orchestration: Build and manage DAGs in Airflow, ensuring observability, retries, alerting, and SLAs.
    - Cross-functional Collaboration: Partner with Data Science, Analytics, and Product teams to deliver reliable datasets and features.
    - Best Practices: Champion coding standards, CI/CD, infrastructure-as-code (IaC), and documentation across the data platform.

    Required Qualifications:
    - 7+ years of hands-on data engineering experience building production-grade pipelines.
    - Strong experience with Hadoop (HDFS, YARN), Hive SQL/HiveQL, Spark (Scala/Java/PySpark), and Airflow.
    - Expert-level SQL skills with the ability to write and tune complex queries on large datasets.
    - Solid understanding of big data architecture patterns (e.g., lakehouse, data lake + warehouse, CDC).
    - Deep knowledge of ETL/ELT and DW/ODS concepts (slowly changing dimensions, partitioning, columnar storage, incremental loads).
    - Proven track record in performance tuning for large-scale systems (Spark jobs, shuffle optimizations, broadcast joins, skew handling).
    - Strong programming background in Java and/or Scala (Python is a plus).

    Preferred Skills:
    - Experience with AI-driven data processing (feature engineering pipelines, ML-ready datasets, model data dependencies).
    - Hands-on experience with cloud data platforms (AWS, GCP, or Azure) and services like EMR/Dataproc/HDInsight, S3/GCS/ADLS, Glue/Dataflow, and BigQuery/Snowflake/Redshift/Synapse.
    - Exposure to NoSQL databases (Cassandra, HBase, DynamoDB, MongoDB).
    - Advanced data governance & security (row/column-level security, tokenization, encryption at rest/in transit, IAM/RBAC, data lineage/catalog).
    - Familiarity with Kafka (topics, partitions, consumer groups, schema registry, stream processing).
    - Experience with CI/CD for data (Git, Jenkins/GitHub Actions, Terraform) and containerization (Docker, Kubernetes).
    - Knowledge of metadata management and data observability (Great Expectations, Monte Carlo, OpenLineage).

    Life at Capgemini: Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer: flexible work; healthcare including dental, vision, mental health, and well-being programs; financial well-being programs such as 401(k) and the Employee Share Ownership Plan; paid time off and paid holidays; paid parental leave; family-building benefits like adoption assistance, surrogacy, and cryopreservation; social well-being benefits like subsidized back-up child/elder care and tutoring; mentoring, coaching, and learning programs; Employee Resource Groups; and disaster relief.

    Disclaimer: Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status, or any other characteristic protected by law. This is a general description of the duties, responsibilities, and qualifications required for this position. Physical, mental, sensory, or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship. Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact. Click the following link for more information on your rights as an Applicant **************************************************************************
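Among the DW/ODS concepts the Capgemini posting calls out, the incremental (watermark-based) load is easy to show in miniature. A hedged sketch using SQLite in place of Hive/Spark SQL; all table and column names are invented:

```python
import sqlite3

# Hypothetical incremental-load sketch: only pull source rows newer than the
# target's high-water mark, instead of reloading the full table each run.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_events (id INTEGER, loaded_at TEXT);
CREATE TABLE target_events (id INTEGER, loaded_at TEXT);
INSERT INTO source_events VALUES
  (1, '2024-01-01'), (2, '2024-01-02'), (3, '2024-01-03');
INSERT INTO target_events VALUES (1, '2024-01-01');  -- loaded on a prior run
""")

# High-water mark: the newest timestamp already present in the target.
watermark = conn.execute(
    "SELECT COALESCE(MAX(loaded_at), '') FROM target_events").fetchone()[0]

# Append only rows beyond the watermark.
conn.execute(
    "INSERT INTO target_events SELECT * FROM source_events WHERE loaded_at > ?",
    (watermark,))

count = conn.execute("SELECT COUNT(*) FROM target_events").fetchone()[0]
```

On a real platform the same pattern runs as a partitioned `INSERT ... SELECT` or a `MERGE`, with the watermark persisted by the orchestrator between runs; late-arriving data is the usual complication this simple form does not handle.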
    $87k-130k yearly est. 4d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group 4.5 company rating

    Data engineer job in New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: $125-190k plus bonus

    Responsibilities:
    - Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
    - Manage web scraping requests, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrapes, validation, and entitlement to users
    - Field questions from users about the scrapes and websites
    - Coordinate with Compliance on approvals and TOU reviews
    - Build data pipelines on the AWS platform utilizing existing tools like cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
    - Normalize/standardize vendor data and firm data for firm consumption
    - Implement data quality checks to ensure reliability and accuracy of scraped data
    - Coordinate with internal teams on delivery, access, requests, and support
    - Promote data engineering best practices

    Required Skills and Qualifications:
    - Bachelor's degree in computer science, engineering, mathematics, or a related field
    - 2-5 years of experience in a similar role
    - Prior buy-side experience is strongly preferred (multi-strat/hedge funds)
    - Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems
    - AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
    - Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.)
    - Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools
    - Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.)
    - Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
    - Strong communication skills to work with stakeholders across technology, investment, and operations teams
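The scrape-then-validate pattern this role describes can be sketched with the standard library alone (a real pipeline would use one of the frameworks named above, such as Scrapy or BeautifulSoup; the HTML snippet and fields here are invented):

```python
from html.parser import HTMLParser

# Hypothetical scrape + data-quality-check sketch: pull table cells out of
# fetched HTML, then validate the scraped values before storing them.
class CellCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

# Stand-in for a fetched page; invented ticker/price for illustration.
html = "<table><tr><td>AAPL</td><td>189.91</td></tr></table>"
parser = CellCollector()
parser.feed(html)

# Quality check: the price column must parse as a number before loading.
ticker, price = parser.cells
assert price.replace(".", "", 1).isdigit()
```

Site layouts change without notice, so the validation step (and alerting when it fails) is usually what keeps a scraping capability reliable rather than the parsing itself.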
    $86k-120k yearly est. 3d ago
  • Lead HPC Architect Cybersecurity - High Performance & Computational Data Ecosystem

    Icahn School of Medicine at Mount Sinai 4.8 company rating

    Data engineer job in New York, NY

    The Scientific Computing and Data group at the Icahn School of Medicine at Mount Sinai partners with scientists to accelerate scientific discovery. To achieve these aims, we support a cutting-edge high-performance computing and data ecosystem along with MD/PhD-level support for researchers. The group is composed of a high-performance computing team, a clinical data warehouse team and a data services team. The Lead HPC Architect, Cybersecurity, High Performance Computational and Data Ecosystem, is responsible for designing, implementing, and managing the cybersecurity infrastructure and technical operations of Scientific Computing's computational and data science ecosystem. This ecosystem includes a 25,000+ core and 40+ petabyte usable high-performance computing (HPC) systems, clinical research databases, and a software development infrastructure for local and national projects. The HPC system is the fastest in the world at any academic biomedical center (Top 500 list). To meet Sinai's scientific and clinical goals, the Lead brings a strategic, tactical and customer-focused vision to evolve the ecosystem to be continually more resilient, secure, scalable and productive for basic and translational biomedical research. The Lead combines deep technical expertise in cybersecurity, HPC systems, storage, networking, and software infrastructure with a strong focus on service, collaboration, and strategic planning for researchers and clinicians throughout the organization and beyond. The Lead is an expert troubleshooter, productive partner and leader of projects. The lead will work with stakeholders to make sure the HPC infrastructure is in compliance with governmental funding agency requirements and to promote efficient resource utilizations for researchers This position reports to the Director for HPC and Data Ecosystem in Scientific Computing and Data. 
Key Responsibilities: HPC Cybersecurity & System Administration: Design, implement, and manage all cybersecurity operations within the HPC environment, ensuring alignment with industry standards (NIST, ISO, GDPR, HIPAA, CMMC, NYC Cyber Command, etc.). Implement best practices for data security, including but not limited to encryption (at rest, in transit, and in use), audit logging, access control, authentication control, configuration management, secure enclaves, and confidential computing. Perform full-spectrum HPC system administration: installation, monitoring, maintenance, usage reporting, troubleshooting, backup and performance tuning across HPC applications, web services, databases, job schedulers, networking, storage, compute, and hardware to optimize workload efficiency. Lead resolution of complex cybersecurity and system issues; provide mentorship and technical guidance to team members. Ensure that all designs and implementations meet cybersecurity, performance, scalability, and reliability goals. Ensure that the design and operation of the HPC ecosystem is productive for research. Lead the integration of HPC resources with laboratory equipment (e.g., genomic sequencers, microscopy, clinical systems) for data ingestion in alignment with all regulatory requirements. Develop, review and maintain security policies, risk assessments, and compliance documentation accurately and efficiently. Collaborate with institutional IT, compliance, and research teams to ensure regulatory, Mount Sinai policy, and operational alignment. Design and implement hybrid and cloud-integrated HPC solutions using on-premise and public cloud resources. Partner with peers regionally, nationally and internationally to discover, propose and deploy a world-class research infrastructure for Mount Sinai. Stay current with emerging HPC, cloud, and cybersecurity technologies to keep the organization's infrastructure up-to-date. 
Work collaboratively, effectively and productively with other team members within the group and across Mount Sinai. Provide after-hours support as needed. Perform other duties as assigned or requested. Requirements: Bachelor's degree in computer science, engineering or another scientific field; Master's or PhD preferred. 10 years of progressive HPC system administration experience with Enterprise Linux releases, including RedHat/CentOS/Rocky systems, and batch cluster environments. Experience with all aspects of high-throughput HPC, including schedulers (LSF or Slurm), networking (InfiniBand/Gigabit Ethernet), parallel file systems and storage, configuration management systems (xCAT, Puppet and/or Ansible), etc. Proficient in cybersecurity processes, posture, regulations, approaches, protocols, firewalls, and data protection in a regulated environment (e.g. finance, healthcare). In-depth knowledge of HIPAA, NIST, FISMA, GDPR and related compliance standards, with proven experience building and maintaining compliant HPC systems. Experience with secure enclaves and confidential computing. Proven ability to provide mentorship and technical leadership to team members. Proven ability to lead complex projects to completion in collaborative, interdisciplinary settings with minimal guidance. Excellent analytical ability and troubleshooting skills. Excellent communication, documentation, collaboration and interpersonal skills. Must be a team player and customer focused. Scripting and programming experience. Preferred Experience: Proficient with cloud services, orchestration tools, OpenShift/Kubernetes, cost optimization and hybrid HPC architectures. Experience with Azure, AWS or Google cloud services. Experience with the LSF job scheduler and GPFS/Spectrum Scale. Experience in a healthcare environment. Experience in a research environment is highly preferred. Experience with software that enables privacy-preserving linking of PHI. Experience with Globus data transfer. 
Experience with Web service, SAP HANA, Oracle, SQL, MariaDB and other database technologies. Strength through Unity and Inclusion The Mount Sinai Health System is committed to fostering an environment where everyone can contribute to excellence. We share a common dedication to delivering outstanding patient care. When you join us, you become part of Mount Sinai's unparalleled legacy of achievement, education, and innovation as we work together to transform healthcare. We encourage all team members to actively participate in creating a culture that ensures fair access to opportunities, promotes inclusive practices, and supports the success of every individual. At Mount Sinai, our leaders are committed to fostering a workplace where all employees feel valued, respected, and empowered to grow. We strive to create an environment where collaboration, fairness, and continuous learning drive positive change, improving the well-being of our staff, patients, and organization. Our leaders are expected to challenge outdated practices, promote a culture of respect, and work toward meaningful improvements that enhance patient care and workplace experiences. We are dedicated to building a supportive and welcoming environment where everyone has the opportunity to thrive and advance professionally. Explore this opportunity and be part of the next chapter in our history. About the Mount Sinai Health System: Mount Sinai Health System is one of the largest academic medical systems in the New York metro area, with more than 48,000 employees working across eight hospitals, more than 400 outpatient practices, more than 300 labs, a school of nursing, and a leading school of medicine and graduate education. 
Mount Sinai advances health for all people, everywhere, by taking on the most complex health care challenges of our time - discovering and applying new scientific learning and knowledge; developing safer, more effective treatments; educating the next generation of medical leaders and innovators; and supporting local communities by delivering high-quality care to all who need it. Through the integration of its hospitals, labs, and schools, Mount Sinai offers comprehensive health care solutions from birth through geriatrics, leveraging innovative approaches such as artificial intelligence and informatics while keeping patients' medical and emotional needs at the center of all treatment. The Health System includes more than 9,000 primary and specialty care physicians; 13 joint-venture outpatient surgery centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and more than 30 affiliated community health centers. We are consistently ranked by U.S. News & World Report's Best Hospitals, receiving high "Honor Roll" status. Equal Opportunity Employer The Mount Sinai Health System is an equal opportunity employer, complying with all applicable federal civil rights laws. We do not discriminate, exclude, or treat individuals differently based on race, color, national origin, age, religion, disability, sex, sexual orientation, gender, veteran status, or any other characteristic protected by law. We are deeply committed to fostering an environment where all faculty, staff, students, trainees, patients, visitors, and the communities we serve feel respected and supported. Our goal is to create a healthcare and learning institution that actively works to remove barriers, address challenges, and promote fairness in all aspects of our organization.
    $89k-116k yearly est. 1d ago
  • Sr. Azure Data Engineer

    Synechron 4.4 company rating

    Data engineer job in New York, NY

    We are At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets. Our challenge We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the Financial / Banking domain. Additional Information* The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below). The Role Responsibilities: Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance. Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake. Help voluntarily and proactively, supporting team members and peers in delivering their tasks to ensure end-to-end delivery. Evaluate technical performance challenges and recommend tuning solutions. 
Hands-on experience as a Data Services Engineer designing, developing, and maintaining our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python. Requirements: Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python. Strong expertise in distributed data processing and streaming architectures. Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management. Proficiency in Python scripting and programming for data manipulation and automation. Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams). Knowledge of SQL, data modeling, and ETL/ELT processes. Understanding of cloud platforms (AWS, Azure, GCP) is a plus. Domain knowledge in any of the below areas: Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.). Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes. Funding Support, Planning & Analysis, Regulatory Reporting & Compliance. Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management. We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses. 
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups. Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
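For candidates sizing up the Kafka/Snowflake/Python stack this listing names, the core micro-batching pattern can be sketched in plain Python. Everything here is illustrative: the message fields, function names, and batch size are hypothetical, and a real pipeline would consume from a Kafka topic and bulk-load each batch into Snowflake (e.g., via COPY INTO or Snowpipe) rather than iterating an in-memory list.

```python
import json

def transform(raw: bytes) -> dict:
    """Normalize one raw reference-data message into the warehouse
    schema. Field names are hypothetical."""
    msg = json.loads(raw)
    return {
        "record_id": msg["id"],
        "symbol": msg["symbol"].upper(),  # standardize casing at ingest
        "qty": int(msg["qty"]),           # coerce string quantities to ints
    }

def micro_batches(messages, batch_size=2):
    """Group transformed rows into fixed-size batches; a real pipeline
    would hand each batch to a Snowflake bulk load."""
    batch = []
    for raw in messages:
        batch.append(transform(raw))
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Simulated stream standing in for a Kafka consumer poll loop.
stream = [
    b'{"id": 1, "symbol": "aapl", "qty": "100"}',
    b'{"id": 2, "symbol": "msft", "qty": "50"}',
    b'{"id": 3, "symbol": "goog", "qty": "25"}',
]
batches = list(micro_batches(stream))
```

Batching amortizes warehouse load overhead, which is why streaming-to-Snowflake designs typically land micro-batches rather than single rows.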
    $130k-140k yearly 5d ago
  • Data Engineer

    Haptiq

    Data engineer job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage. The Opportunity As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations. Responsibilities and Duties Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms). Ensure consistent data hygiene, normalization, and enrichment across source systems. Develop and maintain data models and data warehouses optimized for analytics and operational reporting. Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights. 
Own the documentation of data schemas, definitions, lineage, and data quality controls. Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets. Monitor pipeline performance and proactively resolve data discrepancies or failures. Contribute to architectural decisions related to internal data infrastructure and tools. Requirements 3-5 years of experience as a data engineer, analytics engineer, or similar role. Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt). Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday. Proficiency in Python or another scripting language for data manipulation. Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment). Strong understanding of data governance, documentation, and schema management. Excellent communication skills and ability to work cross-functionally. Benefits Flexible work arrangements (including hybrid mode) Great Paid Time Off (PTO) policy Comprehensive benefits package (Medical / Dental / Vision / Disability / Life) Healthcare and Dependent Care Flexible Spending Accounts (FSAs) 401(k) retirement plan Access to HSA-compatible plans Pre-tax commuter benefits Employee Assistance Program (EAP) Opportunities for professional growth and development. A supportive, dynamic, and inclusive work environment. Why Join Us? We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it. The compensation range for this role is $75,000 to $80,000 USD
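The data hygiene and normalization duties described above reduce, at their smallest, to per-record cleaning plus required-field checks. This is a minimal sketch with hypothetical field names and rules; a production pipeline would run such checks inside an orchestrated task (e.g., an Airflow job or dbt tests) and route failures to an exception-handling process.

```python
def normalize_record(rec):
    """Apply basic hygiene rules (trim strings, lowercase emails) and
    report any missing required fields. Field names are illustrative."""
    required = ("employee_id", "email")
    # Trim whitespace on every string value.
    cleaned = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
    # Canonicalize emails so joins across source systems line up.
    if cleaned.get("email"):
        cleaned["email"] = cleaned["email"].lower()
    missing = [f for f in required if not cleaned.get(f)]
    return cleaned, missing

row, missing = normalize_record(
    {"employee_id": " E42 ", "email": "  Ada@Example.COM "}
)
```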
    $75k-80k yearly 4d ago
  • Senior Data Architect

    Mphasis

    Data engineer job in New York, NY

    About the Company Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis' Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2TM=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps ‘shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, speed and innovation with domain expertise and specialization are key to building strong relationships with marquee clients. About the Role Senior-level Data Architect with data analytics experience in Databricks, PySpark, Python, and ETL tools such as Informatica. This is a key role that requires a senior/lead engineer with great communication skills who is very proactive with risk and issue management. Responsibilities Hands-on data analytics experience with Databricks on AWS, PySpark and Python. Must have prior experience with migrating a data asset to the cloud using a GenAI automation option. Experience in migrating data from on-premises to AWS. Expertise in developing data models, delivering data-driven insights for business solutions. Experience in pretraining, fine-tuning, augmenting and optimizing large language models (LLMs). Experience in designing and implementing database solutions, and developing PySpark applications to extract, transform, and aggregate data, generating insights. Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance. Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets. 
Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks. Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work. Qualifications 15+ years of experience as a Data Analyst / Data Engineer, with Databricks-on-AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions on AWS. Required Skills Strong proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources. Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability. Preferred Skills Ability to manage under tight constraints; very proactive with risk and issue management. Requirement Clarification & Communication: Interact directly with colleagues to clarify objectives and challenge assumptions. Documentation & Best Practices: Maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability. Collaboration & Stakeholder Engagement: Work closely with colleagues who provide data, raising questions about data validity, sharing insights, and co-creating solutions that address evolving needs. Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner. Demonstrated critical thinking skills, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary. A self-directed, resourceful problem-solver who collaborates well with others while confidently managing tasks and priorities independently.
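The extract/transform/aggregate work this role calls for follows a groupBy-and-sum shape. The sketch below uses plain Python so it runs anywhere; in PySpark the equivalent would be along the lines of `df.groupBy("region").agg(F.sum("amount"))`, and all column names here are hypothetical.

```python
from collections import defaultdict

def aggregate_by_key(rows, key, value):
    """Sum `value` per distinct `key` value - a plain-Python stand-in
    for a PySpark groupBy/agg job, used here only for illustration."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

# Hypothetical rows in place of a Spark DataFrame.
rows = [
    {"region": "NY", "amount": 100.0},
    {"region": "NY", "amount": 50.0},
    {"region": "NJ", "amount": 25.0},
]
result = aggregate_by_key(rows, "region", "amount")
```

In Spark the same shape distributes across executors; the in-memory version is only meant to make the transformation logic concrete.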
    $104k-141k yearly est. 1d ago
  • Software Engineer

    JSG (Johnson Service Group, Inc.)

    Data engineer job in Hauppauge, NY

    JSG is hiring a Software Engineer in Hauppauge, NY. Must be a US Citizen and work onsite. Salary range: $127K-$137K - Bonus Our charter is to develop fuel measurement, management and inerting systems for commercial and defense airframers. The Software Engineering team works closely with the Systems and Electronic Hardware Engineering teams to develop, qualify and certify these technologies as products for our customers in aerospace and industrial markets. Develop embedded software using C and/or model-based tools such as SCADE Develop high-level and low-level software requirements Create requirements-based test cases and verification procedures Perform software integration testing on target hardware using both real and simulated inputs/outputs Analyze software requirements, design and code to assure compliance to standards and guidelines Perform traceability analysis from customer specification requirements to software code Participate in software certification audits, e.g. stages of involvement (SOI) BS in Software Engineering, Computer Engineering, Computer Science or related field 5+ years of experience performing software development, verification and/or integration Strong technical aptitude with analytical and problem-solving capabilities Excellent interpersonal and communication skills, both verbal and written Ability to work in a team environment, cultivate strong relationships and demonstrate initiative Experience with C programming language Experience with model-based software development using SCADE Experience developing embedded software control systems Experience planning and executing projects using Agile software development methodology Experience managing requirements using DOORS or DOORS Next Gen (DNG) Experience with digital signal processing or digital filter design Experience with ARM microprocessors Experience with serial communication protocols (e.g. 
CANbus, ARINC, RS-232) Familiarity with aerospace (e.g., DO-178, DO-330, DO-331) and/or industrial (e.g., IEC 61508) software certification requirements Familiarity with functional safety standards such as ISO 13849, IEC 61508, IEC 62061, ISO 26262 or ARP4761. We are looking for a Software Engineer who has working experience designing, developing and verifying embedded software in aerospace and/or industrial applications. The candidate should be familiar with industry-standard software development and design assurance practices (such as DO-178, ISO 26262, EN 50128, IEC 61508 or IEC 62304) and their application across the entire software development lifecycle. Johnson Service Group (JSG) is an Equal Opportunity Employer. JSG provides equal employment opportunities to all applicants and employees without regard to race, color, religion, sex, age, sexual orientation, gender identity, national origin, disability, marital status, protected veteran status, or any other characteristic protected by law.
    $80k-107k yearly est. 4d ago
  • Data Engineer

    Gotham Technology Group 4.5 company rating

    Data engineer job in New York, NY

    Our client is seeking a Data Engineer with hands-on experience in Web Scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves. Responsibilities Develop scalable Web Scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries. Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring. Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines. Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift. Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm. Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets. Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users. Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns. Required Qualifications Bachelor's degree in Computer Science, Engineering, Mathematics, or related field. 2-5 years of experience in a similar Data Engineering or Web Scraping role. Capital markets knowledge with familiarity across asset classes and experience supporting trading systems. Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift). 
Proficiency with modern Web Scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright). Strong Python programming skills and experience with SQL and NoSQL databases. Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus. Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
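The scraping work above can be sketched minimally with the Python standard library. The page markup, URLs, and class name below are all hypothetical, and a production scrape would layer in the compliance review, monitoring, and AWS pipeline steps the listing describes; frameworks such as BeautifulSoup, Scrapy, or Playwright handle the selector work at real scale.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags - a toy stand-in for the
    selector logic a scraping framework provides."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

# Hypothetical page markup standing in for a fetched response body.
page = '<html><body><a href="/filings/1">10-K</a> <a href="/filings/2">10-Q</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
```

A real job would fetch the page over HTTP (respecting robots.txt and Terms of Use), then validate and persist the extracted records into the downstream pipeline.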
    $86k-120k yearly est. 3d ago
  • Lead Data Engineer with Banking

    Synechron 4.4 company rating

    Data engineer job in New York, NY

    We are At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets. Our challenge We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform. Additional Information* The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $140k/year & benefits (see below). The Role Responsibilities: Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python. Lead the implementation of distributed data processing and real-time streaming solutions. 
Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency. Develop and automate data workflows and transformations using Python scripting. Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions. Monitor, troubleshoot, and optimize data pipelines and platform performance. Ensure data quality, governance, and security standards are upheld. Guide and mentor junior team members and foster best practices in data engineering. Requirements: Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python. Strong expertise in distributed data processing frameworks and streaming architectures. Hands-on experience with Snowflake data warehouse platform, including data ingestion, performance tuning, and management. Proficiency in Python for data manipulation, automation, and scripting. Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams. Solid understanding of SQL, data modeling, and ETL/ELT processes. Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous. Strong troubleshooting skills and ability to optimize data workflows. Excellent communication and collaboration skills. Preferred, but not required: Bachelor's or Master's degree in Computer Science, Information Systems, or related field. Experience with containerization (Docker, Kubernetes) is a plus. Knowledge of data security best practices and GDPR compliance. Certifications related to cloud platforms or data engineering preferred. We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. 
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses. Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups. Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-140k yearly 4d ago

Learn more about data engineer jobs

How much does a data engineer earn in Islip, NY?

The average data engineer in Islip, NY earns between $79,000 and $142,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.

Average data engineer salary in Islip, NY

$106,000

What are the biggest employers of Data Engineers in Islip, NY?

The biggest employers of Data Engineers in Islip, NY are:
  1. United Nations