Data Governance Lead - Data Architecture & Governance
Data engineer job in New York, NY
Job Title: Data Governance Lead - Data Architecture & Governance
Employment Type: Full-Time
Base Salary: $220K to $250K (based on experience) + Bonus
This role is eligible for medical, dental, and vision benefits.
About the Role:
We are seeking an experienced Data Governance Lead to join a dynamic data and analytics team in New York. This role will design and oversee the organization's data governance framework, stewardship model, and data quality approach across financial services business lines, ensuring trusted and well-defined data for reporting and analytics across the Databricks lakehouse, CRM, management reporting, data science teams, and GenAI initiatives.
Primary Responsibilities:
Design, implement, and refine enterprise-wide data governance framework, including policies, standards, and roles for data ownership and stewardship.
Lead the design of data quality monitoring, dashboards, reporting, and exception-handling processes, coordinating remediation with stewards and technology teams.
Drive communication and change management for governance policies and standards, making them practical and understandable for business stakeholders.
Define governance processes for critical data domains (e.g., companies, contacts, funds, deals, clients, sponsors) to ensure consistency, compliance, and business value.
Identify and onboard business data owners and stewards across business teams.
Partner with Data Solution Architects and business stakeholders to align definitions, semantics, and survivorship rules, including support for DealCloud implementations.
Define and prioritize data quality rules and metrics for key data domains.
Develop training and onboarding materials for stewards and users to reinforce governance practices and improve reporting, risk management, and analytics outcomes.
Qualifications:
6-8 years in data governance, data management, or related roles, preferably within financial services.
Strong understanding of data governance concepts, including stewardship models, data quality management, and issue-resolution processes.
Familiarity with CRM or deal management platforms (e.g., DealCloud, Salesforce) and modern data platforms (e.g., Databricks or similar).
Proficiency in SQL for data investigation, ad hoc analysis, and validation of data quality rules.
Comfortable working with Databricks, Jupyter notebooks, Excel, and BI tools.
Python skills for automation, data wrangling, profiling, and validation are strongly preferred (a minimal data-quality sketch follows this list).
Exposure to investment banking, equities, or private markets data is a plus.
Excellent written and verbal communication skills with the ability to lead cross-functional discussions and influence senior stakeholders.
Highly organized, proactive, and able to balance strategic governance framework design with hands-on execution.
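To ground the SQL and Python expectations above, here is a minimal sketch of the kind of data quality rule a governance lead might define and validate. The rule, column names, and values are hypothetical illustrations, not taken from the posting:

```python
import pandas as pd

# Hypothetical rule for the "deals" domain: every record needs a
# non-null client ID and a positive deal size. Violations get routed
# to the responsible data steward for remediation.
def check_deal_quality(deals: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that violate the rule."""
    return deals[deals["client_id"].isna() | (deals["deal_size_usd"] <= 0)]

deals = pd.DataFrame({
    "deal_id": [101, 102, 103],
    "client_id": ["C001", None, "C003"],
    "deal_size_usd": [5_000_000, 1_200_000, -10],
})
print(check_deal_quality(deals))  # flags 102 (null client) and 103 (bad size)
```

In practice, rules like this would feed the dashboards and exception-handling processes described in the responsibilities.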
Data Engineer
Data engineer job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes.
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
Qualifications
Bachelor's degree in Information Technology or a related field.
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views (see the sketch after this list).
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
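As an illustration of the SQL proficiency described above, the following sketch runs a join-plus-aggregation query from Python against MS SQL Server. The connection string, table, and column names are hypothetical, not details from the posting:

```python
import pyodbc

# Hypothetical MS SQL Server connection; server and database are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=erp-db;DATABASE=erp;Trusted_Connection=yes;"
)

# A join plus aggregation (SUM, AVG, COUNT) of the kind the posting names.
sql = """
SELECT c.customer_name,
       COUNT(o.order_id)  AS order_count,
       SUM(o.order_total) AS total_value,
       AVG(o.order_total) AS avg_value
FROM customers AS c
JOIN orders    AS o ON o.customer_id = c.customer_id
GROUP BY c.customer_name;
"""

for row in conn.cursor().execute(sql):
    print(row.customer_name, row.order_count, row.total_value, row.avg_value)
```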
Data Engineer
Data engineer job in New York, NY
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools (a minimal sketch follows this list)
Experience in designing, managing, and optimizing Data Warehousing solutions
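For context on the ETL expectation, here is a minimal extract-transform-load sketch in Python; the file names and columns are invented for illustration:

```python
import pandas as pd

# Extract: read a hypothetical vendor price file.
prices = pd.read_csv("vendor_prices.csv", parse_dates=["trade_date"])

# Transform: normalize tickers and drop records with no closing price.
prices["ticker"] = prices["ticker"].str.upper().str.strip()
prices = prices.dropna(subset=["close_price"])

# Load: write a columnar file a warehouse can ingest.
prices.to_parquet("curated/prices.parquet", index=False)
```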
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation (see the gap-detection sketch after this list)
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
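To make the gap-recovery and Python-automation points concrete, here is a minimal sketch of sequence-gap detection, the event that typically triggers a feed handler's recovery/replay path. It is generic, not tied to any particular protocol listed above:

```python
def find_gaps(seq_numbers):
    """Yield (first_missing, next_received) pairs where the feed
    skipped sequence numbers -- the trigger for gap recovery/replay."""
    prev = None
    for seq in seq_numbers:
        if prev is not None and seq != prev + 1:
            yield (prev + 1, seq)
        prev = seq

# A feed that jumps from 1002 to 1006 dropped messages 1003-1005.
print(list(find_gaps([1000, 1001, 1002, 1006, 1007])))  # [(1003, 1006)]
```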
Senior Data Engineer
Data engineer job in New York, NY
Godel Terminal is a cutting edge financial platform that puts the world's financial data at your fingertips. From Equities and SEC filings, to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.
We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets.
Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you:
Minimum qualifications:
Able to work out of our Manhattan office minimum 4 days a week
5+ years of experience in a financial or startup environment
5+ years of experience working on live data as well as historical data
3+ years of experience in Java, Python, and SQL
Experience managing multiple production ETL pipelines that reliably store and validate financial data
Experience launching, scaling, and improving backend services in cloud environments
Experience migrating critical data across different databases
Experience owning and improving critical data infrastructure
Experience teaching best practices to junior developers
Preferred qualifications:
5+ years of experience in a fintech startup
5+ years of experience in Java, Kafka, Python, PostgreSQL
5+ years of experience working with WebSocket libraries like RxStomp or Socket.io (see the sketch after this list)
5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
2+ years of experience shipping and optimizing Rust applications
Demonstrated experience keeping critical systems online
Demonstrated creativity and resourcefulness under pressure
Experience with corporate debt / bonds and commodities data
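For a flavor of the WebSocket work mentioned above, here is a minimal Python subscriber sketch; the endpoint URL and message schema are invented and are not Godel Terminal's actual API:

```python
import asyncio
import json

import websockets  # pip install websockets

async def subscribe(url: str, symbol: str) -> None:
    # Hypothetical subscribe-then-stream protocol.
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"action": "subscribe", "symbol": symbol}))
        async for raw in ws:
            tick = json.loads(raw)
            print(tick.get("symbol"), tick.get("price"))

asyncio.run(subscribe("wss://example.com/marketdata", "AAPL"))
```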
Salary range begins at $150,000 and increases with experience
Benefits: Health Insurance, Vision, Dental
To try the product, go to *************************
Machine Learning Engineer / Data Scientist / GenAI
Data engineer job in New York, NY
NYC NY / Hybrid
12+ Months
Project - Leveraging Llama to extract cybersecurity insights out of unstructured data from their ticketing system.
Must have strong experience with:
Llama
Python
Hadoop
MCP
Machine Learning (ML)
They need a strong developer using Llama and Hadoop (Hadoop is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets, but they want someone who can come in, recommend the best approach, and then get it done. They have tight timelines.
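As a rough sketch of the described task, extracting structured cybersecurity insights from ticket text with a Llama model, the snippet below uses the Hugging Face transformers pipeline. The model checkpoint, prompt, and ticket text are placeholders; the actual stack (including the MCP integration and reading from Hadoop) would differ:

```python
from transformers import pipeline  # pip install transformers

# Placeholder checkpoint; any licensed, instruction-tuned Llama model
# could slot in here.
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

ticket = "User reports repeated failed logins from an unrecognized IP ..."
prompt = (
    "Extract any cybersecurity indicators (IPs, attack type, severity) "
    f"from this ticket and return them as JSON:\n{ticket}\n"
)
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```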
Thanks and Regards!
Lavkesh Dwivedi
************************
Amtex System Inc.
28 Liberty Street, 6th Floor | New York, NY - 10005
************
********************
Data Architect
Data engineer job in New York, NY
Data Solutions Architect
The Data Solutions Architect will play a pivotal role in advancing organizational data and artificial intelligence (AI) initiatives. Leveraging statistical analysis, machine learning (ML), and large language models (LLMs), this role focuses on extracting insights and supporting decision-making across diverse business operations and professional service practices. The architect will collaborate with innovation teams, technical resources, and stakeholders to design and implement data-driven solutions that enhance service delivery and operational efficiency. Staying current with emerging technologies and best practices, the Data Solutions Architect will integrate cutting-edge techniques into projects, offering a unique opportunity to shape the future of data and AI within the professional services sector.
Principal Duties and Responsibilities
Partner with operational and practice teams to identify challenges and opportunities for workflow improvement.
Translate complex domain logic into actionable data requirements and AI use cases.
Design, build, and maintain scalable data pipelines and infrastructure to support AI and BI initiatives.
Utilize SQL, Python, R, and other analytics tools to analyze, model, and visualize data trends.
Collaborate with technology teams to refine and maintain data pipelines, warehouses, and databases.
Develop tools and processes to transform raw data into user-friendly formats for self-service analytics.
Apply advanced quantitative methods, including ML and NLP, to identify patterns and build predictive models.
Design and deploy systems for applications such as text analysis, trend analysis, and predictive modeling.
Craft, test, and refine prompts for LLMs to generate contextually accurate outputs tailored to research and drafting workflows (see the sketch after this list).
Deliver AI-driven solutions from proof of concept through production, addressing cross-functional and practice-specific needs.
Continuously monitor advancements in AI, ML, and data science, integrating innovative technologies into organizational projects.
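As one hypothetical pattern for the prompt-crafting responsibility above: keep prompts as versioned templates and sanity-check them before they reach an LLM. The names and wording here are illustrative only:

```python
# A versioned prompt template for a research/drafting workflow.
DRAFTING_PROMPT_V2 = (
    "You are assisting with {practice_area} research. Summarize the key "
    "points of the following document in plain language, citing section "
    "numbers where possible:\n\n{document}"
)

def build_prompt(practice_area: str, document: str) -> str:
    return DRAFTING_PROMPT_V2.format(practice_area=practice_area,
                                     document=document)

prompt = build_prompt("tax", "Section 1. Scope ...")
assert "Section 1" in prompt  # minimal check before sending to an LLM
```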
Job Specifications
Required Education
Bachelor's degree in Data Science, Computer Science, Engineering, or related fields.
Preferred Education
Master's degree in a relevant discipline; coursework in deep learning, NLP, or information retrieval is highly valued.
Required Experience
Minimum of 3 years of relevant experience, including at least 2 years in data engineering and data science roles.
Competencies
Demonstrated expertise in data analytics and engineering with a strong focus on data modeling.
Proficiency in statistical programming languages (Python, R) and database management (SQL).
Hands-on experience with ML, NLP, and data visualization tools.
Strong problem-solving and communication skills, with the ability to present complex data to non-technical audiences.
Experience in professional services or related environments preferred.
Azure Data Engineer
Data engineer job in Weehawken, NJ
Expert-level skills writing and optimizing complex SQL
Experience with complex data modelling, ETL design, and using large databases in a business environment
Experience building data pipelines and applications to stream and process datasets at low latencies
Fluent with Big Data technologies like Spark, Kafka, and Hive
Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
Designing and building data pipelines using API ingestion and streaming ingestion methods (a minimal streaming sketch follows this list)
Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential
Experience developing NoSQL solutions using Azure Cosmos DB is essential
Thorough understanding of Azure and AWS cloud infrastructure offerings
Working knowledge of Python is desirable
Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
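A minimal PySpark structured-streaming sketch of the streaming-ingestion pattern the list mentions; the broker, topic, and storage paths are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

# Read a Kafka topic as a stream (placeholder broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the payload before downstream transforms.
parsed = events.select(col("value").cast("string").alias("payload"))

# Land the stream in a data lake path (placeholder ADLS URI).
query = (
    parsed.writeStream.format("parquet")
    .option("path", "abfss://lake@account.dfs.core.windows.net/raw/events")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .start()
)
query.awaitTermination()
```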
Best Regards,
Dipendra Gupta
Technical Recruiter
*****************************
Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco
Data engineer job in New York, NY
Are you a data engineer who loves building systems that power real impact in the world?
A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.
In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.
To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
Senior Data Engineer - Investment & Portfolio Data (PE / Alternatives)
Data engineer job in New York, NY
About the Opportunity
Our client is a global alternative investment firm in a high-growth phase, investing heavily in modernizing its enterprise data platform. With multiple investment strategies and operations across several geographies, the firm is building a scalable, front-to-back investment data environment to support portfolio management, performance reporting, and executive decision-making.
This is a hands-on, senior individual contributor role for an engineer who has worked close to investment teams and understands financial and portfolio data, not just generic SaaS analytics.
Who This Role Is For
This role is ideal for data engineers who have experience in or alongside Private Equity, Hedge Funds, Asset Management, or Capital Markets environments and are comfortable owning complex financial data pipelines end-to-end.
This is not a traditional BI, marketing, or consumer data role.
Candidates coming purely from ad-tech, healthcare, or non-financial SaaS backgrounds may not find this a fit.
What You'll Be Doing
Design, build, and maintain scalable data pipelines supporting investment, portfolio, and fund-level data
Partner closely with technology leadership and investment stakeholders to translate business and investment use cases into technical solutions
Contribute to the buildout of a modern data lake / lakehouse architecture (medallion-style or similar; see the sketch after this list)
Integrate data across the full investment lifecycle, including:
Deal and transaction data
Portfolio company metrics
Fund performance, AUM, and reporting data
Ensure data quality, lineage, and reliability across multiple strategies and entities
Operate as a senior, hands-on engineer - designing, building, and troubleshooting in the weeds when needed
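For the medallion-style buildout mentioned above, here is a miniature bronze-to-silver-to-gold sketch in PySpark; the table paths, keys, and columns are invented, and a real implementation would add the lineage and quality controls the posting calls for:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw fund-accounting extracts as-is.
bronze = spark.read.json("/lake/bronze/fund_positions/")

# Silver: conform types and deduplicate on the business key.
silver = (
    bronze.withColumn("as_of_date", F.to_date("as_of_date"))
    .dropDuplicates(["fund_id", "position_id", "as_of_date"])
)

# Gold: an aggregate ready for portfolio reporting, e.g. AUM per fund.
gold = silver.groupBy("fund_id", "as_of_date").agg(
    F.sum("market_value").alias("aum")
)
gold.write.mode("overwrite").parquet("/lake/gold/fund_aum/")
```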
Required Experience
7+ years of experience as a Data Engineer or similar role
Strong background supporting financial services data, ideally within:
Private Equity
Hedge Funds
Asset Management
Investment Banking / Capital Markets
Experience working with complex, multi-entity datasets tied to investments, portfolios, or funds
Strong SQL skills and experience building production-grade data pipelines
Experience with modern cloud data platforms and architectures
Comfortable working in a fast-moving, evolving environment with senior stakeholders
Nice to Have
Experience in environments similar to global PE firms, hedge funds, or institutional asset managers
Exposure to front-to-back investment data (from source systems through reporting)
Experience with Microsoft-centric data stacks (e.g., Azure, Fabric) or comparable cloud platforms
Familiarity with performance, valuation, or risk-related datasets
Work Environment & Compensation
Hybrid role with regular collaboration in the New York office
Competitive compensation aligned with senior financial services engineering talent
Opportunity to help shape a firm-wide data platform during a critical growth phase
Market Data Engineer
Data engineer job in New York, NY
🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment
I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.
What You'll Do
Own large-scale, time-sensitive market data capture + normalization pipelines
Improve internal data formats and downstream datasets used by research and quantitative teams
Partner closely with infrastructure to ensure reliability of packet-capture systems
Build robust validation, QA, and monitoring frameworks for new market data sources
Provide production support, troubleshoot issues, and drive quick, effective resolutions
What You Bring
Experience building or maintaining large-scale ETL pipelines
Strong proficiency in Python + Bash, with familiarity in C++
Solid understanding of networking fundamentals
Experience with workflow/orchestration tools (Airflow, Luigi, Dagster); a minimal DAG sketch follows this list
Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)
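A minimal Airflow DAG sketch of the capture-normalize-validate ordering typical of such pipelines; the task bodies, schedule, and IDs are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def capture(): ...     # pull raw captures from storage (stub)
def normalize(): ...   # convert to the internal format (stub)
def validate(): ...    # run QA checks on the normalized output (stub)

with DAG(
    dag_id="market_data_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 22 * * 1-5",  # hypothetical: after the close, weekdays
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="capture", python_callable=capture)
    t2 = PythonOperator(task_id="normalize", python_callable=normalize)
    t3 = PythonOperator(task_id="validate", python_callable=validate)
    t1 >> t2 >> t3
```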
Bonus Skills
Experience working with binary market data protocols (ITCH, MDP3, etc.)
Understanding of high-performance filesystems and columnar storage formats
Data Engineer (Web Scraping technologies)
Data engineer job in New York, NY
Title: Data Engineer (Web Scraping technologies)
Duration: FTE/Perm
Salary: $125K-$190K plus bonus
Responsibilities:
Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
Manage web scraping requests end-to-end, including intake, assessment, accessing sites to scrape, running scraping tools, storing the scraped data, and validating and entitling it to users
Field questions from users about the scrapes and websites
Coordinate with Compliance on approvals and TOU reviews
Build data pipelines on the AWS platform using existing tools such as cron, Glue, EventBridge, Python-based ETL, and Amazon Redshift
Normalize and standardize vendor and firm data for firm-wide consumption
Implement data quality checks to ensure the reliability and accuracy of scraped data (see the sketch after this list)
Coordinate with internal teams on delivery, access, requests, and support
Promote data engineering best practices
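A minimal sketch of a scrape with a quality gate, as referenced above; the URL, selectors, and rule are placeholders, and any real scrape would first pass the compliance/TOU review this posting describes:

```python
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

resp = requests.get("https://example.com/prices", timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = [
    {"name": cells[0].get_text(strip=True),
     "price": cells[1].get_text(strip=True)}
    for cells in (tr.find_all("td") for tr in soup.select("table tr"))
    if len(cells) >= 2
]

# Simple quality gate before the data is stored or entitled to users.
bad = [r for r in rows if not r["name"] or not r["price"]]
assert not bad, f"{len(bad)} scraped rows failed validation"
```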
Required Skills and Qualifications:
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
2-5 years of experience in a similar role
Prior buy side experience is strongly preferred (Multi-Strat/Hedge Funds)
Capital markets experience is necessary with good working knowledge of reference data across asset classes and experience with trading systems
AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright etc.)
Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools and analytics tools
Familiarity with time series data and common market data sources (Bloomberg, Refinitiv etc.)
Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
Strong communication skills to work with stakeholders across technology, investment, and operations teams.
Sr. Azure Data Engineer
Data engineer job in New York, NY
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our challenge
We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the financial/banking domain.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below).
The Role
Responsibilities:
Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
Proactively offer help and support team members and peers in delivering their tasks to ensure end-to-end delivery.
Evaluate technical performance challenges and recommend tuning solutions.
Serve as a hands-on data services engineer to design, develop, and maintain our Reference Data System using modern data technologies including Kafka, Snowflake, and Python (a minimal sketch follows).
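A minimal sketch of the Kafka-to-Snowflake flow named above; the broker, topic, credentials, and table are placeholders, and a production pipeline would batch inserts rather than write row by row:

```python
import json

import snowflake.connector  # pip install snowflake-connector-python
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "reference-data",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v),
)
conn = snowflake.connector.connect(
    account="myaccount", user="etl_user", password="...", database="REFDATA"
)
cur = conn.cursor()

# Consume reference-data events and land them in Snowflake.
for msg in consumer:
    rec = msg.value
    cur.execute(
        "INSERT INTO securities (id, name) VALUES (%s, %s)",
        (rec["id"], rec["name"]),
    )
```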
Requirements:
Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
Strong expertise in distributed data processing and streaming architectures.
Experience with Snowflake data warehouse platform: data loading, performance tuning, and management.
Proficiency in Python scripting and programming for data manipulation and automation.
Familiarity with Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
Knowledge of SQL, data modelling, and ETL/ELT processes.
Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
Domain Knowledge in any of the below area:
Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
Funding Support, Planning & Analysis, Regulatory reporting & Compliance.
Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Azure Data Engineer
Data engineer job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Data Architect
Data engineer job in Ridgefield, NJ
Immediate need for a talented Data Architect. This is a 12-month contract opportunity with long-term potential and is located in Basking Ridge, NJ (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93859
Pay Range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key Skills: ETL, LTMC, SaaS.
5 years as a Data Architect
5 years in ETL
3 years in LTMC
Our client is a leader in the telecom industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Lead HPC Architect Cybersecurity - High Performance & Computational Data Ecosystem
Data engineer job in New York, NY
The Scientific Computing and Data group at the Icahn School of Medicine at Mount Sinai partners with scientists to accelerate scientific discovery. To achieve these aims, we support a cutting-edge high-performance computing and data ecosystem along with MD/PhD-level support for researchers. The group is composed of a high-performance computing team, a clinical data warehouse team and a data services team.
The Lead HPC Architect, Cybersecurity, High Performance Computational and Data Ecosystem, is responsible for designing, implementing, and managing the cybersecurity infrastructure and technical operations of Scientific Computing's computational and data science ecosystem. This ecosystem includes a 25,000+ core, 40+ petabyte (usable) high-performance computing (HPC) system, clinical research databases, and a software development infrastructure for local and national projects. The HPC system is the fastest in the world at any academic biomedical center (Top 500 list).
To meet Sinai's scientific and clinical goals, the Lead brings a strategic, tactical, and customer-focused vision to evolve the ecosystem to be continually more resilient, secure, scalable, and productive for basic and translational biomedical research. The Lead combines deep technical expertise in cybersecurity, HPC systems, storage, networking, and software infrastructure with a strong focus on service, collaboration, and strategic planning for researchers and clinicians throughout the organization and beyond. The Lead is an expert troubleshooter, productive partner, and leader of projects, and will work with stakeholders to ensure the HPC infrastructure complies with governmental funding agency requirements and to promote efficient resource utilization for researchers.
This position reports to the Director for HPC and Data Ecosystem in Scientific Computing and Data.
Key Responsibilities:
HPC Cybersecurity & System Administration:
Design, implement, and manage all cybersecurity operations within the HPC environment, ensuring alignment with industry standards (NIST, ISO, GDPR, HIPAA, CMMC, NYC Cyber Command, etc.).
Implement best practices for data security, including but not limited to encryption (at rest, in transit, and in use), audit logging, access control, authentication control, configuration management, secure enclaves, and confidential computing.
Perform full-spectrum HPC system administration: installation, monitoring, maintenance, usage reporting, troubleshooting, backup, and performance tuning across HPC applications, web services, databases, job schedulers, networking, storage, compute, and hardware to optimize workload efficiency.
Lead resolution of complex cybersecurity and system issues; provide mentorship and technical guidance to team members.
Ensure that all designs and implementations meet cybersecurity, performance, scalability, and reliability goals. Ensure that the design and operation of the HPC ecosystem is productive for research.
Lead the integration of HPC resources with laboratory equipment, such as genomic sequencers, microscopy, and clinical systems, for data ingestion aligned with all regulatory requirements.
Develop, review and maintain security policies, risk assessments, and compliance documentation accurately and efficiently.
Collaborate with institutional IT, compliance, and research teams to ensure regulatory, Sinai policy, and operational alignment.
Design and implement hybrid and cloud-integrated HPC solutions using on-premise and public cloud resources.
Partner with other peers regionally, nationally and internationally to discover, propose and deploy a world-class research infrastructure for Mount Sinai.
Stay current with emerging HPC, cloud, and cybersecurity technologies to keep the organization's infrastructure up-to-date.
Work collaboratively, effectively and productively with other team members within the group and across Mount Sinai.
Provide after-hours support as needed.
Perform other duties as assigned or requested.
Requirements:
Bachelor's degree in computer science, engineering or another scientific field. Master's or PhD preferred.
10 years of progressive HPC system administration experience with Enterprise Linux releases, including RedHat/CentOS/Rocky systems, and with batch cluster environments.
Experience with all aspects of high-throughput HPC including schedulers (LSF or Slurm), networking (Infiniband/Gigabit Ethernet), parallel file systems and storage, configuration management systems (xCAT, Puppet and/or Ansible), etc.
Proficient in cybersecurity processes, posture, regulations, approaches, protocols, firewalls, data protection in a regulated environment (e.g. finance, healthcare).
In-depth knowledge of HIPAA, NIST, FISMA, GDPR, and related compliance standards, with proven experience building and maintaining compliant HPC systems.
Experience with secure enclaves and confidential computing.
Proven ability to provide mentorship and technical leadership to team members.
Proven ability to lead complex projects to completion in collaborative, interdisciplinary settings with minimum guidance.
Excellent analytical ability and troubleshooting skills.
Excellent communication, documentation, collaboration and interpersonal skills. Must be a team player and customer focused.
Scripting and programming experience.
Preferred Experience
Proficient with cloud services, orchestration tools (OpenShift/Kubernetes), cost optimization, and hybrid HPC architectures.
Experience with Azure, AWS or Google cloud services.
Experience with the LSF job scheduler and GPFS/Spectrum Scale.
Experience in a healthcare environment.
Experience in a research environment is highly preferred.
Experience with software that enables privacy-preserving linking of PHI.
Experience with Globus data transfer.
Experience with web services and with SAP HANA, Oracle, SQL, MariaDB, and other database technologies.
Strength through Unity and Inclusion
The Mount Sinai Health System is committed to fostering an environment where everyone can contribute to excellence. We share a common dedication to delivering outstanding patient care. When you join us, you become part of Mount Sinai's unparalleled legacy of achievement, education, and innovation as we work together to transform healthcare. We encourage all team members to actively participate in creating a culture that ensures fair access to opportunities, promotes inclusive practices, and supports the success of every individual.
At Mount Sinai, our leaders are committed to fostering a workplace where all employees feel valued, respected, and empowered to grow. We strive to create an environment where collaboration, fairness, and continuous learning drive positive change, improving the well-being of our staff, patients, and organization. Our leaders are expected to challenge outdated practices, promote a culture of respect, and work toward meaningful improvements that enhance patient care and workplace experiences. We are dedicated to building a supportive and welcoming environment where everyone has the opportunity to thrive and advance professionally. Explore this opportunity and be part of the next chapter in our history.
About the Mount Sinai Health System:
Mount Sinai Health System is one of the largest academic medical systems in the New York metro area, with more than 48,000 employees working across eight hospitals, more than 400 outpatient practices, more than 300 labs, a school of nursing, and a leading school of medicine and graduate education. Mount Sinai advances health for all people, everywhere, by taking on the most complex health care challenges of our time - discovering and applying new scientific learning and knowledge; developing safer, more effective treatments; educating the next generation of medical leaders and innovators; and supporting local communities by delivering high-quality care to all who need it. Through the integration of its hospitals, labs, and schools, Mount Sinai offers comprehensive health care solutions from birth through geriatrics, leveraging innovative approaches such as artificial intelligence and informatics while keeping patients' medical and emotional needs at the center of all treatment. The Health System includes more than 9,000 primary and specialty care physicians; 13 joint-venture outpatient surgery centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and more than 30 affiliated community health centers. We are consistently ranked by U.S. News & World Report's Best Hospitals, receiving high "Honor Roll" status.
Equal Opportunity Employer
The Mount Sinai Health System is an equal opportunity employer, complying with all applicable federal civil rights laws. We do not discriminate, exclude, or treat individuals differently based on race, color, national origin, age, religion, disability, sex, sexual orientation, gender, veteran status, or any other characteristic protected by law. We are deeply committed to fostering an environment where all faculty, staff, students, trainees, patients, visitors, and the communities we serve feel respected and supported. Our goal is to create a healthcare and learning institution that actively works to remove barriers, address challenges, and promote fairness in all aspects of our organization.
Sr Data Modeler with Capital Markets/ Custody
Data engineer job in Jersey City, NJ
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************
Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
Work Location
Jersey City, NJ (Onsite, 5 days / week)
Job Description:
We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain.
The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model.
A software development background is not required.
Key Responsibilities
Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model.
Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
Champion a detail-oriented and documentation-first culture within the data modeling team.
Qualifications
Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
Strong skills in data analysis using SQL and/or other data analysis tools.
Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
Ability to mentor and lead junior team members in data modeling best practices.
Passion for detail, documentation, and continuous improvement.
Software development background is not required.
Preferred Skills
Experience with data modeling tools (e.g., ER/Studio, ERwin, Power Designer).
Familiarity with capital markets business processes and data flows.
Knowledge of regulatory and compliance requirements in financial data management.
Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Big Data Developer
Data engineer job in Jersey City, NJ
Design Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
Implement Spark-based ETL frameworks
Implement big data pipelines for data ingestion, storage, processing, and consumption
Modify the Informatica-Teradata and Unix-based data pipeline
Enhance the Talend-Hive/Spark and Unix-based data pipelines
Develop and deploy Scala/Python-based Spark jobs for ETL processing (a minimal PySpark sketch follows this list)
Strong SQL and DWH concepts
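A minimal PySpark sketch of the Spark-based ETL named above, reading a Hive-managed table and writing a curated one; the table and column names are invented:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# enableHiveSupport() lets Spark use the Hive/HCatalog metastore.
spark = (
    SparkSession.builder.appName("ingest-orders")
    .enableHiveSupport()
    .getOrCreate()
)

raw = spark.table("staging.orders_raw")
curated = (
    raw.filter(F.col("status").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
curated.write.mode("overwrite").format("orc").saveAsTable("curated.orders")
```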
Senior Data Architect
Data engineer job in New York, NY
About the Company
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis' Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps ‘shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, speed and innovation with domain expertise and specialization are key to building strong relationships with marquee clients.
About the Role
Senior-level Data Architect with data analytics experience across Databricks, PySpark, Python, and ETL tools like Informatica. This is a key role that requires a senior/lead with great communication skills who is very proactive with risk and issue management.
Responsibilities
Hands-on data analytics experience with Databricks on AWS, PySpark, and Python.
Must have prior experience migrating a data asset to the cloud using a GenAI automation option.
Experience in migrating data from on-premises to AWS.
Expertise in developing data models, delivering data-driven insights for business solutions.
Experience in pretraining, fine-tuning, augmenting and optimizing large language models (LLMs).
Experience designing and implementing database solutions and developing PySpark applications to extract, transform, and aggregate data, generating insights.
Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets ensuring data integrity and relevance.
Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets (see the sketch after this list).
Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks.
Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work.
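A minimal Pandas sketch of the collection, cleaning, and transformation bullets above; the sources, columns, and rules are hypothetical:

```python
import pandas as pd

# Collection & integration: consolidate two hypothetical sources.
sales = pd.concat(
    [pd.read_excel("regional_sales.xlsx"), pd.read_csv("web_sales.csv")],
    ignore_index=True,
)

# Quality checks: unique order IDs; amounts must parse as numbers.
assert sales["order_id"].is_unique, "duplicate order IDs in source data"
sales["amount"] = pd.to_numeric(sales["amount"], errors="coerce")

# Cleaning & transformation: drop failed conversions, standardize region.
sales = sales.dropna(subset=["amount"])
sales["region"] = sales["region"].str.strip().str.title()
```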
Qualifications
15+ years of experience as a Data Analyst / Data Engineer, with Databricks on AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions on AWS.
Required Skills
Strong proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources.
Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability.
Preferred Skills
Ability to manage in tight circumstances; very proactive with risk and issue management.
Requirement Clarification & Communication: Interact directly with colleagues to clarify objectives, challenge assumptions.
Documentation & Best Practices: Maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability.
Collaboration & Stakeholder Engagement: Work closely with colleagues who provide data, raising questions about data validity, sharing insights, and co-creating solutions that address evolving needs.
Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner.
Demonstrated critical thinking skills, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary.
A self-directed, resourceful problem-solver who collaborates well with others while confidently managing tasks and priorities independently.
SAP Data Migration Developer
Data engineer job in Englewood, NJ
SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate
This key role is responsible for the development and configuration of the SAP Data Services Platform within the Client's corporate technology organization, to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.
KEY RESPONSIBILITIES -
Responsible for SAP Data Services development, design, job creation and execution. Responsible for efficient design, performance tuning and ensuring timely data processing, validation & verification.
Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects). Responsible for data conversion using staging tables and for working with SAP teams on data loads into SAP S4 and MDG environments.
Responsible for building validation rules, scorecards, and data for consumption in Information Steward, pursuant to conversion rules per the functional specifications. Responsible for adhering to project timelines and deliverables and for accounting for object delivery across the teams involved. Take part in meetings and execute plans, designs, and custom solution development within the Client's O&T Engineering scope.
Work in all facets of SAP data migration projects, with a focus on SAP S4 data migration using the SAP Data Services Platform
Hands-on development experience with ETL from legacy SAP ECC environment, conversions and jobs.
Demonstrate capabilities with performance tuning, handling large data sets.
Understand SAP tables, fields & load processes into SAP S4, MDG systems
Build validation rules, customize, and deploy Information Steward scorecards, data reconciliation and validation
Be a problem solver and build robust conversion, validation per requirements.
SKILLS AND EXPERIENCE
6-8 years of experience in SAP Data Services application as a developer
At least 2 SAP S4 conversion projects involving DMC, staging tables, and updating SAP Master Data Governance
Good communication skills, with the ability to deliver key objects on time and to support testing and mock cycles.
4-5 Years development experience in SAP Data Services 4.3 Designer, Information Steward
Taking ownership and ensuring high quality results
Active in seeking feedback and making necessary changes
Specific previous experience -
Proven experience in implementing SAP Data Services in a multinational environment.
Experience in design of data loads of large volumes to SAP S4 from SAP ECC
Must have used HANA Staging tables
Experience in developing Information Steward for Data Reconciliation & Validation (not profiling)
REQUIREMENTS
Adhere to the work availability schedule as noted above and be on time for meetings
Written and verbal communication in English