Data engineer job in New York, NY
Senior Data Scientist - Sports & Entertainment
Our client, a premier Sports, Entertainment, and Hospitality organization, is hiring a Senior Data Scientist. In this position you will own high-impact analytics projects that redefine how predictive analytics influence business strategy. This is a pivotal role where you will build and deploy machine learning solutions, ranging from Bayesian engagement scoring to purchase-propensity and lifetime-value models, to drive fan acquisition and revenue growth.
Requirements:
Experience: 8+ years of professional experience using data science to solve complex business problems, preferably as a solo contributor or team lead.
Education: Bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative field (Master's or PhD preferred).
Tech Stack: Hands-on expertise in Python, SQL/PySpark, and ML frameworks (scikit-learn, XGBoost, TensorFlow, or PyTorch).
Infrastructure: Proficiency with cloud platforms (AWS preferred) and modern data stacks like Snowflake, Databricks, or Dataiku.
MLOps: Strong experience in productionizing models, including version control (Git), CI/CD, and model monitoring/governance.
Location: Brooklyn, NY (4 days onsite per week)
Compensation: $100,000 - $150,000 + Bonus
Benefits: Comprehensive medical/dental/vision, 401k match, competitive PTO, and unique access to live entertainment and sports events.
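As an illustration only (not the client's actual methodology), the simplest form of the Bayesian engagement scoring this posting mentions is a Beta-Binomial posterior mean, which shrinks sparse fan histories toward a prior instead of trusting raw rates:

```python
def engagement_score(engaged, total, alpha=1.0, beta=1.0):
    """Posterior mean engagement rate under a Beta(alpha, beta) prior.

    With a uniform Beta(1, 1) prior, a fan who opened 2 of 2 emails is
    not automatically ranked above one who opened 90 of 100.
    """
    return (engaged + alpha) / (total + alpha + beta)

casual = engagement_score(2, 2)      # 0.75: sparse history, shrunk toward the prior
loyal = engagement_score(90, 100)    # ~0.89: large sample dominates the prior
```

Production scoring would layer recency, channel, and spend features on top, but the shrinkage idea is the core of why a Bayesian score beats a raw open rate for new fans.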
Data & Performance Analytics (Hedge Fund)
Data engineer job in New York, NY
Our client is a $28B NY-based multi-strategy hedge fund seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will work closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.
Responsibilities
Compile periodic fund performance analyses
Review and analyze portfolio performance data, benchmark performance and risk statistics
Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner
Work with team members at all levels across the organization to coordinate data feeds for various internal and external databases, ensuring the integrity and consistency of portfolio data reported across client reporting systems
Apply queries, pivot tables, filters and other tools to analyze data.
Maintain the client relationship management database and provide reports to Directors on a regular basis
Coordinate submissions of RFPs by working with RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis
Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization
Provide various ad hoc data research and analysis as needed.
Desired Skills and Experience
Bachelor's Degree with 2+ years of Financial Services/Private Equity data/client reporting experience
Proficiency in Microsoft Office, particularly Excel Modeling
Technical knowledge of data analytics using CRMs (Salesforce), Excel, and PowerPoint
Outstanding communication skills and a proven ability to work effectively with all levels of management
Comfortable working in a fast-paced, deadline-driven, dynamic environment
Innovative and creative thinker
Must be detail oriented
Analyst, Data Scientist (Ref: 194313)
Data engineer job in New York, NY
Job Title: Analyst, Data Scientist
Salary: $70,000-$90,000
Contact: ********************************
This position is not available for C2C or C2H, and the client is unable to sponsor at this time.
About the Company
We are partnering with a leading organization in the textiles and apparel sector, known for its commitment to innovation, quality, and operational excellence within the retail industry. The company is focused on enhancing business processes and delivering exceptional value to its customers through data-driven decision-making.
Role Overview
The Analyst, Data Scientist plays a critical role in developing and maintaining reporting tools, including metrics, dashboards, and analytical platforms. This position supports strategic and operational decision-making by applying data analysis to identify insights, maintain process controls, and drive continuous improvement across the organization.
Key Responsibilities
Manage reporting requests from field operations by defining technical requirements and developing reports, metrics, and dashboards using SQL and Power BI
Design and build advanced Power BI dashboards leveraging DAX, Power Query, and other advanced features
Perform data cleaning and analysis, translating complex analytical results into clear, actionable insights
Collaborate with business stakeholders and IT teams to align on project objectives and deliverables
Support strategic initiative planning through prioritization, estimation, and analysis
Gather, document, and maintain detailed business and technical requirements
Participate in problem-solving sessions with business users and leadership to address analytical challenges
Lead change management efforts and deliver training to end users as needed
Serve as a liaison between business teams and IT to resolve system issues and improve processes
Provide regular project updates and communicate issue resolution status to stakeholders
Qualifications
Bachelor's degree in Engineering, Mathematics, Computer Science, or a related field
1-2 years of experience developing business or technology solutions
Strong proficiency in Power BI, including DAX and Power Query
Solid understanding of data warehousing and business intelligence concepts
Ability to read and write SQL
Familiarity with R, Python, and machine learning concepts (theoretical or practical) is a plus
Advanced skills in Microsoft Office tools, including Excel, PowerPoint, Word, Visio, and SharePoint
Experience in a corporate retail environment is preferred
This role is ideal for a proactive, analytical professional who thrives in a fast-paced environment. The successful candidate will demonstrate strong communication and collaboration skills and the ability to work effectively across cross-functional teams.
Data Engineer
Data engineer job in New York, NY
About Beauty by Imagination:
Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike.
Position Overview:
We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business.
You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse.
Key Responsibilities
Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts.
Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools.
Integrate and transform data from multiple systems, including ERP platforms such as NetSuite.
Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability.
Support and enhance Power BI dashboards and other BI/reporting systems.
Implement data quality checks, automation, and process monitoring.
Collaborate with business and analytics teams to translate requirements into scalable data solutions.
Contribute to data governance, standardization, and documentation practices.
Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools.
Required Qualifications
Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts).
Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows.
Strong SQL skills and experience with Microsoft SQL Server.
Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik).
Understanding of data modeling, performance optimization, and relational database design.
Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation.
Excellent analytical and communication skills.
Preferred Qualifications
Experience with cloud data platforms (Azure, AWS, or GCP).
Understanding of data security, governance, and compliance (GDPR, SOC2).
Experience with API integrations and real-time data ingestion.
Background in finance, supply chain, or e-commerce analytics.
Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.).
AI Focused Preferred Skills:
Experience implementing AI-driven analytics or automation inside Data Warehouses.
Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights.
Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL.
Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC).
Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design.
Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools.
Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments.
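As a rough sketch of the pipeline-validation idea in the list above (an illustrative statistical baseline, not any specific vendor tool), anomaly detection on daily load volumes can start as simple thresholding:

```python
from statistics import mean, stdev

def flag_anomalies(daily_row_counts, threshold=3.0):
    """Return indices of days whose row counts sit more than `threshold`
    sample standard deviations from the mean.

    A minimal stand-in for AI-driven data quality monitoring; production
    tools model seasonality and trend rather than a single global mean.
    """
    mu = mean(daily_row_counts)
    sigma = stdev(daily_row_counts)
    if sigma == 0:
        return []
    return [i for i, n in enumerate(daily_row_counts)
            if abs(n - mu) / sigma > threshold]

# Eleven normal days followed by one suspicious load:
flag_anomalies([100] * 11 + [500])  # flags index 11
```

A check like this gates the pipeline: flagged loads are quarantined for review instead of flowing into downstream marts and dashboards.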
Soft Skills
Strong analytical and problem-solving mindset.
Ability to communicate complex technical concepts to business stakeholders.
Detail-oriented, organized, and self-motivated.
Collaborative team player with a growth mindset.
Impact
You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions.
Our Tech Stack
SQL Server, SSIS, Azure Synapse
Python, Airflow, Azure Data Factory
Power BI, NetSuite ERP, REST APIs
CI/CD (Azure DevOps, GitHub)
What We Offer
Location: New York, NY (Hybrid work model)
Employment Type: Full-time
Compensation: Competitive salary based on experience
Benefits: Health insurance, 401(k), paid time off
Opportunities for professional growth and participation in enterprise AI modernization initiatives
Data Engineer
Data engineer job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes.
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
Qualifications
Bachelor's Degree in Information Technology or a related field
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
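A minimal sketch of the query skills this posting lists (table joins, SUM/AVG/COUNT aggregation, and views), shown against an in-memory SQLite database with hypothetical tables rather than the actual ERP schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 200.0);
-- A view combining a join with aggregation, retrievable like a table.
CREATE VIEW customer_totals AS
SELECT c.name,
       COUNT(o.id)   AS n_orders,
       SUM(o.amount) AS total,
       AVG(o.amount) AS avg_amount
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;
""")
rows = conn.execute("SELECT * FROM customer_totals ORDER BY name").fetchall()
# rows: [('Acme', 2, 150.0, 75.0), ('Globex', 1, 200.0, 200.0)]
```

The same pattern, pointed at MS SQL instead of SQLite, covers most of the reporting work described: encapsulate the join-plus-aggregation in a view, then build reports off the view.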
Senior Data Engineer
Data engineer job in New York, NY
Godel Terminal is a cutting edge financial platform that puts the world's financial data at your fingertips. From Equities and SEC filings, to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.
We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets.
Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you:
Minimum qualifications:
Able to work out of our Manhattan office minimum 4 days a week
5+ years of experience in a financial or startup environment
5+ years of experience working on live data as well as historical data
3+ years of experience in Java, Python, and SQL
Experience managing multiple production ETL pipelines that reliably store and validate financial data
Experience launching, scaling, and improving backend services in cloud environments
Experience migrating critical data across different databases
Experience owning and improving critical data infrastructure
Experience teaching best practices to junior developers
Preferred qualifications:
5+ years of experience in a fintech startup
5+ years of experience in Java, Kafka, Python, PostgreSQL
5+ years of experience working with WebSocket libraries like RxStomp or Socket.io
5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
2+ years of experience shipping and optimizing Rust applications
Demonstrated experience keeping critical systems online
Demonstrated creativity and resourcefulness under pressure
Experience with corporate debt / bonds and commodities data
Salary range begins at $150,000 and increases with experience
Benefits: Health Insurance, Vision, Dental
To try the product, go to *************************
Data Engineer
Data engineer job in New York, NY
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools
Experience in designing, managing, and optimizing Data Warehousing solutions
Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco
Data engineer job in New York, NY
Are you a data engineer who loves building systems that power real impact in the world?
A fast growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.
In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.
To thrive here, you should bring strong problem solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
Senior Data Engineer
Data engineer job in New York, NY
Title: Senior Data Engineer
Duration: 12-15 months (possibility of conversion)
W2 Candidates only.
Our client is seeking a Senior Data Engineer to join their team in New York (preferred, Downtown WTC) or Boston. This is a long-term contract position with the potential to convert to a full-time employee (FTE) role. The role focuses on overseeing third-party fund accounting administration, client life cycle, valuation automation, and driving data-related initiatives.
Key Responsibilities:
• Lead the design, development, and optimization of data architecture, modeling, and pipelines to support fund accounting administration transitions to third parties.
• Oversee and manage third-party vendors, ensuring seamless integration and efficiency in data processes.
• Collaborate with business units (BUs) and stakeholders to gather requirements, refine processes, and implement data solutions.
• Build and maintain robust CI/CD pipelines to ensure scalable and reliable data workflows.
• Utilize Snowflake and advanced SQL to manage and query large datasets effectively.
• Drive data engineering best practices, ensuring high-quality, efficient, and secure data systems.
• Communicate complex technical concepts to non-technical stakeholders, ensuring alignment and clarity.
Must-Have Qualifications:
• Experience: 10+ years in data engineering or related roles.
• Technical Expertise:
o Advanced proficiency in Python and SQL for data processing and pipeline development.
o Experience with additional cloud-based AWS data platforms or tools.
o Strong experience in data architecture, data modeling, and CI/CD pipeline implementation.
o Hands-on expertise with Snowflake for data warehousing and analytics.
• Domain Knowledge: Extensive experience in asset management is mandatory.
• Communication: Exceptional verbal and written communication skills, with the ability to engage effectively with business units and stakeholders.
Nice-to-Have Qualifications:
• Prior experience leading or overseeing third-party vendors in a data-related capacity.
• Familiarity with advanced data orchestration tools or frameworks.
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
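The gap-recovery mechanism named above boils down to tracking per-channel sequence numbers and requesting retransmission for any holes. A toy sketch (Python for brevity; a real handler does this in the C++ hot path against the feed's actual sequencing protocol):

```python
def find_gaps(received):
    """Return missing sequence numbers given in-order feed arrivals.

    A feed handler tracks the next expected sequence number per channel;
    any hole triggers a retransmission/replay request before messages
    are released downstream.
    """
    missing = []
    expected = received[0]
    for seq in received:
        if seq > expected:                      # hole detected
            missing.extend(range(expected, seq))
        expected = max(expected, seq + 1)
    return missing

find_gaps([1, 2, 3, 6, 7])  # -> [4, 5]: request replay of 4 and 5
```

In production the interesting part is everything around this loop: arbitrating A/B feeds, buffering out-of-order packets during recovery, and doing it all without allocations on the critical path.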
Data Engineer (Web Scraping technologies)
Data engineer job in New York, NY
Title: Data Engineer (Web Scraping technologies)
Duration: FTE/Perm
Salary: 125-190k plus bonus
Responsibilities:
Utilize AI Models, Code, Libraries or applications to enable a scalable Web Scraping capability
Manage web scraping requests end to end, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scraped data, validation, and entitlement to users
Field questions from users about the scrapes and websites
Coordinate with Compliance on approvals and TOU reviews
Build data pipelines on the AWS platform using existing tools such as cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
Normalize and standardize vendor and firm data for firm-wide consumption
Implement data quality checks to ensure reliability and accuracy of scraped data
Coordinate with Internal teams on delivery, access, requests, support
Promote Data Engineering best practices
Required Skills and Qualifications:
Bachelor's degree in computer science, Engineering, Mathematics or related field
2-5 years of experience in a similar role
Prior buy side experience is strongly preferred (Multi-Strat/Hedge Funds)
Capital markets experience is necessary with good working knowledge of reference data across asset classes and experience with trading systems
AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright etc.)
Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools and analytics tools
Familiarity with time series data and common market data sources (Bloomberg, Refinitiv etc.)
Familiarity with modern Dev Ops practices and infrastructure-as-code tools (e.g. Terraform, CloudFormation)
Strong communication skills to work with stakeholders across technology, investment, and operations teams.
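A toy version of the scrape-then-validate flow this posting describes, using only the standard library against hypothetical markup (a production pipeline here would use Scrapy or Playwright and land results in S3):

```python
from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collect the text of <span class="price"> elements."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())
            self._in_price = False

def validate(raw_prices):
    """Data-quality gate: keep only values that parse as positive numbers."""
    clean = []
    for p in raw_prices:
        try:
            value = float(p.lstrip("$"))
        except ValueError:
            continue          # reject malformed records instead of loading them
        if value > 0:
            clean.append(value)
    return clean

parser = PriceExtractor()
parser.feed('<span class="price">$12.50</span><span class="price">n/a</span>')
validate(parser.prices)  # -> [12.5]; the malformed 'n/a' record is dropped
```

The validation step is the part the posting emphasizes: scraped data goes through explicit quality checks before it is entitled to users, so malformed or empty scrapes never reach the firm's consumers.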
Azure Data Engineer
Data engineer job in Weehawken, NJ
· Expert-level skills in writing and optimizing complex SQL
· Experience with complex data modelling, ETL design, and using large databases in a business environment
· Experience with building data pipelines and applications to stream and process datasets at low latencies
· Fluent with Big Data technologies like Spark, Kafka and Hive
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Designing and building of data pipelines using API ingestion and Streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Experience in developing NoSQL solutions using Azure Cosmos DB is essential
· Thorough understanding of Azure and AWS Cloud Infrastructure offerings
· Working knowledge of Python is desirable
· Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
· Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
· Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
· Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
· Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
· Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
· Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
· Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
Data Architect
Data engineer job in Ridgefield, NJ
Immediate need for a talented Data Architect. This is a 12 month contract opportunity with long-term potential and is located in Basking Ridge, NJ (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-93859
Pay Range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key Skills: ETL, LTMC, SaaS.
5 years as a Data Architect
5 years in ETL
3 years in LTMC
Our client is a leader in the Telecom industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Sr. Azure Data Engineer
Data engineer job in New York, NY
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our challenge
We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the Financial/Banking domain.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below).
The Role
Responsibilities:
Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
Proactively and voluntarily support team members and peers in delivering their tasks to ensure end-to-end delivery.
Evaluate technical performance challenges and recommend tuning solutions.
Design, develop, and maintain our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python.
Requirements:
Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
Strong expertise in distributed data processing and streaming architectures.
Experience with Snowflake data warehouse platform: data loading, performance tuning, and management.
Proficiency in Python scripting and programming for data manipulation and automation.
Familiarity with Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
Knowledge of SQL, data modelling, and ETL/ELT processes.
Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
Domain Knowledge in any of the below area:
Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
Funding Support, Planning & Analysis, Regulatory reporting & Compliance.
Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Data Engineer
Data engineer job in New York, NY
Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.
Responsibilities and Duties
Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
Ensure consistent data hygiene, normalization, and enrichment across source systems.
Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
Own the documentation of data schemas, definitions, lineage, and data quality controls.
Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
Monitor pipeline performance and proactively resolve data discrepancies or failures.
Contribute to architectural decisions related to internal data infrastructure and tools.
Requirements
3-5 years of experience as a data engineer, analytics engineer, or similar role.
Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
Proficiency in Python or another scripting language for data manipulation.
Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
Strong understanding of data governance, documentation, and schema management.
Excellent communication skills and ability to work cross-functionally.
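To give a concrete flavor of the SaaS-integration work described in these requirements, here is a minimal sketch of normalizing a record pulled from a JIRA-style REST payload into a warehouse-ready row. The payload shape, field names, and `normalize_issue` helper are illustrative assumptions, not Haptiq's actual schema or code:

```python
from datetime import datetime, timezone

def normalize_issue(raw: dict) -> dict:
    """Flatten and clean one raw issue record from a hypothetical
    JIRA-style REST payload into a warehouse-ready row."""
    fields = raw.get("fields", {})
    return {
        # Normalize the business key for consistent joins downstream.
        "issue_key": raw["key"].strip().upper(),
        "summary": (fields.get("summary") or "").strip(),
        "status": fields.get("status", {}).get("name", "UNKNOWN"),
        # Record load time in UTC so downstream models are timezone-safe.
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }

# Example raw payload (shape assumed for illustration only).
raw = {"key": " ops-101 ", "fields": {"summary": " Fix ETL job ",
                                      "status": {"name": "In Progress"}}}
row = normalize_issue(raw)
print(row["issue_key"], "|", row["status"])  # OPS-101 | In Progress
```

In a real pipeline, rows like this would land in a staging table before dbt or similar tooling models them for reporting.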
Benefits
Flexible work arrangements (including hybrid mode)
Great Paid Time Off (PTO) policy
Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
401(k) retirement plan
Access to HSA-compatible plans
Pre-tax commuter benefits
Employee Assistance Program (EAP)
Opportunities for professional growth and development.
A supportive, dynamic, and inclusive work environment.
Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.
The compensation range for this role is $75,000 to $80,000 USD.
Data Center Architect
Data engineer job in New York, NY
Seeking an experienced Data Center Architect to lead enterprise-scale data center architecture, modernization, and transformation initiatives. This role serves as the technical authority across design, migration, operations transition, and stakeholder engagement in highly regulated environments.
Key Responsibilities
Lead end-to-end data center architecture for design, build, migration, and operational transition
Architect and modernize compute, storage/SAN, network/WAN, backup, voice, and physical data center facilities
Eliminate single points of failure and modernize legacy environments
Drive data center modernization, including legacy-to-modern migrations (e.g., tape to Commvault, UNIX transitions, hybrid/cloud models)
Design physical data center layouts including racks, power, cooling, cabling, grounding, and space planning
Own project lifecycle: requirements, architecture, RFPs, financial modeling, installation, commissioning, cutover, and migrations
Develop CapEx/OpEx forecasts, cost models, and executive-level business cases
Ensure operational readiness, documentation, lifecycle management, and smooth handoff to operations teams
Design and enhance monitoring, observability, automation, KPIs, and alerting strategies
Ensure compliance with security, audit, and regulatory standards
Lead capacity planning, roadmap development, and SLA/KPI definition
Act as a trusted SME, collaborating with infrastructure, application, operations, vendors, and facilities teams
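The CapEx/OpEx forecasting duty above can be illustrated with a toy total-cost-of-ownership comparison. All figures are hypothetical, and a real business case would also discount cash flows and model growth:

```python
def three_year_tco(capex: int, annual_opex: int, years: int = 3) -> int:
    """Simple undiscounted total cost of ownership:
    upfront capital cost plus recurring operating cost over the horizon."""
    return capex + annual_opex * years

# Hypothetical figures for an on-prem build vs. a cloud migration.
on_prem = three_year_tco(capex=500_000, annual_opex=120_000)
cloud = three_year_tco(capex=0, annual_opex=260_000)
print(on_prem, cloud)  # 860000 780000
```

Even a sketch this small shows why the crossover point between CapEx-heavy and OpEx-heavy models depends on the planning horizon chosen.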
Required Skills & Experience
Enterprise data center architecture leadership experience
Strong expertise in compute (physical/virtual), storage/SAN, networking, backup, and facilities design
Hands-on experience with data center modernization and hybrid/cloud integration (AWS preferred)
Strong understanding of monitoring, automation, and DevOps-aligned infrastructure practices
Proven experience with financial planning, cost modeling, and executive presentations
Excellent communication and stakeholder management skills
Experience working in regulated or public-sector environments preferred
Lead Data Platform Architect
Data engineer job in Melville, NY
We are growing our data platform team and are seeking an experienced Data Platform Architect with deep cloud data platform expertise to drive the overall architecture and design of a modern, scalable data platform. This role is responsible for defining and advancing data platform architecture to support a data-driven organization, ensuring solutions are efficient, reusable, and aligned with long-term business and technology objectives.
This position carries architectural and strategic responsibility for the design and implementation of the enterprise data platform. The role will support multiple initiatives across the data ecosystem, including data lake design, data engineering, analytics, data architecture, AI/ML, streaming and batch processing, metadata management, and service integrations.
DUTIES AND RESPONSIBILITIES:
• Lead technical assessments of the current data platform and define the architectural roadmap forward
• Collaborate on strategic direction and prioritize data platform architecture to support business and technical objectives
• Partner with enterprise and solution architects to ensure consistent standards and best practices across the data platform
• Architect and design end-to-end data platform solutions on cloud infrastructure, emphasizing scalability, performance, and reusable design patterns
• Design cloud-first, cost-effective data platform architectures
• Architect batch, real-time, and unstructured ingestion frameworks with scale and reliability
• Enable semantic interoperability of data across multiple sources and structures
• Implement automation for lineage, orchestration, and data flows to streamline platform operations
• Design and maintain metadata management frameworks to support current and future tools
• Continually enhance automation and CI/CD frameworks across the data platform
• Architect solutions with security-by-design principles
• Monitor industry trends and emerging technologies to continuously improve the data platform architecture
• Provide technical leadership and guidance to data platform engineers executing against the roadmap
• Own and maintain data platform architecture documentation
DUTIES AND RESPONSIBILITIES (CONTINUED):
• Support a wide range of data platform use cases, including data engineering, business intelligence, real-time analytics, visualization, AI/ML, and service integrations
• Collaborate with third-party vendors and partners on data platform integrations
EDUCATION AND EXPERIENCE:
• Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field required
• Minimum of 10 years of experience designing high-availability data platform architectures
• Minimum of 8 years of experience implementing modern cloud-based data platforms
• Strong experience with Google Cloud Platform services, including BigQuery, Google Cloud Storage, and Cloud Composer
• Minimum of 5 years of experience designing data lake architectures
• Deep expertise across modern data platform, database, and streaming technologies (e.g., Kafka, Spark)
• Experience with source control and CI/CD pipelines
• Experience operationalizing AI/ML models preferred
• Experience working with unstructured data preferred
• Experience operating within Agile delivery models
• Minimum of 3 years of experience with infrastructure as code (Terraform preferred)
REQUIRED TECHNICAL EXPERIENCE (ADDED):
• Hands-on experience designing and operating data platforms on Google Cloud Platform (GCP)
• Strong experience with Databricks for large-scale data processing and analytics
• Experience integrating data from IoT devices and machine monitoring systems is highly preferred
• Familiarity with industrial, sensor-based, or operational technology (OT) data pipelines is a plus
SKILLS:
• Strong cross-functional communication and collaboration skills
• Excellent organizational, time management, verbal, and written communication skills
• Expertise across modern data platform technologies and best practices (BigQuery, Kafka, Hadoop, Spark)
• Strong understanding of semantic layers and data interoperability (e.g., LookML, dbt)
• Proven ability to design reusable, automated data platform patterns
• Demonstrated leadership in distributed or remote environments
• Track record of delivering data platform solutions at enterprise scale
• Ability to write testable code and promote solutions into production environments
• Experience with Google Cloud Composer or Apache Airflow preferred
• Ability to quickly understand complex business systems and data flows
• Strong analytical judgment and decision-making capabilities
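One of the duties above calls for automating lineage and metadata management. A minimal sketch of table-level lineage as a directed graph is shown below; the table names and the `LineageRegistry` class are hypothetical, standing in for what a metadata tool like a data catalog would provide:

```python
from collections import defaultdict

class LineageRegistry:
    """Minimal table-level lineage tracker: record which upstream
    tables feed each derived table, then walk the graph upstream."""

    def __init__(self):
        self._upstream = defaultdict(set)

    def record(self, target: str, sources: list[str]) -> None:
        """Register that `target` is built from `sources`."""
        self._upstream[target].update(sources)

    def all_upstream(self, table: str) -> set[str]:
        """Collect every transitive source table via depth-first walk."""
        seen, stack = set(), [table]
        while stack:
            for src in self._upstream[stack.pop()]:
                if src not in seen:
                    seen.add(src)
                    stack.append(src)
        return seen

reg = LineageRegistry()
reg.record("analytics.daily_kpis", ["staging.orders", "staging.customers"])
reg.record("staging.orders", ["raw.orders_events"])
print(sorted(reg.all_upstream("analytics.daily_kpis")))
# ['raw.orders_events', 'staging.customers', 'staging.orders']
```

In practice this kind of graph is what lets a platform team answer impact-analysis questions ("what breaks downstream if this raw feed changes?") automatically.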
OTHER REQUIREMENTS:
• Ability to travel up to 10 percent as required
• This role may require access to regulated or controlled information
Senior Data Architect
Data engineer job in New York, NY
About the Company
Mphasis applies next-generation technology to help enterprises transform businesses globally. Customer centricity is foundational to Mphasis and is reflected in the Mphasis Front2Back™ Transformation approach. Front2Back™ uses the exponential power of cloud and cognitive to provide hyper-personalized (C=X2C2™=1) digital experience to clients and their end customers. Mphasis' Service Transformation approach helps 'shrink the core' through the application of digital technologies across legacy environments within an enterprise, enabling businesses to stay ahead in a changing world. Mphasis' core reference architectures and tools, speed and innovation with domain expertise and specialization are key to building strong relationships with marquee clients.
About the Role
Senior-level Data Architect with data analytics experience in Databricks, PySpark, Python, and ETL tools such as Informatica. This is a key role that requires a senior/lead engineer with strong communication skills who is highly proactive with risk and issue management.
Responsibilities
Hands-on data analytics experience with Databricks on AWS, PySpark, and Python.
Prior experience migrating data assets to the cloud using a GenAI automation option is required.
Experience in migrating data from on-premises to AWS.
Expertise in developing data models, delivering data-driven insights for business solutions.
Experience in pretraining, fine-tuning, augmenting and optimizing large language models (LLMs).
Experience in designing and implementing database solutions and developing PySpark applications to extract, transform, and aggregate data, generating insights.
Data Collection & Integration: Identify, gather, and consolidate data from diverse sources, including internal databases and spreadsheets, ensuring data integrity and relevance.
Data Cleaning & Transformation: Apply thorough data quality checks, cleaning processes, and transformations using Python (Pandas) and SQL to prepare datasets.
Automation & Scalability: Develop and maintain scripts that automate repetitive data preparation tasks.
Autonomy & Proactivity: Operate with minimal supervision, demonstrating initiative in problem-solving, prioritizing tasks, and continuously improving the quality and impact of your work.
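The Data Cleaning & Transformation duty above can be sketched with Pandas. The column names and quality rules here are illustrative assumptions, not the client's actual dataset:

```python
import pandas as pd

def clean_sales(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality checks and transformations to a raw extract."""
    out = df.copy()
    out["region"] = out["region"].str.strip().str.title()          # normalize text
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")  # bad values -> NaN
    out = out.dropna(subset=["amount"])                            # drop failed checks
    out = out.drop_duplicates(subset=["order_id"])                 # de-dupe on key
    return out

# Hypothetical raw extract with a duplicate row and a non-numeric amount.
raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "region": [" east ", " east ", "WEST", "north"],
    "amount": ["100", "100", "oops", "250.5"],
})
print(clean_sales(raw))
```

The same checks would typically also be expressed as SQL assertions in the warehouse so failures surface before reports consume the data.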
Qualifications
15+ years of experience as a Data Analyst / Data Engineer, with Databricks on AWS expertise in designing and implementing scalable, secure, and cost-efficient data solutions on AWS.
Required Skills
Strong proficiency in Python (Pandas, Scikit-learn, Matplotlib) and SQL, with experience working across various data formats and sources.
Proven ability to automate data workflows, implement code-based best practices, and maintain documentation to ensure reproducibility and scalability.
Preferred Skills
Ability to manage under tight constraints; highly proactive with risk and issue management.
Requirement Clarification & Communication: Interact directly with colleagues to clarify objectives and challenge assumptions.
Documentation & Best Practices: Maintain clear, concise documentation of data workflows, coding standards, and analytical methodologies to support knowledge transfer and scalability.
Collaboration & Stakeholder Engagement: Work closely with colleagues who provide data, raising questions about data validity, sharing insights, and co-creating solutions that address evolving needs.
Excellent communication skills for engaging with colleagues, clarifying requirements, and conveying analytical results in a meaningful, non-technical manner.
Demonstrated critical thinking skills, including the willingness to question assumptions, evaluate data quality, and recommend alternative approaches when necessary.
A self-directed, resourceful problem-solver who collaborates well with others while confidently managing tasks and priorities independently.
Senior Data Engineer
Data engineer job in New York, NY
Our client is a growing Fintech software company headquartered in New York, NY. They have several hundred employees and are in growth mode.
They are currently looking for a Senior Data Engineer with 6+ years of overall professional experience. Qualified candidates will have hands-on experience with Python (6 years), SQL (6 years), DBT (3 years), AWS (Lambda, Glue), Airflow, and Snowflake (3 years), plus a BS in Computer Science and strong CS fundamentals.
The Senior Data Engineer will work in a collaborative team environment and will be responsible for building, optimizing, and scaling ETL data pipelines, DBT models, and data warehousing. Excellent communication and organizational skills are expected.
This role features competitive base salary, equity, 401(k) with company match and many other attractive perks. Please send your resume to ******************* for immediate consideration.
SAP Data Migration Developer
Data engineer job in Englewood, NJ
SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate
This key role is responsible for development and configuration of the SAP Data Services Platform within the Client's corporate technology organization to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of Project Keystone.
KEY RESPONSIBILITIES -
Responsible for SAP Data Services development, design, job creation, and execution, including efficient design, performance tuning, and timely data processing, validation, and verification.
Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects), converting data via staging tables, and working with SAP teams on data loads into SAP S4 and MDG environments.
Responsible for building validation rules, scorecards, and data for consumption in Information Steward pursuant to conversion rules per the Functional Specifications.
Responsible for adhering to project timelines and deliverables and accounting for object delivery across the teams involved; take part in meetings, execute plans, and design and develop custom solutions within the Client's O&T Engineering scope.
Work in all facets of SAP Data Migration projects with focus on SAP S4 Data Migration using SAP Data Services Platform
Hands-on development experience with ETL from legacy SAP ECC environment, conversions and jobs.
Demonstrate capabilities with performance tuning, handling large data sets.
Understand SAP tables, fields & load processes into SAP S4, MDG systems
Build validation rules, customize, and deploy Information Steward scorecards, data reconciliation and validation
Be a problem solver and build robust conversion and validation per requirements.
SKILLS AND EXPERIENCE
6-8 years of experience as an SAP Data Services developer
At least two SAP S4 conversion projects involving DMC, staging tables, and updating SAP Master Data Governance
Good communication skills, with the ability to deliver key objects on time and support testing and mock cycles.
4-5 Years development experience in SAP Data Services 4.3 Designer, Information Steward
Taking ownership and ensuring high quality results
Active in seeking feedback and making necessary changes
Specific previous experience -
Proven experience in implementing SAP Data Services in a multinational environment.
Experience in design of data loads of large volumes to SAP S4 from SAP ECC
Must have used HANA Staging tables
Experience in developing Information Steward for Data Reconciliation & Validation (not profiling)
REQUIREMENTS
Adhere to the work availability schedule as noted above and be on time for meetings
Written and verbal communication in English