Data Engineer
Data engineer job in New York, NY
Data Engineer (Contract) - Sports Tech & Entertainment | NYC (Hybrid)
6-month contract (option for extension)
New York, NY
$55-75/hr (Negotiable, depends on experience)
We're seeking a Data Engineer to support our Customer Experience team at a leading sports tech and entertainment company. This role will focus on modernizing and optimizing existing data workflows as we transition from Redshift to Databricks.
Minimum Qualifications:
2+ years of enterprise experience
Strong proficiency in SQL, Python, and SOQL
Hands-on experience with Redshift, Databricks, and DataGrip
Familiarity with Salesforce and Tableau
Experience building or modernizing analytics workflows in Databricks
Responsibilities:
Modify and rebuild existing data pipelines and workflows from Redshift to Databricks (see the sketch after this list).
Support pre-processing of Salesforce data into structures for Tableau dashboards and ad hoc analytics initiatives.
Analyze existing workflows to identify opportunities for consolidation and optimization
Enhance performance and scalability by leveraging Databricks best practices
Collaborate with business stakeholders to understand data and reporting requirements
Identify and prioritize dependencies on data sources not yet migrated to Databricks
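For a rough sense of the Redshift-to-Databricks porting work this role describes, a minimal PySpark sketch might look like the following. The S3 path, schema, and table names are assumptions invented for illustration, not the team's actual objects.

```python
# Illustrative sketch only: port a Redshift-era daily aggregation to Databricks.
# Paths, schema, and table names below are assumptions, not real objects.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("redshift_to_databricks_port").getOrCreate()

# Read raw events previously UNLOADed from Redshift to S3 as Parquet.
raw = spark.read.parquet("s3://example-bucket/redshift_unload/customer_events/")

# Reproduce the old Redshift SQL aggregation as a Spark transformation.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "customer_id")
       .agg(F.count("*").alias("event_count"))
)

# Write a Delta table so Tableau dashboards and ad hoc queries can move
# from Redshift to Databricks SQL.
daily.write.format("delta").mode("overwrite").saveAsTable("cx.daily_customer_events")
```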
What's in it for you?
Medical, Dental & Vision Health Benefits
Paid Holidays & Paid Sick Days
Technical Data Lead
Data engineer job in New York, NY
End Client: New York City Police Pension Fund (PPF)
Job Title: Technical Data Lead
Duration: 24 Months
Contract
Number of Hours: 35 Hours a Week
Interview Type: Webcam/In person
Ceipal ID: NYC_DATA114_AK
Position ID: 114
Description:
Manages the technical execution of data conversion activities, ensuring accurate transformation and validation of data from legacy systems and external sources.
Responsibilities:
Develop and execute data conversion scripts and test environments.
Define data extraction architecture and transformation logic.
Conduct validation and reconciliation during PAS implementation.
Create automated comparison reports and conversion audit documentation (a reconciliation sketch follows this list).
Resolve data issues during and after conversion.
Support bridging and mini-conversion efforts between parallel processing environments.
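As one way to picture the automated comparison reports, here is a minimal pandas sketch that reconciles a legacy extract against a converted table, reporting both conversion gaps and value mismatches. The key and column names are hypothetical.

```python
# Illustrative reconciliation sketch; table shapes and column names are assumptions.
import pandas as pd

def compare_tables(legacy: pd.DataFrame, converted: pd.DataFrame, key: str) -> pd.DataFrame:
    """Return conversion gaps and value mismatches between two extracts."""
    merged = legacy.merge(converted, on=key, how="outer",
                          suffixes=("_legacy", "_converted"), indicator=True)
    gaps = merged[merged["_merge"] != "both"]          # rows on only one side
    both = merged[merged["_merge"] == "both"]
    mask = pd.Series(False, index=both.index)
    for col in (c for c in legacy.columns if c != key):
        mask |= both[f"{col}_legacy"] != both[f"{col}_converted"]
    return pd.concat([gaps, both[mask]])

legacy = pd.DataFrame({"member_id": [1, 2, 3], "benefit": [100.0, 200.0, 300.0]})
converted = pd.DataFrame({"member_id": [1, 2, 4], "benefit": [100.0, 250.0, 400.0]})
print(compare_tables(legacy, converted, key="member_id"))
# Flags member 3 (missing from target), member 4 (missing from source),
# and member 2 (benefit value changed during conversion).
```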
Required Certifications:
Oracle Data Migration-related certification (e.g., Oracle Certified Expert - Data Integration)
Preferred Qualifications:
Experience with automated ETL processes, data reconciliation, and performance tuning.
Familiarity with Oracle 19C, Vitech V3, and data bridging techniques.
Strong scripting skills in SQL, PL/SQL, and data transformation tools.
Experience in large-scale data migration projects.
Ability to lead technical teams and collaborate with cross-functional stakeholders.
V Group Inc. is a NJ-based IT Services and Products Company with its business strategically categorized in various Business Units, including Public Sector, Enterprise Solutions, Professional Services, Ecommerce, Projects, and Products. Within the Public Sector business unit, we provide IT professional services to federal, state, and local government clients. We hold multiple awards/contracts with 30+ states, including but not limited to NY, CA, FL, GA, MD, MI, NC, OH, OR, CO, CT, TN, PA, TX, VA, NM, VT, and WA.
If you are considering applying for a position with V Group, or in partnering with us on a position, please feel free to contact me for any questions you may have regarding our services and the advantages we can offer you as a consultant.
Please share my contact information with others working in Information Technology.
Data Engineer
Data engineer job in New York, NY
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools
Experience in designing, managing, and optimizing Data Warehousing solutions
Data & Performance Analytics (Hedge Fund)
Data engineer job in New York, NY
Our client is a $28B NY based multi-strategy Hedge Fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will be working closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.
Responsibilities
Compile periodic fund performance analyses
Review and analyze portfolio performance data, benchmark performance and risk statistics
Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner
Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in an effort to ensure the integrity and consistency of portfolio data reported across client reporting systems
Apply queries, pivot tables, filters, and other tools to analyze data (see the sketch after this list).
Maintain the client relationship management database and provide reports to Directors on a regular basis
Coordinate submissions of RFPs by working with RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis
Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization
Provide various ad hoc data research and analysis as needed.
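As a small illustration of the pivot-table analysis described above, here is the pandas equivalent of an Excel pivot over fund performance data; the funds, months, and returns are invented for the example.

```python
# Illustrative sketch only: summarize monthly excess return per fund with a
# pivot table, mirroring the Excel pivot/filter workflow. Data is invented.
import pandas as pd

returns = pd.DataFrame({
    "fund":       ["Alpha", "Alpha", "Beta", "Beta"],
    "month":      ["2024-01", "2024-02", "2024-01", "2024-02"],
    "net_return": [0.012, -0.004, 0.008, 0.011],
    "benchmark":  [0.010, -0.002, 0.010, 0.009],
})

# One row per fund, one column per month, excess return vs. benchmark.
returns["excess"] = returns["net_return"] - returns["benchmark"]
summary = returns.pivot_table(index="fund", columns="month", values="excess")
print(summary.round(4))
```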
Desired Skills and Experience
Bachelor's degree with at least 2 years of Financial Services/Private Equity data/client reporting experience
Proficiency in Microsoft Office, particularly Excel Modeling
Technical knowledge, data analytics using CRMs (Salesforce), Excel, PowerPoint
Outstanding communication skills, proven ability to effectively work with all levels of management
Comfortable working in a fast-paced, deadline-driven, dynamic environment
Innovative and creative thinker
Must be detail oriented
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms (a Python test sketch follows this list)
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
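Since the role pairs C++ feed handlers with Python for testing and automation, a gap-recovery test helper might look like the minimal sketch below; the idea that each message carries a monotonically increasing sequence number is an assumption for the example.

```python
# Illustrative test helper: detect sequence-number gaps in a replayed capture,
# the condition a feed handler's gap-recovery/replay path must handle.
from typing import Iterable, List, Tuple

def find_gaps(seqs: Iterable[int]) -> List[Tuple[int, int]]:
    """Return (expected, received) pairs wherever the sequence jumps."""
    gaps = []
    prev = None
    for seq in seqs:
        if prev is not None and seq != prev + 1:
            gaps.append((prev + 1, seq))
        prev = seq
    return gaps

# A replayed capture with a drop between 1002 and 1005:
print(find_gaps([1000, 1001, 1002, 1005, 1006]))  # [(1003, 1005)]
```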
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
Lead Data Engineer (Marketing Technology)
Data engineer job in New York, NY
About the job:
We're seeking a Lead Data Engineer to drive innovation and excellence across our Marketing Technology data ecosystem. You thrive in dynamic, fast-paced environments and are comfortable navigating both legacy systems and modern data architectures. You balance long-term strategic planning with short-term urgency, responding to challenges with clarity, speed, and purpose.
You take initiative, quickly familiarize yourself with source systems, ingestion pipelines, and operational processes, and integrate seamlessly into agile work rhythms. Above all, you bring a solution-oriented, win-win mindset, owning outcomes and driving progress.
What you will do at Sogeti:
Rapidly onboard into our Martech data ecosystem, understanding source systems, ingestion flows, and operational processes.
Build and maintain scalable data pipelines across Martech, Loyalty, and Engineering teams (see the sketch after this list).
Balance long-term projects with short-term reactive tasks, including urgent bug fixes and business-critical issues.
Identify gaps in data infrastructure or workflows and proactively propose and implement solutions.
Collaborate with product managers, analysts, and data scientists to ensure data availability and quality.
Participate in agile ceremonies and contribute to backlog grooming, sprint planning, and team reviews.
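A minimal sketch of a pipeline in the stack this role names (Airflow, S3, Snowflake, dbt), assuming the Airflow 2.x TaskFlow API; the DAG id, schedule, S3 key, and task bodies are placeholders rather than the team's actual pipeline.

```python
# Illustrative sketch only (assumes Airflow 2.x TaskFlow API); all names
# and task bodies are placeholders, not the real Martech pipeline.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@hourly", start_date=datetime(2024, 1, 1), catchup=False)
def martech_ingest():
    @task
    def land_to_s3() -> str:
        # In practice: pull from an ESP/loyalty API (or let Fivetran land it)
        # and return the raw file's S3 key.
        return "s3://example-bucket/raw/esp_events/latest.json"

    @task
    def copy_into_snowflake(s3_key: str) -> None:
        # In practice: run COPY INTO via the Snowflake provider's hook/operator.
        print(f"COPY INTO raw.esp_events FROM '{s3_key}'")

    @task
    def run_dbt_models() -> None:
        # In practice: shell out to `dbt run --select marts.marketing`.
        print("dbt run --select marts.marketing")

    copied = copy_into_snowflake(land_to_s3())
    copied >> run_dbt_models()

martech_ingest()
```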
What you will bring:
7+ years of experience in data engineering, with a strong foundation in ETL design, cloud platforms, and real-time data processing.
Deep expertise in Snowflake, Airflow, dbt, Fivetran, AWS S3, Lambda, Python, SQL.
Previous experience integrating data from multiple retail and ecommerce source systems.
Experience with implementation and data management for loyalty platforms, customer data platforms, marketing automation systems, and ESPs.
Deep expertise in data modeling with dbt.
Demonstrated ability to lead critical and complex platform migrations and new deployments.
Strong communication and stakeholder management skills.
Self-driven, adaptable, and proactive problem solver
Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, Information Systems, Business Administration, or a related field.
Life at Sogeti: Sogeti supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work options
401(k) with 150% match up to 6%
Employee Share Ownership Plan
Medical, Prescription, Dental & Vision Insurance
Life Insurance
100% Company-Paid Mobile Phone Plan
3 Weeks PTO + 7 Paid Holidays
Paid Parental Leave
Adoption, Surrogacy & Cryopreservation Assistance
Subsidized Back-up Child/Elder Care & Tutoring
Career Planning & Coaching
$5,250 Tuition Reimbursement & 20,000+ Online Courses
Employee Resource Groups
Counseling & Support for Physical, Financial, Emotional & Spiritual Well-being
Disaster Relief Programs
About Sogeti
Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation.
Become Your Best | *************
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodation during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Please be aware that Capgemini may capture your image (video or screenshot) during the interview process and that image may be used for verification, including during the hiring and onboarding process.
Click the following link for more information on your rights as an Applicant **************************************************************************
Applicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $125,000 - $175,000.
This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Data Engineer
Data engineer job in New York, NY
About Beauty by Imagination:
Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike.
Position Overview:
We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business.
You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse.
Key Responsibilities
Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts.
Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools.
Integrate and transform data from multiple systems, including ERP platforms such as NetSuite.
Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability (a sketch follows this list).
Support and enhance Power BI dashboards and other BI/reporting systems.
Implement data quality checks, automation, and process monitoring.
Collaborate with business and analytics teams to translate requirements into scalable data solutions.
Contribute to data governance, standardization, and documentation practices.
Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools.
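As one concrete example of the transformation work above, a Type 2 slowly changing dimension refresh against SQL Server might look like this sketch, driven from Python via pyodbc (in practice it could equally be an SSIS task or stored procedure). The DSN, schemas, tables, and the tracked `segment` column are assumptions.

```python
# Illustrative sketch only: expire-then-insert Type 2 SCD refresh for a
# customer dimension. DSN, schemas, tables, and the tracked `segment`
# column are assumptions for the example.
import pyodbc

SCD2_SQL = """
UPDATE d
SET d.is_current = 0, d.valid_to = SYSUTCDATETIME()
FROM dim.customer AS d
JOIN stg.customer AS s ON s.customer_id = d.customer_id
WHERE d.is_current = 1 AND s.segment <> d.segment;

INSERT INTO dim.customer (customer_id, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.segment, SYSUTCDATETIME(), NULL, 1
FROM stg.customer AS s
LEFT JOIN dim.customer AS d
    ON d.customer_id = s.customer_id AND d.is_current = 1
WHERE d.customer_id IS NULL;  -- new customers, plus rows just expired above
"""

with pyodbc.connect("DSN=dwh;Trusted_Connection=yes") as conn:
    conn.execute(SCD2_SQL)   # sent to SQL Server as one two-statement batch
    conn.commit()
```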
Required Qualifications
Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts).
Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows.
Strong SQL skills and experience with Microsoft SQL Server.
Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik).
Understanding of data modeling, performance optimization, and relational database design.
Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation.
Excellent analytical and communication skills.
Preferred Qualifications
Experience with cloud data platforms (Azure, AWS, or GCP).
Understanding of data security, governance, and compliance (GDPR, SOC2).
Experience with API integrations and real-time data ingestion.
Background in finance, supply chain, or e-commerce analytics.
Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.).
AI Focused Preferred Skills:
Experience implementing AI-driven analytics or automation inside Data Warehouses.
Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights.
Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL.
Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC).
Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design.
Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools.
Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments.
Soft Skills
Strong analytical and problem-solving mindset.
Ability to communicate complex technical concepts to business stakeholders.
Detail-oriented, organized, and self-motivated.
Collaborative team player with a growth mindset.
Impact
You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions.
Our Tech Stack
SQL Server, SSIS, Azure Synapse
Python, Airflow, Azure Data Factory
Power BI, NetSuite ERP, REST APIs
CI/CD (Azure DevOps, GitHub)
What We Offer
Location: New York, NY (Hybrid work model)
Employment Type: Full-time
Compensation: Competitive salary based on experience
Benefits: Health insurance, 401(k), paid time off
Opportunities for professional growth and participation in enterprise AI modernization initiatives
Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco
Data engineer job in New York, NY
Are you a data engineer who loves building systems that power real impact in the world?
A fast growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.
In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best in class data quality and governance practices. You will work hands on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.
To thrive here, you should bring strong problem solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
Market Data Engineer
Data engineer job in New York, NY
🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment
I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.
What You'll Do
Own large-scale, time-sensitive market data capture + normalization pipelines
Improve internal data formats and downstream datasets used by research and quantitative teams
Partner closely with infrastructure to ensure reliability of packet-capture systems
Build robust validation, QA, and monitoring frameworks for new market data sources (see the sketch after this list)
Provide production support, troubleshoot issues, and drive quick, effective resolutions
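To make the normalization-and-validation idea concrete, here is a minimal sketch that maps a hypothetical venue format onto an internal trade schema and flags out-of-order timestamps; every field name and scaling factor is invented for illustration.

```python
# Illustrative sketch: normalize venue-specific trade records into one
# internal schema and flag out-of-order timestamps before they reach
# downstream research datasets. Field names and scaling are assumptions.
from dataclasses import dataclass

@dataclass
class Trade:
    ts_ns: int      # exchange timestamp, nanoseconds since epoch
    symbol: str
    price: float
    size: int

def normalize_venue_a(raw: dict) -> Trade:
    # Hypothetical venue quotes prices in 1e-4 units; convert once, here.
    return Trade(ts_ns=raw["t"], symbol=raw["sym"],
                 price=raw["px"] / 10_000, size=raw["qty"])

def validate_monotonic(trades: list) -> list:
    """Return indices where the timestamp goes backwards (capture anomaly)."""
    return [i for i in range(1, len(trades))
            if trades[i].ts_ns < trades[i - 1].ts_ns]

raw_feed = [{"t": 1, "sym": "ES", "px": 50_000_000, "qty": 2},
            {"t": 3, "sym": "ES", "px": 50_002_500, "qty": 1},
            {"t": 2, "sym": "ES", "px": 50_005_000, "qty": 5}]
trades = [normalize_venue_a(r) for r in raw_feed]
print(validate_monotonic(trades))  # [2]: third record arrived out of order
```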
What You Bring
Experience building or maintaining large-scale ETL pipelines
Strong proficiency in Python + Bash, with familiarity in C++
Solid understanding of networking fundamentals
Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)
Bonus Skills
Experience working with binary market data protocols (ITCH, MDP3, etc.)
Understanding of high-performance filesystems and columnar storage formats
Lead Data Engineer
Data engineer job in New York, NY
We are seeking a Lead Data Engineer to design, build, and scale our enterprise data platform supporting analytics, reporting, and investment decision-making across asset classes. This role combines technical leadership with hands-on development, focusing on data architecture, pipeline design, and governance for high-quality, reliable financial data. You will be acting as a liaison between business stakeholders and the engineering team.
Responsibilities:
Lead the design and implementation of modern data pipelines and ETL/ELT frameworks using cloud and on-prem environments.
Architect and manage data models, warehouses, and lakehouses to support analytics, reporting, and machine learning.
Collaborate with investment, risk, and technology teams to define data requirements and ensure consistency across systems.
Drive best practices in data quality, lineage, and governance.
Mentor engineers and set standards for performance, security, and scalability.
Evaluate and implement new data technologies to improve efficiency and insights delivery.
Qualifications:
7+ years of experience in data engineering, with 2+ in a technical leadership role.
Strong expertise in SQL, Python, and modern data platforms (e.g., Azure Data Factory, Databricks, Snowflake, Synapse, AWS Glue, or GCP BigQuery).
Experience with data modeling, ETL/ELT pipelines, and data warehousing principles.
Familiarity with financial or investment data (portfolio, risk, performance, or accounting) preferred.
Strong understanding of CI/CD, version control (Git), and DevOps for data workflows.
Excellent communication and cross-functional collaboration skills.
Cloud Data Engineer
Data engineer job in New York, NY
Title: Enterprise Data Management - Data Cloud, Senior Developer I
Duration: FTE/Permanent
Salary: 130-165k
The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.
Responsibilities:
Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications.
Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools (a minimal sketch follows this list).
Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
Champion data engineering standards, best practices, and governance across the enterprise.
Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.
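A semantic layer can be pictured in miniature as metric definitions resolved into generated SQL, so business users or AI agents request metrics by name instead of hand-writing queries. The sketch below is a toy illustration (metric names, dimensions, and the table are assumptions); a production version would live in a platform such as dbt's semantic layer.

```python
# Toy semantic layer: business metrics defined once as SQL expressions,
# queries generated rather than hand-written. All names are assumptions.
METRICS = {
    "aum":          "SUM(market_value)",
    "num_accounts": "COUNT(DISTINCT account_id)",
}
DIMENSIONS = {"strategy", "as_of_date"}

def build_query(metrics: list, group_by: list, table: str = "analytics.positions") -> str:
    unknown = [m for m in metrics if m not in METRICS] + \
              [d for d in group_by if d not in DIMENSIONS]
    if unknown:
        raise ValueError(f"undefined metric/dimension: {unknown}")
    select = ", ".join(group_by + [f"{METRICS[m]} AS {m}" for m in metrics])
    return f"SELECT {select} FROM {table} GROUP BY {', '.join(group_by)}"

# Self-service callers (or an AI agent) request metrics by name:
print(build_query(["aum", "num_accounts"], ["strategy"]))
```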
Qualifications:
Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
10+ years in data engineering, cloud platform development, or analytics engineering.
Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
Preferred: Experience containerizing data workloads with Docker and Kubernetes.
Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
Bonus: Background in data mesh designs and automated data access systems.
Skilled in dev tools such as Azure DevOps (or equivalents), Git-based version control, and orchestration platforms like Airflow.
Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
Bonus: Exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
Lead Data Scientist (Ref: 190351)
Data engineer job in New York, NY
Industry: Retail
Salary: $150,000-$175,000 + Bonus
Contact: ********************************
Our client is a prominent player in the Apparel sector, committed to fusing fashion with cutting-edge data and technological solutions. Situated in New York, this organization is on the lookout for a Data Science Manager to drive their Operations Intelligence initiatives within the Data & Analytics department. This critical role is pivotal in leveraging advanced analytics, predictive modeling, and state-of-the-art Generative AI technologies to bolster decision-making across key operational areas such as Planning, Supply Chain, Sourcing, Sales, and Logistics.
The selected candidate will oversee the integration of data science methodologies into essential operational workflows, aiming to automate processes, improve visibility, accurately forecast business dynamics, and facilitate strategic planning through insightful data analysis.
Requirements
A minimum of 6 years of experience in the field of data science, with at least 2 years in a leadership or product-related role.
Proven ability to apply analytics in complex operational environments such as planning, supply chain, and sourcing.
Strong expertise in Python and SQL, along with a solid grasp of cloud-based data ecosystems.
Experience with advanced modeling techniques, including forecasting, optimization, and classification.
Familiarity with Generative AI technologies or LLMs, combined with a keen interest in leveraging these for practical business applications.
Excellent business acumen and communication skills, facilitating effective collaboration between data insights and strategic goals.
Senior Data Engineer [Strong Financial/Banking domain, Local candidate only]
Data engineer job in New York, NY
We are:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our Challenge:
We are seeking a skilled Data Engineer with expertise in Databricks, Snowflake, Python, Pyspark, SQL, and Release Management to join our dynamic team. The ideal candidate will have a strong background in the banking domain and will be responsible for designing, developing, and maintaining robust data pipelines and systems to support our banking operations and analytics.
Additional Information
The base salary for this position will vary based on geography and other factors.
In accordance with law, the base salary for this role if filled within NYC, NY is $140K - $150K/year & benefits (see below).
Responsibilities:
Design, develop, and maintain scalable and efficient data pipelines using Snowflake, Pyspark, and SQL (a sketch follows this list).
Write optimized and complex SQL queries to extract, transform, and load data.
Develop and implement data models, schemas, and architecture that support banking domain requirements.
Collaborate with data analysts, data scientists, and business stakeholders to gather data requirements.
Automate data workflows and ensure data quality, accuracy, and integrity.
Manage and coordinate release processes for data pipelines and analytics solutions.
Monitor, troubleshoot, and optimize the performance of data systems.
Ensure compliance with data governance, security, and privacy standards within the banking domain.
Maintain documentation of data architecture, pipelines, and processes.
Stay updated with the latest industry trends and incorporate best practices.
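A minimal sketch of such a pipeline, assuming PySpark with the Snowflake Spark connector (registered as the "snowflake" source in recent connector versions); the bucket, connection options, and table names are placeholders, and real credentials would come from a secrets manager.

```python
# Illustrative sketch only: clean banking transactions and append them to
# Snowflake. Bucket, options, and table names are placeholders; credentials
# belong in a secrets manager, not in code.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn_pipeline").getOrCreate()

txns = (
    spark.read.option("header", True).csv("s3://example-bucket/raw/transactions/")
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
         .filter(F.col("amount").isNotNull())   # basic data-quality gate
)

sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfDatabase": "BANKING", "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH", "sfUser": "etl_user", "sfPassword": "***",
}
(
    txns.write.format("snowflake")   # Snowflake Spark connector
        .options(**sf_options)
        .option("dbtable", "TRANSACTIONS")
        .mode("append")
        .save()
)
```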
Requirements:
10+ years of proven experience as a Data Engineer or in a similar role with a focus on Snowflake, Python, Pyspark, and SQL.
Strong understanding of data warehousing concepts and cloud data platforms, especially Snowflake.
Hands-on experience with release management, deployment, and version control practices.
Solid understanding of banking and financial services industry data and compliance requirements.
Proficiency in Python scripting and Pyspark for data processing and automation.
Experience with ETL/ELT processes and tools.
Knowledge of data governance, security, and privacy standards.
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Preferred, but not required:
Good knowledge of Azure and Databricks is highly preferred.
Knowledge of Apache Kafka or other streaming technologies.
Familiarity with DevOps practices and CI/CD pipelines.
Prior experience working in the banking or financial services industry.
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Lead HPC Architect Cybersecurity - High Performance & Computational Data Ecosystem
Data engineer job in New York, NY
The Scientific Computing and Data group at the Icahn School of Medicine at Mount Sinai partners with scientists to accelerate scientific discovery. To achieve these aims, we support a cutting-edge high-performance computing and data ecosystem along with MD/PhD-level support for researchers. The group is composed of a high-performance computing team, a clinical data warehouse team and a data services team.
The Lead HPC Architect, Cybersecurity, High Performance Computational and Data Ecosystem, is responsible for designing, implementing, and managing the cybersecurity infrastructure and technical operations of Scientific Computing's computational and data science ecosystem. This ecosystem includes a 25,000+ core, 40+ petabyte (usable) high-performance computing (HPC) system, clinical research databases, and a software development infrastructure for local and national projects. The HPC system is the fastest in the world at any academic biomedical center (Top 500 list).
To meet Sinai's scientific and clinical goals, the Lead brings a strategic, tactical, and customer-focused vision to evolve the ecosystem to be continually more resilient, secure, scalable, and productive for basic and translational biomedical research. The Lead combines deep technical expertise in cybersecurity, HPC systems, storage, networking, and software infrastructure with a strong focus on service, collaboration, and strategic planning for researchers and clinicians throughout the organization and beyond. The Lead is an expert troubleshooter, productive partner, and leader of projects, and will work with stakeholders to ensure the HPC infrastructure complies with governmental funding agency requirements and to promote efficient resource utilization for researchers.
This position reports to the Director for HPC and Data Ecosystem in Scientific Computing and Data.
Key Responsibilities:
HPC Cybersecurity & System Administration:
Design, implement, and manage all cybersecurity operations within the HPC environment, ensuring alignment with industry standards (NIST, ISO, GDPR, HIPAA, CMMC, NYC Cyber Command, etc.).
Implement best practices for data security, including but not limited to encryption (at rest, in transit, and in use), audit logging, access control, authentication control, configuration management, secure enclaves, and confidential computing.
Perform full-spectrum HPC system administration: installation, monitoring, maintenance, usage reporting, troubleshooting, backup, and performance tuning across HPC applications, web services, databases, job schedulers, networking, storage, compute, and hardware to optimize workload efficiency.
Lead resolution of complex cybersecurity and system issues; provide mentorship and technical guidance to team members.
Ensure that all designs and implementations meet cybersecurity, performance, scalability, and reliability goals. Ensure that the design and operation of the HPC ecosystem is productive for research.
Lead the integration of HPC resources with laboratory equipment (e.g., genomic sequencers, microscopy, clinical systems) for data ingestion, in alignment with all regulatory requirements.
Develop, review and maintain security policies, risk assessments, and compliance documentation accurately and efficiently.
Collaborate with institutional IT, compliance, and research teams to ensure regulatory, Mount Sinai policy, and operational alignment.
Design and implement hybrid and cloud-integrated HPC solutions using on-premise and public cloud resources.
Partner with other peers regionally, nationally and internationally to discover, propose and deploy a world-class research infrastructure for Mount Sinai.
Stay current with emerging HPC, cloud, and cybersecurity technologies to keep the organization's infrastructure up-to-date.
Work collaboratively, effectively and productively with other team members within the group and across Mount Sinai.
Provide after-hours support as needed.
Perform other duties as assigned or requested.
Requirements:
Bachelor's degree in computer science, engineering or another scientific field. Master's or PhD preferred.
10 years of progressive HPC system administration experience with Enterprise Linux releases (RedHat/CentOS/Rocky) and batch cluster environments.
Experience with all aspects of high-throughput HPC including schedulers (LSF or Slurm), networking (Infiniband/Gigabit Ethernet), parallel file systems and storage, configuration management systems (xCAT, Puppet and/or Ansible), etc.
Proficient in cybersecurity processes, posture, regulations, approaches, protocols, firewalls, and data protection in a regulated environment (e.g., finance, healthcare).
In-depth knowledge of HIPAA, NIST, FISMA, GDPR, and related compliance standards, with proven experience building and maintaining compliant HPC systems.
Experience with secure enclaves and confidential computing.
Proven ability to provide mentorship and technical leadership to team members.
Proven ability to lead complex projects to completion in collaborative, interdisciplinary settings with minimal guidance.
Excellent analytical ability and troubleshooting skills.
Excellent communication, documentation, collaboration and interpersonal skills. Must be a team player and customer focused.
Scripting and programming experience.
Preferred Experience
Proficient with cloud services, orchestration tools (OpenShift/Kubernetes), cost optimization, and hybrid HPC architectures.
Experience with Azure, AWS or Google cloud services.
Experience with LSF job scheduler and GPFS Spectrum Scale.
Experience in a healthcare environment.
Experience in a research environment is highly preferred.
Experience with software that enables privacy-preserving linking of PHI.
Experience with Globus data transfer.
Experience with web services, SAP HANA, Oracle, SQL, MariaDB, and other database technologies.
Strength through Unity and Inclusion
The Mount Sinai Health System is committed to fostering an environment where everyone can contribute to excellence. We share a common dedication to delivering outstanding patient care. When you join us, you become part of Mount Sinai's unparalleled legacy of achievement, education, and innovation as we work together to transform healthcare. We encourage all team members to actively participate in creating a culture that ensures fair access to opportunities, promotes inclusive practices, and supports the success of every individual.
At Mount Sinai, our leaders are committed to fostering a workplace where all employees feel valued, respected, and empowered to grow. We strive to create an environment where collaboration, fairness, and continuous learning drive positive change, improving the well-being of our staff, patients, and organization. Our leaders are expected to challenge outdated practices, promote a culture of respect, and work toward meaningful improvements that enhance patient care and workplace experiences. We are dedicated to building a supportive and welcoming environment where everyone has the opportunity to thrive and advance professionally. Explore this opportunity and be part of the next chapter in our history.
About the Mount Sinai Health System:
Mount Sinai Health System is one of the largest academic medical systems in the New York metro area, with more than 48,000 employees working across eight hospitals, more than 400 outpatient practices, more than 300 labs, a school of nursing, and a leading school of medicine and graduate education. Mount Sinai advances health for all people, everywhere, by taking on the most complex health care challenges of our time - discovering and applying new scientific learning and knowledge; developing safer, more effective treatments; educating the next generation of medical leaders and innovators; and supporting local communities by delivering high-quality care to all who need it. Through the integration of its hospitals, labs, and schools, Mount Sinai offers comprehensive health care solutions from birth through geriatrics, leveraging innovative approaches such as artificial intelligence and informatics while keeping patients' medical and emotional needs at the center of all treatment. The Health System includes more than 9,000 primary and specialty care physicians; 13 joint-venture outpatient surgery centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and more than 30 affiliated community health centers. We are consistently ranked by U.S. News & World Report's Best Hospitals, receiving high "Honor Roll" status.
Equal Opportunity Employer
The Mount Sinai Health System is an equal opportunity employer, complying with all applicable federal civil rights laws. We do not discriminate, exclude, or treat individuals differently based on race, color, national origin, age, religion, disability, sex, sexual orientation, gender, veteran status, or any other characteristic protected by law. We are deeply committed to fostering an environment where all faculty, staff, students, trainees, patients, visitors, and the communities we serve feel respected and supported. Our goal is to create a healthcare and learning institution that actively works to remove barriers, address challenges, and promote fairness in all aspects of our organization.
Data Engineer
Data engineer job in New York, NY
Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.
Responsibilities and Duties
Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms); a sketch follows this list.
Ensure consistent data hygiene, normalization, and enrichment across source systems.
Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
Own the documentation of data schemas, definitions, lineage, and data quality controls.
Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
Monitor pipeline performance and proactively resolve data discrepancies or failures.
Contribute to architectural decisions related to internal data infrastructure and tools.
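As an illustration of consolidating data from SaaS systems, a paginated REST extraction might look like the sketch below; the JIRA-style `startAt`/`maxResults` parameters and the endpoint URL are assumptions for the example.

```python
# Illustrative sketch: pull paginated records from a SaaS REST API
# (JIRA-like parameters assumed) for normalization and warehouse load.
import requests

def extract_all(base_url: str, token: str, page_size: int = 100) -> list:
    """Page through the endpoint until it returns fewer rows than requested."""
    rows, start = [], 0
    while True:
        resp = requests.get(
            base_url,
            headers={"Authorization": f"Bearer {token}"},
            params={"startAt": start, "maxResults": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("issues", [])
        rows.extend(batch)
        if len(batch) < page_size:
            return rows
        start += page_size

# Hypothetical usage:
# rows = extract_all("https://example.atlassian.net/rest/api/3/search", token="***")
```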
Requirements
3-5 years of experience as a data engineer, analytics engineer, or similar role.
Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
Proficiency in Python or another scripting language for data manipulation.
Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
Strong understanding of data governance, documentation, and schema management.
Excellent communication skills and ability to work cross-functionally.
Benefits
Flexible work arrangements (including hybrid mode)
Great Paid Time Off (PTO) policy
Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
401(k) retirement plan
Access to HSA-compatible plans
Pre-tax commuter benefits
Employee Assistance Program (EAP)
Opportunities for professional growth and development.
A supportive, dynamic, and inclusive work environment.
Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.
The compensation range for this role is $75,000 to $80,000 USD
Data Architect
Data engineer job in New York, NY
Hi,
I hope you are doing well!
We have an opportunity for Data Architect with one of our clients for NYC, NY.
Please see the job details below and let me know if you would be interested in this role.
If interested, please send me a copy of your resume, contact details, availability, and a good time to connect with you.
Title: Data Architect
Location: New York, New York - Onsite
Terms: Long Term Contract
Job Details:
Primary Skills:
SQL, Oracle, Snowflake
12+ years of experience in data technology
At least 5 years as a Data Engineer with hands-on experience in cloud environments
8+ years of Python programming focused on data processing and distributed systems
8+ years working with relational databases, dimensional modeling, and DBT
8+ years designing and administering cloud-based data warehousing solutions (e.g., Databricks)
8+ years' experience with Kafka or other streaming platforms
Exposure to AI-based advanced techniques and tools
Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes, and DBT
Experience with Java, MS SQL Server, Druid, Qlik/Golden Gate CDC, and Power BI is a plus
Responsibilities:
Architect streaming data ingestion and integration with downstream systems (a consumer sketch follows this list)
Implement AI-driven controller to orchestrate tens of millions of streams and micro-batches
Design AI-powered onboarding of new data sources
Develop AI-powered compute engine and data serving semantic layer
Deliver scalable cloud data services and APIs with sub-second response times over petabytes of data
Develop a unified alerting and monitoring framework supporting streaming transformations and compute across thousands of institutional clients and hundreds of external data sources
Build a self-service data management and operations platform
Implement a data quality monitoring framework
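As a small sketch of the streaming-ingestion shape described above, a micro-batching Kafka consumer (using the kafka-python client) might look like this; the topic, brokers, batch size, and sink are placeholders.

```python
# Illustrative sketch: consume a Kafka topic and flush micro-batches
# downstream. Topic, brokers, and the sink are placeholders (kafka-python).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "market-data",
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,   # commit only after a batch is safely written
)

BATCH = 500
buffer = []
for msg in consumer:
    buffer.append(msg.value)
    if len(buffer) >= BATCH:
        # In practice: write the batch to the warehouse/lake, then commit
        # offsets so a crash never drops or double-loads a batch.
        print(f"flushing {len(buffer)} records")
        buffer.clear()
        consumer.commit()
```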
Qualifications:
Bachelor's degree in Computer Science or a related field; advanced degree preferred
12+ years of experience in data technology
At least 5 years as a Data Engineer with hands-on experience in cloud environments
8+ years of Python programming focused on data processing and distributed systems
8+ years working with relational databases, SQL, dimensional modeling, and DBT
8+ years designing and administering cloud-based data warehousing solutions (e.g., Snowflake, Databricks)
8+ years' experience with Kafka or other streaming platforms
Exposure to AI-based advanced techniques and tools
Strong understanding of database fundamentals, including data modeling, advanced SQL development and optimization, ELT/ETL processes and DBT.
Experience with Java, Oracle, MS SQL Server, Druid, Qlik/Golden Gate CDC, and Power BI is a plus
Strong leadership abilities and excellent communication skills.
Thanks
Amit Jha
Senior Recruiter at BeaconFire Inc.
Email : ***********************
Data Scientist
Data engineer job in West Haven, CT
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview

We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).

## Key Responsibilities

- **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags) (a worked sketch follows this listing)
- **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories
- **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- **Stakeholder Communication**: Present insights to data labeling experts and technical teams

## Required Qualifications

- **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition
- **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets
- **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics
- **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL

## Preferred Qualifications

- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
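A minimal sketch of the statistical failure analysis this listing describes: a chi-square test of whether failure rates vary by file type, with invented pass/fail counts.

```python
# Illustrative sketch only: do AI-agent failure rates differ by file type?
# The pass/fail counts are invented for the example.
import pandas as pd
from scipy.stats import chi2_contingency

counts = pd.DataFrame(
    {"pass": [180, 120, 40], "fail": [20, 60, 40]},
    index=["xlsx", "pdf", "csv"],   # one row per file type
)

chi2, p, dof, _ = chi2_contingency(counts)
print(counts.assign(fail_rate=counts["fail"] / counts.sum(axis=1)).round(2))
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.4g}")
# A small p-value says failures cluster by file type, steering root-cause
# analysis toward file complexity rather than rubric wording.
```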
Synthetic Data Engineer (Observability & DevOps)
Data engineer job in New York, NY
About the Role: We're building a large-scale synthetic data generation engine to produce realistic observability datasets - metrics, logs, and traces - to support AI/ML training and benchmarking. You will design, implement, and scale pipelines that simulate complex production environments and emit controllable, parameterized telemetry data.
🧠 What You'll Do
• Design and implement generators for metrics (CPU, latency, throughput) and logs (structured/unstructured).
• Build configurable pipelines to control data rate, shape, and anomaly injection (see the sketch after this list).
• Develop reproducible workload simulations and system behaviors (microservices, failures, recoveries).
• Integrate synthetic data storage with Prometheus, ClickHouse, or Elasticsearch.
• Collaborate with ML researchers to evaluate realism and coverage of generated datasets.
• Optimize for scale and reproducibility using Docker containers.
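A minimal sketch of a parameterized metric generator with anomaly injection, using numpy; the daily-cycle shape, one-minute sampling rate, and parameter names are assumptions for illustration.

```python
# Illustrative sketch: a parameterized synthetic CPU-utilization series with
# a daily cycle, noise, and injected spike anomalies. Shapes are assumptions.
import numpy as np

def gen_cpu_series(n: int, anomaly_rate: float = 0.01, seed: int = 42):
    rng = np.random.default_rng(seed)                # seeded for reproducibility
    t = np.arange(n)
    base = 40 + 15 * np.sin(2 * np.pi * t / 1440)    # daily cycle, 1-min samples
    noise = rng.normal(0, 3, n)
    labels = rng.random(n) < anomaly_rate            # ground-truth anomaly mask
    spikes = np.where(labels, rng.uniform(30, 50, n), 0.0)
    return np.clip(base + noise + spikes, 0, 100), labels

series, labels = gen_cpu_series(n=2 * 1440)          # two simulated days
print(f"{labels.sum()} anomalies injected over {len(series)} samples")
```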
✅ Who You Are
• Strong programming skills in Python.
• Familiarity with observability tools (Grafana, Prometheus, ELK, OpenTelemetry).
• Solid understanding of distributed systems metrics and log structures.
• Experience building data pipelines or synthetic data generators.
• (Bonus) Knowledge of anomaly detection, time-series analysis, or generative ML models.
💸 Pay
$50-75/hr depending on experience
Remote, flexible hours
Project timeline: 5-6 weeks
Applications Integration Engineer
Data engineer job in New York, NY
About the Role:
A global professional services organization is seeking a Senior Applications Integration Lead to join its Enterprise Technology group. This position plays a critical role in designing, integrating, and supporting enterprise applications that power business operations worldwide. The ideal candidate brings a balance of hands-on engineering expertise, solution architecture experience, and a passion for system optimization in secure, hybrid cloud environments.
Key Responsibilities:
Architect, deploy, and maintain enterprise application solutions across cloud and on-premises environments.
Lead integrations between core business applications and productivity platforms to enhance user experience and operational efficiency.
Configure operating system settings, group policies, and endpoint management tools to ensure stability, performance, and compliance.
Troubleshoot and resolve complex application and system issues, collaborating with vendors and internal teams as needed.
Partner with IT leadership to shape the roadmap for end-user computing and enterprise application strategy.
Mentor technical staff and document best practices, deployment processes, and operational runbooks.
Monitor system health and performance, proactively identifying optimization opportunities.
Qualifications:
Bachelor's degree in Computer Science, Information Systems, or a related technical field (or equivalent experience).
5-10 years of experience in systems engineering, enterprise application management, or integration roles within large-scale environments.
Proven experience with Microsoft 365, Active Directory, Group Policy, Intune, and Windows Server administration.
Familiarity with enterprise platforms such as document management, CRM, ERP, or workflow automation tools.
Strong understanding of virtualization and cloud technologies (VMware, Azure Virtual Desktop, Citrix, Hyper-V).
Advanced scripting and automation skills in PowerShell and SQL for deployment, monitoring, and system optimization.
Experience managing performance analysis tools such as Perfmon, ControlUp, Wireshark, or Windows Performance Analyzer.
Knowledge of user profile and application personalization tools (FSLogix, Ivanti, or comparable platforms).
Strong analytical mindset, excellent documentation habits, and a proactive approach to technology problem-solving.
The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
Data Scientist
Data engineer job in Valley Stream, NY
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.). ## Key Responsibilities - **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags) - **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations - **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories - **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities - **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings - **Stakeholder Communication**: Present insights to data labeling experts and technical teams ## Required Qualifications - **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition - **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis - **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets - **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics - **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL ## Preferred Qualifications - Experience with AI/ML model evaluation or quality assurance - Background in finance or willingness to learn finance domain concepts - Experience with multi-dimensional failure analysis - Familiarity with benchmark datasets and evaluation frameworks - 2-4 years of relevant experience