AI/ML Engineer
Data engineer job in New York, NY
dv01 is lifting the curtain on the largest financial market in the world: structured finance. The $16+ trillion market is the backbone of everyday activities that empower financial freedom, from consolidating credit card debt and refinancing student loans, to buying a home and starting a small business.
dv01's data analytics platform brings unparalleled transparency into investment performance and risk for lenders and Wall Street investors in structured products. As a data-first company, we wrangle critical loan data and build modern analytical tools that enable strategic decision-making for responsible lending. In a nutshell, we're helping prevent a repeat of the 2008 global financial crisis by offering the data and tools required to make smarter data-driven decisions resulting in a safer world for all of us.
More than 400 of the largest financial institutions use dv01 for our coverage of over 75 million loans spanning mortgages, personal loans, auto, buy-now-pay-later programs, small business, and student loans. dv01 continues to expand coverage of new markets, adding loans monthly, and developing new technologies for the structured products universe.
YOU WILL:
Design and architect state-of-the-art AI/ML solutions to unlock valuable insights from unstructured banking documents: You will build document parsers to assist with the analysis of challenging document types to better inform customers of market conditions. You will build document classifiers to improve the reliability and speed of our data ingestion processes and to provide enhanced intelligence for downstream users.
Build, deploy and maintain new and existing AI/ML solutions: You will participate in the end-to-end software development of new feature functionality and design capabilities. You will write clean, testable, and maintainable code. You will also establish meaningful criteria for evaluating algorithm performance and suitability, and be prepared to optimize processes and make informed tradeoffs across speed, performance, cost-effectiveness, and accuracy.
Interact with a diverse team: We operate in a highly collaborative environment, which means you'll interact with internal domain experts, product managers, engineers, and designers on a daily basis.
Keep up to date with AI/ML best practices and evolving open-source frameworks: You will regularly seek out innovation and continuous improvement, finding efficiency in all assigned tasks.
YOU ARE:
Experienced with production systems: You have 3+ years of professional experience writing production-ready code in a language such as Python.
Experienced with MLOps: Familiar with building end-to-end AI/ML systems with MLOps and DevOps practices (CI/CD, continuous training, evaluation, and performance tracking), with hands-on experience using MLflow.
Experienced with the latest AI/ML developments: You also have at least 3 years' hands-on experience developing machine learning models using frameworks like PyTorch and TensorFlow, with at least 1 year focused on generative AI using techniques such as RAG, agentic AI, and prompt engineering. Familiarity with Claude and/or Gemini is desired.
A highly thoughtful AI engineer: You have strong communication skills and experience collaborating cross-functionally with product, design, and engineering.
Experienced with Cloud & APIs: Proficient working in cloud environments (GCP preferred), as well as in contributing to containerized applications (Docker, Kubernetes) and creating APIs using FastAPI, Flask, or other frameworks.
Experienced with Big Data Systems: You have hands-on experience with Databricks, BigQuery, or comparable big data systems, along with strong SQL skills; bonus points for familiarity with dbt.
Forward-thinking: Proactive and innovative, with the ability to explore uncharted solutions and tackle challenges that don't have predefined answers.
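As a sketch of the evaluation work this role emphasizes ("establish meaningful criteria for evaluating algorithm performance"), a minimal document-classification-and-evaluation example might look like the following. The document types, keywords, threshold, and sample documents are invented for illustration, not dv01's actual taxonomy or pipeline:

```python
# Toy keyword-based document classifier plus per-label precision/recall.
# All labels, keywords, and documents below are illustrative assumptions.

def classify(text: str) -> str:
    """Route a banking document to an invented type by keyword hits."""
    keywords = {
        "loan_tape": ("loan id", "balance", "fico"),
        "servicing_report": ("delinquency", "servicer", "remittance"),
    }
    scores = {label: sum(kw in text.lower() for kw in kws)
              for label, kws in keywords.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def precision_recall(preds, golds, label):
    """The kind of evaluation criteria the role would define per document type."""
    tp = sum(p == label and g == label for p, g in zip(preds, golds))
    fp = sum(p == label and g != label for p, g in zip(preds, golds))
    fn = sum(p != label and g == label for p, g in zip(preds, golds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

docs = [
    ("Loan ID 123, balance $10,000, FICO 720", "loan_tape"),
    ("Monthly remittance and delinquency summary from servicer", "servicing_report"),
    ("Quarterly investor letter", "unknown"),
]
preds = [classify(text) for text, _ in docs]
golds = [label for _, label in docs]
p, r = precision_recall(preds, golds, "loan_tape")
```

A production version would replace the keyword scorer with a trained model, but the evaluation harness around it keeps the same shape.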
In good faith, our salary range for this role is $145,000-$160,000, but we are not tied to it. The final offer amount will be at the company's sole discretion and determined by multiple factors, including years and depth of experience, expertise, and other business considerations. Our community is fueled by diverse people who welcome differing points of view and the opportunity to learn from each other. Our team is passionate about building a product people love and a culture where everyone can innovate and thrive.
BENEFITS & PERKS:
Unlimited PTO. Unplug and rejuvenate however you want, whether that's vacationing on the beach or taking a mental-health day at home.
$1,000 Learning & Development Fund. No matter where you are in your career, always invest in your future. We encourage you to attend conferences, take classes, and lead workshops. We also host hackathons, brunch & learns, and other employee-led learning opportunities.
Remote-First Environment. People thrive in a flexible and supportive environment that best invigorates them. You can work from your home, cafe, or hotel. You decide.
Health Care and Financial Planning. We offer a comprehensive medical, dental, and vision insurance package for you and your family. We also offer a 401(k) for you to contribute.
Stay active your way! Get $138/month to put toward your favorite gym or fitness membership - wherever you like to work out. Prefer to exercise at home? You can also use up to $1,650 per year through our Fitness Fund to purchase workout equipment, gear, or other wellness essentials.
New Family Bonding. Primary caregivers can take 16 weeks of fully paid leave, while secondary caregivers can take 4 weeks. Returning to work after bringing home a new child isn't easy, which is why we're flexible and empathetic to the needs of new parents.
dv01 is an equal opportunity employer and all qualified applicants and employees will receive consideration for employment opportunities without regard to race, color, religion, creed, sex, sexual orientation, gender identity or expression, age, national origin or ancestry, citizenship, veteran status, membership in the uniformed services, disability, genetic information or any other basis protected by applicable law.
Data Engineer
Data engineer job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role also involves designing and maintaining databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes.
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
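The views and triggers named in the responsibilities above can be sketched in miniature. This uses SQLite via Python's standard library as a stand-in for MS SQL Server (the T-SQL syntax differs), and the table and column names are invented for illustration:

```python
# Minimal sketch: a reporting view plus an automation trigger, SQLite stand-in
# for the MS SQL work described above. Schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT);
    CREATE TABLE audit_log (order_id INTEGER, note TEXT);

    -- A view supporting reporting ("develop views, functions, and stored procedures").
    CREATE VIEW order_totals AS
        SELECT status, COUNT(*) AS n, SUM(amount) AS total
        FROM orders GROUP BY status;

    -- A trigger supporting automation ("create database triggers").
    CREATE TRIGGER log_new_order AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit_log VALUES (NEW.id, 'order created');
    END;
""")
conn.executemany("INSERT INTO orders (amount, status) VALUES (?, ?)",
                 [(100.0, "open"), (250.0, "open"), (75.0, "closed")])
totals = {status: total for status, n, total in conn.execute(
    "SELECT status, n, total FROM order_totals")}
audit_rows = conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
```

In SQL Server the trigger body would use `inserted` rather than `NEW`, but the design pattern is the same.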
Qualifications
Bachelor's degree in Information Technology or a relevant field.
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
Data Engineer
Data engineer job in New York, NY
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience with global financial instruments, including equities, fixed income, options, and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools
Experience in designing, managing, and optimizing Data Warehousing solutions
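The ETL expertise listed above can be illustrated with a minimal extract-transform-load step using only the standard library. The instrument fields, cleaning rules, and data are invented assumptions, not anything from Godel or DL Software:

```python
# Hedged sketch of a minimal ETL pipeline: extract CSV rows, normalize them,
# load into a warehouse table (SQLite standing in for the real target).
import csv
import io
import sqlite3

raw = io.StringIO("ticker,price\nAAPL, 189.5 \nmsft,402.1\n")

def extract(src):
    """Extract: read raw rows from a source (here, an in-memory CSV)."""
    return list(csv.DictReader(src))

def transform(rows):
    """Transform: normalize tickers and coerce prices to floats."""
    return [{"ticker": r["ticker"].strip().upper(),
             "price": float(r["price"])} for r in rows]

def load(rows, conn):
    """Load: write the cleaned rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS prices (ticker TEXT, price REAL)")
    conn.executemany("INSERT INTO prices VALUES (:ticker, :price)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
stored = conn.execute("SELECT ticker, price FROM prices ORDER BY ticker").fetchall()
```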
Senior Data Engineer (Snowflake)
Data engineer job in Parsippany-Troy Hills, NJ
Senior Data Engineer (Snowflake & Python)
1-Year Contract | $60/hour + Benefit Options
Hybrid: On-site a few days per month (local candidates only)
Work Authorization Requirement
You must be authorized to work for any employer as a W2 employee. This is required for this role.
This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.
Overview
We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake.
Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.
What You'll Do
Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
Partner closely with engineering and data teams to identify and implement optimal technical solutions
Build and maintain high-performance, scalable data pipelines and data warehouse architectures
Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
Manage deliverables and priorities effectively in a fast-moving environment
Contribute to data governance practices including metadata management and data lineage
Support analytics and reporting use cases leveraging advanced SQL and analytical functions
Required Skills & Experience
8+ years of experience designing and developing data solutions in an enterprise environment
5+ years of hands-on Snowflake experience
Strong hands-on development skills with SQL and Python
Proven experience designing and developing data warehouses in Snowflake
Ability to diagnose, optimize, and tune SQL queries
Experience with Azure data frameworks (e.g., Azure Data Factory)
Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
Solid understanding of metadata management and data lineage
Hands-on experience with SQL analytical functions
Working knowledge of shell scripting and JavaScript
Experience using Git, Confluence, and Jira
Strong problem-solving and troubleshooting skills
Collaborative mindset with excellent communication skills
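The "SQL analytical functions" requirement above typically means window functions. A small illustration, using SQLite's window-function support as a stand-in for Snowflake (the table and figures are invented):

```python
# Running total per region via SUM(...) OVER (PARTITION BY ...): a typical
# analytical-function pattern. SQLite stands in for Snowflake; data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, day INTEGER, amount REAL);
    INSERT INTO sales VALUES
        ('east', 1, 100), ('east', 2, 150), ('west', 1, 80), ('west', 2, 120);
""")
rows = conn.execute("""
    SELECT region, day, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales ORDER BY region, day
""").fetchall()
```

The same query runs unchanged on Snowflake, which adds further analytical functions (QUALIFY, RATIO_TO_REPORT, and so on) beyond this common core.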
Nice to Have
Experience supporting Pharma industry data
Exposure to Omni-channel data environments
Why This Opportunity
$60/hour W2 on a long-term 1-year contract
Benefit options available
Hybrid structure with limited on-site requirement
High-impact role supporting enterprise data initiatives
Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
Azure Data Engineer
Data engineer job in Weehawken, NJ
· Expert-level skills in writing and optimizing complex SQL
· Experience with complex data modelling, ETL design, and using large databases in a business environment
· Experience with building data pipelines and applications to stream and process datasets at low latencies
· Fluent with Big Data technologies like Spark, Kafka and Hive
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Experience designing and building data pipelines using API ingestion and streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Experience developing NoSQL solutions using Azure Cosmos DB is essential
· Thorough understanding of Azure and AWS Cloud Infrastructure offerings
· Working knowledge of Python is desirable
· Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
· Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
· Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
· Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
· Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
· Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
· Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
· Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
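The data-lineage and cataloging duties in the last bullet can be sketched as a toy dependency registry. The dataset names are invented; a real implementation would live in a catalog service such as Azure Purview rather than application code:

```python
# Toy lineage registry: record which upstream datasets feed each derived
# dataset, and walk the graph to answer "what does this KPI depend on?".
# Dataset names are illustrative assumptions.
lineage = {}

def register(dataset, sources):
    """Catalog a dataset together with its direct upstream sources."""
    lineage[dataset] = set(sources)

def upstream(dataset):
    """All transitive upstream dependencies of a dataset."""
    seen = set()
    stack = [dataset]
    while stack:
        for src in lineage.get(stack.pop(), ()):
            if src not in seen:
                seen.add(src)
                stack.append(src)
    return seen

register("raw_events", [])
register("cleaned_events", ["raw_events"])
register("daily_kpis", ["cleaned_events"])
deps = upstream("daily_kpis")
```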
Best Regards,
Dipendra Gupta
Technical Recruiter
Senior Data Engineer
Data engineer job in New Providence, NJ
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
Job Description
An experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
Work in tandem with our engineering team to identify and implement the most optimal solutions
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
Able to manage deliverables in fast-paced environments
Areas of Expertise
At least 10 years of experience designing and developing data solutions in an enterprise environment
At least 5 years' experience on the Snowflake platform
Strong hands-on SQL and Python development
Experience with designing and developing data warehouses in Snowflake
A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Spark and Scala
Strong hands-on experience with Orchestration Tools e.g. Airflow, Informatica, Automic
Good understanding of metadata and data lineage
Hands-on knowledge of SQL analytical functions
Strong knowledge of and hands-on experience with shell scripting and JavaScript
Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
Good understanding of and exposure to Git, Confluence, and Jira
Good problem-solving and troubleshooting skills
Team player with a collaborative approach and excellent communication skills
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: the USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read our Job Applicant Privacy Policy at apexon.com.
Senior Data Engineer
Data engineer job in New York, NY
Title: Senior Data Engineer
Duration: 12-15 months (possibility of conversion)
W2 Candidates only.
Our client is seeking a Senior Data Engineer to join their team in New York (Downtown WTC preferred) or Boston. This is a long-term contract position with the potential to convert to a full-time employee (FTE) role. The role requires focusing on overseeing third-party fund accounting administration, client life cycle, valuation automation, and driving data-related initiatives.
Key Responsibilities:
• Lead the design, development, and optimization of data architecture, modeling, and pipelines to support fund accounting administration transitions to third parties.
• Oversee and manage third-party vendors, ensuring seamless integration and efficiency in data processes.
• Collaborate with business units (BUs) and stakeholders to gather requirements, refine processes, and implement data solutions.
• Build and maintain robust CI/CD pipelines to ensure scalable and reliable data workflows.
• Utilize Snowflake and advanced SQL to manage and query large datasets effectively.
• Drive data engineering best practices, ensuring high-quality, efficient, and secure data systems.
• Communicate complex technical concepts to non-technical stakeholders, ensuring alignment and clarity.
Must-Have Qualifications:
• Experience: 10+ years in data engineering or related roles.
• Technical Expertise:
o Advanced proficiency in Python and SQL for data processing and pipeline development.
o Experience with additional cloud-based AWS data platforms or tools.
o Strong experience in data architecture, data modeling, and CI/CD pipeline implementation.
o Hands-on expertise with Snowflake for data warehousing and analytics.
• Domain Knowledge: Extensive experience in asset management is mandatory.
• Communication: Exceptional verbal and written communication skills, with the ability to engage effectively with business units and stakeholders.
Nice-to-Have Qualifications:
• Prior experience leading or overseeing third-party vendors in a data-related capacity.
• Familiarity with advanced data orchestration tools or frameworks.
Market Data Engineer
Data engineer job in New York, NY
🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment
I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.
What You'll Do
Own large-scale, time-sensitive market data capture + normalization pipelines
Improve internal data formats and downstream datasets used by research and quantitative teams
Partner closely with infrastructure to ensure reliability of packet-capture systems
Build robust validation, QA, and monitoring frameworks for new market data sources
Provide production support, troubleshoot issues, and drive quick, effective resolutions
What You Bring
Experience building or maintaining large-scale ETL pipelines
Strong proficiency in Python + Bash, with familiarity in C++
Solid understanding of networking fundamentals
Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)
Bonus Skills
Experience working with binary market data protocols (ITCH, MDP3, etc.)
Understanding of high-performance filesystems and columnar storage formats
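The binary-protocol experience in the bonus skills can be sketched with the standard library's `struct` module, in the spirit of feed formats like ITCH. The field layout below is invented for illustration and is not any exchange's actual message specification:

```python
# Encode/decode a fixed-width binary market-data message. The layout is a
# hypothetical assumption: 1-byte type, 8-byte timestamp (ns), 8-byte padded
# symbol, 4-byte price scaled by 10^4, 4-byte size, all big-endian.
import struct

FMT = ">cQ8sII"  # no padding in big-endian mode: 1 + 8 + 8 + 4 + 4 = 25 bytes

def encode(msg_type, ts_ns, symbol, price, size):
    return struct.pack(FMT, msg_type.encode(), ts_ns,
                       symbol.ljust(8).encode(), int(price * 10_000), size)

def decode(buf):
    t, ts, sym, px, sz = struct.unpack(FMT, buf)
    return {"type": t.decode(), "ts_ns": ts,
            "symbol": sym.decode().strip(), "price": px / 10_000, "size": sz}

wire = encode("A", 1_700_000_000_000_000_000, "AAPL", 189.50, 200)
msg = decode(wire)
```

Real capture pipelines parse millions of such messages per second, usually in C++, with Python used for the validation and QA layers this role owns.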
Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco
Data engineer job in New York, NY
Are you a data engineer who loves building systems that power real impact in the world?
A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.
In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.
To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
Hadoop Developer
Data engineer job in Jersey City, NJ
Job Title :: Cloudera/Hadoop Administrator
Rate :: $65/hr on C2C
Duration :: 12+ months
Experience :: 9+ Years
Interview Process: 2 rounds; the final round will be F2F
Looking for:
-Minimum 3 years of hands-on experience as a Cloudera/Hadoop Administrator in production environments.
-Relevant certifications such as Cloudera Certified Administrator for Apache Hadoop are desirable.
-Strong proficiency in Linux/Unix operating systems and command-line tools.
-Proven experience with cluster management tools, particularly Cloudera Manager.
-Solid understanding of security protocols (Kerberos, SSL/TLS) and their implementation in big data environments.
-Excellent problem-solving, analytical, and communication skills.
Seeking an experienced Cloudera Administrator to manage and maintain our enterprise-grade big data platforms. The ideal candidate will ensure the stability, performance, and security of our Cloudera on-premise cluster, collaborating with data engineering, databases, networks, and application teams to support data-driven initiatives.
Main areas of responsibilities:
1. Cluster Management: Deploy, configure, manage, and maintain Cloudera Hadoop/CDP clusters across three environments.
2. Monitoring and Performance Tuning: Monitor cluster connectivity, health, and performance using tools like Cloudera Manager, Grafana, and Splunk. Proactively identify and resolve performance bottlenecks, tune configurations (HDFS, YARN, Spark, Hive, etc.), and conduct capacity planning.
3. Security and Compliance: Implement and manage cluster security configurations, including Kerberos, Sentry/Ranger for authorization, HDFS ACLs, and integrate with enterprise IAM/LDAP.
4. Automation and Scripting: Develop and maintain automation scripts using Python, Bash, or Perl for routine administrative tasks, configuration management, and streamlining operations.
5. Troubleshooting and Support: Provide expert-level troubleshooting and technical support for issues related to the Hadoop ecosystem components and big data applications, including participation in an on-call rotation.
6. Upgrades and Patching: Perform installations, patching, and upgrades of Hadoop software and Cloudera components while ensuring minimal downtime.
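The routine automation described in item 4 can be sketched as a small health-check script. The hostnames, numbers, and threshold are invented; a production version would pull live metrics from the Cloudera Manager API rather than a hard-coded dict:

```python
# Sketch of a routine admin automation task: flag cluster hosts whose disk
# usage meets or exceeds a threshold, worst offenders first. Hosts and
# percentages are illustrative assumptions.
THRESHOLD_PCT = 85

def flag_hosts(usage_by_host, threshold=THRESHOLD_PCT):
    """Return (host, pct) pairs at or above the threshold, sorted descending."""
    over = [(host, pct) for host, pct in usage_by_host.items() if pct >= threshold]
    return sorted(over, key=lambda hp: hp[1], reverse=True)

usage = {"datanode-01": 91, "datanode-02": 72, "namenode-01": 88}
alerts = flag_hosts(usage)
```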
Cloud Data Engineer
Data engineer job in New York, NY
Title: Enterprise Data Management - Data Cloud, Senior Developer I
Duration: FTE/Permanent
Salary: $130k-$165k
The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.
Responsibilities:
Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications.
Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools.
Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
Champion data engineering standards, best practices, and governance across the enterprise.
Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.
Qualifications:
Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
10+ years in data engineering, cloud platform development, or analytics engineering.
Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
Preferred: Experience containerizing data workloads with Docker and Kubernetes.
Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
Bonus: Background in data mesh designs and automated data access systems.
Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow.
Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
Bonus: Exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
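The semantic-layer emphasis in this role can be illustrated in miniature: business metrics defined once as data, with SQL generated for consumers. The metric names and table are invented, and a real implementation would live in dbt/Snowflake rather than ad-hoc Python:

```python
# Toy semantic layer: declarative metric definitions compiled to SQL, so
# self-service and AI consumers share one definition of each metric.
# Metric names and the fact table are illustrative assumptions.
METRICS = {
    "gross_revenue": {"expr": "SUM(amount)", "table": "fct_orders"},
    "order_count":   {"expr": "COUNT(*)",    "table": "fct_orders"},
}

def compile_metric(name, group_by=None):
    """Generate the SQL for a named metric, optionally grouped by a dimension."""
    m = METRICS[name]
    cols = f"{group_by}, " if group_by else ""
    sql = f"SELECT {cols}{m['expr']} AS {name} FROM {m['table']}"
    if group_by:
        sql += f" GROUP BY {group_by}"
    return sql

query = compile_metric("gross_revenue", group_by="region")
```

Centralizing the definition means a dashboard, a notebook, and an AI agent asking for `gross_revenue` all get the same SQL, which is the reusability argument the posting makes.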
Sr Data Modeler with Capital Markets/ Custody
Data engineer job in Jersey City, NJ
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************
Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
Work Location: Jersey City, NJ (Onsite, 5 days/week)
Job Description:
We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain.
The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model.
A software development background is not required.
Key Responsibilities
Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model.
Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
Champion a detail-oriented and documentation-first culture within the data modeling team.
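The reverse-engineering work described above can be illustrated in miniature. The sketch below (Python with the built-in sqlite3 module; the in-memory database and table names are invented for illustration, and real work would target an enterprise RDBMS and a modeling tool such as ERwin) recovers entities and foreign-key relationships directly from a database catalog:

```python
import sqlite3

# Build a tiny capital-markets-flavored schema (illustrative names only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE account  (account_id INTEGER PRIMARY KEY,
                           customer_id INTEGER REFERENCES customer(customer_id));
    CREATE TABLE trade    (trade_id INTEGER PRIMARY KEY,
                           account_id INTEGER REFERENCES account(account_id),
                           instrument TEXT, quantity REAL);
""")

def reverse_engineer(conn):
    """Recover entities and their relationships from the catalog."""
    model = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        fks = conn.execute(f"PRAGMA foreign_key_list({t})").fetchall()
        # Each FK row: (id, seq, referenced_table, from_col, to_col, ...)
        model[t] = [(fk[3], fk[2], fk[4]) for fk in fks]
    return model

model = reverse_engineer(conn)
# trade.account_id -> account.account_id;
# account.customer_id -> customer.customer_id
```

In practice the recovered relationships would be compared against the documented logical model to surface drift, undocumented entities, and candidates for rationalization.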
Qualifications
Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
Strong skills in data analysis using SQL and/or other data analysis tools.
Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
Ability to mentor and lead junior team members in data modeling best practices.
Passion for detail, documentation, and continuous improvement.
Software development background is not required.
Preferred Skills
Experience with data modeling tools (e.g., ER/Studio, ERwin, Power Designer).
Familiarity with capital markets business processes and data flows.
Knowledge of regulatory and compliance requirements in financial data management.
Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Azure Data Engineer
Data engineer job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
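One common technique behind the pipeline-optimization responsibilities above is high-watermark incremental loading: each run processes only rows changed since the previous run, rather than a full reload. A minimal sketch in plain Python (the records and field names are invented for illustration; this is not the ADF or Databricks API):

```python
# Hypothetical change-tracked source rows (ISO timestamps sort lexically).
source = [
    {"id": 1, "updated_at": "2024-05-01T10:00"},
    {"id": 2, "updated_at": "2024-05-02T09:30"},
    {"id": 3, "updated_at": "2024-05-03T08:15"},
]

def incremental_extract(rows, watermark):
    """Return rows changed since `watermark`, plus the advanced watermark."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

# Only rows newer than the stored watermark are re-processed.
fresh, wm = incremental_extract(source, "2024-05-01T23:59")
```

In a real pipeline the watermark would be persisted (e.g., in a control table) between runs, and the comparison column chosen from a reliable change-tracking field.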
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Data Analytics Engineer
Data engineer job in Somerset, NJ
Client: Manufacturing company
Type: Direct hire
Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets.
This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams.
This role is on-site five days per week in Somerset, NJ.
Key Responsibilities
Power BI Reporting & Administration
Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
Develop and maintain data models to ensure accuracy, consistency, and reliability
Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
Optimize Power BI solutions for performance, scalability, and ease of use
ETL & Data Warehousing
Design and maintain data warehouse structures, including schema and database layouts
Develop and support ETL processes to ensure timely and accurate data ingestion
Integrate data from multiple systems while ensuring quality, consistency, and completeness
Work closely with database administrators to optimize data warehouse performance
Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed
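As a rough sketch of the warehouse-loading pattern described above, the following uses Python's built-in sqlite3 to upsert a dimension table and insert facts keyed by surrogate keys (the table and column names are invented for illustration; production work would target the enterprise warehouse and an ETL tool such as SSIS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal star schema: one dimension plus a fact table keyed to it.
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY,
                              product_name TEXT UNIQUE);
    CREATE TABLE fact_sales  (product_key INTEGER REFERENCES dim_product,
                              sale_date TEXT, amount REAL);
""")

def load(conn, rows):
    """Upsert the dimension, then insert facts with surrogate keys."""
    for product_name, sale_date, amount in rows:
        conn.execute(
            "INSERT OR IGNORE INTO dim_product (product_name) VALUES (?)",
            (product_name,))
        (key,) = conn.execute(
            "SELECT product_key FROM dim_product WHERE product_name = ?",
            (product_name,)).fetchone()
        conn.execute("INSERT INTO fact_sales VALUES (?, ?, ?)",
                     (key, sale_date, amount))

load(conn, [("widget", "2024-01-02", 19.99), ("gizmo", "2024-01-02", 5.00),
            ("widget", "2024-01-03", 19.99)])
# Consumption query: per-product totals via the surrogate key join.
total_by_product = dict(conn.execute(
    "SELECT product_name, SUM(amount) FROM fact_sales f "
    "JOIN dim_product d USING (product_key) GROUP BY product_name"))
```

The same shape (conform the dimension first, then resolve surrogate keys for facts) carries over to Power BI-facing warehouse models regardless of the loading tool.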
Training & Documentation
Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
Manage data definitions, lineage documentation, and data cataloging for all enterprise data models
Project Management
Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
Collaborate with key business stakeholders to ensure departmental reporting needs are met
Record meeting notes in Confluence and document project updates in Jira
Data Governance
Implement and enforce data governance policies to ensure data quality, compliance, and security
Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness
Routine IT Functions
Resolve Help Desk tickets related to reporting, dashboards, and BI tools
Support general software and hardware installations when needed
Other Responsibilities
Manage email and phone communication professionally and promptly
Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
Perform additional assigned duties as needed
Qualifications
Required
Minimum of 3 years of relevant experience
Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
Experience with cloud-based BI environments (Azure, AWS, etc.)
Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
Proficiency in SQL for data extraction, manipulation, and transformation
Strong knowledge of DAX
Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
Strong analytical, problem-solving, and documentation skills
Excellent written and verbal communication abilities
High attention to detail and strong self-review practices
Effective time management and organizational skills; ability to prioritize workload
Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
Data Engineer
Data engineer job in New York, NY
Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.
Responsibilities and Duties
Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
Ensure consistent data hygiene, normalization, and enrichment across source systems.
Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
Own the documentation of data schemas, definitions, lineage, and data quality controls.
Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
Monitor pipeline performance and proactively resolve data discrepancies or failures.
Contribute to architectural decisions related to internal data infrastructure and tools.
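A minimal sketch of the hygiene-and-consolidation work described above: normalizing a join key (here, email) so records from two source systems merge cleanly. The field names and records are invented for illustration; a real pipeline would read from the Salesforce and JIRA APIs rather than literals:

```python
# Hypothetical raw records pulled from two source systems.
salesforce_rows = [{"Email": "Ana.Diaz@Example.com ", "Stage": "Closed Won"}]
jira_rows       = [{"assignee_email": "ana.diaz@example.com",
                    "status": "done"}]

def normalize_email(value):
    """Canonicalize emails so records join cleanly across systems."""
    return value.strip().lower()

def consolidate(salesforce_rows, jira_rows):
    """Merge per-person activity from both systems, keyed by clean email."""
    people = {}
    for row in salesforce_rows:
        key = normalize_email(row["Email"])
        people.setdefault(key, {})["crm_stage"] = row["Stage"]
    for row in jira_rows:
        key = normalize_email(row["assignee_email"])
        people.setdefault(key, {})["ticket_status"] = row["status"]
    return people

# One unified record despite differing casing/whitespace in the sources.
people = consolidate(salesforce_rows, jira_rows)
```

In a production stack this normalization step would typically live in a dbt staging model or a transform task in the orchestrator, so every downstream dataset inherits the same clean keys.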
Requirements
3-5 years of experience as a data engineer, analytics engineer, or similar role.
Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
Proficiency in Python or another scripting language for data manipulation.
Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
Strong understanding of data governance, documentation, and schema management.
Excellent communication skills and ability to work cross-functionally.
Benefits
Flexible work arrangements (including hybrid mode)
Great Paid Time Off (PTO) policy
Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
401(k) retirement plan
Access to HSA-compatible plans
Pre-tax commuter benefits
Employee Assistance Program (EAP)
Opportunities for professional growth and development.
A supportive, dynamic, and inclusive work environment.
Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.
The compensation range for this role is $75,000 to $80,000 USD.
Data Center Architect
Data engineer job in New York, NY
Seeking an experienced Data Center Architect to lead enterprise-scale data center architecture, modernization, and transformation initiatives. This role serves as the technical authority across design, migration, operations transition, and stakeholder engagement in highly regulated environments.
Key Responsibilities
Lead end-to-end data center architecture for design, build, migration, and operational transition
Architect and modernize compute, storage/SAN, network/WAN, backup, voice, and physical data center facilities
Eliminate single points of failure and modernize legacy environments
Drive data center modernization, including legacy-to-modern migrations (e.g., tape to Commvault, UNIX transitions, hybrid/cloud models)
Design physical data center layouts including racks, power, cooling, cabling, grounding, and space planning
Own project lifecycle: requirements, architecture, RFPs, financial modeling, installation, commissioning, cutover, and migrations
Develop CapEx/OpEx forecasts, cost models, and executive-level business cases
Ensure operational readiness, documentation, lifecycle management, and smooth handoff to operations teams
Design and enhance monitoring, observability, automation, KPIs, and alerting strategies
Ensure compliance with security, audit, and regulatory standards
Lead capacity planning, roadmap development, and SLA/KPI definition
Act as a trusted SME, collaborating with infrastructure, application, operations, vendors, and facilities teams
Required Skills & Experience
Enterprise data center architecture leadership experience
Strong expertise in compute (physical/virtual), storage/SAN, networking, backup, and facilities design
Hands-on experience with data center modernization and hybrid/cloud integration (AWS preferred)
Strong understanding of monitoring, automation, and DevOps-aligned infrastructure practices
Proven experience with financial planning, cost modeling, and executive presentations
Excellent communication and stakeholder management skills
Experience working in regulated or public-sector environments preferred
Big Data Developer
Data engineer job in Jersey City, NJ
Design Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
Implement Spark-based ETL processing frameworks
Implement big data pipelines for data ingestion, storage, processing, and consumption
Modify Informatica-Teradata and Unix-based data pipelines
Enhance Talend-Hive/Spark and Unix-based data pipelines
Develop and deploy Scala/Python-based Spark jobs for ETL processing
Strong SQL and data warehousing (DWH) concepts
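The ingestion, processing, and consumption stages above can be sketched in miniature. This pure-Python stand-in (a real implementation would be a Scala or PySpark job over HDFS/Hive tables; the CSV input and field names are invented) shows the extract → validate/transform → aggregate shape:

```python
import csv
import io

# Stand-in for the ingestion layer: raw semi-structured input (CSV here).
raw = "trade_id,symbol,qty\n1,AAPL,100\n2,MSFT,\n3,AAPL,50\n"

def extract(text):
    """Parse the raw feed into records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop records failing validation and cast types (processing stage)."""
    clean = []
    for r in rows:
        if r["qty"]:  # reject rows with a missing quantity
            clean.append({"trade_id": int(r["trade_id"]),
                          "symbol": r["symbol"], "qty": int(r["qty"])})
    return clean

def load(rows):
    """Aggregate for the consumption layer (per-symbol totals)."""
    totals = {}
    for r in rows:
        totals[r["symbol"]] = totals.get(r["symbol"], 0) + r["qty"]
    return totals

totals = load(transform(extract(raw)))  # the invalid MSFT row is dropped
```

In Spark the same three stages map onto a read (`spark.read`), a filtered/typed transformation, and an aggregation written out for consumers.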
Data Architect
Data engineer job in Piscataway, NJ
Data Solution Architect: The ideal candidate will have a deep understanding of Microsoft data services, including Azure Databricks, Azure Data Factory (ADF), Azure Synapse, and ETL/ELT processes. This role focuses on designing, developing, and maintaining cloud-based data pipelines and solutions to drive our analytics and business intelligence capabilities.
Develop modern data solutions and architecture for cloud-native data platforms.
Build cost-effective infrastructure in Databricks and orchestrate workflows using Databricks/ADF.
Lead data strategy sessions focused on scalability, performance, and flexibility.
Collaborate with customers to implement solutions for data modernization.
Should have 14+ years of experience, with the last 4 years implementing cloud-native, end-to-end data solutions in Databricks, from ingestion to consumption, supporting a variety of needs such as modern data warehousing, BI, insights, and analytics.
Should have experience architecting and implementing end-to-end modern data solutions using Azure and advanced data processing frameworks such as Databricks.
Experience with Databricks, PySpark, and modern data platforms.
Proficiency in cloud-native architecture and data governance.
Strong experience in migrating from on-premises to cloud solutions (Spark, Hadoop to Databricks).
Good understanding of, and at least one implementation experience with, data engineering processing substrates such as ETL tools, Kafka, and ELT techniques.
Experience designing and implementing data mesh architectures and data products is an added advantage.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
SAP Data Migration Developer
Data engineer job in Englewood, NJ
SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate
This key role is responsible for the development and configuration of the SAP Data Services platform within the client's corporate technology organization, to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.
KEY RESPONSIBILITIES -
Responsible for SAP Data Services development, design, job creation, and execution.
Responsible for efficient design, performance tuning, and ensuring timely data processing, validation, and verification.
Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects).
Responsible for data conversion using staging tables, working with SAP teams on data loads into SAP S4 and MDG environments.
Responsible for building validation rules, scorecards, and data for consumption in Information Steward, pursuant to conversion rules in the functional specifications.
Responsible for adhering to project timelines and deliverables and accounting for object delivery across the teams involved.
Take part in meetings and execute plans, designs, and custom solution development within the client's O&T Engineering scope.
Work in all facets of SAP Data Migration projects with focus on SAP S4 Data Migration using SAP Data Services Platform
Hands-on development experience with ETL from legacy SAP ECC environment, conversions and jobs.
Demonstrate capabilities with performance tuning, handling large data sets.
Understand SAP tables, fields & load processes into SAP S4, MDG systems
Build validation rules, customize, and deploy Information Steward scorecards, data reconciliation and validation
Be a problem solver and build robust conversion and validation processes per requirements.
SKILLS AND EXPERIENCE
6-8 years of experience in SAP Data Services application as a developer
At least 2 SAP S4 conversion projects using DMC and staging tables, including updates to SAP Master Data Governance
Good communication skills and the ability to deliver key objects on time and support testing and mock cycles
4-5 Years development experience in SAP Data Services 4.3 Designer, Information Steward
Taking ownership and ensuring high quality results
Active in seeking feedback and making necessary changes
Specific previous experience -
Proven experience in implementing SAP Data Services in a multinational environment.
Experience in design of data loads of large volumes to SAP S4 from SAP ECC
Must have used HANA Staging tables
Experience in developing Information Steward for Data Reconciliation & Validation (not profiling)
REQUIREMENTS
Adhere to the work availability schedule as noted above and be on time for meetings
Strong written and verbal communication in English
Senior Data Engineer
Data engineer job in New York, NY
Our client is a growing fintech software company headquartered in New York, NY. They have several hundred employees and are in growth mode.
They are currently looking for a Senior Data Engineer with 6+ years of overall professional experience. Qualified candidates will have hands-on experience with Python (6 years), SQL (6 years), dbt (3 years), AWS (Lambda, Glue), Airflow, and Snowflake (3 years), along with a BS in Computer Science and strong CS fundamentals.
The Senior Data Engineer will work in a collaborative team environment and will be responsible for building, optimizing, and scaling ETL data pipelines, dbt models, and data warehouses. Excellent communication and organizational skills are expected.
This role features competitive base salary, equity, 401(k) with company match and many other attractive perks. Please send your resume to ******************* for immediate consideration.