Data Engineer
Data engineer job in Newark, NJ
Data Engineer
Duration: 6 months (with possible extension)
Visas: USC, GC, GC EAD
Contract type: W2 only (no H1B or H4 EAD)
Responsibilities
Prepares data for analytical or operational uses
Builds data pipelines to pull information from different source systems, integrating, consolidating, and cleansing data and structuring it for use in applications
Creates interfaces and mechanisms for the flow and access of information.
Required Skills
ETL
AWS
Data analysis
Multisource data gathering
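For illustration, a minimal sketch of the kind of multi-source pipeline these responsibilities describe: pull records from two source systems, consolidate and cleanse them, and stage the structured result in S3. The API URL, bucket, file, and column names are all hypothetical.

```python
# Hypothetical multi-source ETL sketch: consolidate two feeds and stage to S3.
import io

import boto3
import pandas as pd
import requests

API_URL = "https://api.example.com/orders"  # hypothetical source system
BUCKET = "analytics-staging"                # hypothetical target bucket

def extract() -> pd.DataFrame:
    api_records = pd.DataFrame(requests.get(API_URL, timeout=30).json())
    file_records = pd.read_csv("legacy_orders.csv")  # second source system
    return pd.concat([api_records, file_records], ignore_index=True)

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="order_id")           # integrate/consolidate
    df["order_date"] = pd.to_datetime(df["order_date"])  # normalize types
    return df.dropna(subset=["order_id", "order_date"])

def load(df: pd.DataFrame) -> None:
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # structure the data for analytical use
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key="curated/orders.parquet", Body=buf.getvalue()
    )

if __name__ == "__main__":
    load(cleanse(extract()))
```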
Data Engineer
Data engineer job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes.
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
Qualification
Bachelor's Degree in Information Technology or a relevant field
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views (see the sketch after this list).
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
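For illustration, here is the kind of query the SQL bullet above refers to: a join plus SUM/AVG/COUNT aggregation published as a view, executed against MS SQL from Python via pyodbc. The DSN, schema, and table names are hypothetical.

```python
# Hypothetical join + aggregation exposed as a view on MS SQL Server.
import pyodbc

QUERY = """
CREATE OR ALTER VIEW dbo.v_sales_by_region AS  -- requires SQL Server 2016 SP1+
SELECT r.region_name,
       COUNT(o.order_id)  AS order_count,      -- aggregation
       SUM(o.order_total) AS total_sales,
       AVG(o.order_total) AS avg_order_value
FROM dbo.orders o
JOIN dbo.regions r ON r.region_id = o.region_id  -- table join
GROUP BY r.region_name;
"""

conn = pyodbc.connect("DSN=erp_db")  # hypothetical ODBC data source
conn.cursor().execute(QUERY)
conn.commit()
```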
Data Engineer
Data engineer job in Jersey City, NJ
Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.
Role: Data Engineer
Location: Merrimack, NH/Smithfield, RI/Jersey City, NJ
Duration: Full-Time/W2
Job Description:
Must-Haves:
Python for running ETL batch jobs
Heavy SQL for data analysis, validation and querying
AWS, with the ability to move data through the staging layers and into the target databases (a minimal sketch follows this section)
PostgreSQL is the target database, so Postgres experience is required.
Nice to haves:
Snowflake
Java for API development (the team will teach this)
Experience in asset management for domain knowledge.
Production support debugging and processing of vendor data
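A minimal sketch of what the must-haves add up to: a Python batch job that pulls a vendor file from S3, bulk-loads it into staging in the target Postgres, and applies a SQL validation before promoting it. Bucket, schema, and connection details are hypothetical.

```python
# Hypothetical Python ETL batch job: S3 -> Postgres staging -> validated load.
import io

import boto3
import psycopg2

body = boto3.client("s3").get_object(
    Bucket="vendor-data", Key="positions/latest.csv"
)["Body"].read()

conn = psycopg2.connect("dbname=target_db user=etl")
with conn, conn.cursor() as cur:  # connection context manager commits on success
    # Bulk-load the batch into a staging table via COPY.
    cur.copy_expert(
        "COPY staging.positions FROM STDIN WITH CSV HEADER",
        io.StringIO(body.decode("utf-8")),
    )
    # SQL validation: reject the whole batch if required keys are missing.
    cur.execute("SELECT COUNT(*) FROM staging.positions WHERE account_id IS NULL")
    if cur.fetchone()[0] > 0:
        raise ValueError("batch failed validation: NULL account_id rows")
    cur.execute("INSERT INTO prod.positions SELECT * FROM staging.positions")
```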
The Expertise and Skills You Bring
A proven foundation in data engineering - bachelor's degree or higher preferred, 10+ years' experience
Extensive experience with ETL technologies
Design and develop ETL reporting and analytics solutions.
Knowledge of Data Warehousing methodologies and concepts - preferred
Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
RDBMS experience (Snowflake, PostgreSQL) - required
Knowledge of cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) - required
Designing and developing low- to moderately-complex data integration solutions - required
Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) will be preferred
Expert in SQL and stored procedures on any relational database
Strong debugging, analysis, and production support skills
Application Development based on JIRA stories (Agile environment)
Demonstrable experience with ETL tools (Informatica, SnapLogic)
Experience in working with Python in an AWS environment
Create, update, and maintain technical documentation for software-based projects and products.
Solving production issues.
Interact effectively with business partners to understand business requirements and assist in generation of technical requirements.
Participate in architecture, technical design, and product implementation discussions.
Working Knowledge of Unix/Linux operating systems and shell scripting
Experience with developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipeline including software configuration management, test automation, version control, static code analysis.
Excellent interpersonal and communication skills
Ability to work with global Agile teams
Proven ability to deal with ambiguity and work in fast paced environment
Ability to mentor junior data engineers.
The Value You Deliver
The associate will help the team design and build best-in-class data solutions using a highly diversified tech stack.
Strong experience of working in large teams and proven technical leadership capabilities
Knowledge of enterprise-level implementations like data warehouses and automated solutions.
Ability to negotiate, influence and work with business peers and management.
Ability to develop and drive a strategy as per the needs of the team
Good to have: Full-Stack Programming knowledge, hands-on test case/plan preparation within Jira
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms (sketched after this list)
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
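The gap-recovery bullet above reduces to tracking feed sequence numbers; here is a language-neutral sketch of the idea (shown in Python for brevity, although the role itself is modern C++, and all names are hypothetical).

```python
# Sketch of sequence-gap detection with replay, the core of reliable feed handling.
class FeedHandler:
    def __init__(self, request_replay):
        self.expected_seq = 1
        self.request_replay = request_replay  # callback to the feed's retransmission channel

    def on_packet(self, seq: int, payload: bytes) -> None:
        if seq < self.expected_seq:
            return  # duplicate or already-replayed packet: drop it
        if seq > self.expected_seq:
            # Gap detected: request the missing range before moving on.
            self.request_replay(self.expected_seq, seq - 1)
        self.process(payload)
        self.expected_seq = seq + 1

    def process(self, payload: bytes) -> None:
        ...  # decode and publish the market data message
```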
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
Data Architect
Data engineer job in Ridgefield, NJ
Immediate need for a talented Data Architect. This is a 12-month contract opportunity with long-term potential and is located in Basking Ridge, NJ (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID: 25-93859
Pay Range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key Skills: ETL, LTMC, SaaS.
5 years as a Data Architect
5 years in ETL
3 years in LTMC
Our client is a leader in the telecom industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Azure Data Engineer
Data engineer job in Weehawken, NJ
· Expert level skills writing and optimizing complex SQL
· Experience with complex data modelling, ETL design, and using large databases in a business environment
· Experience with building data pipelines and applications to stream and process datasets at low latencies
· Fluent with Big Data technologies like Spark, Kafka and Hive
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Designing and building of data pipelines using API ingestion and Streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Experience in developing NoSQL solutions using Azure Cosmos DB is essential
· Thorough understanding of Azure and AWS Cloud Infrastructure offerings
· Working knowledge of Python is desirable
· Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
· Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
· Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
· Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
· Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
· Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
· Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
· Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
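As one small, concrete slice of the stack above, a hedged sketch of pulling raw files out of Azure Blob/Data Lake storage ahead of pipeline processing, using the azure-storage-blob v12 SDK; the connection string, container, and prefix are hypothetical.

```python
# Hypothetical ingestion step: list and download raw blobs for downstream processing.
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONN"])
container = service.get_container_client("raw-ingest")

for blob in container.list_blobs(name_starts_with="sales/2024/"):
    data = container.download_blob(blob.name).readall()
    print(f"{blob.name}: {len(data)} bytes")  # hand off to ADF/Databricks in practice
```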
Best Regards,
Dipendra Gupta
Technical Recruiter
*****************************
Lead Data Engineer
Data engineer job in Roseland, NJ
Job Title: Lead Data Engineer.
Hybrid Role: 3 days/week onsite.
Type: 12-month contract, rolling/extendable.
Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.
Must haves:
AWS.
Databricks.
Lead experience (staff-level candidates may also be considered).
Python.
PySpark (see the sketch after this list).
Contact Center Experience is a nice to have.
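For a sense of the day-to-day toolset, a quick PySpark sketch against the must-haves: read contact-center events from S3, curate them, and publish a consumption-ready table. Paths and table names are hypothetical; on Databricks the SparkSession is provided for you.

```python
# Hypothetical curation job: raw S3 events -> consumption-ready metrics table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("service-data-hub").getOrCreate()

events = spark.read.parquet("s3://contact-center-raw/events/")

curated = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("agent_id", F.to_date("event_ts").alias("event_date"))
    .agg(
        F.count("*").alias("interactions"),
        F.avg("handle_time_sec").alias("avg_handle_time"),
    )
)

# Publish a curated data set for analysts, data scientists, and downstream apps.
curated.write.mode("overwrite").saveAsTable("service_hub.agent_daily_metrics")
```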
Job Description:
As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage. You will design and maintain enterprise data models for contact center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases and help define and maintain enterprise data taxonomy and data catalog. This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams.
TO SUCCEED IN THIS ROLE:
Ability to define and design complex data integration solutions with general direction and stakeholder access.
Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
Highly skilled in RDBMS platforms such as Oracle and SQL Server.
Familiarity with NoSQL DB platforms like MongoDB.
Understanding of data modeling and data engineering, including SQL and Python.
Strong understanding of data quality, compliance, governance and security.
Proficiency in languages such as Python, SQL, and PySpark.
Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
Ability to design and develop scalable data pipelines.
Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
Working experience with CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes.
Detail-oriented with strong analytical, problem-solving, and organizational skills.
Ability to clearly communicate with both technical and business teams.
Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
Knowledge of Agile development methodologies is a plus.
Having a Databricks data engineer associate certification is a plus but not mandatory.
Data Engineer Requirements:
Bachelor's degree in computer science, information technology, or a similar field.
8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
Knowledge of mathematical foundations and statistical analysis.
Strong interpersonal skills.
Excellent communication and presentation skills.
Advanced troubleshooting skills.
Regards,
Purnima Pobbathy
Senior Technical Recruiter
************
********************* | Themesoft Inc
Azure Data Engineer
Data engineer job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Data Engineer
Data engineer job in Jersey City, NJ
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be on-site
You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines and infrastructure; leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility to analysts and data scientists; ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance and tuning to ensure jobs are running at optimal levels and no performance bottleneck.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
• Advanced proficiency in cloud data lakehouse platforms such as AWS data lake services, Databricks, or Hadoop; relational data stores such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
• Advanced proficiency in cloud data warehouses such as Snowflake and AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar (see the sketch after this list)
• Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or streaming; one or more data modeling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc.; Agile methodology; TDD or BDD; and CI/CD tools.
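For the orchestration bullet, a hedged sketch of a minimal Airflow DAG chaining an ingest task into a curation task; the DAG id, schedule, and callables are illustrative, not the team's actual pipeline (the schedule argument assumes Airflow 2.4+).

```python
# Hypothetical two-step daily pipeline expressed as an Airflow DAG.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull source files from S3")  # e.g., trigger a Glue/EMR ingest here

def curate():
    print("run PySpark curation job")   # e.g., submit the Spark job here

with DAG(
    dag_id="data_lake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    curate_task = PythonOperator(task_id="curate", python_callable=curate)
    ingest_task >> curate_task  # curation runs only after ingestion succeeds
```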
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark
Sr Data Modeler with Capital Markets/ Custody
Data engineer job in Jersey City, NJ
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************
Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
Work Location
Jersey City, NJ (Onsite, 5 days / week)
Job Description:
We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain.
The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model.
A software development background is not required.
Key Responsibilities
Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model.
Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
Champion a detail-oriented and documentation-first culture within the data modeling team.
Qualifications
Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
Strong skills in data analysis using SQL and/or other data analysis tools.
Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
Ability to mentor and lead junior team members in data modeling best practices.
Passion for detail, documentation, and continuous improvement.
Software development background is not required.
Preferred Skills
Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
Familiarity with capital markets business processes and data flows.
Knowledge of regulatory and compliance requirements in financial data management.
Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Data Engineer
Data engineer job in Newark, NJ
NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
Role Description
This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
Key Responsibilities
Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
Data Integration: Integrate and transform data using industry-standard tools. Experience required with:
AWS Services: AWS Glue, Data Pipeline, Redshift, and S3.
Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage.
Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
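To make the integration responsibilities concrete, a small illustrative sketch: page through a REST endpoint and stage the records as newline-delimited JSON ready for a warehouse COPY. The endpoint, parameters, and file path are hypothetical.

```python
# Hypothetical REST ingestion: paginate an API and stage records as JSONL.
import json

import requests

API_URL = "https://api.example.com/v1/customers"

def fetch_all(page_size: int = 100):
    page = 1
    while True:
        resp = requests.get(
            API_URL, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return  # an empty page means the API has been fully consumed
        yield from batch
        page += 1

with open("customers.jsonl", "w") as f:
    for record in fetch_all():
        f.write(json.dumps(record) + "\n")  # ready to COPY into Redshift/Snowflake
```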
Required Skills and Experience
Experience: 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
Integration: Experience integrating data via RESTful / GraphQL APIs.
Programming: Proficient in Python for ETL automation and SQL for database management.
Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
Authorization: Must have valid work authorization in the United States.
Salary Range: $65,000- $80,000 per year
Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
Big Data Developer
Data engineer job in Jersey City, NJ
Design Hive/HCatalog data models, including table definitions, file formats, and compression techniques for structured and semi-structured data processing (see the sketch below)
Implement Spark-based ETL frameworks
Implement big data pipelines for data ingestion, storage, processing, and consumption
Modify the Informatica-Teradata and Unix-based data pipelines
Enhance the Talend-Hive/Spark and Unix-based data pipelines
Develop and deploy Scala/Python-based Spark jobs for ETL processing
Strong SQL and data warehousing (DWH) concepts
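A sketch of the Hive-side design work described above: a partitioned, Parquet-backed table defined through Spark SQL with an explicit compression choice, then populated by a Spark job. All names are hypothetical.

```python
# Hypothetical Hive table design + Spark ETL load.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict")

spark.sql("""
    CREATE TABLE IF NOT EXISTS trades_curated (
        trade_id  STRING,
        symbol    STRING,
        notional  DOUBLE
    )
    PARTITIONED BY (trade_date DATE)      -- partitioning for pruned scans
    STORED AS PARQUET                     -- columnar file format
    TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
""")

raw = spark.read.json("hdfs:///landing/trades/")  # semi-structured ingestion
(raw.select("trade_id", "symbol", "notional", "trade_date")
    .write.mode("append")
    .insertInto("trades_curated"))
```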
SAP Data Migration Developer
Data engineer job in Englewood, NJ
SAP S4 Data Migration Developer
Duration: 6 Months
Rate: Competitive Market Rate
This key role is responsible for the development and configuration of the SAP Data Services Platform within the client's corporate technology group to deliver a successful data conversion and migration from SAP ECC to SAP S4 as part of project Keystone.
KEY RESPONSIBILITIES -
Responsible for SAP Data Services development, design, job creation and execution. Responsible for efficient design, performance tuning and ensuring timely data processing, validation & verification.
Responsible for creating content within SAP Data Services for both master and transaction data conversion (standard SAP and custom data objects). Responsible for data conversion using staging tables and for working with SAP teams on data loads in SAP S4 and MDG environments.
Responsible for building validation rules, scorecards, and data for consumption in Information Steward pursuant to conversion rules per the Functional Specifications. Responsible for adhering to project timelines and deliverables and accounting for object delivery across the teams involved. Take part in meetings and execute plans, design, and develop custom solutions within the client's O&T Engineering scope.
Work in all facets of SAP Data Migration projects with focus on SAP S4 Data Migration using SAP Data Services Platform
Hands-on development experience with ETL from legacy SAP ECC environment, conversions and jobs.
Demonstrate capabilities with performance tuning, handling large data sets.
Understand SAP tables, fields & load processes into SAP S4, MDG systems
Build validation rules, customize, and deploy Information Steward scorecards, data reconciliation and validation
Be a problem solver and build robust conversion and validation per requirements.
SKILLS AND EXPERIENCE
6-8 years of experience as an SAP Data Services developer
At least 2 SAP S4 Conversion projects with DMC, Staging Tables & updating SAP Master Data Governance
Good communication skills, ability to deliver key objects on time and support with testing, mock cycles.
4-5 Years development experience in SAP Data Services 4.3 Designer, Information Steward
Taking ownership and ensuring high quality results
Active in seeking feedback and making necessary changes
Specific previous experience -
Proven experience in implementing SAP Data Services in a multinational environment.
Experience in design of data loads of large volumes to SAP S4 from SAP ECC
Must have used HANA Staging tables
Experience in developing Information Steward for Data Reconciliation & Validation (not profiling)
REQUIREMENTS
Adhere to the work availability schedule as noted above and be on time for meetings
Written and verbal communication in English
BI Engineer (Tableau & Power BI - platforms/server)
Data engineer job in Newark, NJ
Job Title: BI Engineer (Tableau & Power BI - platforms/server)
Duration: 12 months long term project
US citizens, Green Card holders, and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1B candidates at this time.
Summary of the job
- Extremely technical, hands-on skills in Power BI, Python, and some Tableau
- Financial, asset management, or banking background
- Fixed Income experience specifically is a big plus
- Azure Cloud
Job Description:
Our Role:
We are looking for an astute, determined professional like you to fulfil a BI Engineering role within our Technology Solutions Group.
You will showcase your success in a fast-paced environment through collaboration, ownership, and innovation.
Your expertise in emerging trends and practices will evoke stimulating discussions around optimization and change to help keep our competitive edge.
This rewarding opportunity will enable you to make a big impact in our organization, so if this sounds exciting, this might be the place for you.
Your Impact:
Build and maintain new and existing applications in preparation for a large-scale architectural migration within an Agile function.
Align with the Product Owner and Scrum Master in assessing business needs and transforming them into scalable applications.
Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, and flat files in varied formats (binary, ASCII).
Help build the new enterprise data warehouse and maintain the existing one.
Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud migration.
Assess the impact of scaling up and scaling out, and ensure sustained data management and data delivery performance.
Build interfaces to support evolving and new applications and accommodate new data sources and types of data.
Your Required Skills:
5+ years of hands-on experience in BI platform administration (Power BI and Tableau)
3+ years of hands-on experience in Power BI/Tableau report development
Experience with both server and desktop-based data visualization tools
Expertise with multiple database platforms, including relational databases (e.g., SQL Server) as well as cloud-based data warehouses such as those on Azure
Fluent with SQL for data analysis
Working experience in a Windows based environment
Knowledge of data warehousing, ETL procedures, and BI technologies
Excellent analytical and problem-solving skills with the ability to think quickly and offer alternatives both independently and within teams.
Exposure working in an Agile environment with Scrum Master/Product owner and ability to deliver
Ability to communicate the status and challenges with the team
Demonstrating the ability to learn new skills and work as a team
Strong interpersonal skills
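As a small example of the SQL-for-analysis skill in this list, a hedged sketch that pulls a fixed-income style aggregate from SQL Server into a DataFrame that a Power BI or Tableau extract could consume; the DSN and table are hypothetical.

```python
# Hypothetical SQL analysis feeding a BI extract.
import pandas as pd
import sqlalchemy

# Hypothetical ODBC DSN for the warehouse (trusted connection).
engine = sqlalchemy.create_engine("mssql+pyodbc://@dw_dsn")

df = pd.read_sql(
    """
    SELECT portfolio,
           SUM(market_value) AS total_mv,
           AVG(duration)     AS avg_duration
    FROM positions
    WHERE asset_class = 'Fixed Income'
    GROUP BY portfolio
    """,
    engine,
)
df.to_csv("fixed_income_summary.csv", index=False)  # source for a BI refresh
```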
A reasonable, good-faith estimate of the minimum and maximum pay rate for this position is $70/hr. to $80/hr.
Senior Dotnet Developer
Data engineer job in Montvale, NJ
Develop, debug, test, and deploy new and existing applications
Help to ensure that developed application solutions satisfy business and technical requirements and that standard testing procedures have been followed
Design, construct, implement and support client-facing software that meets the business requirements
Perform software construction, unit testing and debugging; software construction may include the preparation of new software, reuse of existing code, modification of existing programs, or integration of purchased solutions
Interface with internal and external technical staff to define application solutions and resolve problems as needed
Design and develop custom solutions in a SharePoint environment using the SharePoint Object Model, C#.NET, jQuery, Visual Studio, Team Foundation Server, SQL Server stored procedures, and SQL Server Analysis Services (SSAS)
Act with integrity, professionalism, and personal responsibility to uphold the firm's respectful and courteous work environment.
Qualifications:
Minimum 10 years of experience in .NET web application development
Bachelor's degree from an accredited college/university or equivalent work experience
Experience in user interface design and development of web-based products
Knowledge of Object-Oriented Principles, Patterns and Practices and knowledge of Agile (Scrum) software development methodologies and leading practices
Snowflake Senior Developer with Python, DBT Exp
Data engineer job in Jersey City, NJ
The Senior Technical Specialist will be responsible for managing and optimizing Snowflake, dbt, SQL, and Python processes within the organization. They will play a key role in implementing and maintaining efficient data pipelines, ensuring data integrity, and driving data-driven decision-making processes.
Key Responsibilities
1. Develop and maintain the Snowflake data cloud environment, including optimizing database performance and troubleshooting any issues that arise (see the sketch after this list).
2. Design and implement ETL processes using dbt to transform and load data into Snowflake for analytics and reporting purposes.
3. Write complex SQL queries to extract and manipulate data as required by various business units.
4. Utilize the Python programming language to automate data processes, build data pipelines, and create data visualizations.
5. Collaborate with cross-functional teams to understand data requirements and implement solutions that meet business needs.
6. Ensure data security and governance practices are followed in compliance with industry standards and regulations.
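For item 1, a hedged sketch using the official Snowflake Python connector to run the kind of curation statement dbt would normally manage; the account, credentials, and table names are hypothetical.

```python
# Hypothetical Snowflake maintenance/curation step via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",  # hypothetical account identifier
    user="etl_user",
    password="...",             # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)

cur = conn.cursor()
cur.execute("""
    CREATE OR REPLACE TABLE marts.daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw.orders
    GROUP BY order_date
""")
cur.close()
conn.close()
```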
Skill Requirements
1. Proficiency in the Snowflake data platform, including data warehouse design, implementation, and optimization.
2. Experience with dbt (data build tool) for managing data transformations and modeling within a cloud data warehouse environment.
3. Strong SQL skills with the ability to write and optimize complex queries for data extraction and analysis.
4. Proficient in the Python programming language for data manipulation, automation, and visualization.
5. Knowledge of data governance best practices and experience in implementing data security measures within a cloud environment.
6. Excellent analytical and problem-solving skills with keen attention to detail.
Certifications: SnowPro Core Certification, dbt Fundamentals Certification (preferred).
Senior Python Developer
Data engineer job in Rutherford, NJ
Hello
Our client, one of the leading banks, is looking to hire for the following role. Please share your resume if interested.
Title - Senior Python Developer
Duration - Long term - 2 days onsite (Hybrid)
We cannot do third-party contracting for this role. This is a W2 role with Iris Software.
• Design, develop, and maintain robust Python applications, APIs, and backend services using modern frameworks (see the sketch after this list).
• Demonstrated ability to participate in a global software engineering team while working closely with product management, quality assurance and business analysts.
• Hands on experience developing with Python and frameworks such as Flask, Django, Gunicorn
• Development experience with AWS
• Experience working with SQL technologies such as PostgreSQL, Oracle or equivalent
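A minimal Flask sketch of the backend API work these bullets describe; the endpoint and payload are illustrative only.

```python
# Hypothetical Flask service endpoint; run behind Gunicorn in production.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/accounts/<account_id>")
def get_account(account_id: str):
    # In practice this would query PostgreSQL/Oracle; stubbed for illustration.
    return jsonify({"account_id": account_id, "status": "active"})

if __name__ == "__main__":
    app.run()
```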
About Iris Software Inc.
With 4,000+ associates and offices in India, U.S.A. and Canada, Iris Software delivers technology services and solutions that help clients complete fast, far-reaching digital transformations and achieve their business goals. A strategic partner to Fortune 500 and other top companies in financial services and many other industries, Iris provides a value-driven approach - a unique blend of highly-skilled specialists, software engineering expertise, cutting-edge technology, and flexible engagement models. High customer satisfaction has translated into long-standing relationships and preferred-partner status with many of our clients, who rely on our 30+ years of technical and domain expertise to future-proof their enterprises. Associates of Iris work on mission-critical applications supported by a workplace culture that has won numerous awards in the last few years, including Certified Great Place to Work in India; Top 25 GPW in IT & IT-BPM; Ambition Box Best Place to Work, #3 in IT/ITES; and Top Workplace NJ-USA.
Generative AI Software Engineer
Data engineer job in Jersey City, NJ
Job Title: Generative AI Software Engineer
Duration: 09 months contract (Possible RTH)
Pay Range: $52.00 - $58.00/hr on W2, all-inclusive without benefits
We are seeking a highly skilled Generative AI Engineer with a strong background in Machine Learning, LLMs, and modern AI frameworks to design, build, and deploy intelligent systems and generative models. The ideal candidate will have experience developing end-to-end AI solutions, optimizing model performance, and integrating generative capabilities into production applications. This role will work closely with cross-functional teams including product, research, engineering, and data science.
Key Responsibilities
Design, develop, and deploy Generative AI models, including LLMs, diffusion models, transformers, and multimodal architectures.
Build end-to-end AI/ML pipelines, including data ingestion, preprocessing, training, evaluation, and model deployment.
Fine-tune large language models (LLMs) using domain-specific datasets, prompt engineering, and reinforcement learning techniques (RLHF preferred).
Develop scalable backend systems to support inference, API integrations, and real-time generative workloads.
Collaborate with cross-functional teams to translate business requirements into technical solutions.
Conduct POCs and prototype development for new generative AI capabilities.
Optimize model performance for speed, accuracy, latency, and compute efficiency.
Implement best practices for model monitoring, observability, and drift detection.
Work with vector databases, embeddings, and retrieval-augmented generation (RAG) pipelines.
Ensure compliance with security, ethical AI, data privacy, and responsible AI principles.
Stay up to date with emerging research in generative AI, ML, LLMs, and advanced model architectures.
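To ground the RAG bullet above, a hedged sketch of the pattern: embed documents, retrieve the closest match by cosine similarity, and use it to ground the model's answer. It assumes the OpenAI Python SDK; the model names and documents are illustrative.

```python
# Hypothetical retrieval-augmented generation (RAG) loop.
import numpy as np
from openai import OpenAI

client = OpenAI()
docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q = embed([question])[0]
    # Retrieval: cosine similarity between the question and each document.
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = docs[int(scores.argmax())]
    # Generation: ground the completion in the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```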
Required Qualifications
Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, Engineering, or related field.
6+ years of professional experience in software engineering, machine learning engineering, or AI development.
Strong hands-on experience with Python, PyTorch, TensorFlow, JAX, or related ML frameworks.
Proven experience building and deploying LLMs, GANs, diffusion models, or transformer-based architectures.
Solid understanding of machine learning fundamentals, deep learning, NLP, and generative modeling.
Experience with cloud platforms (AWS, Azure, or GCP) and scalable AI infrastructure (Kubernetes, Docker, serverless, distributed training).
Proficiency in building RESTful APIs, microservices, and backend integrations.
Strong knowledge of MLOps, CI/CD for ML, and model lifecycle management.
Excellent problem-solving, communication, and collaboration skills.
Preferred Qualifications:
Familiarity with multimodal AI (vision + language models).
Hands-on experience with OpenAI, Google Vertex AI, or Azure OpenAI ecosystems.
Background in data engineering or building large-scale data pipelines.
Contributions to open-source AI/ML projects.
Full Stack Hedge Fund Software Engineer (Java/Angular/React)
Data engineer job in Stamford, CT
Focus Capital Markets is supporting its Connecticut-based hedge fund by providing a unique opportunity for a talented senior software engineer to work across verticals within the organization. In this role, you will assist the business by developing front-end and back-end applications, building and scaling APIs, and working with the business to define technical solutions for business requirements. You will work on both sides of the stack, with a Core Java back-end (plus C#/.NET) and the latest versions of Angular on the front-end.
Although the organization is outside of the NYC area, it is just as lucrative and would afford someone career stability, longevity, and growth opportunity within the organization. The parent company is best-of-breed in the hedge fund industry, and there are opportunities to grow from within.
You will work onsite Monday-Thursday.
Requirements:
5+ years of software engineering experience leveraging Angular on the front-end and Core Java or C# on the back-end.
Experience with React is relevant.
Experience with SQL is preferred.
Experience with REST APIs
Bachelor's degree or higher in computer science, mathematics, or a related field.
Must have excellent communication skills
Java Software Engineer
Data engineer job in Jersey City, NJ
*Presently we are unable to sponsor and request applicants to apply who are authorized to work without sponsorship*
Below are the few details of the opportunity.
Job Title: Core Java / Software Engineer
Duration: Contract to Hire
Skills:
Formal training or certification on software engineering concepts and 5+ years' applied experience
Hands-on practical experience in system design, application development, testing, and operational stability
Experience with Core Java, collections, exception handling, generics and multithreading, Spring Boot, JavaScript
Strong understanding of data, data modeling and database methodologies and architecture disciplines
Advanced SQL and data management knowledge
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
Overall knowledge of the Software Development Life Cycle
Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)