AWS Cloud Engineer
Requirements engineer job in Raritan, NJ
MatchPoint is a fast-growing, young, energetic global IT-Engineering services company with clients across the US. We provide technology solutions to various clients like Uber, Robinhood, Netflix, Airbnb, Google, Sephora and more! More recently, we have expanded to working internationally in Canada, China, Ireland, UK, Brazil and India. Through our culture of innovation, we inspire, build, and deliver business results, from idea to outcome. We keep our clients on the cutting edge of the latest technologies and provide solutions by using industry-specific best practices and expertise.
We are excited to be continuously expanding our team. If you are interested in this position, please send over your updated resume. We look forward to hearing from you!
Skills needed:
AWS Services Architecture
Control Tower
AWS Config
IAM Policies (SCP, IDC, Identity)
CloudFormation
AWS networking and VPC configuration
DevOps skills to deploy and manage a new AWS network
Terraform/IaC skills
Python coding skills
BI Engineer (Tableau & Power BI - platforms/server)
Requirements engineer job in Newark, NJ
Job Title: BI Engineer (Tableau & Power BI - platforms/server)
Duration: 12 months long term project
US citizens and Green Card Holders and those authorized to work in the US are encouraged to apply. We are unable to sponsor H-1B candidates at this time.
Summary of the job
- Extremely technical/hands-on skills in Power BI, Python, and some Tableau
- Financial, Asset Management, banking background
- Fixed Income experience specifically is a big plus
- Azure Cloud
Job Description:
Our Role:
We are looking for an astute, determined professional like you to fulfil a BI Engineering role within our Technology Solutions Group.
You will showcase your success in a fast-paced environment through collaboration, ownership, and innovation.
Your expertise in emerging trends and practices will evoke stimulating discussions around optimization and change to help keep our competitive edge.
This rewarding opportunity will enable you to make a big impact in our organization, so if this sounds exciting, then this might be the place for you.
Your Impact:
Build and maintain new and existing applications in preparation for a large-scale architectural migration within an Agile function.
Align with the Product Owner and Scrum Master in assessing business needs and transforming them into scalable applications.
Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, and flat files in varied formats (binary, ASCII).
Help build the new enterprise data warehouse and maintain the existing one.
Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud migration.
Assess the impact of scaling up and scaling out and ensure sustained data management and data delivery performance.
Build interfaces for supporting evolving and new applications and accommodating new data sources and types of data.
Your Required Skills:
5+ years of hands-on experience in BI platform administration (Power BI and Tableau)
3+ years of hands-on experience in Power BI/Tableau report development
Experience with both server and desktop-based data visualization tools
Expertise with multiple database platforms, including relational databases (e.g., SQL Server) as well as cloud-based data warehouses such as Azure
Fluent with SQL for data analysis
Working experience in a Windows based environment
Knowledge of data warehousing, ETL procedures, and BI technologies
Excellent analytical and problem-solving skills with the ability to think quickly and offer alternatives both independently and within teams.
Exposure working in an Agile environment with Scrum Master/Product owner and ability to deliver
Ability to communicate the status and challenges with the team
Demonstrating the ability to learn new skills and work as a team
Strong interpersonal skills
A reasonable, good faith estimate of the minimum and maximum Pay rate for this position is $70/hr. to $80/hr.
GenAI Engineer
Requirements engineer job in Parsippany-Troy Hills, NJ
***This is an architect role. Not a Data Scientist position.*** Local candidates only please. Required three days per week onsite.
GenAI Engineer
Responsibilities:
Design and develop innovative AI/ML solutions in collaboration with Center of Excellence
Develop systems architecture, technical roadmaps, and prototypes
Build, test, and deploy AI models into production leveraging AWS services
Continuously explore new AI techniques and methodologies to drive innovation
Requirements:
Degree in Computer Science, Statistics, or related quantitative field
5+ years of experience architecting and developing AI/ML systems, 1+ year of experience with GenAI systems
Expertise in Python, SQL, PyTorch, TensorFlow, and other ML libraries/frameworks
Experience deploying solutions on cloud platforms like AWS. Familiarity with MLOps & AIOps principles.
Strong communication, collaboration, and coaching skills
Nice to have: Databricks experience
You can expect:
Courageous collaboration across high-performing teams
Opportunity to deliver AI innovations at epic scale
An environment surrounded by curious lifelong learners
A culture of innovation, ownership, and hands-on creativity
Industry leadership in applying AI/ML to transform HR
Belonging in a company committed to equality, diversity, and inclusion
Let's talk if you're ready to architect the next generation of AI!
Salesforce Engineer
Requirements engineer job in Yardley, PA
🚀 We're Hiring: Salesforce Engineer
📍 Hybrid to Yardley, PA OR Madison, WI OR Boise, ID
Professional Experience
3-5 years of hands-on Salesforce engineering/development experience.
Proven success designing scalable Salesforce solutions for sales & commercial teams.
Expertise in Apex, LWC, Visualforce, SOQL, APIs, and integration tools.
Experience implementing AI solutions (Einstein Analytics, predictive modeling, or third-party AI).
Strong experience integrating Salesforce with Pardot, Marketo, HubSpot, or similar platforms.
Experience implementing Salesforce AgentForce and other AI tools to deliver predictive insights and intelligent automation.
Skills & Competencies
Strong analytical and problem-solving abilities.
Excellent communication skills - ability to clearly articulate work across teams is critical.
Experience working in Agile environments (Scrum/Kanban).
Ability to excel both independently and collaboratively in a fast-paced environment.
Gen AI/ML Engineer
Requirements engineer job in Jersey City, NJ
Gen AI/ML Engineer with Data Engineering Exposure
Experience: 8+ Years Preferred
Employee Type: Full Time with Benefits
Job Description:
We are seeking a highly skilled and experienced AI/ML Engineer with a strong background in Machine Learning (ML), Large Language Models (LLMs), Generative AI (GenAI), and Data Engineering. The ideal candidate will have successfully delivered 3-4 end-to-end AI/ML projects, demonstrating expertise in building scalable ML systems and deploying them in production environments. A solid foundation in Python, SQL, PySpark, and NLP technologies is essential. Experience with cloud platforms such as AWS, Azure, or GCP is highly desirable.
Key Responsibilities
Design, develop, and deploy scalable ML/AI solutions, including robust MLOps pipelines for CI/CD, model monitoring, and governance.
Lead the development of LLM and GenAI applications, including text summarization, conversational AI, and entity recognition.
Build and optimize data pipelines using PySpark and SQL for large-scale data processing and feature engineering.
Architect and implement production-grade ML systems with a focus on performance, scalability, and reliability.
Collaborate with cross-functional teams to align AI initiatives with business goals and drive innovation.
Mentor junior engineers and contribute to team-wide knowledge sharing and best practices.
Required Skills & Qualifications
Bachelor's degree in Computer Science, Data Science, or a related field.
7+ years of hands-on experience in ML/AI solution development and deployment.
Proven track record of working on at least 3-4 AI/ML projects from concept to production.
Programming Languages: Python (Pandas, NumPy, PyTorch, TensorFlow), SQL.
MLOps Tools: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines.
GenAI & NLP: Expertise in transformer models (e.g., GPT, BERT), Hugging Face, LangChain.
Data Engineering: Strong experience with PySpark and distributed data processing.
Cloud Platforms: Proficiency in AWS, Azure, or GCP.
Strong problem-solving skills and ability to thrive in a fast-paced, collaborative environment.
Life at Capgemini
Capgemini supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work
Healthcare including dental, vision, mental health, and well-being programs
Financial well-being programs such as 401(k) and Employee Share Ownership Plan
Paid time off and paid holidays
Paid parental leave
Family building benefits like adoption assistance, surrogacy, and cryopreservation
Social well-being benefits like subsidized back-up child/elder care and tutoring
Mentoring, coaching and learning programs
Employee Resource Groups
Disaster Relief
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodations during our recruitment process. If you need assistance or accommodation, please get in touch with your recruiting contact.
Click the following link for more information on your rights as an Applicant **************************************************************************
Salary Transparency:
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $103,330 - $128,656 per year.
This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Data Analytics Engineer
Requirements engineer job in Somerset, NJ
Client: manufacturing company
Type: direct hire
Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets.
This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams.
This role is on-site five days per week in Somerset, NJ.
Key Responsibilities
Power BI Reporting & Administration
Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
Develop and maintain data models to ensure accuracy, consistency, and reliability
Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
Optimize Power BI solutions for performance, scalability, and ease of use
ETL & Data Warehousing
Design and maintain data warehouse structures, including schema and database layouts
Develop and support ETL processes to ensure timely and accurate data ingestion
Integrate data from multiple systems while ensuring quality, consistency, and completeness
Work closely with database administrators to optimize data warehouse performance
Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed
Training & Documentation
Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
Manage data definitions, lineage documentation, and data cataloging for all enterprise data models
Project Management
Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
Collaborate with key business stakeholders to ensure departmental reporting needs are met
Record meeting notes in Confluence and document project updates in Jira
Data Governance
Implement and enforce data governance policies to ensure data quality, compliance, and security
Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness
Routine IT Functions
Resolve Help Desk tickets related to reporting, dashboards, and BI tools
Support general software and hardware installations when needed
Other Responsibilities
Manage email and phone communication professionally and promptly
Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
Perform additional assigned duties as needed
Qualifications
Required
Minimum of 3 years of relevant experience
Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
Experience with cloud-based BI environments (Azure, AWS, etc.)
Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
Proficiency in SQL for data extraction, manipulation, and transformation
Strong knowledge of DAX
Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
Strong analytical, problem-solving, and documentation skills
Excellent written and verbal communication abilities
High attention to detail and strong self-review practices
Effective time management and organizational skills; ability to prioritize workload
Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
Data Engineer
Requirements engineer job in Hamilton, NJ
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
Lead Data Engineer
Requirements engineer job in Roseland, NJ
Job Title: Lead Data Engineer.
Hybrid Role: 3 Times / Week.
Type: 12 Months Contract - Rolling / Extendable Contract.
Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.
Must haves:
AWS.
Databricks.
Lead experience (staff-level experience may also be considered).
Python.
Pyspark.
Contact Center Experience is a nice to have.
Job Description:
As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage. You will design and maintain enterprise data models for contact center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases and help define and maintain enterprise data taxonomy and data catalog. This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams.
TO SUCCEED IN THIS ROLE:
Ability to define and design complex data integration solutions with general direction and stakeholder access.
Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
Highly skilled in RDBMS platforms such as Oracle and SQL Server.
Familiarity with NoSQL DB platforms like MongoDB.
Understanding of data modeling and data engineering, including SQL and Python.
Strong understanding of data quality, compliance, governance and security.
Proficiency in languages such as Python, SQL, and PySpark.
Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
Ability to design and develop scalable data pipelines.
Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
Working experience with CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes.
Detail-oriented with strong analytical, problem-solving, and organizational skills.
Ability to clearly communicate with both technical and business teams.
Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
Knowledge of Agile development methodologies is a plus.
Having a Databricks data engineer associate certification is a plus but not mandatory.
Data Engineer Requirements:
Bachelor's degree in computer science, information technology, or a similar field.
8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
Knowledge of mathematical foundations and statistical analysis.
Strong interpersonal skills.
Excellent communication and presentation skills.
Advanced troubleshooting skills.
Regards,
Purnima Pobbathy
Senior Technical Recruiter
************
| ********************* |Themesoft Inc |
Data Engineer
Requirements engineer job in East Windsor, NJ
🚀 Junior Data Engineer
📝 E-Verified | Visa Sponsorship Available
🔍 About Us:
BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer!
If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment. 🌟
🎓 Qualifications We're Looking For:
Passion for data and a strong desire to learn and grow.
Master's Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field.
Intermediate Python skills (Experience with NumPy, Pandas, etc. is a plus!)
Experience with relational databases like SQL Server, Oracle, or MySQL.
Strong written and verbal communication skills.
Ability to work independently and collaboratively within a team.
🛠️ Your Responsibilities:
Collaborate with analytics teams to deliver reliable, scalable data solutions.
Design and implement ETL/ELT processes to meet business data demands.
Perform data extraction, manipulation, and production from database tables.
Build utilities, user-defined functions, and frameworks to optimize data flows.
Create automated unit tests and participate in integration testing.
Troubleshoot and resolve operational and performance-related issues.
Work with architecture and engineering teams to implement high-quality solutions and follow best practices.
🌟 Why Join BeaconFire?
✅ E-Verified employer
🌍 Work Visa Sponsorship Available
📈 Career growth in data engineering and BI
🤝 Supportive and collaborative work culture
💻 Exposure to real-world, enterprise-level projects
📩 Ready to launch your career in Data Engineering?
Apply now and let's build something amazing together! 🚀
Azure Data Engineer
Requirements engineer job in Warren, NJ
Job Title: Data Engineer - SQL, Azure, ADF (Commercial Insurance)
Experience: 12-20 Years
Job Type: Contract
Required Skills: SQL, Azure, ADF, Commercial Insurance
We are seeking a highly skilled Data Engineer with strong experience in SQL, Azure Data Platform, and Azure Data Factory, preferably within the Insurance domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, integrating data from multiple insurance systems, and enabling analytical and reporting capabilities for underwriting, claims, policy, billing, and risk management teams.
Required Skills & Experience
Minimum 12+ years of experience in Data Engineering or related roles.
Strong expertise in:
SQL, T-SQL, PL/SQL
Azure Data Factory (ADF)
Azure SQL, Synapse, ADLS
Data modeling for relational and analytical systems.
Hands-on experience with ETL/ELT development and complex pipeline orchestration.
Experience with Azure DevOps, Git, CI/CD pipelines, and DataOps practices.
Understanding of insurance domain datasets: policy, premium, claims, exposures, brokers, reinsurers, underwriting workflows.
Strong analytical and problem-solving skills, with the ability to handle large datasets and complex transformations.
Preferred Qualifications
Experience with Databricks / PySpark for large-scale transformations.
Knowledge of Commercial Property & Casualty (P&C) insurance.
Experience integrating data from Guidewire ClaimCenter/PolicyCenter, DuckCreek, or similar platforms.
Exposure to ML/AI pipelines for underwriting or claims analytics.
Azure certifications such as:
DP-203 (Azure Data Engineer)
AZ-900, AZ-204, AI-900
Data Engineer
Requirements engineer job in Jersey City, NJ
Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.
Role: Data Engineer
Location: Merrimack, NH/Smithfield, RI/Jersey City, NJ
Duration: Full-Time/W2
Job Description:
Must-Haves:
Python for running ETL batch jobs
Heavy SQL for data analysis, validation and querying
AWS and the ability to move the data through the data stages and into their target databases.
The Postgres database is the target, so that is required.
Nice to haves:
Snowflake
Java for API development is a nice to have (will teach this)
Experience in asset management for domain knowledge.
Production support debugging and processing of vendor data
The Expertise and Skills You Bring
A proven foundation in data engineering - bachelor's degree preferred, 10+ years' experience
Extensive experience with ETL technologies
Design and develop ETL reporting and analytics solutions.
Knowledge of Data Warehousing methodologies and concepts - preferred
Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
RDBMS experience (Snowflake, PostgreSQL) - required
Knowledge of Cloud platforms and Services (AWS - IAM, EC2, S3, Lambda, RDS) - required
Designing and developing low- to moderate-complexity data integration solutions - required
Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) will be preferred
Expert in SQL and stored procedures on any relational database
Strong debugging, analysis, and production support skills
Application Development based on JIRA stories (Agile environment)
Demonstrable experience with ETL tools (Informatica, Snaplogic)
Experience in working with Python in an AWS environment
Create, update, and maintain technical documentation for software-based projects and products.
Solving production issues.
Interact effectively with business partners to understand business requirements and assist in generation of technical requirements.
Participate in architecture, technical design, and product implementation discussions.
Working Knowledge of Unix/Linux operating systems and shell scripting
Experience with developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipeline including software configuration management, test automation, version control, static code analysis.
Excellent interpersonal and communication skills
Ability to work with global Agile teams
Proven ability to deal with ambiguity and work in a fast-paced environment
Ability to mentor junior data engineers.
The Value You Deliver
The associate will help the team design and build best-in-class data solutions using a highly diversified tech stack.
Strong experience of working in large teams and proven technical leadership capabilities
Knowledge of enterprise-level implementations like data warehouses and automated solutions.
Ability to negotiate, influence and work with business peers and management.
Ability to develop and drive a strategy as per the needs of the team
Good to have: Full-Stack Programming knowledge, hands-on test case/plan preparation within Jira
Data Engineer
Requirements engineer job in Newark, NJ
Data Engineer
Duration: 6 months (with possible extension)
Visas: USC, GC, GC EAD
Contract type- W2 only (No H1b or H4EAD)
Responsibilities
Prepares data for analytical or operational uses
Builds data pipelines to pull information from different source systems, integrating, consolidating, and cleansing data and structuring it for use in applications
Creates interfaces and mechanisms for the flow and access of information.
Required Skills
ETL
AWS
Data analysis
Multisource data gathering
Azure Data Engineer
Requirements engineer job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Azure DevOps Engineer
Requirements engineer job in Jersey City, NJ
About US:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************
Job Title: Azure DevOps Engineer
Work Location
Jersey City, NJ
Job Description:
1. Extensive hands-on experience with GitHub Actions, writing workflows in YAML using reusable templates
2. Extensive hands-on experience with application CI/CD pipelines, both for Azure and on-prem, across different frameworks
3. Hands-on experience with Azure DevOps and CI/CD pipeline migration programs, preferably from Azure DevOps to GitHub Actions
4. Proficiency in integrating and consuming REST APIs to achieve automation through scripting
5. Hands-on experience with at least one scripting language and out-of-the-box automations for platforms like PeopleSoft, SharePoint, MDM, etc.
6. Hands-on experience with CI/CD of databases
7. Good to have: experience with infrastructure-as-code, including ARM templates, Terraform, Azure CLI, and Azure PowerShell modules
8. Exposure to monitoring tools like ELK, Prometheus, Grafana
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation like an annual performance-based bonus, sales incentive pay and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Senior Data Engineer
Requirements engineer job in Iselin, NJ
Sr. Data Engineer (Snowflake, Databricks, Python, Pyspark, SQL and Banking)
Iselin, NJ (Need local profiles only)
In-Person interview will be required.
Overall 11+ years of experience; recent banking domain experience required.
Only W2 & Visa Independent candidates
Required experience:
Job Description:
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic Team.
Responsibilities:
Understand technical specifications
Business requirements discussion with business analyst and business users
Python/SQL Server/Snowflake/Databricks application development and system design
Develop and maintain data models and schemas to support data integration and analysis.
Implement data quality and validation checks to ensure accuracy and consistency of data.
Execution of UT and SIT with business analysts to ensure high-quality testing
Support for UAT with business users
Production support and maintenance of application platform
Qualifications:
General:
Around 12+ years of IT industry experience
Agile methodology and SDLC processes
Design and Architecture experience
Experience working in global delivery model (onshore/offshore/nearshore)
Strong problem-solving and analytical skills
Self-starter, collaborative team player and works with minimal guidance
Strong communication skills
Technical Skills:
Mandatory (Strong) -
Python, SQL server and relational database concepts, Azure Databricks, Snowflake, Scheduler (Autosys/Control-M), ETL, CI/CD
Plus:
PySpark
Financial systems/capital markets/credit risk/regulatory application development experience
Senior Data Engineer
Requirements engineer job in New Providence, NJ
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
Job Description
Experienced Data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
Work in tandem with our engineering team to identify and implement the most optimal solutions
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
Able to manage deliverables in fast paced environments
Areas of Expertise
At least 10 years of experience in the design and development of data solutions in an enterprise environment
At least 5+ years' experience on Snowflake Platform
Strong hands-on SQL and Python development
Experience with designing and developing data warehouses in Snowflake
A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark and Scala
Strong hands-on experience with orchestration tools, e.g., Airflow, Informatica, Automic
Good understanding of metadata and data lineage
Hands-on knowledge of SQL analytical functions
Strong knowledge and hands-on experience in shell scripting and JavaScript
Able to demonstrate experience with software engineering practices including CI/CD, Automated testing and Performance Engineering.
Good understanding and exposure to Git, Confluence and Jira
Good problem solving and troubleshooting skills.
Team player, collaborative approach and excellent communication skills
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
Senior Data Engineer - MDM
Requirements engineer job in Iselin, NJ
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our challenge:
We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.
Additional Information
The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within Iselin, NJ is $135K to $150K/year & benefits (see below).
Key Responsibilities:
Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM, etc.) and provide objective recommendations aligned with business requirements.
Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
Implement data integration pipelines leveraging modern data engineering tools and practices.
Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
Ensure compliance with data governance, data privacy, and security standards.
Support CI/CD pipelines for continuous integration and deployment of data solutions.
Qualifications:
12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
Strong functional knowledge of reference data sources and domain-specific data standards.
Expertise in Python, Pyspark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
Familiarity with CI/CD practices, tools, and automation pipelines.
Ability to work collaboratively across teams to deliver complex data solutions.
Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
Preferred Skills:
Familiarity with financial data models and regulatory requirements.
Experience with Azure cloud platforms
Knowledge of data governance, data quality frameworks, and metadata management.
We offer:
A highly competitive compensation and benefits package
A multinational organization with 58 offices in 21 countries and the possibility to work abroad
10 days of paid annual leave (plus sick leave and national holidays)
Maternity & Paternity leave plans
A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
Retirement savings plans
A higher education certification policy
Commuter benefits (varies by region)
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms
A flat and approachable organization
A truly diverse, fun-loving and global work culture
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Data Engineer
Requirements engineer job in Jersey City, NJ
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be onsite
You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines (and infrastructure); lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility (to analysts and data scientists); ensure compliance with data governance requirements; and ensure business alignment (ensure data engineering practices align with business goals).
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience in AWS, design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
• Advanced proficiency in cloud data lakehouse platform such as AWS data lake services, Databricks or Hadoop, relational data store such as Postgres, Oracle or similar, and at least one NOSQL data store such as Cassandra, Dynamo, MongoDB or similar
• Advanced proficiency in Cloud Data Warehouse Snowflake, AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
• Proficiency in Unix scripting, data structures, data serialization formats such as JSON, AVRO, Protobuf, or similar, big-data storage formats such as Parquet, Iceberg, or similar, data processing methodologies such as batch, micro-batching, or stream, one or more data modelling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc., Agile methodology, TDD or BDD and CI/CD tools.
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark
Python Data Engineer
Requirements engineer job in Iselin, NJ
Job Title: Data Engineer (Python, Spark, Cloud)
Pay: $90,000 per year DOE
Term: Contract
Work Authorization: US Citizens only (may need security clearance in the future)
Job Summary:
We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting.
Key Responsibilities:
Build and maintain ETL pipelines using Python and PySpark for batch and streaming data.
Develop data ingestion frameworks for structured/unstructured sources.
Implement data workflows using Airflow and integrate with Kafka for real-time processing.
Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift).
Optimize SQL queries and ensure data quality and governance.
Collaborate with data architects and analysts to deliver reliable data solutions.
Required Skills:
Python (3.x) - scripting, API development, automation.
Big Data: Spark/PySpark, Hadoop ecosystem.
Streaming: Kafka.
SQL: Oracle, Teradata, or SQL Server.
Cloud: Azure or GCP (BigQuery, Dataflow).
Containers: Kubernetes/OpenShift.
CI/CD: GitHub, Jenkins.
Preferred Skills:
Airflow for orchestration.
ETL tools (Informatica, Talend).
Financial services experience.
Education & Experience:
Bachelor's in Computer Science or related field.
3-5 years of experience in data engineering and Python development.
Reply with your profile to this posting and send it to ******************
Data Engineer
Requirements engineer job in Newark, NJ
NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
Role Description
This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
Key Responsibilities
Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
Data Integration: Integrate and transform data using industry-standard tools. Experience required with:
AWS Services: AWS Glue, Data Pipeline, Redshift, and S3.
Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage.
Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
Required Skills and Experience
Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
Integration: Experience integrating data via RESTful / GraphQL APIs.
Programming: Proficient in Python for ETL automation and SQL for database management.
Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
Authorization: Must have valid work authorization in the United States.
Salary Range: $65,000 - $80,000 per year
Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
Equal Opportunity Employer
NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.