Lead Data Scientist (Ref: 190351)
Data engineer job in New York, NY
Industry: Retail
Salary: $150,000-$175,000 + Bonus
Contact: ********************************
Our client is a prominent player in the Apparel sector, committed to fusing fashion with cutting-edge data and technological solutions. Situated in New York, this organization is on the lookout for a Data Science Manager to drive their Operations Intelligence initiatives within the Data & Analytics department. This critical role is pivotal in leveraging advanced analytics, predictive modeling, and state-of-the-art Generative AI technologies to bolster decision-making across key operational areas such as Planning, Supply Chain, Sourcing, Sales, and Logistics.
The selected candidate will oversee the integration of data science methodologies into essential operational workflows, aiming to automate processes, improve visibility, accurately forecast business dynamics, and facilitate strategic planning through insightful data analysis.
Requirements
A minimum of 6 years of experience in the field of data science, with at least 2 years in a leadership or product-related role.
Proven ability to apply analytics in complex operational environments such as planning, supply chain, and sourcing.
Strong expertise in Python and SQL, along with a solid grasp of cloud-based data ecosystems.
Experience with advanced modeling techniques, including forecasting, optimization, and classification.
Familiarity with Generative AI technologies or LLMs, combined with a keen interest in leveraging these for practical business applications.
Excellent business acumen and communication skills, facilitating effective collaboration between data insights and strategic goals.
Data Architect
Data engineer job in New York, NY
*Data Architect* The *Data Architect* designs, governs, and evolves enterprise data architectures that enable reliable analytics, AI, and operational reporting. The Data Architect defines standards for data modeling, integration, quality, security, and lifecycle management across cloud and on-prem platforms, ensuring data is trusted, performant, and cost-efficient.
This contract position is onsite.
*Job Description*:
* Define end-to-end *data architecture* patterns (warehouse, lake/lakehouse, streaming, operational data stores) and reference designs aligned to business outcomes.
* Own enterprise *data models* (conceptual, logical, physical), canonical data definitions, and metadata standards to drive consistency and reuse.
* Architect *data integration* pipelines (batch and streaming) including ingestion, transformation, enrichment, and distribution with strong SLAs and observability.
* Establish *data governance* controls (cataloging, lineage, quality rules, MDM, access policies) in partnership with security, compliance, and business stakeholders.
* Drive *platform selection and design* (e.g., cloud data services, analytics engines, storage tiers) balancing scalability, performance, resilience, and total cost.
* Implement *security and privacy by design* (RBAC/ABAC, encryption, tokenization, masking, retention) and ensure regulatory compliance requirements are met.
* Set *standards and guardrails* for SQL, schema evolution, event design, job orchestration, and CI/CD for data workloads; review solutions for architectural fit.
* Partner with product, engineering, and analytics teams to translate *business requirements* into data structures, interfaces, and service contracts.
* Lead *migration and modernization* initiatives (e.g., to cloud/lakehouse), including dependency mapping, cutover plans, and performance optimization.
* Define *SLOs/SLAs* and capacity plans; monitor cost, reliability, and performance; drive continuous improvement via benchmarking and right-sizing.
* Mentor engineers and analysts; contribute to *architecture governance*, patterns, and best practices; present roadmaps and decisions to senior stakeholders.
Required Qualifications
* Strong expertise in *data modeling* (3NF, dimensional, Data Vault), *SQL*, and distributed compute/storage paradigms.
* Practical experience with major *cloud platforms* (AWS, Azure, GCP) and modern *data ecosystems* (e.g., Snowflake, BigQuery, Databricks, Starburst/Trino, Apache Spark).
* Proficiency in *ETL/ELT orchestration* and workflow tools (e.g., Airflow, dbt, native cloud services) and event/streaming systems (e.g., Kafka).
* Proven track record implementing *data governance*: catalog, lineage, quality frameworks, MDM, and access controls.
* Solid understanding of *security and compliance* for data (PII/PHI/PCI), including policy enforcement, encryption, and auditability.
* Strong programming/scripting in *Python* (or Scala/Java) for data processing, automation, and tooling.
* Excellent communication and stakeholder management; ability to *translate* complex technical concepts into clear business value.
*Preferred Skills*
* Experience with *lakehouse* architectures, open table formats (e.g., Apache Iceberg/Delta), and data sharing patterns.
* Familiarity with *metadata-driven* design, semantic layers, and BI acceleration techniques.
* Exposure to *ML/AI data readiness* practices (feature engineering, data labeling, model data pipelines).
* Infrastructure-as-Code (e.g., *Terraform*) and CI/CD for data platform provisioning and jobs.
* Cost optimization and *FinOps* practices for data services.
*Key Outcomes*
* Deliver a scalable, secure, and well-governed *data platform* that improves time-to-insight and reduces total cost of ownership.
* Establish enterprise *data standards* that increase interoperability and reduce duplication.
* Enable trusted *analytics and AI* by elevating data quality, lineage, and accessibility.
*Additional Qualifications*
* Master's in management information systems or computer science.
* 10-15 Years of Experience
*Duration of Contract:* 6 months with probable extension
Job Type: Contract
Pay: $90.00 - $105.00 per hour
Education:
* Master's (Required)
Experience:
* relevant work: 10 years (Required)
Ability to Commute:
* Manhattan, NY 10001 (Required)
Willingness to travel:
* 25% (Required)
Work Location: In person
Data Engineer
Data engineer job in New York, NY
Mercor is hiring a Data Engineer on behalf of a leading AI lab. In this role, you'll **design resilient ETL/ELT pipelines and data contracts** to ensure datasets are analytics- and ML-ready. You'll validate, enrich, and serve data with strong schema and versioning discipline, building the backbone that powers AI research and production systems. This position is ideal for candidates who love working with data pipelines, distributed processing, and ensuring data quality at scale.
### **You're a great fit if you:**

- Have a background in **computer science, data engineering, or information systems**.
- Are proficient in **Python, pandas, and SQL**.
- Have hands-on experience with **databases** like PostgreSQL or SQLite.
- Understand distributed data processing with **Spark or DuckDB**.
- Are experienced in orchestrating workflows with **Airflow** or similar tools.
- Work comfortably with common formats like **JSON, CSV, and Parquet**.
- Care about **schema design, data contracts, and version control** with Git.
- Are passionate about building pipelines that enable **reliable analytics and ML workflows**.

### **Primary Goal of This Role**

To design, validate, and maintain scalable ETL/ELT pipelines and data contracts that produce clean, reliable, and reproducible datasets for analytics and machine learning systems.

### **What You'll Do**

- Build and maintain **ETL/ELT pipelines** with a focus on scalability and resilience.
- Validate and enrich datasets to ensure they're **analytics- and ML-ready**.
- Manage **schemas, versioning, and data contracts** to maintain consistency.
- Work with **PostgreSQL/SQLite, Spark/DuckDB, and Airflow** to manage workflows.
- Optimize pipelines for performance and reliability using **Python and pandas**.
- Collaborate with researchers and engineers to ensure data pipelines align with product and research needs.

### **Why This Role Is Exciting**

- You'll create the **data backbone** that powers cutting-edge AI research and applications.
- You'll work with modern **data infrastructure and orchestration tools**.
- You'll ensure **reproducibility and reliability** in high-stakes data workflows.
- You'll operate at the **intersection of data engineering, AI, and scalable systems**.

### **Pay & Work Structure**

- You'll be classified as an hourly contractor to Mercor.
- Paid weekly via Stripe Connect, based on hours logged.
- Part-time (20-30 hrs/week) with flexible hours; work from anywhere, on your schedule.
- Weekly bonus of **$500-$1,000 USD** per 5 tasks.
- Remote and flexible working style.
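The schema and data-contract discipline this role emphasizes can be illustrated with a minimal sketch in Python with pandas; the contract fields and helper function below are hypothetical examples, not part of the posting:

```python
import pandas as pd

# A hypothetical data contract: each expected column mapped to its dtype.
CONTRACT = {"user_id": "int64", "event": "object", "amount": "float64"}

def validate(df: pd.DataFrame, contract: dict) -> list[str]:
    """Return a list of contract violations (empty list means the dataset conforms)."""
    errors = []
    for col, dtype in contract.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    return errors

df = pd.DataFrame({"user_id": [1, 2], "event": ["buy", "view"], "amount": [9.99, 0.0]})
print(validate(df, CONTRACT))  # → []
```

Running a check like this at the boundary of each pipeline stage is one simple way to make "analytics- and ML-ready" a testable property rather than a hope.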
Lead HPC Architect Cybersecurity - High Performance & Computational Data Ecosystem
Data engineer job in New York, NY
The Scientific Computing and Data group at the Icahn School of Medicine at Mount Sinai partners with scientists to accelerate scientific discovery. To achieve these aims, we support a cutting-edge high-performance computing and data ecosystem along with MD/PhD-level support for researchers. The group is composed of a high-performance computing team, a clinical data warehouse team and a data services team.
The Lead HPC Architect, Cybersecurity, High Performance Computational and Data Ecosystem, is responsible for designing, implementing, and managing the cybersecurity infrastructure and technical operations of Scientific Computing's computational and data science ecosystem. This ecosystem includes a high-performance computing (HPC) system with more than 25,000 cores and 40+ petabytes of usable storage, clinical research databases, and a software development infrastructure for local and national projects. The HPC system is the fastest at any academic biomedical center in the world (per the Top500 list).
To meet Sinai's scientific and clinical goals, the Lead brings a strategic, tactical, and customer-focused vision to evolve the ecosystem to be continually more resilient, secure, scalable, and productive for basic and translational biomedical research. The Lead combines deep technical expertise in cybersecurity, HPC systems, storage, networking, and software infrastructure with a strong focus on service, collaboration, and strategic planning for researchers and clinicians throughout the organization and beyond. The Lead is an expert troubleshooter, productive partner, and leader of projects. The Lead will work with stakeholders to ensure the HPC infrastructure complies with governmental funding agency requirements and to promote efficient resource utilization for researchers.
This position reports to the Director for HPC and Data Ecosystem in Scientific Computing and Data.
Key Responsibilities:
HPC Cybersecurity & System Administration:
Design, implement, and manage all cybersecurity operations within the HPC environment, ensuring alignment with industry standards (NIST, ISO, GDPR, HIPAA, CMMC, NYC Cyber Command, etc.).
Implement best practices for data security, including but not limited to encryption (at rest, in transit, and in use), audit logging, access control, authentication control, configuration management, secure enclaves, and confidential computing.
Perform full-spectrum HPC system administration: installation, monitoring, maintenance, usage reporting, troubleshooting, backup, and performance tuning across HPC applications, web services, databases, job schedulers, networking, storage, compute, and hardware to optimize workload efficiency.
Lead resolution of complex cybersecurity and system issues; provide mentorship and technical guidance to team members.
Ensure that all designs and implementations meet cybersecurity, performance, scalability, and reliability goals. Ensure that the design and operation of the HPC ecosystem is productive for research.
Lead the integration of HPC resources with laboratory equipment (e.g., genomic sequencers, microscopy, clinical systems) for data ingestion, in alignment with all regulatory requirements.
Develop, review and maintain security policies, risk assessments, and compliance documentation accurately and efficiently.
Collaborate with institutional IT, compliance, and research teams to ensure alignment with all regulatory requirements, Sinai policy, and operations.
Design and implement hybrid and cloud-integrated HPC solutions using on-premise and public cloud resources.
Partner with other peers regionally, nationally and internationally to discover, propose and deploy a world-class research infrastructure for Mount Sinai.
Stay current with emerging HPC, cloud, and cybersecurity technologies to keep the organization's infrastructure up-to-date.
Work collaboratively, effectively and productively with other team members within the group and across Mount Sinai.
Provide after-hours support as needed.
Perform other duties as assigned or requested.
Requirements:
Bachelor's degree in computer science, engineering or another scientific field. Master's or PhD preferred.
10 years of progressive HPC system administration experience with Enterprise Linux releases, including RedHat/CentOS/Rocky systems, and batch cluster environments.
Experience with all aspects of high-throughput HPC including schedulers (LSF or Slurm), networking (Infiniband/Gigabit Ethernet), parallel file systems and storage, configuration management systems (xCAT, Puppet and/or Ansible), etc.
Proficient in cybersecurity processes, posture, regulations, approaches, protocols, firewalls, data protection in a regulated environment (e.g. finance, healthcare).
In-depth knowledge of HIPAA, NIST, FISMA, GDPR, and related compliance standards, with proven experience building and maintaining compliant HPC systems.
Experience with secure enclaves and confidential computing.
Proven ability to provide mentorship and technical leadership to team members.
Proven ability to lead complex projects to completion in collaborative, interdisciplinary settings with minimal guidance.
Excellent analytical ability and troubleshooting skills.
Excellent communication, documentation, collaboration and interpersonal skills. Must be a team player and customer focused.
Scripting and programming experience.
Preferred Experience
Proficient with cloud services, orchestration tools (OpenShift/Kubernetes), cost optimization, and hybrid HPC architectures.
Experience with Azure, AWS or Google cloud services.
Experience with the LSF job scheduler and GPFS/Spectrum Scale.
Experience in a healthcare environment.
Experience in a research environment is highly preferred.
Experience with software that enables privacy-preserving linking of PHI.
Experience with Globus data transfer.
Experience with web services, SAP HANA, Oracle, SQL, MariaDB, and other database technologies.
Strength through Unity and Inclusion
The Mount Sinai Health System is committed to fostering an environment where everyone can contribute to excellence. We share a common dedication to delivering outstanding patient care. When you join us, you become part of Mount Sinai's unparalleled legacy of achievement, education, and innovation as we work together to transform healthcare. We encourage all team members to actively participate in creating a culture that ensures fair access to opportunities, promotes inclusive practices, and supports the success of every individual.
At Mount Sinai, our leaders are committed to fostering a workplace where all employees feel valued, respected, and empowered to grow. We strive to create an environment where collaboration, fairness, and continuous learning drive positive change, improving the well-being of our staff, patients, and organization. Our leaders are expected to challenge outdated practices, promote a culture of respect, and work toward meaningful improvements that enhance patient care and workplace experiences. We are dedicated to building a supportive and welcoming environment where everyone has the opportunity to thrive and advance professionally. Explore this opportunity and be part of the next chapter in our history.
About the Mount Sinai Health System:
Mount Sinai Health System is one of the largest academic medical systems in the New York metro area, with more than 48,000 employees working across eight hospitals, more than 400 outpatient practices, more than 300 labs, a school of nursing, and a leading school of medicine and graduate education. Mount Sinai advances health for all people, everywhere, by taking on the most complex health care challenges of our time - discovering and applying new scientific learning and knowledge; developing safer, more effective treatments; educating the next generation of medical leaders and innovators; and supporting local communities by delivering high-quality care to all who need it. Through the integration of its hospitals, labs, and schools, Mount Sinai offers comprehensive health care solutions from birth through geriatrics, leveraging innovative approaches such as artificial intelligence and informatics while keeping patients' medical and emotional needs at the center of all treatment. The Health System includes more than 9,000 primary and specialty care physicians; 13 joint-venture outpatient surgery centers throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and more than 30 affiliated community health centers. We are consistently ranked by U.S. News & World Report's Best Hospitals, receiving high "Honor Roll" status.
Equal Opportunity Employer
The Mount Sinai Health System is an equal opportunity employer, complying with all applicable federal civil rights laws. We do not discriminate, exclude, or treat individuals differently based on race, color, national origin, age, religion, disability, sex, sexual orientation, gender, veteran status, or any other characteristic protected by law. We are deeply committed to fostering an environment where all faculty, staff, students, trainees, patients, visitors, and the communities we serve feel respected and supported. Our goal is to create a healthcare and learning institution that actively works to remove barriers, address challenges, and promote fairness in all aspects of our organization.
Data & Performance Analytics (Hedge Fund)
Data engineer job in New York, NY
Our client is a $28B NY based multi-strategy Hedge Fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will be working closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.
Responsibilities
This role will be responsible for compiling periodic fund performance analyses
Review and analyze portfolio performance data, benchmark performance and risk statistics
Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner
Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in effort to ensure the integrity and consistency of portfolio data reported across client reporting systems
Apply queries, pivot tables, filters and other tools to analyze data.
Maintain the client relationship management database and provide reports to Directors on a regular basis
Coordinate submissions of RFPs by working with RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis
Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization
Provide various ad hoc data research and analysis as needed.
Desired Skills and Experience
Bachelor's Degree with at least 2 years of Financial Services/Private Equity data/client reporting experience
Proficiency in Microsoft Office, particularly Excel Modeling
Technical knowledge, data analytics using CRMs (Salesforce), Excel, PowerPoint
Outstanding communication skills and a proven ability to work effectively with all levels of Management
Comfortable working in a fast-paced, deadline-driven, dynamic environment
Innovative and creative thinker
Must be detail oriented
Lead Data Engineer (Marketing Technology)
Data engineer job in New York, NY
About the job:
We're seeking a Lead Data Engineer to drive innovation and excellence across our Marketing Technology data ecosystem. You thrive in dynamic, fast-paced environments and are comfortable navigating both legacy systems and modern data architectures. You balance long-term strategic planning with short-term urgency, responding to challenges with clarity, speed, and purpose.
You take initiative, quickly familiarize yourself with source systems, ingestion pipelines, and operational processes, and integrate seamlessly into agile work rhythms. Above all, you bring a solution-oriented, win-win mindset, owning outcomes and driving progress.
What you will do at Sogeti:
Rapidly onboard into our Martech data ecosystem: understanding source systems, ingestion flows, and operational processes.
Build and maintain scalable data pipelines across Martech, Loyalty, and Engineering teams.
Balance long-term projects with short-term reactive tasks, including urgent bug fixes and business-critical issues.
Identify gaps in data infrastructure or workflows and proactively propose and implement solutions.
Collaborate with product managers, analysts, and data scientists to ensure data availability and quality.
Participate in agile ceremonies and contribute to backlog grooming, sprint planning, and team reviews.
What you will bring:
7+ years of experience in data engineering, with a strong foundation in ETL design, cloud platforms, and real-time data processing.
Deep expertise in Snowflake, Airflow, dbt, Fivetran, AWS S3, Lambda, Python, SQL.
Previous experience integrating data from multiple retail and ecommerce source systems.
Experience with implementation and data management for loyalty platforms, customer data platforms, marketing automation systems, and ESPs.
Deep expertise in data modeling with dbt.
Demonstrated ability to lead critical and complex platform migrations and new deployments.
Strong communication and stakeholder management skills.
Self-driven, adaptable, and proactive problem solver.
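One common pattern behind the pipeline skills listed above is an idempotent upsert for incremental loads, so re-running a batch never duplicates rows. The sketch below is illustrative only (the column names are hypothetical), using pandas:

```python
import pandas as pd

def upsert(target: pd.DataFrame, incoming: pd.DataFrame, key: str) -> pd.DataFrame:
    """Merge an incremental batch into a target table. Incoming rows replace
    existing rows sharing the same key, so replaying a batch is a no-op."""
    kept = target[~target[key].isin(incoming[key])]
    merged = pd.concat([kept, incoming], ignore_index=True)
    return merged.sort_values(key, ignore_index=True)

target = pd.DataFrame({"id": [1, 2], "score": [10, 20]})
batch = pd.DataFrame({"id": [2, 3], "score": [25, 30]})
result = upsert(target, batch, "id")
print(result.to_dict("list"))  # → {'id': [1, 2, 3], 'score': [10, 25, 30]}
```

In a production stack like the one described (Snowflake, dbt), the same idea usually shows up as a `MERGE` statement or a dbt incremental model with a unique key; the pandas version just makes the semantics concrete.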
Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, Information Systems, Business Administration, or a related field.
Life at Sogeti: Sogeti supports all aspects of your well-being throughout the changing stages of your life and career. For eligible employees, we offer:
Flexible work options
401(k) with 150% match up to 6%
Employee Share Ownership Plan
Medical, Prescription, Dental & Vision Insurance
Life Insurance
100% Company-Paid Mobile Phone Plan
3 Weeks PTO + 7 Paid Holidays
Paid Parental Leave
Adoption, Surrogacy & Cryopreservation Assistance
Subsidized Back-up Child/Elder Care & Tutoring
Career Planning & Coaching
$5,250 Tuition Reimbursement & 20,000+ Online Courses
Employee Resource Groups
Counseling & Support for Physical, Financial, Emotional & Spiritual Well-being
Disaster Relief Programs
About Sogeti
Part of the Capgemini Group, Sogeti makes business value through technology for organizations that need to implement innovation at speed and want a local partner with global scale. With a hands-on culture and close proximity to its clients, Sogeti implements solutions that will help organizations work faster, better, and smarter. By combining its agility and speed of implementation through a DevOps approach, Sogeti delivers innovative solutions in quality engineering, cloud and application development, all driven by AI, data and automation.
Become Your Best | *************
Disclaimer
Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.
This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.
Capgemini is committed to providing reasonable accommodation during our recruitment process. If you need assistance or accommodation, please reach out to your recruiting contact.
Please be aware that Capgemini may capture your image (video or screenshot) during the interview process and that image may be used for verification, including during the hiring and onboarding process.
Click the following link for more information on your rights as an Applicant **************************************************************************
Applicants for employment in the US must have valid work authorization that does not now and/or will not in the future require sponsorship of a visa for employment authorization in the US by Capgemini.
Capgemini discloses salary range information in compliance with state and local pay transparency obligations. The disclosed range represents the lowest to highest salary we, in good faith, believe we would pay for this role at the time of this posting, although we may ultimately pay more or less than the disclosed range, and the range may be modified in the future. The disclosed range takes into account the wide range of factors that are considered in making compensation decisions including, but not limited to, geographic location, relevant education, qualifications, certifications, experience, skills, seniority, performance, sales or revenue-based metrics, and business or organizational needs. At Capgemini, it is not typical for an individual to be hired at or near the top of the range for their role. The base salary range for the tagged location is $125,000 - $175,000.
This role may be eligible for other compensation including variable compensation, bonus, or commission. Full time regular employees are eligible for paid time off, medical/dental/vision insurance, 401(k), and any other benefits to eligible employees.
Note: No amount of pay is considered to be wages or compensation until such amount is earned, vested, and determinable. The amount and availability of any bonus, commission, or any other form of compensation that are allocable to a particular employee remains in the Company's sole discretion unless and until paid and may be modified at the Company's sole discretion, consistent with the law.
Data Engineer Manager
Data engineer job in New York, NY
Be part of a global consulting powerhouse, partnering with clients on their most critical strategic transformations.
We are Wavestone. Energetic, solution-driven experts who focus as much on people as on performance and growth. Hand in hand, we share a deep desire to make a positive impact. We are an ambitious firm with a worldwide reach and an ever-expanding portfolio of clients, topics, and projects. In North America, Wavestone operates from hubs in New York City, Pittsburgh, Dallas and Toronto. We work closely with CEOs and technology leaders to optimize IT strategy, sourcing models, and business processes and are committed to building lasting partnerships with our clients.
Are you a true team player, living strong values? Are you a passionate learner, aiming to grow every day? Are you a driven go-getter, tackling challenges head-on? Then we could be the right fit for you. Join Wavestone and thrive in an environment that's empowering, collaborative, and full of opportunities to turn today's challenges into tomorrow's solutions - contributing to one or more of our core 4 capabilities:
Business Consulting | Business Strategy & Transformation, Organizational Effectiveness & Change Management, Operating Model Design & Agility, Program Leadership & Project Management, Marketing, Innovation, & Customer Experience
Technology Consulting | IT Strategy & CTO Advisory, Technology Delivery, Data & Artificial Intelligence, Software & Application: Development & Integration, SAP Consulting, Insurance/Reinsurance
Cybersecurity | Cyber Transformation Remediation, Cyber Defense & Recovery, Digital Identity, Audit & Incident Response, Product & Industrial Cybersecurity
Sourcing & Service Optimization | Global Services Strategy, IT & Business Process Services Outsourcing, Global In-House Center Support, Services Optimization, Sourcing Program Management
Read more at *****************
Job Description
As a Data Engineer at the manager level at Wavestone, you will be expected to address both strategic and detailed client needs, serving as a trusted advisor to C-level executives while being comfortable supporting and leading hands-on data projects with technical teams.
In this role you would be leading or supporting high-impact data transformation, data modernization and data initiatives to accelerate and enable AI solutions, bridging business strategy and technical execution. You will architect and deliver robust, scalable data solutions, while mentoring teams and helping to shape the firm's data consulting offerings and skills. This role requires a unique blend of strategic vision, technical depth, and consulting leadership.
Key Responsibilities
Lead complex client engagements in data engineering, analytics, and digital transformation, from strategy through hands-on implementation.
Advise C-level and senior stakeholders on data strategy, architecture, governance, and technology adoption to drive measurable business value.
Architect and implement enterprise-scale data platforms, pipelines, and cloud-native solutions (Azure, AWS, Snowflake, Databricks, etc.).
Oversee and optimize ETL/ELT processes, data integration, and data quality frameworks for large, complex organizations.
Translate business objectives into actionable technical road maps, balancing innovation, scalability, and operational excellence.
Mentor and develop consultants and client teams, fostering a culture of technical excellence, continuous learning, and high performance.
Drive business development by shaping proposals, leading client pitches, and contributing to thought leadership and market offerings.
Stay at the forefront of emerging technologies and industry trends in data engineering, AI/ML, and cloud platforms.
Key Competencies & Skills
Strategic Data Leadership: Proven ability to set and execute data strategy, governance, and architecture at the enterprise level.
Advanced Data Engineering: Deep hands-on experience designing, building, and optimizing data pipelines and architectures (Python, SQL, Spark, Databricks, Snowflake, Azure, AWS, etc.).
Designing Data Models: Experience creating conceptual, logical, and physical data models that leverage different data modeling concepts and methodologies (normalization/denormalization, dimensional typing, data vault methodology, partitioning/embedding strategies, etc.) to meet solution requirements.
Cloud Data Platforms: Expertise in architecting and deploying solutions on leading cloud platforms (Azure, AWS, GCP, Snowflake).
Data Governance & Quality: Mastery of data management, MDM, data quality, and regulatory compliance (e.g., IFRS17, GDPR).
Analytics & AI Enablement: Experience enabling advanced analytics, BI, and AI/ML initiatives in complex environments.
Executive Stakeholder Management: Ability to communicate and influence at the C-suite and senior leadership level.
Project & Team Leadership: Demonstrated success managing project delivery, budgets, and cross-functional teams in a consulting context.
Continuous Learning & Innovation: Commitment to staying ahead of industry trends and fostering innovation within teams.
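To illustrate the normalization/denormalization concept referenced in the modeling competencies above, here is a minimal Python sketch. The table and column names are illustrative, not from any actual client schema:

```python
# Minimal sketch: normalizing a denormalized order feed into separate
# customer and order tables (illustrative names and toy data only).

denormalized = [
    {"order_id": 1, "customer_id": "C1", "customer_name": "Acme", "amount": 120.0},
    {"order_id": 2, "customer_id": "C1", "customer_name": "Acme", "amount": 75.5},
    {"order_id": 3, "customer_id": "C2", "customer_name": "Globex", "amount": 310.0},
]

# Customer attributes are stored once per customer (3NF-style),
# eliminating the repeated customer_name values above. Orders keep
# only a foreign-key reference to the customer.
customers = {}
orders = []
for row in denormalized:
    customers[row["customer_id"]] = {"customer_name": row["customer_name"]}
    orders.append({"order_id": row["order_id"],
                   "customer_id": row["customer_id"],
                   "amount": row["amount"]})

print(len(customers))  # 2 unique customers
print(len(orders))     # 3 orders referencing them by key
```

The reverse trade-off (denormalizing for read performance) is the same operation run backwards: joining the two tables into one wide table at query time or load time.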
Qualifications
Bachelor's or master's degree in Computer Science, Engineering, Data Science, or related field, or equivalent business experience.
8+ years of experience in data engineering, data architecture, or analytics consulting, with at least 2 years in a leadership or management role.
Demonstrated success in client-facing roles, ideally within a consulting or professional services environment.
Advanced proficiency in Python, SQL, and modern data engineering tools (e.g., Spark, Databricks, Airflow).
Experience with cloud data platforms (Azure, AWS, GCP, Snowflake).
Relevant certifications (e.g., AWS Certified Data Analytics, Azure Data Engineer, Databricks, Snowflake) are a strong plus.
Exceptional problem-solving, analytical, and communication skills.
Industry exposure: Deep experience in Insurance, Pharma, or Financial Services
Additional Information
Salary Range: $157k - $200k annual salary
We are recruiting across several levels of seniority from Senior Consultant to Manager.
*Only candidates legally authorized to work for any employer in the U.S. on a full-time basis without the need for sponsorship will be considered. We are unable to sponsor or take over sponsorship of an employment visa at this time.
Our Commitment
Wavestone values and Positive Way
At Wavestone, we believe our employees are our greatest ambassadors. By embodying our shared values, vision, mission, and corporate brand, you'll become a powerful force for positive change. We are united by a shared commitment to making a positive impact, no matter where we are. This is better defined by our value base, "The Positive Way," which serves as the glue that binds us together:
Energetic - A positive attitude gives energy to lead projects to success. While we may not control the circumstances, we can always choose how we respond to them.
Responsible - We act with integrity and take ownership of our decisions and actions, considering their impact around us.
Together - We want to be a great team, not a team of greats. The team's strength is each individual member; each member's strength is the team.
We are Energetic, Responsible and Together!
Benefits
25 PTO / 6 Federal Holidays / 4 Floating Holidays
Great parental leave (birthing parent: 4 months | supporting parent: 2 months)
Medical / Dental / Vision coverage
401K Savings Plan with Company Match
HSA/FSA
Up to 4% bonus based on personal and company performance with room to grow as you progress in your career
Regular Compensation increases based on performance
Employee Stock Purchase Plan (ESPP)
Travel and Location
This full-time position is based in our New York office. You must reside or be willing to relocate within commutable distance to the office.
Travel requirements tend to fluctuate depending on your projects and client needs
Diversity and Inclusion
Wavestone seeks diversity among our team members and is an Equal Opportunity Employer.
At Wavestone, we celebrate diversity and inclusion. We have a strong global CSR agenda and an active Diversity & Inclusion committee with Gender Equality, LGBTQ+, Disability Inclusion and Anti-Racism networks.
If you need flexibility, assistance, or an adjustment to our recruitment process due to a disability or impairment, you may reach out to us to discuss this.
Feel free to visit our Wavestone website and LinkedIn page to see our most trending insights!
Data Engineer
Data engineer job in New Providence, NJ
Job Title: Senior Data Engineer (Python, Snowflake, SQL)
Employment Type: Contract
Sr. Data Engineer (Python, Snowflake, SQL)
The developer should have strong Python, Snowflake, and SQL coding skills.
The developer should be able to articulate real-world scenarios from past experience and demonstrate solutions to practical problems in Snowflake and Python.
The developer should be able to write Python code for intermediate-level problems given during the L1 assessment.
Leadership qualities to guide a team and own end-to-end support of the project.
Around 8 years of experience as a Snowflake Developer, designing and developing data solutions within the Snowflake Data Cloud and leveraging its cloud-based data warehousing capabilities. Responsible for designing and implementing data pipelines, data models, and ETL processes, ensuring efficient and effective data storage, processing, and analysis.
Able to write complex SQL queries and Python stored procedure code in Snowflake.
Job Description Summary:
Data Modeling and Schema Design:
Create and maintain well-structured data models and schemas within Snowflake, ensuring data integrity and efficient query performance.
ETL/ELT Development:
Design and implement ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to load data into Snowflake from various sources.
Data Pipeline Management:
Build and optimize data pipelines to ingest data into Snowflake, ensuring accurate and timely data flow.
SQL Optimization:
Write and optimize SQL queries to enhance performance and efficiency within Snowflake.
Performance Tuning:
Identify and address performance bottlenecks within Snowflake, optimizing query execution and resource allocation.
Security and Governance:
Implement data security and governance best practices within Snowflake environments, including access control and encryption.
Documentation and Maintenance:
Maintain documentation for data models, data pipelines, and other Snowflake solutions.
Troubleshooting and Support:
Troubleshoot and resolve issues within Snowflake, providing technical support to users.
Collaboration:
Collaborate with data architects, data engineers, and business users to understand requirements and deliver solutions
Other Skills:
Experience with data warehousing concepts and data modeling.
Hands-on experience creating stored procedures, functions, tables, and cursors.
Experience in database testing, data comparison, and data transformation scripting.
Capable of troubleshooting common database issues.
Hands-on experience with GitLab and an understanding of CI/CD pipelines and DevOps tools.
Knowledge of AWS Lambda and Azure Functions.
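The database testing and data comparison skills listed above can be sketched with a minimal Python example. The keyed extracts here are toy data; in practice they would come from query results against the source and target tables:

```python
# Minimal sketch of a data-comparison check between a source and a
# target extract (e.g. pre- and post-migration), keyed on a primary key.
# Toy data stands in for actual Snowflake query results.

source = {1: {"name": "alice", "score": 10},
          2: {"name": "bob", "score": 20}}
target = {1: {"name": "alice", "score": 10},
          2: {"name": "bob", "score": 21},
          3: {"name": "carol", "score": 30}}

# Keys present in one side but not the other.
missing_in_target = sorted(source.keys() - target.keys())
extra_in_target = sorted(target.keys() - source.keys())

# Keys present in both sides whose row contents differ.
mismatched = sorted(k for k in source.keys() & target.keys()
                    if source[k] != target[k])

print(missing_in_target, extra_in_target, mismatched)  # [] [3] [2]
```

Reporting the three buckets separately (missing, extra, mismatched) makes reconciliation actionable rather than a single pass/fail flag.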
Data Engineer
Data engineer job in New York, NY
About Beauty by Imagination:
Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike.
Position Overview:
We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business.
You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse.
Key Responsibilities
Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts.
Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools.
Integrate and transform data from multiple systems, including ERP platforms such as NetSuite.
Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability.
Support and enhance Power BI dashboards and other BI/reporting systems.
Implement data quality checks, automation, and process monitoring.
Collaborate with business and analytics teams to translate requirements into scalable data solutions.
Contribute to data governance, standardization, and documentation practices.
Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools.
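The data quality checks mentioned in the responsibilities above can be sketched as a simple Python gate. The column names and thresholds here are assumptions for illustration, not the company's actual rules:

```python
# Illustrative data-quality gate: fail a load if null rates or row
# counts fall outside thresholds. Toy rows stand in for a real extract.

rows = [
    {"sku": "A1", "revenue": 10.0},
    {"sku": "A2", "revenue": None},
    {"sku": "A3", "revenue": 5.0},
]

def null_rate(rows, column):
    """Fraction of rows where the given column is None."""
    return sum(r[column] is None for r in rows) / len(rows)

checks = {
    "min_row_count": len(rows) >= 1,
    "revenue_null_rate_ok": null_rate(rows, "revenue") <= 0.5,
}

# In a pipeline, a failed check would block promotion of the load
# and alert the on-call engineer rather than silently publishing.
assert all(checks.values()), f"data quality failed: {checks}"
```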
Required Qualifications
Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts).
Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows.
Strong SQL skills and experience with Microsoft SQL Server.
Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik).
Understanding of data modeling, performance optimization, and relational database design.
Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation.
Excellent analytical and communication skills.
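The SCD requirement above refers to slowly changing dimensions; a Type 2 update can be sketched in a few lines of Python. Column names and dates are illustrative:

```python
# Minimal sketch of a Type 2 slowly changing dimension (SCD) update:
# when a tracked attribute changes, close out the current row and
# insert a new current row instead of overwriting history.

from datetime import date

dim_customer = [
    {"customer_id": "C1", "segment": "retail",
     "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True},
]

def apply_scd2(dim, customer_id, new_segment, as_of):
    current = next(r for r in dim
                   if r["customer_id"] == customer_id and r["is_current"])
    if current["segment"] == new_segment:
        return  # no change, no new version
    current["valid_to"] = as_of       # close out the old version
    current["is_current"] = False
    dim.append({"customer_id": customer_id, "segment": new_segment,
                "valid_from": as_of, "valid_to": None, "is_current": True})

apply_scd2(dim_customer, "C1", "wholesale", date(2024, 6, 1))
print(len(dim_customer))  # 2 versions: one historical, one current
```

In a warehouse this is typically a MERGE statement rather than row-by-row Python, but the versioning logic is the same.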
Preferred Qualifications
Experience with cloud data platforms (Azure, AWS, or GCP).
Understanding of data security, governance, and compliance (GDPR, SOC2).
Experience with API integrations and real-time data ingestion.
Background in finance, supply chain, or e-commerce analytics.
Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.).
AI Focused Preferred Skills:
Experience implementing AI-driven analytics or automation inside Data Warehouses.
Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights.
Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL.
Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC).
Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design.
Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools.
Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments.
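The RAG pattern mentioned above centers on a retrieval step; here is a minimal sketch of that step in pure Python. The embeddings are toy vectors; in practice they would come from an embedding model, and the top chunks would be passed to an LLM as grounding context:

```python
# Sketch of the retrieval step in a RAG-style pattern: rank candidate
# text chunks by cosine similarity to a query embedding, then hand the
# top-k chunks to the generation step as grounding context.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus: chunk text mapped to a (fake) embedding vector.
chunks = {
    "q3 revenue grew 12% year over year": [0.9, 0.1, 0.0],
    "office snack policy updated": [0.0, 0.2, 0.9],
    "revenue by region: northeast led growth": [0.8, 0.3, 0.1],
}
query_embedding = [1.0, 0.2, 0.0]  # e.g. "how did revenue change?"

top_k = sorted(chunks, key=lambda c: cosine(query_embedding, chunks[c]),
               reverse=True)[:2]
print(top_k)  # the two revenue-related chunks rank first
```

Grounding the LLM on retrieved enterprise rows instead of its training data is also where the security considerations above (Purview classification, RBAC on the source tables) enter the design.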
Soft Skills
Strong analytical and problem-solving mindset.
Ability to communicate complex technical concepts to business stakeholders.
Detail-oriented, organized, and self-motivated.
Collaborative team player with a growth mindset.
Impact
You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions.
Our Tech Stack
SQL Server, SSIS, Azure Synapse
Python, Airflow, Azure Data Factory
Power BI, NetSuite ERP, REST APIs
CI/CD (Azure DevOps, GitHub)
What We Offer
Location: New York, NY (Hybrid work model)
Employment Type: Full-time
Compensation: Competitive salary based on experience
Benefits: Health insurance, 401(k), paid time off
Opportunities for professional growth and participation in enterprise AI modernization initiatives
DevOps Engineer (Bilingual in Mandarin)
Data engineer job in New York, NY
We are seeking a detail-oriented and experienced DevOps Engineer to lead the administration of our AWS cloud infrastructure, CI/CD pipelines, and database environments. This role requires deep expertise in AWS (including multi-account structures, SSO, and Organizations), hands-on experience with MongoDB cluster and MySQL/Aurora administration, and strong proficiency in CI/CD using tools like TeamCity and Git. You will be responsible for automating deployments, ensuring system reliability and performance, and supporting a complex ecosystem of services and databases. The ideal candidate has a strong grasp of modern DevOps practices, including infrastructure as code, proactive monitoring, and security automation, and collaborates effectively with global teams to deliver secure, scalable, and high-performing infrastructure across all environments.
*Key Responsibilities:*
AWS Infrastructure & Identity Management:
* Working experience in AWS Organizations management, including AWS Single Sign-On, roles, and permissions
* Understanding of best practices in identity, account, and permission management
* Optimize AWS resource usage and implement cost-saving measures through tagging, lifecycle policies, and instance type adjustments.
*Advanced AWS Networking & Security:*
* Deep understanding and hands-on operational experience with common network components, including but not limited to AWS CloudFront, API Gateway, AWS load balancers, and firewalls.
* Working experience in VPC configuration and a deep understanding of VPC-related security
* Ability to troubleshoot network-related issues.
*Infrastructure as Code*
* Working experience in managing large infrastructure through Terraform in AWS environment
*MongoDB/MySQL/Aurora Database Management:*
* Manage and optimize database clusters.
* Perform upgrades, backups, replication setup, performance tuning, and TLS configuration.
* Coordinate cross-environment database migrations and health monitoring using MongoDB Ops Manager and AWS tools.
* Database access control and permission management
* Database query optimization
*CI/CD & Automation:*
* Design, build, and maintain pipelines using Bitbucket Pipelines and TeamCity.
* Automate build/test/deploy processes with rollback capabilities and health checks.
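The deploy-with-rollback pattern described above can be sketched in Python. The deploy, health-check, and rollback functions here are stubbed placeholders standing in for TeamCity/Bitbucket pipeline steps, not real APIs:

```python
# Sketch of a deploy step with a health gate and automatic rollback.
# All three helpers are hypothetical stubs for illustration.

def deploy(version):
    print(f"deploying {version}")

def health_check(version):
    # Placeholder: in practice, poll an HTTP health endpoint with
    # retries and a timeout before declaring the release healthy.
    return version != "v2-broken"

def rollback(previous_version):
    print(f"rolling back to {previous_version}")

def release(new_version, previous_version):
    deploy(new_version)
    if health_check(new_version):
        return new_version          # health gate passed; promote
    rollback(previous_version)      # health gate failed; revert
    return previous_version

active = release("v2-broken", "v1")
print(active)  # "v1" after automatic rollback
```

Keeping the previous version addressable (immutable artifacts, versioned infrastructure) is what makes the rollback branch a one-step operation rather than a rebuild.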
*Monitoring & Observability:*
* Set up comprehensive system and application monitoring using CloudWatch and Uptime Kuma.
* Implement log aggregation and alerting for AWS services, MongoDB, and deployed applications.
*Security & Compliance:*
* Implement and enforce TLS/SSL configurations to meet PCI-DSS and internal compliance standards.
* Conduct vulnerability scans and work with cybersecurity teams to close findings.
* Maintain IAM roles, access policies, and audit trails for security reviews.
*Collaboration & Support:*
* Work closely with development, QA, and global infrastructure teams.
* Provide documentation and onboarding for systems, pipelines, and recovery procedures.
* Participate in on-call rotations and lead incident response efforts.
*Hybrid Schedule:* onsite 3 days per week from Tuesday to Thursday.
*Required Qualifications:*
* 5+ years in DevOps, Cloud Engineering, or SRE roles.
* Deep expertise with AWS, including SSO, Organizations, EC2, IAM, S3, and multi-account management.
* Strong hands-on experience with CloudFront, API Gateway, ALB, NLB, and WAF.
* Proven MongoDB cluster management experience (EC2-based and Atlas).
* Proven SQL database administration, including MySQL and PostgreSQL
* Proficient in CI/CD workflows with TeamCity and Bitbucket Pipelines.
* Skilled in Linux, Docker, and scripting languages (Bash, Python, Node.js).
* Monitoring experience with CloudWatch, Datadog, and Uptime Kuma.
* Infrastructure-as-Code knowledge using Terraform or CloudFormation.
* Experience managing TLS certificates, DNS, and secure network routing.
* Strong documentation and collaboration skills across distributed teams.
* Ability to communicate in Mandarin Chinese.
Job Type: Full-time
Pay: $125,000.00 - $165,000.00 per year
Benefits:
* 401(k)
* Dental insurance
* Health insurance
* Paid time off
* Vision insurance
Application Question(s):
* Will you now or in the future require sponsorship (H-1B, etc.) to work in the US?
Experience:
* AWS: 3 years (Preferred)
* Cloud infrastructure: 3 years (Preferred)
* CI/CD: 3 years (Preferred)
Language:
* Mandarin (Required)
Ability to Commute:
* New York, NY 10016 (Required)
Ability to Relocate:
* New York, NY 10016: Relocate before starting work (Required)
Work Location: Hybrid remote in New York, NY 10016
AI Engineer
Data engineer job in New York, NY
AI Engineer - Healthcare Automation Platform
Well-Funded Startup | Healthcare AI | Hybrid (NYC preferred) or Remote
About Us
We're building an AI-powered automation platform that streamlines critical workflows in healthcare operations. Our system processes complex, unstructured data to ensure time-sensitive information gets where it needs to go-reducing delays and improving operational efficiency for healthcare providers.
We're in production with paying enterprise customers and experiencing rapid growth.
The Role
We're looking for a high-agency AI Engineer to bridge the gap between cutting-edge ML research and real-world product delivery. You'll design and build agentic workflows that automate complex operational processes, combining LLMs, vision models, and structured automation to solve challenging infrastructure and workflow problems.
This role involves creating pipelines, evaluation harnesses, and scalable production-grade agents. You'll research and implement the best-fit technology for each workflow, working across the full stack from data collection to orchestration to frontend integration.
What You'll Build
Ship full-stack AI systems end-to-end-from prototype to production
Build observability and debugging tools to capture model performance, user feedback, and edge cases
Go from ideation to working code within hours; iterate rapidly on experiments and data
Design agentic workflows powered by LLMs and vision models for document understanding
Create evaluation frameworks to test AI system performance beyond raw model accuracy
Work directly with cross-functional teams (ML, Sales, Customer Success) to build AI solutions for diverse use cases
What We're Looking For
Full-stack engineering experience with web frameworks, backend systems, and cloud infrastructure
Proven track record of building, testing, deploying, scaling, and monitoring LLM-centered software architectures
Hands-on expertise with LLM APIs and production AI system deployment
Understanding of how to evaluate AI systems holistically-beyond model accuracy alone
Strong communication skills-ability to write clear technical documentation and explain complex systems
Bonus: Experience in healthcare or working with unstructured documents
Why Join Us?
Drive Impact: High-agency culture where you set the pace and see direct results
Own Your Work: End-to-end ownership from research to production deployment
Innovate with Purpose: Join a high-caliber team solving real problems at scale
Competitive Package: $200K-$240K + equity + comprehensive benefits
Great Perks: Unlimited PTO, 100% paid health benefits, 401(k) match, catered lunch, snacks
Location: NYC office 4 days/week preferred (Chelsea), remote considered for exceptional candidates
Oscar Associates Limited (US) is acting as an Employment Agency in relation to this vacancy.
GTM Engineer
Data engineer job in New York, NY
About us:
Camber builds software to improve the quality and accessibility of healthcare. We streamline and replace manual work so clinicians can focus on what they do best: providing great care. For more details on our thesis, check out our write-up: What is Camber?
We've raised $50M in funding from phenomenal supporters at a16z, Craft Ventures, YCombinator, Manresa, and many others who are committed to improving the accessibility of care. For more information, take a look at: Announcing Camber
About our Culture:
Our mission to change behavioral health starts with us and how we operate. We don't want to just change behavioral health, we want to change the way startups operate. Here are a few tactical examples:
1) Improving accessibility and quality of healthcare is something we live and breathe. Everyone on Camber's team cares deeply about helping clinicians and patients.
2) We have to have a sense of humor. Healthcare is so broken, it's depressing if you don't laugh with us.
About the role:
We're seeking a proactive, tech-savvy sales operations professional with a startup mindset: someone who thrives on breaking growth barriers and enabling sales excellence. This person will be both a systems admin and a strategic partner, ensuring HubSpot and our tech stack are humming while also helping shape compensation, territories, and GTM expansion.
What you'll do:
Systems & CRM Administration
Manage and optimize current CRM (HubSpot) and other tech stack integrations: build workflows, dashboards, and troubleshoot system issues
Support onboarding/offboarding of users, governance, data hygiene, and adoption
Data, Forecasting & Reporting
Design and maintain dashboards, reports, and metrics that drive decision-making (e.g., pipeline health, forecast accuracy, win rates)
Deliver actionable insights to stakeholders across sales leadership
Compensation & Territory Strategy
Assist in designing incentive and quota plans that align with sales goals
Collaborate on territory definition, alignment, and carve strategy to ensure balanced coverage
Process & Cross-Functional Enablement
Streamline sales workflows and marketing-to-sales handoffs
Partner across teams-Sales, Marketing, Finance-to ensure operational alignment and seamless execution
Strategic & Tactical Execution
Be hands-on when needed (data crunching, HubSpot tweaks) while contributing to broader sales strategy planning
What we're looking for:
2-4 years in a startup, sales operations, or RevOps environment (or similar roles)
CRM administration experience, ideally HubSpot; bonus if familiar with other tools and workflows
Strong analytical skills: coding, Excel, BI, sales forecasting, data modeling
Operational rigor and problem-solving mindset
A strategic thinker who can scale systems and structure
Thrives in growth-stage constraints; comfortable wearing multiple hats and moving quickly
Perks & Benefits at Camber:
Comprehensive Health Coverage: Medical, dental, and vision plans with nationwide coverage, including 24/7 virtual urgent care.
Mental Health Support: Weekly therapy reimbursement up to $100, so you can prioritize the care that works best for you.
Paid Parental Leave: Up to 12 weeks of fully paid time off for new parents (birth, adoption, or foster care).
Financial Wellness: 401K (traditional & Roth), HSA & FSA options, and monthly commuter benefits for NYC employees.
Time Off That Counts: 18 PTO days per year (plus rollover), plus office closures for holidays, monthly team events, company off-sites, and daily in-office lunches for our team.
Fitness Stipend: $100/month to use on fitness however you choose.
Hybrid Flexibility: In NYC? We gather in the office 3-5x/week, with flexibility when life happens. Fridays are remote-friendly.
Camber is based in New York City, and we prioritize in-person and hybrid candidates.
Building an inclusive culture is one of our core tenets as a company. We're very aware of structural inequalities that exist, and recognize that underrepresented minorities are less likely to apply for a role if they don't think they meet all of the requirements. If that's you and you're reading this, we'd like to encourage you to apply regardless - we'd love to get to know you and see if there's a place for you here!
In addition, we take security seriously, and all of our employees contribute to upholding security requirements and maintaining compliance with HIPAA security regulations.
Staff Software Engineer
Data engineer job in New York, NY
Who We Are
At City Storage Systems (CSS), we are dedicated to building Infrastructure for Better Food. Our mission is to empower restaurateurs worldwide to thrive in the online food delivery market. By making food more affordable, of higher quality, and convenient, we're transforming the industry for everyone, from budding entrepreneurs opening their first restaurant to global quick-service chains.
What You'll Do
As a backend-focused Software Engineer at CSS, you'll play a crucial role in our data-driven development team, helping to advance our state-of-the-art menu platform. Your responsibilities will include:
Data-Driven Development: Contribute to our data-centric development efforts.
Project Planning: Participate in strategic planning for various internal tools.
Agile Methodologies: Implement and test software using agile methodologies.
Collaborative Teamwork: Work closely with a team to enhance and support our technology.
Code Contribution: Write, debug, maintain, and test code across multiple projects.
Architectural Design: Design scalable systems with a focus on robust architecture.
Continuous Improvement: Engage in continuous improvement initiatives.
Innovation: Drive innovation within the team and support technological advancements at CSS.
What the Team Focuses On
Our menu platform (check our tech blog) offers comprehensive menu management features designed to streamline restaurant operations, enhance customer experiences, and optimize performance. It serves as a single source of truth for menus, seamlessly integrating with online channels such as DoorDash, UberEats, and Grubhub, and with offline point-of-sale (POS) systems like Square, Toast, and NCR.
Key capabilities include updating menus with new items, pricing, and taxes, performing A/B testing on different structures, setting availability by channel, creating combos and promotions, managing ingredients and SKUs, and configuring operational hours. Additionally, our platform features automated linking to ensure POS and online menus are always synchronized, minimizing discrepancies.
Boasting a 99.9% availability rate, our platform supports a vast network of brands in the US and worldwide, ensuring uninterrupted service. Over 100,000 restaurateurs use our platform daily to streamline their operations and consistently express high satisfaction.
What We're Looking For
Education: Bachelor's Degree in Computer Science or equivalent.
Experience: 7-10 years of experience in a relevant role.
Individual Contribution: Proven track record of significant contributions in previous roles, demonstrating your impact.
Architectural Skills: Ability to design and create robust architecture from scratch and evolve existing systems.
Communication Skills: Strong communication and presentation skills, with the ability to collaborate with non-engineering stakeholders.
Technical Expertise: Experience designing and implementing scalable, reliable, and efficient distributed systems. Familiarity with Java / Go / Kotlin is required.
Concurrency: Experience building systems that execute multiple tasks concurrently while managing overlapping run-time and memory constraints.
Application Maintenance: Experience in maintaining and extending large-scale, high-traffic applications.
Why Join Us
Growing Market: You'll be part of an $80 billion market projected to reach at least $500 billion by 2030 in the US alone.
Industry Impact: Join a team that is transforming the restaurant industry and helping restaurants succeed in online food delivery.
Collaborative Environment: Benefit from the support and guidance of experienced colleagues and managers, who will help you learn, grow, and achieve your goals. Work closely with other teams to ensure our customers' success.
Additional Information
This role is based in our Mountain View office. We look forward to sharing more about a meaningful career at CSS!
Plumbing Engineer
Data engineer job in New York, NY
🔎 We're Hiring: Senior Plumbing & Fire Protection Engineer / MEP Designer (On-Site - Brooklyn, NY)
Precision Design, a leading MEP Engineering firm in Brooklyn, NY, is seeking a Senior Plumbing Engineer / MEP Designer with strong experience in Plumbing and Fire Protection design.
We are looking for a highly skilled professional who can independently design systems, coordinate across multiple disciplines, and manage multiple projects in a fast-paced environment.
Candidates must have at least 5 years of industry experience, including a minimum of 3 years designing in NYC, and must be fully knowledgeable of NYC Building and Energy Codes.
💼 Responsibilities
Design Plumbing & Fire Protection systems from concept through full construction documents
Prepare calculations for water, gas, sanitary/sewer, and storm loads
Perform field surveys and assess existing building conditions
Produce drawings, specifications, and all phases of design (schematic → construction administration)
Coordinate with architectural, engineering, and external project teams, including contractors and city agencies
Manage multiple projects simultaneously
Review shop drawings and participate in project meetings
📘 Required Skills & Experience
5+ years of related experience in Plumbing and/or Fire Protection design
At least 3 years of NYC-specific design experience
Strong knowledge of NYC Building Codes, NYC Energy Conservation Code, and NYC filing requirements
Experience with utility company filing procedures
Proficiency in AutoCAD (Revit is a plus)
Familiarity with NFPA-13, NFPA-13R, and hydraulic calculations
Experience with DEP cross-connection and site connection submissions is strongly preferred
Excellent communication, teamwork, and interpersonal skills
Ability to work independently and manage multiple deadlines
📍 Work Location
On-site in our Brooklyn, NY office (no remote option)
Java Software Engineer
Data engineer job in New York, NY
BeaconFire is based in Central NJ, specializing in Software Development, Web Development, and Business Intelligence; we are looking for candidates with a strong background in Software Engineering or Computer Science for a Java/Software Developer position.
Responsibilities:
● Develop software and web applications using Java 8/J2EE/Java EE (and higher), React.js, Angular 2+, SQL, Spring, HTML5, CSS, JavaScript, and TypeScript, among other tools;
● Write scalable, secure, maintainable code that powers our clients' platforms;
● Create, deploy, and maintain automated system tests;
● Work with testers to understand opened defects and resolve them in a timely manner;
● Support continuous improvement by investigating alternative technologies and presenting these for architectural review;
● Collaborate effectively with other team members to accomplish shared user story and sprint goals;
Basic Qualifications:
● Experience in a programming language such as JavaScript or similar (e.g. Java, Python, C, C++, C#) and an understanding of the software development life cycle;
● Basic programming skills using object-oriented programming (OOP) languages, with in-depth knowledge of common APIs and data structures such as Collections, Maps, Lists, and Sets;
● Knowledge of relational databases (e.g. SQL Server, Oracle) and basic SQL query language skills
Preferred Qualifications:
● Master's Degree in Computer Science (CS)
● 0-1 year of practical experience in Java coding
● Experience using Spring, Maven and Angular frameworks, HTML, CSS
● Knowledge of other contemporary Java technologies (e.g. WebLogic, RabbitMQ, Tomcat, etc.)
● Knowledge of JSP, J2EE, and JDBC
Compensation: $65,000.00 to $80,000.00 /year
BeaconFire is an e-verified company. Work visa sponsorship is available.
Principal Software Engineer
Data engineer job in Iselin, NJ
Job Title: Principal Software Engineer
Experience: 13+ Years
As a Principal Software Engineer, you will collaborate with engineering teams and architecture to deliver high-quality, scalable technology solutions. This role offers autonomy to lead, design, and develop innovative solutions to complex challenges in the banking industry. You will serve as a peer-leader, driving cutting-edge initiatives and fostering a culture of technical excellence and innovation. We are looking for someone who is hands-on with Angular and TypeScript and can also develop backend services using Spring Boot (Java).
This role requires a blend of business acumen, technical proficiency, and strategic leadership, making it ideal for professionals who thrive at the intersection of banking, technology, and innovation.
Key Responsibilities:
Lead full-stack development efforts with a strong emphasis on frontend engineering using Angular and TypeScript.
Build micro frontends using the Single SPA framework to support modular and maintainable UI architecture.
Design, develop, and maintain scalable microservices using Spring Boot (Java) and responsive, dynamic web applications using Angular.
Collaborate with cross-functional teams to define, design, and ship new features, ensuring alignment with enterprise architecture principles.
Infuse quality of service characteristics such as scalability, manageability, and maintainability into distributed service-based frameworks.
Ensure code quality and security using tools like SonarQube, Fortify, and Nexus IQ.
Troubleshoot and resolve issues efficiently, maintaining high standards of performance and reliability.
Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and continuous improvement.
Mentor and guide software engineers, fostering a culture of innovation, accountability, and technical excellence, and develop team members to strengthen organizational knowledge in Loan IQ capabilities and best practices.
Required Tech Stack & Tools Experience
Frontend: Angular (TypeScript), Single SPA
Backend: Spring Boot (Java)
Database: PostgreSQL
DevOps & CI/CD: Jenkins, OpenShift
Source Control: Bitbucket
Agile Management: Jira
Code Quality & Security: SonarQube, Nexus IQ, Fortify
Cloud: AWS, OpenShift technologies
Required Qualifications:
7+ years of hands-on software development experience, including full-stack development.
Proven experience leading and mentoring software engineers.
Mastery of multiple programming languages, with required expertise in Angular/TypeScript.
Experience building micro frontends and working with Single SPA.
Solid understanding of RESTful APIs, secure coding practices, and vulnerability remediation.
Experience with CI/CD pipelines and containerized deployments.
Strong communication and interpersonal skills.
Cloud certifications such as AWS Solutions Architect.
Familiarity with data structures such as linked lists, dictionaries, arrays, and custom object creation.
Preferred Qualifications:
3+ years of experience in the financial services industry, especially in commercial banking, portfolio management, trading, compliance, or wealth management.
Experience working on Commercial Card platform is a plus.
Understanding of banking systems and custodial/commercial banking operations.
Experience with technologies such as Apache, Lucene, Memcache, RabbitMQ, and NoSQL.
Education
Required: Bachelor's degree in software engineering, Computer Science, Engineering, Mathematics, or related discipline.
Preferred: Master's degree in software engineering, Computer Science, or related discipline
Company Profile:
Stratacent is a global IT consulting and services firm, headquartered in Jersey City, NJ, USA, with offices in the UK, Canada, and South Africa and global capability centers (GCC) in Pune and Gurugram, India. Our focus areas include Data and AI, Cloud Services, Automation, IT Operations, Application Development, and Information Security.
URL - *********************
Employee Benefits:
• Group Medical Insurance
• Cab facility
• Meals/snacks
• Continuous Learning Program
Stratacent India Private Limited is an equal opportunity employer and will not discriminate against any employee or applicant for employment on the basis of race, color, creed, religion, age, sex, national origin, ancestry, handicap, or any other factors.
ETL Talend MDM Architect
Data engineer job in New York, NY
Responsibilities:
• Develop and test Extract, Transform, and Load (ETL) modules based on design specifications
• Develop and test ETL mappings in Talend
• Plan, test, and deploy ETL mappings and database code as part of the application build process across the enterprise
• Provide effective communications with all levels of internal and external customers and staff
• Must demonstrate knowledge in the following areas:
o Data Integration
o Data Architecture
o Team Lead experience is a plus
• Understand, analyze, and assess the ETL environment, and recommend improvements from a technology-strategy and operational standpoint
• Understand and assess source-system data issues and recommend solutions from a data integration standpoint
• Create high-level and low-level technical design documents for data integration
• Design exception handling, audit, and data resolution processes
• Performance-tune the ETL environment
• Conduct proof of concepts
• Estimate work based on functional requirements documents
• Identify system deficiencies and recommend solutions
• Design, code, and write unit test cases from functional requirements
• Deliver efficient and bug-free ETL packages and documentation
• Maintain and support enterprise ETL jobs
• Experience with Talend Hadoop tools is a plus
Basic Qualifications:
• 3+ years of development experience on Talend ETL tools
• 7+ years working with one or more of the following ETL Tools: Talend, Informatica, Ab Initio or Data Stage
• 7+ years of experience as a developer
• Bachelor's Degree in Computer Science or equivalent
• Database (Oracle, SQL Server, DB2)
• Database Programming (Complex SQL, PL/SQL development knowledge)
• Data Modeling
• Business Analysis
• Top level performer with ability to work independently in short time frames
• Proficient working in a Linux environment
• Experience in scripting languages (Shell, Python or Perl)
• 5+ years of experience deploying large-scale ETL projects
• 3+ years of experience in a development lead position
• Data analysis, data mapping, data loading, and data validation
• Understanding of reusability, parameterization, workflow design, etc.
• Thorough understanding of the entire software life cycle and various software engineering methodologies
• Performance tuning of interfaces that extract, transform and load tens of millions of records
• Knowledge of Hadoop ecosystem technologies is a plus
Additional Information
If you are comfortable with the position and location, please reply at your earliest convenience with your updated resume and the following details, or feel free to call me directly.
Full Name:
Email:
Skype id:
Contact Nos.:
Current Location:
Open to relocate:
Start Availability:
Work Permit:
Flexible time for INTERVIEW:
Current Company:
Current Rate:
Expected Rate:
Total IT Experience [Years]:
Total US Experience [Years]:
Key Skill Set:
Best time to call:
In case you are not interested, I would be grateful if you could pass this position along to colleagues or friends who might be interested.
All your information will be kept confidential according to EEO guidelines.
Principal Data Scientist : Product to Market (P2M) Optimization
Data engineer job in New York, NY
About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials.
This simple idea-that we all deserve to belong, and on our own terms-is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.
About the Role
Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization.
The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.
What You'll Do
* Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
* Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
* Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
* Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
* Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
* Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
* Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value
* Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
* Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.
Who You Are
* Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
* 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
* Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
* Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib.
* Hands-on experience with enterprise platforms such as Databricks and cloud environments
* Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
* Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
* Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
* Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.
Preferred Qualifications
* Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
* Proven ability to balance long-term innovation with short-term deliverables.
* Background in agile product development and stakeholder alignment for enterprise-scale initiatives.
Benefits at Gap Inc.
* Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
* One of the most competitive Paid Time Off plans in the industry.*
* Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
* Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
* Employee stock purchase plan.*
* Medical, dental, vision and life insurance.*
* See more of the benefits we offer.
* For eligible employees
Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.
Salary Range: $201,700 - $267,300 USD
Employee pay will vary based on factors such as qualifications, experience, skill level, competencies and work location. We will meet minimum wage or minimum of the pay range (whichever is higher) based on city, county and state requirements.
ETL Architect
Data engineer job in Jersey City, NJ
US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit our website ************************ We are constantly on the lookout for professionals to fulfill the staffing needs of our clients; we set the correct expectations and thus become an accelerator in the mutual growth of both the individual and the organization.
Keeping the same intent in mind, we would like you to consider the job opening with US Tech Solutions that fits your expertise and skillset.
Job Description
Responsibilities:
• Assist in the identification/resolution of critical path issues related to the generation of technical specifications and associated Informatica ETL development
• Provide solutions to design and develop ETL mappings and related activities; ability to articulate in architecture terms
• Strong ability to author technical solution documents, and architecture components
• Provide technical support and troubleshooting to data integration issues
• Work together with the business intelligence team to design and develop reporting and dashboard solutions
• Create and maintain detailed technical documentation on developments
• Provide solutions to fine-tune ETL and database components for performance
• Provide testing strategy and support execution
• Provide deployment and release management strategy and support
Qualifications
Qualifications:
• BS in Computer Science or related IT field, or equivalent work experience required
• Experience: 3-5 years of development experience with database solutions (Oracle SQL, PL/SQL) and Unix scripting
• 5+ years of hands-on development experience in data integration (ETL), specifically with Informatica PowerCenter 9.5 and above
• Experience in developing data warehouse solutions in the Finance industry
• Ability to analyze information architectures and to understand and derive business requirements
• Ability to collaborate and work in a team environment
• Proactive thinking and ability to work under minimal supervision
• Excellent analytical and communication skills and ability to deal effectively with customers
• Good communication skills both oral and written to be able to work with both the technical and banking users
• Strong project experience in providing data integration solutions to Data Warehouse and/or Data Mart implementations
Additional Information
Kushal kumar
**********
ETL Architect
Data engineer job in New York, NY
A Few Words About Us Integrated Resources, Inc is a premier staffing firm recognized as one of the tri-state area's most well-respected professional specialty firms. IRI has built its reputation on excellent service and integrity since its inception in 1996. Our mission centers on delivering only the best quality talent, the first time and every time. We provide quality resources in four specialty areas: Information Technology (IT), Clinical Research, Rehabilitation Therapy and Nursing.
Position: ETL Architect
Location: NYC
Duration: 6 months
Job Description:
This opportunity is for individuals who have hands-on experience in data warehouse design and development. The role demands more than a typical ETL lead role, as it interacts outwardly on projects with architects, PMs, operations, data modelers, developers, admins, DBAs, and testers. This is a hands-on, delivery-focused role, and the individual will be responsible for the technical delivery of data warehouse and data integration projects.
Must have skills
• 7-10 years of hands-on experience with Informatica, designing and developing ETL processes based on multiple sources using ETL tools
• Experience architecting end-to-end ETL solutions
• Hands-on UNIX experience, including scripting (e.g., shell, Perl, alerts, cron, automation)
• Expert at all aspects of relational database design
• Experience working with engineering team with respect to database-related performance tuning, writing of complex SQL, indexing, etc.
Good to Have:
• Experience with IDQ, MDM, other ETL tools
• Experience with dashboard and report development
• Experience with financial services firms is preferred
Additional Information
Kind Regards
Sachin Gaikwad
Technical Recruiter
Integrated Resources, Inc.
Direct Line : 732-429-1920