Data scientist jobs in Paramus, NJ

- 991 jobs
  • Data Scientist

    AGM Tech Solutions - A Woman- and Latina-Owned IT Staffing Firm - An Inc. 5000 Company

    Data scientist job in Parsippany-Troy Hills, NJ

    ***This role is hybrid, three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation.***

    Data Scientist

    Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.

    Responsibilities:
    - Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
    - Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows.
    - Design experiments and validate AI-driven features during the pilot.
    - Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.

    Required skills:
    - 3+ years of applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
    - Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
    - Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.

    Preferred:
    - Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
    $76k-106k yearly est. 1d ago
  • Reinsurer Actuary

    BJRC Recruiting

    Data scientist job in New York, NY

    Pricing Reinsurance Actuary - New York, USA

    Our Client
    Our client is a modern reinsurance company specializing in data-driven portfolios of Property & Casualty (P&C) risk. The firm's approach combines deep industry relationships, a sustainable capital structure, and a partner-first philosophy to deliver efficient and innovative reinsurance solutions. As the company continues to grow, Northern is seeking a Pricing Actuary to strengthen its technical and analytical capabilities within the underwriting team. This is a unique opportunity to join a collaborative and forward-thinking environment that values expertise, precision, and long-term partnerships.

    Responsibilities
    - Lead the pricing of reinsurance treaties, with a primary focus on U.S. Casualty Programs, and expand to other lines as the portfolio grows.
    - Provide core input into transactional and portfolio-level underwriting decisions.
    - Conduct peer reviews of pricing models and assumptions prepared by other team members.
    - Support quarterly loss ratio analyses to monitor portfolio performance, identify trends, and inform capital management strategies.
    - Contribute to the annual reserve study process and present insights to the executive leadership team.
    - Execute benchmark parameter studies and communicate findings to management to support strategic planning.
    - Participate in broker meetings and client visits, helping to structure and negotiate optimal treaty terms.
    - Enhance portfolio monitoring and efficiency through process improvement, automation, and the development of analytical tools and dashboards.
    - Some travel may be required.

    Qualifications
    - Degree in Actuarial Science, Mathematics, Statistics, or a related field.
    - FCAS or ACAS designation (or international equivalent) strongly preferred.
    - Minimum of 8-10 years of actuarial experience in Property & Casualty reinsurance or insurance pricing.
    - Proven expertise in casualty and specialty lines, including professional liability, D&O, and E&O.
    - Strong analytical and problem-solving skills, with demonstrated experience in data modeling, pricing tools, and portfolio analytics.
    - Excellent communication and collaboration abilities, capable of working closely with underwriting, finance, and executive teams.

    The Ideal Profile
    A technically strong, business-minded actuary who thrives in a dynamic, collaborative, and data-driven environment. Recognized for analytical rigor, attention to detail, and the ability to translate complex models into actionable insights. Values teamwork, intellectual curiosity, and innovation in solving industry challenges. Seeks to contribute meaningfully to a growing, entrepreneurial organization shaping the future of reinsurance.

    Why Join the Team
    This is a rare opportunity to join a modern, high-growth reinsurer where your analytical insight and strategic thinking will directly influence underwriting decisions and company performance. Northern offers a collaborative culture, a flexible hybrid work model, and the chance to shape cutting-edge reinsurance strategies in partnership with leading underwriters and actuaries.

    REF# WEB1429
    $91k-142k yearly est. 1d ago
  • Data & Performance Analytics (Hedge Fund)

    Coda Search | Staffing

    Data scientist job in New York, NY

    Our client is a $28B NY-based multi-strategy hedge fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will work closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.

    Responsibilities
    - Compile periodic fund performance analyses.
    - Review and analyze portfolio performance data, benchmark performance, and risk statistics.
    - Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner.
    - Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in an effort to ensure the integrity and consistency of portfolio data reported across client reporting systems.
    - Apply queries, pivot tables, filters, and other tools to analyze data.
    - Maintain the client relationship management database and provide reports to Directors on a regular basis.
    - Coordinate submissions of RFPs by working with the RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis.
    - Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization.
    - Provide various ad hoc data research and analysis as needed.

    Desired Skills and Experience
    - Bachelor's degree with at least 2+ years of financial services/private equity data/client reporting experience.
    - Proficiency in Microsoft Office, particularly Excel modeling.
    - Technical knowledge of data analytics using CRMs (Salesforce), Excel, and PowerPoint.
    - Outstanding communication skills and a proven ability to work effectively with all levels of management.
    - Comfortable working in a fast-paced, deadline-driven, dynamic environment.
    - Innovative and creative thinker.
    - Must be detail-oriented.
    $68k-96k yearly est. 1d ago
  • Data Engineer

    DL Software Inc.

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description
    This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience in designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 3d ago
  • Senior Data Engineer

    Godel Terminal

    Data scientist job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.

    We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    - Able to work out of our Manhattan office a minimum of 4 days a week
    - 5+ years of experience in a financial or startup environment
    - 5+ years of experience working on live data as well as historical data
    - 3+ years of experience in Java, Python, and SQL
    - Experience managing multiple production ETL pipelines that reliably store and validate financial data
    - Experience launching, scaling, and improving backend services in cloud environments
    - Experience migrating critical data across different databases
    - Experience owning and improving critical data infrastructure
    - Experience teaching best practices to junior developers

    Preferred qualifications:
    - 5+ years of experience in a fintech startup
    - 5+ years of experience in Java, Kafka, Python, and PostgreSQL
    - 5+ years of experience working with WebSockets (e.g., RxStomp or Socket.io)
    - 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    - 2+ years of experience shipping and optimizing Rust applications
    - Demonstrated experience keeping critical systems online
    - Demonstrated creativity and resourcefulness under pressure
    - Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health insurance, vision, dental.
    To try the product, go to *************************
    $150k yearly 2d ago
  • Data Engineer

    Company

    Data scientist job in Fort Lee, NJ

    The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.

    Responsibilities
    - Analyze, structure, and interpret raw data.
    - Build and maintain datasets for business use.
    - Design and optimize database tables, schemas, and data structures.
    - Enhance data accuracy, consistency, and overall efficiency.
    - Develop views, functions, and stored procedures.
    - Write efficient SQL queries to support application integration.
    - Create database triggers to support automation processes.
    - Oversee data quality, integrity, and database security.
    - Translate complex data into clear, actionable insights.
    - Collaborate with cross-functional teams on multiple projects.
    - Present data through graphs, infographics, dashboards, and other visualization methods.
    - Define and track KPIs to measure the impact of business decisions.
    - Prepare reports and presentations for management based on analytical findings.
    - Conduct daily system maintenance and troubleshoot issues across all platforms.
    - Perform additional ad hoc analysis and tasks as needed.

    Qualifications
    - Bachelor's degree in Information Technology or a relevant field.
    - 4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
    - Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
    - Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
    - Excellent written, verbal, and interpersonal communication skills.
    - Ability to manage multiple tasks in a fast-paced and evolving environment.
    - Strong work ethic, professionalism, and integrity.
    - Advanced proficiency in Microsoft Office applications.
    $93k-132k yearly est. 5d ago
  • Azure Data Engineer

    Programmers.Io

    Data scientist job in Weehawken, NJ

    - Expert-level skills writing and optimizing complex SQL
    - Experience with complex data modelling, ETL design, and using large databases in a business environment
    - Experience with building data pipelines and applications to stream and process datasets at low latencies
    - Fluent with Big Data technologies like Spark, Kafka, and Hive
    - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    - Designing and building data pipelines using API ingestion and streaming ingestion methods
    - Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    - Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    - Thorough understanding of Azure and AWS cloud infrastructure offerings
    - Working knowledge of Python is desirable
    - Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    - Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    - Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    - Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    - Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    - Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    - Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    - Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best Regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 2d ago
  • Lead Data Engineer

    APN Consulting, Inc.

    Data scientist job in New York, NY

    Job title: Lead Software Engineer
    Duration: Full-time / Contract-to-Hire

    Role description:
    The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.

    Key Responsibilities:
    - Manage and support HR business applications, including problem resolution and issue ownership
    - Design and develop the ETL/ELT layer for HR data integration, and ensure data quality and consistency
    - Provide architecture solutions for data modeling, data warehousing, and data governance
    - Develop and maintain data ingestion processes using Informatica, Python, and related technologies
    - Support data analytics and data science initiatives with optimized data structures and AI/ML tools
    - Manage vendor products and their integrations with internal/external applications
    - Gather requirements and translate functional needs into technical specifications
    - Perform QA testing and impact analysis across the BI ecosystem
    - Maintain system documentation and knowledge repositories
    - Provide technical guidance and manage stakeholder communications

    Required Skills & Experience:
    - Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance work experience in the data and analytics space
    - Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming
    - Exposure to AI & ML technologies, with experience tuning models and building LLM integrations
    - Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics
    - Extensive database development experience in MS SQL Server/Oracle and SQL scripting
    - Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins
    - Proficiency in using collaboration tools like Confluence, SharePoint, and JIRA
    - Analytical skills to model business functions, processes, and dataflow within or between systems
    - Strong problem-solving skills to debug complex, time-critical production incidents
    - Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams
    - Experience with cloud data lake technologies such as Snowflake and knowledge of HR data models would be a plus
    $93k-133k yearly est. 5d ago
  • Market Data Engineer

    Harrington Starr

    Data scientist job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do
    - Own large-scale, time-sensitive market data capture and normalization pipelines
    - Improve internal data formats and downstream datasets used by research and quantitative teams
    - Partner closely with infrastructure to ensure reliability of packet-capture systems
    - Build robust validation, QA, and monitoring frameworks for new market data sources
    - Provide production support, troubleshoot issues, and drive quick, effective resolutions

    What You Bring
    - Experience building or maintaining large-scale ETL pipelines
    - Strong proficiency in Python and Bash, with familiarity in C++
    - Solid understanding of networking fundamentals
    - Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
    - Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)

    Bonus Skills
    - Experience working with binary market data protocols (ITCH, MDP3, etc.)
    - Understanding of high-performance filesystems and columnar storage formats
    $90k-123k yearly est. 3d ago
  • Data Engineer

    Aaratech

    Data scientist job in New York, NY

    Data Engineer (3-4 Years Experience)
    Location: Remote / On-site (based on client needs)
    Employment Type: Full-time (Contract or Contract-to-Hire)
    Experience Level: Mid-level (3-4 years)
    Company: Aaratech Inc

    🛑 Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.

    🔍 About Aaratech Inc
    Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.

    🎯 Position Overview
    We are seeking a Data Engineer with 3-4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.

    🛠️ Key Responsibilities
    - Design and develop scalable data pipelines to support batch and real-time processing
    - Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
    - Develop and optimize SQL queries for data analysis and warehousing
    - Build and maintain data warehousing solutions using platforms like Snowflake or BigQuery
    - Collaborate with business and technical teams to gather requirements and create accurate data models
    - Write reusable and maintainable Python code for data ingestion, processing, and automation
    - Ensure end-to-end data processing integrity, scalability, and performance
    - Follow best practices for data governance, security, and compliance

    ✅ Required Skills & Experience
    - 3-4 years of experience in data engineering or a similar role
    - Strong proficiency in SQL and Python
    - Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
    - Solid understanding of data warehousing concepts and architecture
    - Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
    - Proven experience in data modeling and data schema design
    - Exposure to data processing frameworks and performance optimization techniques
    - Familiarity with cloud platforms like AWS, GCP, or Azure

    ⭐ Nice to Have
    - Experience with streaming data pipelines (e.g., Kafka, Kinesis)
    - Exposure to CI/CD practices in data development
    - Prior work in consulting or multi-client environments
    - Understanding of data quality frameworks and monitoring strategies
    $90k-123k yearly est. 1d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data scientist job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.

    In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.

    To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 2d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group

    Data scientist job in New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: $125-190k plus bonus

    Responsibilities:
    - Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
    - Manage web scraping requests, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrapes, validation, and entitlement to users
    - Field questions from users about the scrapes and websites
    - Coordinate with Compliance on approvals and terms-of-use (TOU) reviews
    - Some experience building data pipelines on the AWS platform utilizing existing tools like cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
    - Normalize/standardize vendor data and firm data for firm consumption
    - Implement data quality checks to ensure the reliability and accuracy of scraped data
    - Coordinate with internal teams on delivery, access, requests, and support
    - Promote data engineering best practices

    Required Skills and Qualifications:
    - Bachelor's degree in computer science, engineering, mathematics, or a related field
    - 2-5 years of experience in a similar role
    - Prior buy-side experience is strongly preferred (multi-strat/hedge funds)
    - Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems
    - AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
    - Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.)
    - Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools
    - Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.)
    - Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
    - Strong communication skills to work with stakeholders across technology, investment, and operations teams
    $86k-120k yearly est. 3d ago
  • Lead Data Engineer

    Themesoft Inc.

    Data scientist job in Roseland, NJ

    Job Title: Lead Data Engineer
    Hybrid Role: 3 times/week
    Type: 12-month contract (rolling/extendable)
    Work Authorization: Candidates must be authorized to work in the U.S. without current or future sponsorship requirements.

    Must-haves:
    - AWS
    - Databricks
    - Lead experience (this can be supplemented for a staff-level role as well)
    - Python
    - PySpark
    - Contact center experience is a nice-to-have

    Job Description:
    As a Lead Data Engineer, you will spearhead the design and delivery of a data hub/marketplace aimed at providing curated client service data to internal data consumers, including analysts, data scientists, analytic content authors, downstream applications, and data warehouses. You will develop a service data hub solution that enables internal data consumers to create and maintain data integration workflows, manage subscriptions, and access content to understand data meaning and lineage. You will design and maintain enterprise data models for contact-center-oriented data lakes, warehouses, and analytic models (relational, OLAP/dimensional, columnar, etc.). You will collaborate with source system owners to define integration rules and data acquisition options (streaming, replication, batch, etc.). You will work with data engineers to define workflows and data quality monitors. You will perform detailed data analysis to understand the content and viability of data sources to meet desired use cases, and help define and maintain the enterprise data taxonomy and data catalog. This role requires clear, compelling, and influential communication skills. You will mentor developers and collaborate with peer architects and developers on other teams.

    To succeed in this role:
    - Ability to define and design complex data integration solutions with general direction and stakeholder access.
    - Capability to work independently and as part of a global, multi-faceted data warehousing and analytics team.
    - Advanced knowledge of cloud-based data engineering and data warehousing solutions, especially AWS, Databricks, and/or Snowflake.
    - Highly skilled in RDBMS platforms such as Oracle and SQL Server.
    - Familiarity with NoSQL DB platforms like MongoDB.
    - Understanding of data modeling and data engineering, including SQL and Python.
    - Strong understanding of data quality, compliance, governance, and security.
    - Proficiency in languages such as Python, SQL, and PySpark.
    - Experience in building data ingestion pipelines for structured and unstructured data for storage and optimal retrieval.
    - Ability to design and develop scalable data pipelines.
    - Knowledge of cloud-based and on-prem contact center technologies such as Salesforce.com, ServiceNow, Oracle CRM, Genesys Cloud, Genesys InfoMart, Calabrio Voice Recording, Nuance Voice Biometrics, IBM Chatbot, etc., is highly desirable.
    - Experience with code repository and project tools such as GitHub, JIRA, and Confluence.
    - Working experience with the CI/CD (Continuous Integration & Continuous Deployment) process, with hands-on expertise in Jenkins, Terraform, Splunk, and Dynatrace.
    - Highly innovative with an aptitude for foresight, systems thinking, and design thinking, with a bias towards simplifying processes.
    - Detail-oriented with strong analytical, problem-solving, and organizational skills.
    - Ability to clearly communicate with both technical and business teams.
    - Knowledge of Informatica PowerCenter, Data Quality, and Data Catalog is a plus.
    - Knowledge of Agile development methodologies is a plus.
    - A Databricks Data Engineer Associate certification is a plus but not mandatory.

    Data Engineer Requirements:
    - Bachelor's degree in computer science, information technology, or a similar field.
    - 8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies.
    - Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features.
    - Knowledge of mathematical foundations and statistical analysis.
    - Strong interpersonal skills.
    - Excellent communication and presentation skills.
    - Advanced troubleshooting skills.

    Regards,
    Purnima Pobbathy
    Senior Technical Recruiter
    ************ | ********************* | Themesoft Inc
    $78k-106k yearly est. 4d ago
  • Azure Data Engineer

    Sharp Decisions

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese bank
    Experience Level: Senior (10+ years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 2d ago
  • Senior Data Engineer

    Vysystems

    Data scientist job in Jersey City, NJ

    Hi #Connections, we have a job opening for:

    Role: Senior Data Engineer with Databricks and Python
    Locations: Jersey City, NJ & Wilmington, DE (Day 1 onsite; need local to Texas & Delaware only)

    Please share suitable resumes to ******************* & *****************************************

    Job Description:

    Key Responsibilities:
    - Design and build ETL/ELT pipelines using Databricks and PySpark
    - Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas)
    - Optimize data workflows for performance, scalability, and cost
    - Work with cloud platforms (Azure/AWS/GCP) for storage, compute, and orchestration
    - Ensure data quality, reliability, and security across pipelines
    - Collaborate with cross-functional teams (Data Science, BI, Product)
    - Write clean, reusable code and follow engineering best practices
    - Troubleshoot issues in production data pipelines

    Required Skills:
    - Strong hands-on skills in Databricks, PySpark, and SQL
    - Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines
    - Solid understanding of Delta Lake and Lakehouse architecture
    - Experience with at least one cloud platform (Azure preferred)
    - Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)

    Nice to Have:
    - Experience with CI/CD for data pipelines
    - Knowledge of data governance tools (Unity Catalog or similar)
    - Exposure to ML data preparation pipelines

    Soft Skills:
    - Strong communication and documentation skills
    - Ability to work independently and mentor others
    - Problem-solver with a focus on delivering business value

    Please attach an updated resume and include the following details:
    1. Years of experience
    2. Visa status
    3. Current location
    4. LinkedIn ID

    Thanks & Regards,
    Ramkumar R. || Sr. Technical Recruiter
    Email: *******************
    LinkedIn ID: *****************************************
    4701 Patrick Henry Drive, Building 16, Santa Clara, CA 95054, USA
    $82k-112k yearly est. 1d ago
  • Senior Data Engineer (Snowflake)

    Epic Placements

    Data scientist job in Parsippany-Troy Hills, NJ

    Senior Data Engineer (Snowflake & Python)
    1-Year Contract | $60/hour + benefit options
    Hybrid: On-site a few days per month (local candidates only)

    Work Authorization Requirement
    You must be authorized to work for any employer as a W-2 employee; this is required for this role. This position is W-2 only: no C2C, no third-party submissions, and no sponsorship will be considered.

    Overview
    We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake. Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W-2 employment requirement.

    What You'll Do
    - Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
    - Participate across the full software development lifecycle: planning, requirements, development, testing, and QA
    - Partner closely with engineering and data teams to identify and implement optimal technical solutions
    - Build and maintain high-performance, scalable data pipelines and data warehouse architectures
    - Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
    - Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
    - Manage deliverables and priorities effectively in a fast-moving environment
    - Contribute to data governance practices, including metadata management and data lineage
    - Support analytics and reporting use cases leveraging advanced SQL and analytical functions

    Required Skills & Experience
    - 8+ years of experience designing and developing data solutions in an enterprise environment
    - 5+ years of hands-on Snowflake experience
    - Strong hands-on development skills with SQL and Python
    - Proven experience designing and developing data warehouses in Snowflake
    - Ability to diagnose, optimize, and tune SQL queries
    - Experience with Azure data frameworks (e.g., Azure Data Factory)
    - Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
    - Solid understanding of metadata management and data lineage
    - Hands-on experience with SQL analytical functions
    - Working knowledge of shell scripting and Java scripting
    - Experience using Git, Confluence, and Jira
    - Strong problem-solving and troubleshooting skills
    - Collaborative mindset with excellent communication skills

    Nice to Have
    - Experience supporting pharma industry data
    - Exposure to omni-channel data environments

    Why This Opportunity
    - $60/hour W-2 on a long-term 1-year contract
    - Benefit options available
    - Hybrid structure with limited on-site requirement
    - High-impact role supporting enterprise data initiatives
    - Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp

    This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
    $60 hourly 1d ago
  • Data Engineer

    The Judge Group

    Data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer
    Must-haves: Python, PySpark, AWS (ECS, Glue, Lambda, S3)
    Nice-to-haves: Java, Spark, React JS
    Interview process: 2 rounds; the 2nd will be on-site

    You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    - Supports review of controls to ensure sufficient protection of enterprise data.
    - Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    - Updates logical or physical data models based on new use cases.
    - Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    - Adds to team culture of diversity, opportunity, inclusion, and respect.
    - Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines (and infrastructure); leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility to analysts and data scientists; ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.

    Required qualifications, capabilities, and skills:
    - Formal training or certification on data engineering concepts and 2+ years of applied experience.
    - Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and a working understanding of NoSQL databases.
    - Experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis.
    - Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark.
    - Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
    - Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks.
    - Advanced proficiency in leveraging GenAI models from Anthropic (or OpenAI, or Google) using APIs/SDKs.
    - Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, DynamoDB, MongoDB, or similar.
    - Advanced proficiency in a cloud data warehouse such as Snowflake or AWS Redshift.
    - Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar.
    - Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batching, or streaming; one or more data modeling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools.

    Preferred qualifications, capabilities, and skills:
    - Knowledge of data governance and security best practices.
    - Experience in carrying out data analysis to support business insights.
    - Strong Python and Spark.
    $79k-111k yearly est. 3d ago
  • Senior Data Engineer

    Apexon

    Data scientist job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, along with deep expertise in BFSI, healthcare, and life sciences, to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

    Job Description
    - Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
    - Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
    - Work in tandem with our engineering team to identify and implement the most optimal solutions
    - Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
    - Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
    - Able to manage deliverables in fast-paced environments

    Areas of Expertise
    - At least 10 years of experience designing and developing data solutions in an enterprise environment
    - At least 5 years of experience on the Snowflake platform
    - Strong hands-on SQL and Python development
    - Experience with designing and developing data warehouses in Snowflake
    - A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark and Scala
    - Strong hands-on experience with orchestration tools, e.g., Airflow, Informatica, Automic
    - Good understanding of metadata and data lineage
    - Hands-on knowledge of SQL analytical functions
    - Strong knowledge and hands-on experience in shell scripting and Java scripting
    - Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
    - Good understanding of and exposure to Git, Confluence, and Jira
    - Good problem-solving and troubleshooting skills
    - Team player with a collaborative approach and excellent communication skills

    Our Commitment to Diversity & Inclusion:
    Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.

    You can read our Job Applicant Privacy Policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 5d ago
  • Data Engineer

    Haptiq

    Data scientist job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS (Platform as a Service), the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS (Software as a Service), a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C (Solutions and Consulting Suite), modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.

    The Opportunity
    As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.

    Responsibilities and Duties
    - Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
    - Ensure consistent data hygiene, normalization, and enrichment across source systems.
    - Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
    - Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
    - Own the documentation of data schemas, definitions, lineage, and data quality controls.
    - Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
    - Monitor pipeline performance and proactively resolve data discrepancies or failures.
    - Contribute to architectural decisions related to internal data infrastructure and tools.

    Requirements
    - 3-5 years of experience as a data engineer, analytics engineer, or in a similar role.
    - Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
    - Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
    - Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
    - Proficiency in Python or another scripting language for data manipulation.
    - Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
    - Strong understanding of data governance, documentation, and schema management.
    - Excellent communication skills and ability to work cross-functionally.

    Benefits
    - Flexible work arrangements (including hybrid mode)
    - Generous Paid Time Off (PTO) policy
    - Comprehensive benefits package (medical/dental/vision/disability/life)
    - Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
    - 401(k) retirement plan
    - Access to HSA-compatible plans
    - Pre-tax commuter benefits
    - Employee Assistance Program (EAP)
    - Opportunities for professional growth and development
    - A supportive, dynamic, and inclusive work environment

    Why Join Us?
    We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We work hard, but we also choose to have fun while doing it.

    The compensation range for this role is $75,000 to $80,000 USD.
    $75k-80k yearly 4d ago
  • Data Engineer

    NeenOpal Inc.

    Data scientist job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.

    Role Description
    This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.

    Key Responsibilities
    - Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
    - Data Integration: Integrate and transform data using industry-standard tools, including AWS services (AWS Glue, Data Pipeline, Redshift, and S3) and Azure services (Azure Data Factory, Synapse Analytics, and Blob Storage).
    - Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
    - Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
    - Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
    - Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
    - Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.

    Required Skills and Experience
    - Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
    - Integration: Experience integrating data via RESTful / GraphQL APIs.
    - Programming: Proficient in Python for ETL automation and SQL for database management.
    - Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
    - Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
    - Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
    - Authorization: Must have valid work authorization in the United States.

    Salary Range: $65,000-$80,000 per year

    Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.

    Equal Opportunity Employer
    NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
    $65k-80k yearly 4d ago

Learn more about data scientist jobs

How much does a data scientist earn in Paramus, NJ?

The average data scientist in Paramus, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Paramus, NJ

$90,000

What are the biggest employers of Data Scientists in Paramus, NJ?

The biggest employers of Data Scientists in Paramus, NJ are:
  1. Virtusa
  2. Mindlance
  3. KPMG