Data scientist jobs in Point Pleasant, NJ

1,013 jobs
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    Data scientist job in New York, NY

    NYC, NY / Hybrid. 12+ month project: leveraging Llama to extract cybersecurity insights from unstructured data in the client's ticketing system. Must have strong experience with: Llama, Python, Hadoop, MCP, and Machine Learning (ML). They need a strong developer using Llama and Hadoop (this is where the data sits), with MCP experience. They have various ways to pull the data out of their tickets, but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines. Thanks and Regards! Lavkesh Dwivedi ************************ Amtex System Inc. 28 Liberty Street, 6th Floor | New York, NY 10005 ************ ********************
    $78k-104k yearly est. 4d ago
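The Llama-on-tickets work described above boils down to prompting a model over unstructured ticket text and parsing structured fields back out. A minimal sketch of that extraction step is below; the prompt template, the JSON field names, and the canned reply are all assumptions, and the actual model call (however the client hosts Llama) is deliberately left out:

```python
import json

def build_prompt(ticket_text: str) -> str:
    # Illustrative template; real field names would come from the client's schema.
    return (
        "Extract cybersecurity fields from the ticket below.\n"
        'Reply with JSON keys: "severity", "attack_type", "affected_system".\n\n'
        f"Ticket:\n{ticket_text}\n"
    )

def parse_reply(raw: str) -> dict:
    # Models often wrap JSON in prose; take the outermost {...} span.
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object in model reply")
    return json.loads(raw[start : end + 1])

# Canned reply standing in for a real Llama call:
reply = 'Here you go: {"severity": "high", "attack_type": "phishing", "affected_system": "mail"}'
fields = parse_reply(reply)
```

A real pipeline would read ticket text out of Hadoop, send `build_prompt(...)` to the model, and persist the parsed fields; `parse_reply` tolerates prose around the JSON, which Llama-class models often emit.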
  • RCI-GRD-845-1 Senior Data Scientist (Agentic AI) - NO C2C

    Rangam (4.3 company rating)

    Data scientist job in New York, NY

    Please apply here: ************************************************************************************* Senior Data Scientist, Agentic AI. This is a potential contract-to-hire role. About the Role: Our client is transforming into a modern, data-driven insurance company. As part of this journey, we're looking for a Senior Data Scientist to help build and scale advanced Agentic AI and LLM-based solutions that automate workflows, improve decision-making, and enhance customer experience. You'll work closely with senior leaders and cross-functional teams to design AI systems that drive measurable business impact. This is a high-visibility role within the company's Data Science Lab (DSL) - a hub for innovation, rapid testing, and operationalizing AI solutions across the enterprise. What You'll Do: Design and deploy Agentic AI solutions to automate complex business processes. Operationalize LLMs and Generative AI for unstructured data (contracts, claims, medical records, customer interactions). Build intelligent agents and reasoning systems that integrate with core business platforms. Partner with data engineering and AIOps teams to scale AI models from prototype to production. Translate AI research into real-world solutions for underwriting, claims, customer service, and risk assessment. Define use cases, measure ROI, and collaborate with business stakeholders. Mentor junior data scientists and help standardize AI/ML frameworks across the organization. What You Bring: PhD + 2 years OR Master's + 4 years of relevant AI/ML experience. 3+ years of hands-on experience in AI model development. Strong foundation in probability, statistics, and machine learning. Proficiency in Python, PyTorch, TensorFlow, and LangGraph. Proven experience deploying scalable AI solutions. Excellent communication skills and the ability to work cross-functionally with Product, Engineering, and Business teams. Leadership experience in mentoring and guiding data scientists.
    $107k-148k yearly est. 1d ago
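Stripped of frameworks, the "intelligent agents and reasoning systems" the Rangam posting describes follow an observe-decide-act loop. The sketch below shows that loop in plain Python; the claim-lookup tool and the rule-based `policy()` are illustrative stand-ins for an LLM planner (e.g., one built with LangGraph):

```python
# Minimal agent loop. The rule-based policy() stands in for an LLM planner;
# the tool name and the claim-lookup stub are invented for illustration.

def lookup_claim(claim_id: str) -> str:
    return f"claim {claim_id}: status=open, amount=1200"

TOOLS = {"lookup_claim": lookup_claim}

def policy(question: str, observations: list) -> tuple:
    # A real agent would prompt an LLM with the question plus observations here.
    if not observations:
        return ("call", "lookup_claim", "C-42")
    return ("finish", f"Based on {observations[-1]}", None)

def run_agent(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        step = policy(question, observations)
        if step[0] == "finish":
            return step[1]
        observations.append(TOOLS[step[1]](step[2]))
    return "step budget exhausted"
```

The `max_steps` budget is the usual guardrail against a planner that never decides to finish.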
  • Senior Data Scientist Agentic AI

    PRI Technology (4.1 company rating)

    Data scientist job in New York, NY

    My name is Bill Stevens, and I have a new three-month-plus contract-to-hire Senior Data Scientist (Agentic AI) opportunity available for a major firm with offices in Midtown Manhattan (West Side) and Holmdel, New Jersey. Please review my specification below; I am available at any time to speak with you, so please feel free to call me. The work-week schedule will be hybrid: three days a week in either of the firm's offices and two days remote. The onsite location will be chosen by the candidate. The ideal candidate should possess a green card or U.S. citizenship. No visa entanglements and no H-1B holding-company submittals. The firm's Data & AI team spearheads a culture of intelligence and automation across the enterprise, creating business value from advanced data and AI solutions. Their team includes data scientists, engineers, analysts, and product leaders working together to deliver AI-driven products that power growth, improve risk management, and elevate customer experience. The firm created the Data Science Lab (DSL) in response to emerging technologies, evolving consumer needs, and rapid advances in AI. The DSL expedites the transition to data-driven decision-making and fosters innovation by rapidly testing, scaling, and operationalizing state-of-the-art AI. We are seeking a Senior Data Scientist/Engineer, Agentic AI: an experienced individual contributor with deep expertise in AI/ML and a track record of turning advanced research into practical, impactful enterprise solutions. This role focuses on building, deploying, and scaling agentic AI systems, large language models, and intelligent automation solutions that reshape how the firm operates, serves customers, and drives growth. You'll collaborate directly with senior executives on high-visibility projects to bring next-generation AI to life across the firm's products and services.
Key Responsibilities: Design and deploy Agentic AI solutions to automate complex business workflows, enhance decision-making, and improve customer and employee experiences. Operationalize cutting-edge LLMs and generative AI to process and understand unstructured data such as contracts, claims, medical records, and customer interactions. Build autonomous agents and multi-step reasoning systems that integrate with the firm's core platforms to deliver measurable business impact. Partner with data engineers and AIOps teams to ensure AI models are production-ready, scalable, and robust, from prototype to enterprise deployment. Translate research in agentic AI, reinforcement learning, and reasoning into practical solutions that support underwriting, claims automation, customer servicing, and risk assessment. Collaborate with product owners, engineers, and business leaders to define use cases, design solutions, and measure ROI. Contribute to the Data Science Lab by establishing repeatable frameworks for developing, testing, and deploying agentic AI solutions. Mentor junior data scientists and contribute to the standardization of AI/ML practices, tools, and frameworks across the firm. You are: Passionate about pushing the frontier of AI while applying it to solve real-world business problems. Excited by the potential of agentic AI, autonomous systems, and LLM-based solutions to transform industries. A hands-on builder who thrives on seeing AI solutions move from proof-of-concept to real-world deployment. Comfortable working in multi-disciplinary teams and engaging with senior business leaders to align AI solutions with enterprise goals. 
You have: PhD with 2+ years of experience, OR a Master's degree with 4+ years of experience, in Statistics, Computer Science, Engineering, Applied Mathematics, or a related field. 3+ years of hands-on AI modeling/development experience. Strong theoretical foundations in probability & statistics. Strong programming skills in Python, including PyTorch, TensorFlow, and LangGraph. Solid background in machine learning algorithms, optimization, and statistical modeling. Excellent communication skills and the ability to collaborate cross-functionally with Product, Engineering, and other disciplines at both the leadership and hands-on level. Excellent analytical and problem-solving abilities with superb attention to detail. Proven experience providing technical leadership and mentoring to data scientists, plus strong management skills with the ability to monitor and track performance for enterprise success. This position pays $150.00 per hour on a W-2 hourly basis or $175.00 per hour on a Corp basis. The Corp rate is for independent contractors only, not third-party firms. No visa entanglements and no H-1B holding companies. The interview process will include an initial phone or virtual interview screening. Please let me know your interest in this position and your availability to interview and start, along with a copy of your recent resume, or feel free to call me at any time with any questions. Regards, Bill Stevens Senior Technical Recruiter PRI Technology Denville, New Jersey 07834 ************** ******************************
    $107k-150k yearly est. 5d ago
  • Senior Data Scientist

    Entech (4.0 company rating)

    Data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD). 1-Year Contract - Hybrid - Plainfield, NJ. We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes. What You'll Do: Apply machine learning & advanced analytics to pharma commercial data. Deliver insights on market dynamics, physician prescribing, and patient behavior. Partner with R&D, medical affairs, and commercial teams to guide strategy. Build predictive models for sales effectiveness, adherence, and market forecasting. What We're Looking For: PhD in Data Science, Statistics, Computer Science, Bioinformatics, or a related field. 5+ years of pharma or healthcare analytics experience. Strong skills in enterprise-class software stacks and cloud computing. Deep knowledge of pharma market dynamics & healthcare systems. Excellent communication skills to translate data into strategy.
    $84k-120k yearly est. 1d ago
  • Data Engineer

    Brooksource (4.1 company rating)

    Data scientist job in New York, NY

    Data Engineer - Data Migration Project 6-Month Contract (ASAP Start) Hybrid - Manhattan, NY (3 days/week) We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability. Minimum Qualifications: Advanced proficiency in SQL, Python, and SOQL Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip Experience building and optimizing ETL workflows and pipelines Familiarity with Tableau for analytics and visualization Strong understanding of data migration and transformation best practices Ability to identify and resolve discrepancies between data environments Excellent analytical, troubleshooting, and communication skills Responsibilities: Modify and migrate existing workflows and pipelines from Redshift to Databricks. Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics. Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences. Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy. Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance. Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems. 
Apply a “build from scratch” mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations. Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team. What's in it for you? Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization. Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows. Collaborative environment with visibility across analytics, operations, and engineering functions. Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes. EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $101k-140k yearly est. 2d ago
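Migrating the 200+ Redshift artifacts the Brooksource posting mentions typically involves translating Redshift-specific SQL into Databricks SQL. The sketch below shows the idea with two rewrite rules; both rules are illustrative and far from complete (a real migration needs per-artifact review and validation of results against both engines):

```python
import re

# Illustrative, partial translation of Redshift-specific SQL to Databricks SQL.
# Only two rules are shown; GETDATE() and NVL() are Redshift idioms whose
# Databricks equivalents are current_timestamp() and coalesce().
RULES = [
    (re.compile(r"\bGETDATE\(\)", re.I), "current_timestamp()"),
    (re.compile(r"\bNVL\(", re.I), "coalesce("),
]

def to_databricks_sql(redshift_sql: str) -> str:
    out = redshift_sql
    for pattern, replacement in RULES:
        out = pattern.sub(replacement, out)
    return out
```

Rule tables like this are useful for a first pass; the posting's "build from scratch" mindset is the reminder that many artifacts deserve a redesign rather than a mechanical rewrite.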
  • Senior Data Engineer

    Godel Terminal

    Data scientist job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time, they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you. Minimum qualifications: Able to work out of our Manhattan office a minimum of 4 days a week. 5+ years of experience in a financial or startup environment. 5+ years of experience working on live data as well as historical data. 3+ years of experience in Java, Python, and SQL. Experience managing multiple production ETL pipelines that reliably store and validate financial data. Experience launching, scaling, and improving backend services in cloud environments. Experience migrating critical data across different databases. Experience owning and improving critical data infrastructure. Experience teaching best practices to junior developers. Preferred qualifications: 5+ years of experience in a fintech startup. 5+ years of experience in Java, Kafka, Python, PostgreSQL. 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io. 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode. 2+ years of experience shipping and optimizing Rust applications. Demonstrated experience keeping critical systems online. Demonstrated creativity and resourcefulness under pressure. Experience with corporate debt/bonds and commodities data. Salary range begins at $150,000 and increases with experience. Benefits: Health Insurance, Vision, Dental. To try the product, go to *************************
    $150k yearly 1d ago
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal. Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities. Qualifications: Strong proficiency in Data Engineering and Data Modeling. Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes. Strong Python background. Expertise in Extract, Transform, Load (ETL) processes and tools. Experience in designing, managing, and optimizing Data Warehousing solutions.
    $91k-123k yearly est. 2d ago
  • Lead Data Engineer

    APN Consulting, Inc. (4.5 company rating)

    Data scientist job in New York, NY

    Job title: Lead Software Engineer. Duration: Full-time / Contract to Hire. Role description: The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on the HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives. Key Responsibilities: Manage and support HR business applications, including problem resolution and issue ownership. Design and develop the ETL/ELT layer for HR data integration and ensure data quality and consistency. Provide architecture solutions for Data Modeling, Data Warehousing, and Data Governance. Develop and maintain data ingestion processes using Informatica, Python, and related technologies. Support data analytics and data science initiatives with optimized data structures and AI/ML tools. Manage vendor products and their integrations with internal/external applications. Gather requirements and translate functional needs into technical specifications. Perform QA testing and impact analysis across the BI ecosystem. Maintain system documentation and knowledge repositories. Provide technical guidance and manage stakeholder communications. Required Skills & Experience: Bachelor's degree in computer science or engineering with 4+ years of delivery and maintenance work experience in the Data and Analytics space. Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL tools, advanced SQL, and Python programming. Exposure to AI & ML technologies and experience tuning models and building LLM integrations. Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics. Extensive database development experience in MS SQL Server/Oracle and SQL scripting. Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins. Proficiency in collaboration tools like Confluence, SharePoint, and JIRA. Analytical skills to model business functions, processes, and dataflow within or between systems. Strong problem-solving skills to debug complex, time-critical production incidents. Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams. Experience with Cloud Data Lake technologies such as Snowflake and knowledge of HR data models would be a plus.
    $93k-133k yearly est. 4d ago
  • Data Engineer

    Ztek Consulting (4.3 company rating)

    Data scientist job in Hamilton, NJ

    Key Responsibilities: Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory. Integrate and process Bloomberg market data feeds and files into trading or analytics platforms. Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion. Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL. Manage FTP/SFTP file transfers between internal systems and external vendors. Ensure data quality, completeness, and timeliness for downstream trading and reporting systems. Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows. Required Skills & Experience: 10+ years of experience in data engineering or production support within financial services or trading environments. Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric. Strong Python and SQL programming skills. Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP). Experience with Git, CI/CD pipelines, and Azure DevOps. Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling. Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools). Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments. Excellent communication, problem-solving, and stakeholder management skills.
    $89k-125k yearly est. 4d ago
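A recurring chore in the feed-support work above is confirming that vendor files (e.g., Bloomberg SFTP drops) arrived complete before batch jobs run. A minimal sketch of that check; the directory layout and file names are assumptions:

```python
from pathlib import Path

def missing_or_empty(drop_dir: str, expected: list) -> list:
    """Return names of expected feed files that have not arrived or arrived empty."""
    problems = []
    for name in expected:
        f = Path(drop_dir) / name
        if not f.is_file() or f.stat().st_size == 0:
            problems.append(name)
    return problems
```

A production version would also verify checksums, arrival windows, and row counts before releasing downstream jobs.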
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data scientist job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups. In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications. To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 1d ago
  • Market Data Engineer

    Harrington Starr

    Data scientist job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide. What You'll Do Own large-scale, time-sensitive market data capture + normalization pipelines Improve internal data formats and downstream datasets used by research and quantitative teams Partner closely with infrastructure to ensure reliability of packet-capture systems Build robust validation, QA, and monitoring frameworks for new market data sources Provide production support, troubleshoot issues, and drive quick, effective resolutions What You Bring Experience building or maintaining large-scale ETL pipelines Strong proficiency in Python + Bash, with familiarity in C++ Solid understanding of networking fundamentals Experience with workflow/orchestration tools (Airflow, Luigi, Dagster) Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.) Bonus Skills Experience working with binary market data protocols (ITCH, MDP3, etc.) Understanding of high-performance filesystems and columnar storage formats
    $90k-123k yearly est. 2d ago
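Normalizing binary market data, as the Harrington Starr posting describes, starts with decoding fixed-layout messages. The sketch below decodes a simplified trade message; the layout is invented for illustration and is NOT real ITCH or MDP3 (those specs define many message types, session framing, and different field widths):

```python
import struct

# Simplified, hypothetical fixed-layout trade message (not a real protocol):
# 8-byte space-padded symbol, uint32 share count, uint64 price in 1e-4 units,
# all big-endian as exchange feeds commonly are.
TRADE = struct.Struct(">8sIQ")

def decode_trade(buf: bytes) -> dict:
    symbol, shares, price_e4 = TRADE.unpack(buf)
    return {
        "symbol": symbol.rstrip(b" ").decode(),
        "shares": shares,
        "price": price_e4 / 10_000,  # normalize fixed-point price to a float
    }

# Round-trip example: pack a message, then decode it.
raw = TRADE.pack(b"AAPL    ", 100, 1_923_400)
```

Keeping prices as fixed-point integers until the last step, as here, is the usual way these pipelines avoid accumulating float error during capture.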
  • AWS Data Engineer with Databricks || USC Only || W2 Only

    Ipivot

    Data scientist job in Princeton, NJ

    AWS Data Engineer with Databricks. Princeton, NJ - Hybrid - need locals or nearby. Duration: Long Term. This position is available only to U.S. citizens. Key Responsibilities: Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data. Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability. Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions. Document processes and mentor junior team members on best practices. Required Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda). Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
    $82k-112k yearly est. 2d ago
  • Data Engineer

    Gotham Technology Group (4.5 company rating)

    Data scientist job in New York, NY

    Our client is seeking a Data Engineer with hands-on experience in Web Scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves. Responsibilities Develop scalable Web Scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries. Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring. Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines. Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift. Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm. Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets. Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users. Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns. Required Qualifications Bachelor's degree in Computer Science, Engineering, Mathematics, or related field. 2-5 years of experience in a similar Data Engineering or Web Scraping role. Capital markets knowledge with familiarity across asset classes and experience supporting trading systems. Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift). 
Proficiency with modern Web Scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright). Strong Python programming skills and experience with SQL and NoSQL databases. Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus. Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
    $86k-120k yearly est. 2d ago
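The extraction stage of the scraping lifecycle described above can be sketched with nothing but the standard library; a production scraper would use Scrapy, BeautifulSoup, Selenium, or Playwright as the posting lists. The CSS class and sample HTML here are invented for illustration:

```python
from html.parser import HTMLParser

class PriceScraper(HTMLParser):
    """Collect the text of <span class="price"> elements from fetched HTML."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; match class="price" exactly.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self.in_price = False

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())

# Stand-in for a fetched page; real code would download and archive the raw HTML.
html = '<div><span class="price">19.99</span><span>x</span><span class="price">5.00</span></div>'
scraper = PriceScraper()
scraper.feed(html)
```

The rest of the lifecycle in the posting (intake, Terms-of-Use review, validation, entitlement, monitoring) wraps around this parsing core.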
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer Client: Major Japanese Bank Experience Level: Senior (10+ Years) The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices. Key Responsibilities: Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows. Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions. Ensure data security, compliance, lineage, and governance controls. Partner with architecture, data governance, and business teams to deliver high-quality data solutions. Troubleshoot performance issues and improve system efficiency. Required Skills: 10+ years of data engineering experience. Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL. Azure certifications strongly preferred. Strong SQL, Python, and cloud data architecture skills. Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 1d ago
  • Biostatistician

    Net2Source (N2S)

    Data scientist job in Rahway, NJ

    Join a Global Leader in Workforce Solutions - Net2Source Inc. Who We Are: Net2Source Inc. isn't just another staffing company - we're a powerhouse of innovation, connecting top talent with the right opportunities. Recognized for 300% growth in the past three years, we operate in 34 countries with a global team of 5,500+. Our mission? To bridge the talent gap with precision: Right Talent. Right Time. Right Place. Right Price. Title: Statistical Scientist. Duration: 12 Months (Start Date: First Week of January). Location: Rahway, NJ (Onsite/Hybrid Only). Rate: $65/hr on W2. Position Summary: We are seeking an experienced Statistical Scientist to support analytical method qualification, validation, and experimental design for the client's scientific and regulatory programs. The successful candidate will work closely with scientists to develop statistically sound protocols, contribute to method robustness and validation studies, and prepare reporting documentation for regulatory readiness. This position requires deep expertise in statistical methodologies and hands-on programming skills in SAS, R, and JMP. Key Responsibilities: • Collaborate with scientists to design experiments, develop study protocols, and establish acceptance criteria for analytical applications. • Support analytical method qualification and validation through statistical protocol development, analysis, and reporting. • Write memos and technical reports summarizing statistical analyses for internal and regulatory audiences. • Assist scientists in assessing protocol deviations and resolving investigation-related issues. • Coordinate with the Quality Audit group to finalize statistical reports for BLA (Biologics License Application) readiness. • Apply statistical modeling approaches to evaluate assay robustness and method reliability. • Support data integrity and ensure compliance with internal processes and regulatory expectations. Qualifications. Education (Required): • Ph.D. in Statistics, Biostatistics, Applied Statistics, or a related discipline with 3+ years of relevant experience, or • M.S. in Statistics/Applied Statistics with 6+ years of relevant experience. (BS/BA candidates will not be considered.) Required Skills: • Proficiency in SAS, R, and JMP. • Demonstrated experience in analytical method qualification and validation, including protocol writing and statistical execution. • Strong background in experimental design for analytical applications. • Ability to interpret and communicate statistical results clearly in both written and verbal formats. Preferred/Nice-to-Have: • Experience with mixed-effect modeling. • Experience with Bayesian analysis. • Proven ability to write statistical software/code to automate routine analyses. • Experience presenting complex statistical concepts to leadership. • Experience in predictive stability analysis. Why Work With Us? At Net2Source, we believe in more than just jobs - we build careers. We champion leadership at all levels, celebrate diverse perspectives, and empower our people to make an impact. Enjoy a collaborative environment where your ideas matter and your professional growth is our priority. Our Commitment to Inclusion & Equity: Net2Source is an equal opportunity employer dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all. Awards & Recognition: • America's Most Honored Businesses (Top 10%) • Fastest-Growing Staffing Firm by Staffing Industry Analysts • INC 5000 List for Eight Consecutive Years • Top 100 by Dallas Business Journal • Spirit of Alliance Award by Agile1. Ready to Level Up Your Career? Click Apply Now and let's make it happen.
    $65 hourly 4d ago
  • Data Engineer

    The Judge Group 4.7company rating

    Data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer
    Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
    Nice to Haves: Java, Spark, React JS
    Interview Process: 2 rounds; the 2nd will be on site

    You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.

    As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises on and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines and infrastructure; leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility to analysts and data scientists; ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.

    Required qualifications, capabilities, and skills:
    • Formal training or certification on data engineering concepts and 2+ years applied experience
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and a working understanding of NoSQL databases
    • Experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis
    • Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark
    • Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional)
    • Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, DynamoDB, MongoDB, or similar
    • Advanced proficiency in a cloud data warehouse such as Snowflake or AWS Redshift
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar
    • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools

    Preferred qualifications, capabilities, and skills:
    • Knowledge of data governance and security best practices
    • Experience carrying out data analysis to support business insights
    • Strong Python and Spark skills
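As a rough illustration of the SQL joins, aggregations, and curated views this role calls for, here is a minimal sketch using Python's built-in sqlite3 module; the table names, columns, and values are invented for the example, and a real data-lake deployment would use PySpark or a warehouse engine instead:

```python
import sqlite3

# In-memory database standing in for a curated data-lake layer.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE transactions (txn_id INTEGER, account_id INTEGER, amount REAL);
    INSERT INTO accounts VALUES (1, 'NJ'), (2, 'NY'), (3, 'NJ');
    INSERT INTO transactions VALUES
        (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0), (13, 3, 40.0);
""")

# A curated "view required by end users": per-region totals via join + aggregation.
conn.execute("""
    CREATE VIEW region_totals AS
    SELECT a.region, COUNT(*) AS txn_count, SUM(t.amount) AS total_amount
    FROM transactions t
    JOIN accounts a ON a.account_id = t.account_id
    GROUP BY a.region
""")

rows = dict((region, (count, total)) for region, count, total in conn.execute(
    "SELECT region, txn_count, total_amount FROM region_totals"))
print(rows)  # NJ: 3 transactions totaling 390.0; NY: 1 totaling 75.0
```

The same join-then-aggregate shape translates directly to a PySpark DataFrame pipeline, with the view materialized as a table for downstream consumers.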
    $79k-111k yearly est. 2d ago
  • Data Engineer

    Haptiq

    Data scientist job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.

    Our integrated ecosystem includes: PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.

    The Opportunity
    As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.

    Responsibilities and Duties
    • Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
    • Ensure consistent data hygiene, normalization, and enrichment across source systems.
    • Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
    • Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
    • Own the documentation of data schemas, definitions, lineage, and data quality controls.
    • Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
    • Monitor pipeline performance and proactively resolve data discrepancies or failures.
    • Contribute to architectural decisions related to internal data infrastructure and tools.

    Requirements
    • 3-5 years of experience as a data engineer, analytics engineer, or similar role.
    • Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
    • Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
    • Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
    • Proficiency in Python or another scripting language for data manipulation.
    • Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
    • Strong understanding of data governance, documentation, and schema management.
    • Excellent communication skills and ability to work cross-functionally.

    Benefits
    • Flexible work arrangements (including hybrid mode)
    • Great Paid Time Off (PTO) policy
    • Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
    • Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
    • 401(k) retirement plan
    • Access to HSA-compatible plans
    • Pre-tax commuter benefits
    • Employee Assistance Program (EAP)
    • Opportunities for professional growth and development
    • A supportive, dynamic, and inclusive work environment

    Why Join Us?
    We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.

    The compensation range for this role is $75,000 to $80,000 USD.
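The consolidate-and-normalize pipeline work described above can be sketched as a toy extract-transform-load pass. This stdlib-only sketch invents two tiny source feeds with mismatched schemas (all field names and values are hypothetical); in practice the extract step would hit REST APIs and the load step would write to a warehouse via dbt or Airflow tasks:

```python
import csv
import io
import json

# Two "source systems" with inconsistent schemas (invented for the example).
RAW_CSV = """employee,dept,hours
Alice,ENG,40
Bob,eng,35
"""
RAW_JSON = '[{"name": "Cara", "department": "Finance", "hours_logged": 38}]'

def extract():
    """Pull records from both sources into one list of dicts."""
    rows = list(csv.DictReader(io.StringIO(RAW_CSV)))
    rows += json.loads(RAW_JSON)
    return rows

def transform(rows):
    """Normalize both source schemas onto one record shape."""
    out = []
    for r in rows:
        out.append({
            "name": r.get("employee") or r.get("name"),
            "department": (r.get("dept") or r.get("department")).strip().title(),
            "hours": float(r.get("hours") or r.get("hours_logged")),
        })
    return out

def load(records, warehouse):
    """Stand-in 'warehouse': a dict keyed by employee name."""
    for rec in records:
        warehouse[rec["name"]] = rec
    return warehouse

warehouse = load(transform(extract()), {})
print(sorted(warehouse))  # → ['Alice', 'Bob', 'Cara']
```

The transform step is where the "data hygiene, normalization, and enrichment" happens: here it reconciles field names and case-folds department codes so downstream reporting sees one schema.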
    $75k-80k yearly 3d ago
  • Senior Data Engineer - MDM

    Synechron 4.4company rating

    Data scientist job in Iselin, NJ

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge:
    We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.

    Additional Information
    The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role, if filled within Iselin, NJ, is $135K to $150K/year plus benefits (see below).

    Key Responsibilities:
    • Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
    • Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
    • Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
    • Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM) and provide objective recommendations aligned with business requirements.
    • Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
    • Implement data integration pipelines leveraging modern data engineering tools and practices.
    • Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
    • Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
    • Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
    • Ensure compliance with data governance, data privacy, and security standards.
    • Support CI/CD pipelines for continuous integration and deployment of data solutions.

    Qualifications:
    • 12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
    • Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
    • Strong functional knowledge of reference data sources and domain-specific data standards.
    • Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
    • Familiarity with CI/CD practices, tools, and automation pipelines.
    • Ability to work collaboratively across teams to deliver complex data solutions.
    • Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).

    Preferred Skills:
    • Familiarity with financial data models and regulatory requirements.
    • Experience with Azure cloud platforms.
    • Knowledge of data governance, data quality frameworks, and metadata management.

    We offer:
    • A highly competitive compensation and benefits package
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad
    • 10 days of paid annual leave (plus sick leave and national holidays)
    • Maternity & paternity leave plans
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
    • Retirement savings plans
    • A higher education certification policy
    • Commuter benefits (varies by region)
    • Extensive training opportunities focused on skills, substantive knowledge, and personal development
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5,000 curated courses
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms
    • A flat and approachable organization
    • A truly diverse, fun-loving, and global work culture

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

    All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-150k yearly 3d ago
  • Sr Data Modeler with Capital Markets/ Custody

    Ltimindtree

    Data scientist job in Jersey City, NJ

    LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************

    Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
    Work Location: Jersey City, NJ (Onsite, 5 days/week)

    Job Description:
    We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain. The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model. A software development background is not required.

    Key Responsibilities
    • Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
    • Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
    • Enhance or streamline the logical data model, ensuring it is fit for purpose, scalable, and aligned with business needs before conversion to a physical model.
    • Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
    • Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
    • Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
    • Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
    • Champion a detail-oriented and documentation-first culture within the data modeling team.

    Qualifications
    • Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
    • Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
    • Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
    • Strong skills in data analysis using SQL and/or other data analysis tools.
    • Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
    • Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
    • Ability to mentor and lead junior team members in data modeling best practices.
    • Passion for detail, documentation, and continuous improvement.
    • A software development background is not required.

    Preferred Skills
    • Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
    • Familiarity with capital markets business processes and data flows.
    • Knowledge of regulatory and compliance requirements in financial data management.
    • Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).

    Benefits and Perks:
    • Comprehensive medical plan covering medical, dental, and vision
    • Short-term and long-term disability coverage
    • 401(k) plan with company match
    • Life insurance
    • Vacation time, sick leave, paid holidays
    • Paid paternity and maternity leave

    LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
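The logical-to-physical transformation the role describes can be sketched in miniature. This stdlib-only sqlite3 example is purely illustrative - the entities, columns, and values are invented, and a real capital-markets model would span many more subject areas - but it shows the shape of the step: logical entities Instrument and Trade become physical tables, with a foreign key and an index added as physical-model decisions, then SQL validates the model:

```python
import sqlite3

# In-memory database standing in for the target physical schema.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    -- Logical entity "Instrument" (reference data) as a physical table.
    CREATE TABLE instrument (
        instrument_id INTEGER PRIMARY KEY,
        isin          TEXT NOT NULL UNIQUE,
        asset_class   TEXT NOT NULL
    );
    -- Logical entity "Trade", with its relationship made a foreign key.
    CREATE TABLE trade (
        trade_id      INTEGER PRIMARY KEY,
        instrument_id INTEGER NOT NULL REFERENCES instrument(instrument_id),
        quantity      REAL NOT NULL,
        trade_date    TEXT NOT NULL
    );
    -- Physical-model decision: index the join/filter column for performance.
    CREATE INDEX ix_trade_instrument ON trade(instrument_id);
""")

conn.execute("INSERT INTO instrument VALUES (1, 'US0378331005', 'Equity')")
conn.execute("INSERT INTO trade VALUES (100, 1, 500.0, '2025-01-15')")

# Validate a model assumption with SQL, as the responsibilities describe.
row = conn.execute("""
    SELECT i.isin, SUM(t.quantity) FROM trade t
    JOIN instrument i ON i.instrument_id = t.instrument_id
    GROUP BY i.isin
""").fetchone()
print(row)  # → ('US0378331005', 500.0)
```

The entity and attribute definitions, the foreign-key relationship, and the index choice are exactly the artifacts the documentation-first culture above would capture in ERDs and source-to-target mappings.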
    $79k-111k yearly est. 5d ago
  • Principal Data Scientist : Product to Market (P2M) Optimization

    The Gap 4.4company rating

    Data scientist job in New York, NY

    About Gap Inc.
    Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials. This simple idea - that we all deserve to belong, and on our own terms - is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team.

    About the Role
    Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization. The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels.

    What You'll Do
    * Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges.
    * Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals.
    * Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment.
    * Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility.
    * Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly.
    * Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights.
    * Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success.
    * Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP.
    * Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality.

    Who You Are
    * Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline.
    * 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains.
    * Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes.
    * Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation.
    * Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib.
    * Hands-on experience with enterprise platforms such as Databricks and cloud environments.
    * Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions.
    * Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences.
    * Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations.
    * Experience managing code quality, CI/CD pipelines, and GitHub-based workflows.

    Preferred Qualifications
    * Experience shaping and executing multi-year analytics strategies in retail or supply chain domains.
    * Proven ability to balance long-term innovation with short-term deliverables.
    * Background in agile product development and stakeholder alignment for enterprise-scale initiatives.

    Benefits at Gap Inc.
    * Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees.
    * One of the most competitive Paid Time Off plans in the industry.*
    * Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.*
    * Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.*
    * Employee stock purchase plan.*
    * Medical, dental, vision and life insurance.*
    * See more of the benefits we offer.
    *For eligible employees

    Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity.

    Salary Range: $201,700 - $267,300 USD
    Employee pay will vary based on factors such as qualifications, experience, skill level, competencies and work location. We will meet minimum wage or the minimum of the pay range (whichever is higher) based on city, county, and state requirements.
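The mixed-integer optimization work this role centers on can be sketched with a toy assortment problem: choose which products to stock (0/1 decisions) to maximize expected margin under a shelf-space budget. The sketch below brute-forces the tiny instance using only the standard library - every product name and number is invented for illustration - whereas at enterprise scale a solver such as Gurobi, OR-Tools, or Pyomo would replace the enumeration:

```python
from itertools import product as binary_choices

# product: (expected_margin, shelf_units) - values invented for the example
items = {
    "denim":  (120, 4),
    "tee":    (45, 1),
    "hoodie": (95, 3),
    "jacket": (150, 6),
}
BUDGET = 8  # shelf units available

best_value, best_pick = 0, ()
names = list(items)
# Enumerate every 0/1 stocking plan (2^n of them - fine for a toy instance).
for choice in binary_choices([0, 1], repeat=len(names)):
    picked = [n for n, c in zip(names, choice) if c]
    space = sum(items[n][1] for n in picked)
    value = sum(items[n][0] for n in picked)
    if space <= BUDGET and value > best_value:
        best_value, best_pick = value, tuple(picked)

print(best_pick, best_value)  # → ('denim', 'tee', 'hoodie') 260
```

This is the classic 0/1 knapsack structure underlying assortment and inventory decisions; real formulations add constraints (per-category minimums, vendor commitments, demand transfer) and hand the model to a MIP solver rather than enumerating.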
    $88k-128k yearly est. 24d ago

Learn more about data scientist jobs

How much does a data scientist earn in Point Pleasant, NJ?

The average data scientist in Point Pleasant, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Point Pleasant, NJ

$90,000