
Data scientist jobs in Perth Amboy, NJ

1,010 jobs
  • Sr. Data Scientist

    Apexon

    Data scientist job in Princeton, NJ

    Sr. Data Scientist - Remote - Full-time

    We're looking for a Sr. Data Scientist who can use data, advanced analytics, and modern AI/ML techniques to solve real business problems and deliver actionable insights. We need 12-15+ years of experience in solving complex business problems using data, analytics, and modern AI/ML technologies.

    Must Have: Strong programming skills in languages such as Python, R, or Scala, with a focus on machine learning and advanced analytics. Familiarity with statistical modeling, data-mining techniques, and predictive analytics. Experience handling large datasets and working with relational databases. Excellent problem-solving skills and the ability to recommend solutions quickly. Strong communication and collaboration skills. Experience with big data platforms (e.g., Hadoop, Spark). Knowledge of cloud-based analytics tools (AWS, Azure, GCP). Exposure to visualization tools like Tableau or Power BI.

    What You'll Do: Develop and deploy ML models and data-driven solutions (a minimal modeling sketch follows this listing). Apply statistical methods, predictive modeling, and GenAI/agentic AI frameworks. Work with large datasets, cloud platforms (AWS/Azure/GCP), and relational databases. Translate business needs into analytical solutions and present insights clearly.

    What You Bring: Master's degree in a quantitative field (CS, Engineering, Statistics, Mathematics, etc.). Strong Python skills (R/Scala a plus). Experience with ML, analytics, cloud deployment, and big data. Solid problem-solving ability and clear technical communication. Familiarity with traditional ML, modern AI, and GenAI approaches.
    $90k-125k yearly est. 4d ago
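
    The predictive-modeling stack this posting names (Python, ML, advanced analytics) typically reduces to a train/evaluate loop like the one below. This is a minimal illustrative sketch, not Apexon's actual pipeline; the synthetic data and model choice are assumptions.

    ```python
    # Minimal predictive-modeling sketch: train and evaluate a classifier.
    # Synthetic data stands in for the business dataset (an assumption).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.normal(size=(5_000, 10))               # 10 numeric features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy target

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = GradientBoostingClassifier().fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]
    print(f"holdout AUC: {roc_auc_score(y_test, probs):.3f}")
    ```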
  • Senior Data Scientist

    Entech (4.0 company rating)

    Data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD)
    1-year Contract - Hybrid - Plainfield, NJ

    We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.

    What You'll Do: Apply machine learning & advanced analytics to pharma commercial data. Deliver insights on market dynamics, physician prescribing, and patient behavior. Partner with R&D, medical affairs, and commercial teams to guide strategy. Build predictive models for sales effectiveness, adherence, and market forecasting.

    What We're Looking For: PhD in Data Science, Statistics, Computer Science, Bioinformatics, or a related field. 5+ years of pharma or healthcare analytics experience. Strong skills in enterprise-class software stacks and cloud computing. Deep knowledge of pharma market dynamics & healthcare systems. Excellent communication skills to translate data into strategy.
    $84k-120k yearly est. 1d ago
  • Reinsurer Actuary

    BJRC Recruiting

    Data scientist job in New York, NY

    Pricing Reinsurance Actuary - New York, USA

    Our Client: Our client is a modern reinsurance company specializing in data-driven portfolios of Property & Casualty (P&C) risk. The firm's approach combines deep industry relationships, a sustainable capital structure, and a partner-first philosophy to deliver efficient and innovative reinsurance solutions. As the company continues to grow, Northern is seeking a Pricing Actuary to strengthen its technical and analytical capabilities within the underwriting team. This is a unique opportunity to join a collaborative and forward-thinking environment that values expertise, precision, and long-term partnerships.

    Responsibilities: Lead the pricing of reinsurance treaties, with a primary focus on U.S. Casualty Programs, and expand to other lines as the portfolio grows. Provide core input into transactional and portfolio-level underwriting decisions. Conduct peer reviews of pricing models and assumptions prepared by other team members. Support quarterly loss ratio analyses to monitor portfolio performance, identify trends, and inform capital management strategies. Contribute to the annual reserve study process and present insights to the executive leadership team. Execute benchmark parameter studies and communicate findings to management to support strategic planning. Participate in broker meetings and client visits, helping to structure and negotiate optimal treaty terms. Enhance portfolio monitoring and efficiency through process improvement, automation, and the development of analytical tools and dashboards. Some travel may be required.

    Qualifications: Degree in Actuarial Science, Mathematics, Statistics, or a related field. FCAS or ACAS designation (or international equivalent) strongly preferred. Minimum of 8-10 years of actuarial experience in Property & Casualty reinsurance or insurance pricing. Proven expertise in casualty and specialty lines, including professional liability, D&O, and E&O. Strong analytical and problem-solving skills, with demonstrated experience in data modeling, pricing tools, and portfolio analytics. Excellent communication and collaboration abilities, capable of working closely with underwriting, finance, and executive teams.

    The Ideal Profile: A technically strong, business-minded actuary who thrives in a dynamic, collaborative, and data-driven environment. Recognized for analytical rigor, attention to detail, and the ability to translate complex models into actionable insights. Values teamwork, intellectual curiosity, and innovation in solving industry challenges. Seeks to contribute meaningfully to a growing, entrepreneurial organization shaping the future of reinsurance.

    Why Join the Team: This is a rare opportunity to join a modern, high-growth reinsurer where your analytical insight and strategic thinking will directly influence underwriting decisions and company performance. Northern offers a collaborative culture, flexible hybrid work model, and the chance to shape cutting-edge reinsurance strategies in partnership with leading underwriters and actuaries. REF# WEB1429
    $91k-142k yearly est. 5d ago
  • Data & Performance Analytics (Hedge Fund)

    Coda Search│Staffing

    Data scientist job in New York, NY

    Our client is a $28B NY-based multi-strategy hedge fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will work closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.

    Responsibilities: Compile periodic fund performance analyses. Review and analyze portfolio performance data, benchmark performance, and risk statistics. Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner. Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in an effort to ensure the integrity and consistency of portfolio data reported across client reporting systems. Apply queries, pivot tables, filters, and other tools to analyze data (a minimal pivot-table sketch follows this listing). Maintain the client relationship management database and provide reports to Directors on a regular basis. Coordinate submissions of RFPs by working with the RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis. Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization. Provide various ad hoc data research and analysis as needed.

    Desired Skills and Experience: Bachelor's degree with at least 2+ years of Financial Services/Private Equity data/client reporting experience. Proficiency in Microsoft Office, particularly Excel modeling. Technical knowledge, data analytics using CRMs (Salesforce), Excel, PowerPoint. Outstanding communication skills and a proven ability to work effectively with all levels of management. Comfortable working in a fast-paced, deadline-driven, dynamic environment. Innovative and creative thinker. Must be detail oriented.
    $68k-96k yearly est. 5d ago
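
    Performance-analytics work of the kind described above (pivot tables, benchmark comparisons) maps naturally onto pandas. A minimal sketch with made-up monthly return data; the column names and fund/benchmark figures are assumptions, not the client's schema.

    ```python
    # Pivot monthly returns into a fund-vs-benchmark summary (toy data).
    import pandas as pd

    returns = pd.DataFrame({
        "month":      ["2024-01", "2024-01", "2024-02", "2024-02"],
        "series":     ["fund", "benchmark", "fund", "benchmark"],
        "return_pct": [1.8, 1.2, -0.4, 0.3],
    })

    # Rows = month, columns = series, values = return; mirrors an Excel pivot.
    pivot = returns.pivot_table(index="month", columns="series",
                                values="return_pct", aggfunc="sum")
    pivot["active_return"] = pivot["fund"] - pivot["benchmark"]
    print(pivot)
    ```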
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description: This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications: Strong proficiency in data engineering and data modeling. Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes. Strong Python background. Expertise in Extract, Transform, Load (ETL) processes and tools. Experience in designing, managing, and optimizing data warehousing solutions.
    $91k-123k yearly est. 2d ago
  • Data Engineer

    Programmers.Io (3.8 company rating)

    Data scientist job in Weehawken, NJ

    Please note: OPT EAD, CPT, and H1B candidates are not workable for this role. Minimum 7-8 years of experience.

    · Expert-level skills writing and optimizing complex SQL
    · Experience with complex data modeling, ETL design, and using large databases in a business environment
    · Experience with building data pipelines and applications to stream and process datasets at low latencies (a minimal streaming-ingestion sketch follows this listing)
    · Fluent with Big Data technologies like Spark, Kafka, and Hive
    · Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    · Designing and building data pipelines using API ingestion and streaming ingestion methods
    · Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    · Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    · Thorough understanding of Azure and AWS cloud infrastructure offerings
    · Working knowledge of Python is desirable
    · Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    · Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    · Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    · Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    · Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    · Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    · Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    · Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
    $92k-132k yearly est. 2d ago
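
    The "streaming ingestion" requirement above usually means something like Spark Structured Streaming reading from Kafka and landing files in a lake. A minimal sketch under assumed broker/topic/path names; this is generic PySpark, not this employer's Azure-specific pipeline.

    ```python
    # Minimal streaming ingestion: Kafka -> Parquet via Spark Structured Streaming.
    # Broker address, topic, and output paths are placeholders (assumptions).
    # Note: the spark-sql-kafka connector package must be on the classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("streaming-ingest").getOrCreate()

    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")
           .option("subscribe", "events")
           .load())

    # Kafka delivers bytes; cast the payload to strings for downstream parsing.
    events = raw.select(col("key").cast("string"), col("value").cast("string"))

    query = (events.writeStream
             .format("parquet")
             .option("path", "/lake/raw/events")
             .option("checkpointLocation", "/lake/_checkpoints/events")
             .start())
    query.awaitTermination()
    ```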
  • Lead Data Engineer

    STM Consulting, Inc.

    Data scientist job in Jersey City, NJ

    (NO SPONSORSHIP) - Must be "Visa independent". Hybrid, 4 days a week onsite. Salary: $130-140K per annum plus bonus. Should have Top 10 consulting company experience.

    Notes: 12+ years of overall experience. Good stakeholder management and team-handling skills. Excellent communication skills. Tech skill set: Snowflake, PySpark, AWS.

    Role: We are seeking a highly skilled Lead Data Engineer to drive end-to-end data engineering initiatives, lead cross-functional teams, and deliver scalable, cloud-based data solutions. The ideal candidate will bring deep technical expertise, strong leadership, and the ability to collaborate effectively with global stakeholders in a fast-paced environment.

    Must-Have Skills: 12 to 15 years of experience in Data Engineering with proven experience in delivering enterprise-scale projects. Strong expertise in Big Data concepts, distributed systems, and cloud-native architectures. Proficiency in Snowflake, SQL, and a wide range of AWS services (Glue, EMR, S3, Aurora, RDS, Lambda, Step Functions). Hands-on experience with Python, PySpark, and building cloud-based microservices. Strong problem-solving and analytical skills and an end-to-end ownership mindset. Proven ability to work in Agile/Scrum environments with iterative delivery cycles. Exceptional communication, leadership, and stakeholder management skills. Demonstrated capability in leading onshore-offshore teams and coordinating multi-region delivery efforts. Knowledge of Data Vault 2.0 and modern data modeling techniques. Experience with cloud migration and large-scale modernization projects.

    Good-to-Have Skills: Experience with DevOps tools (Jenkins, Git, GitHub/GitLab) and CI/CD pipeline implementation. Familiarity with the US insurance/reinsurance domain plus P&C insurance knowledge.

    Key Responsibilities:
    Leadership & Team Management: Lead and mentor a team of onshore and offshore data engineers to ensure high-quality deliverables. Provide technical direction, coaching, and knowledge sharing to foster team growth and capability building. Establish and enforce engineering best practices, coding standards, and reusable frameworks. Champion innovation, continuous learning, and thought leadership within the data engineering function.
    Project Delivery & Execution: Oversee end-to-end project delivery, ensuring timely, high-quality execution aligned with project objectives. Define high-level solution designs, data architectures, and ETL/ELT frameworks for cloud-based data platforms. Drive development, code reviews, unit testing, and deployment to production environments. Ensure optimal performance, scalability, and reliability across data pipelines and systems.
    Stakeholder Communication & Collaboration: Collaborate with clients, product owners, business leaders, and offshore teams to gather requirements and define technical solutions. Communicate project updates, timelines, risks, and technical concepts effectively to both technical and non-technical stakeholders. Act as the primary point of contact for client technical discussions, solution design workshops, and progress reviews.
    Risk & Issue Management: Proactively identify project risks, dependencies, and issues; develop and execute mitigation plans. Ensure governance, compliance, and alignment with organizational standards and methodologies.

    Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related field.
    $130k-140k yearly 4d ago
  • MDM Data Engineer (Banking Domain exp)

    Fustis LLC

    Data scientist job in Iselin, NJ

    Job Role: MDM Data Engineer (Banking Domain exp). Pay Rate: $75/hr. on 1099. Need local profiles only; an in-person interview will be required. Eligibility: USC, GC, GC-EAD, H4-EAD, L2S only. Recent 3-5 years of banking domain experience required.

    Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.

    Key Responsibilities: Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains. Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization. Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met. Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM) and provide objective recommendations aligned with business requirements. Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively. Implement data integration pipelines leveraging modern data engineering tools and practices. Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer. Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies. Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services. Ensure compliance with data governance, data privacy, and security standards. Support CI/CD pipelines for continuous integration and deployment of data solutions.

    Required Qualifications: 12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry. Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options. Strong functional knowledge of reference data sources and domain-specific data standards. Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer. Familiarity with CI/CD practices, tools, and automation pipelines. Ability to work collaboratively across teams to deliver complex data solutions. Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).

    Preferred Qualifications: Familiarity with financial data models and regulatory requirements. Experience with Azure cloud platforms. Knowledge of data governance, data quality frameworks, and metadata management.
    $75 hourly 2d ago
  • Market Data Engineer

    Harrington Starr

    Data scientist job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do: Own large-scale, time-sensitive market data capture and normalization pipelines (a minimal orchestration sketch follows this listing). Improve internal data formats and downstream datasets used by research and quantitative teams. Partner closely with infrastructure to ensure reliability of packet-capture systems. Build robust validation, QA, and monitoring frameworks for new market data sources. Provide production support, troubleshoot issues, and drive quick, effective resolutions.

    What You Bring: Experience building or maintaining large-scale ETL pipelines. Strong proficiency in Python and Bash, with familiarity in C++. Solid understanding of networking fundamentals. Experience with workflow/orchestration tools (Airflow, Luigi, Dagster). Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.).

    Bonus Skills: Experience working with binary market data protocols (ITCH, MDP3, etc.). Understanding of high-performance filesystems and columnar storage formats.
    $90k-123k yearly est. 2d ago
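
    A capture-then-normalize pipeline on one of the orchestrators the posting lists (Airflow here) is typically a small DAG of dependent tasks. A minimal sketch; the task bodies, DAG id, and schedule are placeholders, not this team's actual workflow.

    ```python
    # Minimal Airflow DAG: capture raw market data, then normalize it.
    # Task internals are stubs; real jobs would pull from exchange feeds.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def capture(ds, **_):
        print(f"capturing raw exchange data for {ds}")


    def normalize(ds, **_):
        print(f"normalizing captured data for {ds}")


    with DAG(
        dag_id="market_data_daily",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",   # one run per trading day (placeholder)
        catchup=False,
    ) as dag:
        capture_task = PythonOperator(task_id="capture", python_callable=capture)
        normalize_task = PythonOperator(task_id="normalize", python_callable=normalize)
        capture_task >> normalize_task   # normalize only after capture succeeds
    ```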
  • Data Analytics Engineer

    Dale Workforce Solutions

    Data scientist job in Somerset, NJ

    Client: manufacturing company. Type: direct hire.

    Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets. This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams. This role is on-site five days per week in Somerset, NJ.

    Key Responsibilities:
    Power BI Reporting & Administration: Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets. Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions. Develop and maintain data models to ensure accuracy, consistency, and reliability. Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance. Optimize Power BI solutions for performance, scalability, and ease of use.
    ETL & Data Warehousing: Design and maintain data warehouse structures, including schema and database layouts. Develop and support ETL processes to ensure timely and accurate data ingestion. Integrate data from multiple systems while ensuring quality, consistency, and completeness. Work closely with database administrators to optimize data warehouse performance. Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed.
    Training & Documentation: Create and maintain technical documentation, including specifications, mappings, models, and architectural designs. Document data warehouse processes for reference, troubleshooting, and ongoing maintenance. Manage data definitions, lineage documentation, and data cataloging for all enterprise data models.
    Project Management: Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team. Collaborate with key business stakeholders to ensure departmental reporting needs are met. Record meeting notes in Confluence and document project updates in Jira.
    Data Governance: Implement and enforce data governance policies to ensure data quality, compliance, and security. Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness.
    Routine IT Functions: Resolve Help Desk tickets related to reporting, dashboards, and BI tools. Support general software and hardware installations when needed.
    Other Responsibilities: Manage email and phone communication professionally and promptly. Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel. Perform additional assigned duties as needed.

    Qualifications (Required): Minimum of 3 years of relevant experience. Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience. Experience with cloud-based BI environments (Azure, AWS, etc.). Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica). Proficiency in SQL for data extraction, manipulation, and transformation. Strong knowledge of DAX. Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake). Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools. Strong analytical, problem-solving, and documentation skills. Excellent written and verbal communication abilities. High attention to detail and strong self-review practices. Effective time management and organizational skills; ability to prioritize workload. Professional, adaptable, team-oriented, and able to thrive in a dynamic environment.
    $82k-112k yearly est. 1d ago
  • Cloud Data Engineer

    Gotham Technology Group (4.5 company rating)

    Data scientist job in New York, NY

    Title: Enterprise Data Management - Data Cloud, Senior Developer I. Duration: FTE/Permanent. Salary: $130-165k.

    The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.

    Responsibilities: Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures. Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications. Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools. Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency. Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation. Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications. Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access. Champion data engineering standards, best practices, and governance across the enterprise. Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning. Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.

    Qualifications: Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent. 10+ years in data engineering, cloud platform development, or analytics engineering. Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies. Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers). Preferred: experience containerizing data workloads with Docker and Kubernetes. Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios. Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products. Bonus: background in data mesh designs and automated data access systems. Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow. Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines. Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling. Bonus: exposure to BI tools (e.g., Tableau, Power BI), though not central to the role. Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
    $86k-120k yearly est. 2d ago
  • Lead Data Engineer (W2 Only)

    Oreva Technologies, Inc.

    Data scientist job in Roseland, NJ

    Role: Lead Data Engineer (W2 Only)

    Must haves: AWS. Databricks. Lead experience (this can be supplemented for staff as well). Python. PySpark. Contact center experience.

    Data Engineer Requirements: Bachelor's degree in computer science, information technology, or a similar field. 8+ years of experience integrating and transforming contact center data into standard, consumption-ready data sets incorporating standardized KPIs, supporting metrics, attributes, and enterprise hierarchies. Expertise in designing and deploying data integration solutions using web services with client-driven workflows and subscription features. Knowledge of mathematical foundations and statistical analysis. Strong interpersonal skills. Excellent communication and presentation skills. Advanced troubleshooting skills.

    Thanks and Regards, Jeet Kumar Thapa, Technical Recruiter, Oreva Technologies Inc. P: ************ Ext: 323 E: ******************** L: ******************************************************* A: 1320 Greenway Drive, Suite 460, Irving, TX 75038 W: **********************
    $82k-112k yearly est. 3d ago
  • Data Engineer (Full-time, 10+ yrs)

    Confidential Jobs (4.2 company rating)

    Data scientist job in Jersey City, NJ

    Job Title: Senior Data Engineer - 10+ years of experience required.

    Required Skills & Experience: Proven experience as a Senior Data Engineer in Snowflake and other cloud DB environments. Strong hands-on expertise in DBT, SnowSQL, and SQL performance tuning. Familiarity with Oracle Exadata and PL/SQL is a plus for comparative analysis. 10+ years of experience in data modeling (logical and physical), ETL/ELT design, data warehouse design and implementation, and performance tuning and optimization. Understanding of CI/CD pipelines, version control, and deployment automation. Excellent communication skills and the ability to translate technical findings into actionable recommendations.

    Preferred Qualifications: Snowflake Architect Certification, DBT Certification. Experience in banking or financial services data platforms. Exposure to cloud platforms like AWS, Azure, or GCP.
    $91k-131k yearly est. 2d ago
  • Senior Data Engineer

    Avanciers Inc.

    Data scientist job in Jersey City, NJ

    Avanciers is seeking to hire a few Sr. Data Engineers in the Jersey City, NJ area for one of its esteemed clients. Candidates can apply directly through this job posting ASAP.

    Mandatory Skills: Python (scripting, web services, API design, etc.). Kubernetes. CI/CD (preferably GitLab). Job orchestration (preferably Airflow). Familiarity with Snowflake (doesn't need to be an expert).

    Nice to have: Azure. DBT. AI automation experience. Financial services experience.
    $82k-112k yearly est. 2d ago
  • Data Engineer (Talend ETL)

    Tekgence Inc.

    Data scientist job in Jersey City, NJ

    Data engineering experience, including extensive work with Talend and Snowflake in production environments. Deep expertise in Talend Studio, TAC, and AWS services (S3, EC2, RDS, and NoSQL). Expert SQL skills and performance tuning in Snowflake (query profiling, result caching, multi-cluster warehouses; a minimal query-profiling sketch follows this listing). Experience with orchestration tools (Autosys, Airflow) and CI/CD for data pipelines.
    $82k-112k yearly est. 3d ago
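
    Query profiling in Snowflake, as this posting mentions, often starts with the QUERY_HISTORY table function. A minimal sketch using the snowflake-connector-python package; the account/credential values are placeholders, and the exact columns inspected are an illustrative choice.

    ```python
    # Pull recent query timings from Snowflake's QUERY_HISTORY for profiling.
    # Connection parameters are placeholders (assumptions).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="my_user",         # placeholder
        password="...",         # placeholder; prefer key-pair auth in practice
        warehouse="ANALYTICS_WH",
    )

    cur = conn.cursor()
    cur.execute("""
        select query_id, total_elapsed_time, bytes_scanned
        from table(information_schema.query_history())
        order by total_elapsed_time desc
        limit 10
    """)
    for query_id, elapsed_ms, bytes_scanned in cur.fetchall():
        print(query_id, f"{elapsed_ms} ms", f"{bytes_scanned} bytes scanned")
    cur.close()
    conn.close()
    ```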
  • Data Engineer (ITIL)

    Enterprise Engineering Inc. (EEI)

    Data scientist job in Iselin, NJ

    Sr. Data Engineer with ITIL - New York City, NY; in-person interview required. Need local resumes with strong banking domain experience. (Snowflake, SnowSQL, Python, PySpark) (ServiceNow or Jenkins) ***ITIL Certification is a must-have***

    Overview: We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will possess expertise in data pipeline development and data warehousing, and have a solid understanding of ITIL processes to support operational efficiency.

    Key Responsibilities: Design, develop, and maintain scalable data pipelines using SQL, Python, and PySpark. Build and optimize data warehouses leveraging Snowflake for efficient data storage and retrieval. Collaborate with cross-functional teams to understand data requirements and deliver solutions. Monitor and troubleshoot data workflows, ensuring data quality and performance. Document data processes and procedures following ITIL best practices.

    Technical Skills: 10 years of experience in data engineering or related roles. Strong proficiency in writing complex queries for data extraction, transformation, and loading (ETL). Experience in Python coding and PySpark frameworks. Strong experience in SQL. Hands-on experience with designing, implementing, and managing data warehouses. Deep understanding of Snowflake platform features, architecture, and best practices. Experience with the ITIL framework, especially Incident Management and Problem Management. Experience in ServiceNow or Jenkins. Proven experience handling incident resolution, root cause analysis, and problem tracking within ITSM tools.

    Preferred Certifications: ITIL Foundation or higher. Snowflake Certification. Any relevant certifications in big data, Python, or SQL.
    $82k-112k yearly est. 2d ago
  • Data Engineer

    Beaconfire Inc.

    Data scientist job in East Windsor, NJ

    🚀 Junior Data Engineer
    📍 E-Verified | Visa Sponsorship Available

    🔍 About Us: BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer! If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment. 🌟

    🎓 Qualifications We're Looking For: Passion for data and a strong desire to learn and grow. Master's Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field. Intermediate Python skills (experience with NumPy, Pandas, etc. is a plus!). Experience with relational databases like SQL Server, Oracle, or MySQL. Strong written and verbal communication skills. Ability to work independently and collaboratively within a team.

    🛠️ Your Responsibilities: Collaborate with analytics teams to deliver reliable, scalable data solutions. Design and implement ETL/ELT processes to meet business data demands (a minimal ETL-plus-test sketch follows this listing). Perform data extraction, manipulation, and production from database tables. Build utilities, user-defined functions, and frameworks to optimize data flows. Create automated unit tests and participate in integration testing. Troubleshoot and resolve operational and performance-related issues. Work with architecture and engineering teams to implement high-quality solutions and follow best practices.

    🌟 Why Join BeaconFire? ✅ E-Verified employer 🌍 Work Visa Sponsorship Available 📈 Career growth in data engineering and BI 🤝 Supportive and collaborative work culture 💻 Exposure to real-world, enterprise-level projects

    📩 Ready to launch your career in Data Engineering? Apply now and let's build something amazing together! 🚀
    $82k-112k yearly est. 2d ago
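
    The ETL-plus-unit-test responsibilities above boil down to a transform function you can test in isolation. A minimal pandas sketch with a pytest-style test; the column names and cleaning rules are illustrative assumptions.

    ```python
    # Tiny ETL transform with a unit test, pandas + pytest style.
    import pandas as pd


    def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
        """Drop rows missing an order id and normalize amounts to floats."""
        out = raw.dropna(subset=["order_id"]).copy()
        out["amount"] = out["amount"].astype(float).round(2)
        return out


    def test_clean_orders():
        raw = pd.DataFrame({
            "order_id": ["A1", None, "A3"],
            "amount": ["10.504", "3.1", "7"],
        })
        cleaned = clean_orders(raw)
        assert len(cleaned) == 2                          # null order_id dropped
        assert cleaned["amount"].tolist() == [10.5, 7.0]  # parsed and rounded


    if __name__ == "__main__":
        test_clean_orders()
        print("ok")
    ```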
  • Data Engineer

    The Judge Group (4.7 company rating)

    Data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES. Skillset: Data Engineer. Must haves: Python, PySpark, AWS (ECS, Glue, Lambda, S3). Nice to haves: Java, Spark, React JS. Interview process: 2 rounds; the 2nd will be on site.

    You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines (and infrastructure); lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility (to analysts and data scientists); ensure compliance with data governance requirements; and ensure data engineering practices align with business goals.

    Required qualifications, capabilities, and skills:
    • Formal training or certification on data engineering concepts and 2+ years of applied experience.
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and a working understanding of NoSQL databases.
    • Experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis.
    • Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark.
    • Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users, single and multi-dimensional (a minimal curation sketch follows this listing).
    • Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks.
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs.
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar.
    • Advanced proficiency in a cloud data warehouse: Snowflake, AWS Redshift.
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar.
    • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or streaming; one or more data modeling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc.; Agile methodology; TDD or BDD; and CI/CD tools.

    Preferred qualifications, capabilities, and skills:
    • Knowledge of data governance and security best practices.
    • Experience in carrying out data analysis to support business insights.
    • Strong Python and Spark.
    $79k-111k yearly est. 2d ago
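
    The "curation and build views" requirement above is, in practice, a PySpark job that reads raw data, aggregates it, and writes a consumption-ready dataset. A minimal sketch; the paths, columns, and aggregation are illustrative assumptions, not JPMorgan's pipeline.

    ```python
    # Minimal PySpark curation job: raw Parquet -> aggregated, partitioned output.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curate-transactions").getOrCreate()

    # Paths and columns are placeholders (assumptions).
    raw = spark.read.parquet("s3://bucket/raw/transactions/")

    curated = (raw
               .filter(F.col("amount") > 0)                 # basic quality gate
               .groupBy("account_id", "txn_date")
               .agg(F.sum("amount").alias("daily_total"),
                    F.count("*").alias("txn_count")))

    (curated.write
     .mode("overwrite")
     .partitionBy("txn_date")                               # prune reads by date
     .parquet("s3://bucket/curated/daily_totals/"))
    ```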
  • Python Data Engineer

    Tekvana Inc.

    Data scientist job in Iselin, NJ

    Job Title: Data Engineer (Python, Spark, Cloud). Pay: $90,000 per year, DOE. Term: Contract. Work Authorization: US citizens only (may need a security clearance in the future).

    Job Summary: We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting.

    Key Responsibilities: Build and maintain ETL pipelines using Python and PySpark for batch and streaming data. Develop data ingestion frameworks for structured/unstructured sources. Implement data workflows using Airflow and integrate with Kafka for real-time processing. Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift). Optimize SQL queries and ensure data quality and governance. Collaborate with data architects and analysts to deliver reliable data solutions.

    Required Skills: Python (3.x) - scripting, API development, automation. Big Data: Spark/PySpark, Hadoop ecosystem. Streaming: Kafka. SQL: Oracle, Teradata, or SQL Server. Cloud: Azure or GCP (BigQuery, Dataflow). Containers: Kubernetes/OpenShift. CI/CD: GitHub, Jenkins.

    Preferred Skills: Airflow for orchestration. ETL tools (Informatica, Talend). Financial services experience.

    Education & Experience: Bachelor's in Computer Science or a related field. 3-5 years of experience in data engineering and Python development.

    Keywords for Visibility: Python, PySpark, Spark, Hadoop, Kafka, Airflow, Azure, GCP, Kubernetes, CI/CD, ETL, Data Lake, Big Data, Cloud Data Engineering.

    Reply with your profile to this posting and send it to ******************
    $90k yearly 3d ago
  • Oracle Fusion HCM Data Engineer (C2H) - CDC5692355

    Compunnel Inc. (4.4 company rating)

    Data scientist job in Princeton, NJ

    An Oracle Engineer with Oracle Fusion experience and Databricks is responsible for designing, developing, and maintaining scalable and efficient data solutions that integrate data from various sources, including Oracle Fusion applications, and process it within the Databricks environment.

    Key Responsibilities:
    Data Pipeline Development: Design, build, and optimize robust ELT pipelines to ingest, transform, and load data from Oracle Fusion applications and other sources into the Databricks Lakehouse. This involves using PySpark, SQL, and Databricks notebooks.
    Databricks Platform Expertise: Leverage Databricks functionalities such as Delta Lake, Unity Catalog, and Spark optimization techniques to ensure data quality, performance, and governance.
    Oracle Fusion Integration: Develop connectors and integration strategies to extract data from Oracle Fusion modules (e.g., Financials, HCM, SCM) using APIs, SQL, or other appropriate methods.
    Data Modeling and Warehousing: Design and implement data models within Databricks, potentially following a medallion architecture, to support analytical and reporting requirements (a minimal medallion-style sketch follows this listing).
    Performance Optimization: Tune Spark jobs and optimize data processing within Databricks for efficiency and cost-effectiveness.
    Data Quality and Governance: Implement data quality checks, error handling, and data validation frameworks to ensure data integrity. Adhere to data governance policies and security best practices.
    Collaboration: Work closely with data architects, data scientists, business analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
    Automation and CI/CD: Develop automation scripts and implement CI/CD pipelines for Databricks workflows and deployments.
    Troubleshooting and Support: Provide operational support, troubleshoot data-related issues, and perform root cause analysis.

    Required Skills and Qualifications: Strong proficiency in Databricks, including PySpark, Scala, Delta Lake, Unity Catalog, and Databricks notebooks. Experience with Oracle Fusion: knowledge of Oracle Fusion data structures, APIs, and data extraction methods. Expertise in SQL for querying, manipulating, and optimizing data in both Oracle and Databricks. Cloud platform experience: familiarity with a major cloud provider (e.g., AWS, Azure, GCP) where Databricks is deployed. Data warehousing and ETL/ELT concepts: solid understanding of data warehousing principles and experience in building and optimizing data pipelines. Problem-solving and analytical skills: ability to analyze complex data issues and propose effective solutions. Communication and collaboration: strong interpersonal skills to work effectively within cross-functional teams.
    $79k-105k yearly est. 1d ago
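
    The medallion architecture mentioned above means staging raw ("bronze") data and refining it into cleaned ("silver") tables. A minimal PySpark/Delta sketch; the table names and cleaning rules are illustrative assumptions, not Compunnel's design.

    ```python
    # Minimal medallion-style step: bronze (raw) Delta table -> silver (cleaned).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

    # Bronze: raw ingested records, loaded as-is (table name is a placeholder).
    bronze = spark.read.table("bronze.fusion_employees")

    # Silver: deduplicated, typed, and filtered for downstream analytics.
    silver = (bronze
              .dropDuplicates(["employee_id"])
              .withColumn("hire_date", F.to_date("hire_date"))
              .filter(F.col("employee_id").isNotNull()))

    (silver.write
     .format("delta")
     .mode("overwrite")
     .saveAsTable("silver.fusion_employees"))
    ```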

Learn more about data scientist jobs

How much does a data scientist earn in Perth Amboy, NJ?

The average data scientist in Perth Amboy, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Perth Amboy, NJ

$90,000

What are the biggest employers of Data Scientists in Perth Amboy, NJ?

The biggest employers of Data Scientists in Perth Amboy, NJ are:
  1. Ansell
  2. Brillio
  3. Capgemini