
Data scientist jobs in Sayreville, NJ

- 1,163 jobs
  • Analyst, Data Scientist (Ref: 194313)

    Forsyth Barnes

    Data scientist job in New York, NY

    Job Title: Analyst, Data Scientist Salary: $70,000-$90,000 Contact: ******************************** This role is not available for C2C or C2H and the client is unable to sponsor at this time. About the Company We are partnering with a leading organization in the textiles and apparel sector, known for its commitment to innovation, quality, and operational excellence within the retail industry. The company is focused on enhancing business processes and delivering exceptional value to its customers through data-driven decision-making. Role Overview The Analyst, Data Scientist plays a critical role in developing and maintaining reporting tools, including metrics, dashboards, and analytical platforms. This position supports strategic and operational decision-making by applying data analysis to identify insights, maintain process controls, and drive continuous improvement across the organization. Key Responsibilities Manage reporting requests from field operations by defining technical requirements and developing reports, metrics, and dashboards using SQL and Power BI Design and build advanced Power BI dashboards leveraging DAX, Power Query, and other advanced features Perform data cleaning and analysis, translating complex analytical results into clear, actionable insights Collaborate with business stakeholders and IT teams to align on project objectives and deliverables Support strategic initiative planning through prioritization, estimation, and analysis Gather, document, and maintain detailed business and technical requirements Participate in problem-solving sessions with business users and leadership to address analytical challenges Lead change management efforts and deliver training to end users as needed Serve as a liaison between business teams and IT to resolve system issues and improve processes Provide regular project updates and communicate issue resolution status to stakeholders Qualifications Bachelor's degree in Engineering, Mathematics, Computer Science, or a related field 1-2 years of experience developing business or technology solutions Strong proficiency in Power BI, including DAX and Power Query Solid understanding of data warehousing and business intelligence concepts Ability to read and write SQL Familiarity with R, Python, and machine learning concepts (theoretical or practical) is a plus Advanced skills in Microsoft Office tools, including Excel, PowerPoint, Word, Visio, and SharePoint Experience in a corporate retail environment is preferred This role is ideal for a proactive, analytical professional who thrives in a fast-paced environment. The successful candidate will demonstrate strong communication and collaboration skills and the ability to work effectively across cross-functional teams.
    $70k-90k yearly 4d ago
  • Data Scientist

    Strategic Employment Partners (SEP) 4.5 company rating

    Data scientist job in New York, NY

    Senior Data Scientist - Sports & Entertainment Our client, a premier Sports, Entertainment, and Hospitality organization, is hiring a Senior Data Scientist. In this position you will own high-impact analytics projects that redefine how predictive analytics influence business strategy. This is a pivotal role where you will build and deploy machine learning solutions-ranging from Bayesian engagement scoring to purchase-propensity and lifetime-value models-to drive fan acquisition and revenue growth. Requirements: Experience: 8+ years of professional experience using data science to solve complex business problems, preferably as a solo contributor or team lead. Education: Bachelor's degree in Data Science, Statistics, Computer Science, or a related quantitative field (Master's or PhD preferred). Tech Stack: Hands-on expertise in Python, SQL/PySpark, and ML frameworks (scikit-learn, XGBoost, TensorFlow, or PyTorch). Infrastructure: Proficiency with cloud platforms (AWS preferred) and modern data stacks like Snowflake, Databricks, or Dataiku. MLOps: Strong experience in productionizing models, including version control (Git), CI/CD, and model monitoring/governance. Location: Brooklyn, NY (4 days onsite per week) Compensation: $100,000 - $150,000 + Bonus Benefits: Comprehensive medical/dental/vision, 401k match, competitive PTO, and unique access to live entertainment and sports events.
    $89k-130k yearly est. 4d ago
  • Senior Data Scientist

    Entech 4.0 company rating

    Data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD) 1-Year Contract - Hybrid - Plainfield, NJ We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes. What You'll Do Apply machine learning & advanced analytics to pharma commercial data Deliver insights on market dynamics, physician prescribing, and patient behavior Partner with R&D, medical affairs, and commercial teams to guide strategy Build predictive models for sales effectiveness, adherence, and market forecasting What We're Looking For PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field 5+ years of pharma or healthcare analytics experience Strong skills in enterprise-class software stacks and cloud computing Deep knowledge of pharma market dynamics & healthcare systems Excellent communication skills to translate data into strategy
    $84k-120k yearly est. 21h ago
  • Data & Performance Analytics (Hedge Fund)

    Coda Search│Staffing

    Data scientist job in New York, NY

    Our client is a $28B NY-based multi-strategy Hedge Fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will be working closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing. Responsibilities This role will be responsible for Compiling periodic fund performance analyses Review and analyze portfolio performance data, benchmark performance and risk statistics Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner Work with all levels of team members across the organization to help coordinate data feeds for various internal and external databases, in an effort to ensure the integrity and consistency of portfolio data reported across client reporting systems Apply queries, pivot tables, filters and other tools to analyze data. Maintain the client relationship management database and provide reports to Directors on a regular basis Coordinate submissions of RFPs by working with the RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization Provide various ad hoc data research and analysis as needed. Desired Skills and Experience Bachelor's Degree with at least 2+ years of Financial Services/Private Equity data/client reporting experience Proficiency in Microsoft Office, particularly Excel modeling Technical knowledge, data analytics using CRMs (Salesforce), Excel, PowerPoint Outstanding communication skills, proven ability to effectively work with all levels of Management Comfortable working in a fast-paced, deadline-driven dynamic environment Innovative and creative thinker Must be detail-oriented
    $68k-96k yearly est. 2d ago
  • Data Engineer

    DL Software Inc. 3.3 company rating

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal. Role Description This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities. Qualifications Strong proficiency in Data Engineering and Data Modeling Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes Strong Python background Expertise in Extract, Transform, Load (ETL) processes and tools Experience in designing, managing, and optimizing Data Warehousing solutions
    $91k-123k yearly est. 1d ago
  • Data Engineer

    Beauty By Imagination (BBI)

    Data scientist job in New York, NY

    About Beauty by Imagination: Beauty by Imagination is a global haircare company dedicated to boosting self-confidence with imaginative solutions for every hair moment. We are a platform company of diverse, market-leading brands, including Wet Brush, Goody, Bio Ionic, and Ouidad - all of which are driven to be the most trusted choice for happy, healthy hair. Our talented team is passionate about delivering high-performing products for consumers and salon professionals alike. Position Overview: We are looking for a skilled Data Engineer to design, build, and maintain our enterprise Data Warehouse (DWH) and analytics ecosystem - with a growing focus on enabling AI-driven insights, automation, and enterprise-grade AI usage. In this role, you will architect scalable pipelines, improve data quality and reliability, and help lay the foundational data structures that power tools like Microsoft Copilot, Copilot for Power BI, and AI-assisted analytics across the business. You'll collaborate with business stakeholders, analysts, and IT teams to modernize our data environment, integrate complex data sources, and support advanced analytics initiatives. Your work will directly influence decision-making, enterprise reporting, and next-generation AI capabilities built on top of our Data Warehouse. Key Responsibilities Design, develop, and maintain Data Warehouse architecture, including ETL/ELT pipelines, staging layers, and data marts. Build and manage ETL workflows using SQL Server Integration Services (SSIS) and other data integration tools. Integrate and transform data from multiple systems, including ERP platforms such as NetSuite. Develop and optimize SQL scripts, stored procedures, and data transformations for performance and scalability. Support and enhance Power BI dashboards and other BI/reporting systems. Implement data quality checks, automation, and process monitoring. Collaborate with business and analytics teams to translate requirements into scalable data solutions. Contribute to data governance, standardization, and documentation practices. Support emerging AI initiatives by ensuring model-ready data quality, accessibility, and semantic alignment with Copilot and other AI tools. Required Qualifications Proven experience with Data Warehouse design and development (ETL/ELT, star schema, SCD, staging, data marts). Hands-on experience with SSIS (SQL Server Integration Services) for building and managing ETL workflows. Strong SQL skills and experience with Microsoft SQL Server. Proficiency in Power BI or other BI tools (Tableau, Looker, Qlik). Understanding of data modeling, performance optimization, and relational database design. Familiarity with Python, Airflow, or Azure Data Factory for data orchestration and automation. Excellent analytical and communication skills. Preferred Qualifications Experience with cloud data platforms (Azure, AWS, or GCP). Understanding of data security, governance, and compliance (GDPR, SOC2). Experience with API integrations and real-time data ingestion. Background in finance, supply chain, or e-commerce analytics. Experience with NetSuite ERP or other ERP systems (SAP, Oracle, Dynamics, etc.). AI Focused Preferred Skills: Experience implementing AI-driven analytics or automation inside Data Warehouses. Hands-on experience using Microsoft Copilot, Copilot for Power BI, or Copilot Studio to accelerate SQL, DAX, data modeling, documentation, or insights. 
Familiarity with building RAG (Retrieval-Augmented Generation) or AI-assisted query patterns using SQL Server, Synapse, or Azure SQL. Understanding of how LLMs interact with enterprise data, including grounding, semantic models, and data security considerations (Purview, RBAC). Experience using AI tools to optimize ETL/ELT workflows, generate SQL scripts, or streamline data mapping/design. Exposure to AI-driven data quality monitoring, anomaly detection, or pipeline validation tools. Experience with Microsoft Fabric, semantic models, or ML-integrated analytics environments. Soft Skills Strong analytical and problem-solving mindset. Ability to communicate complex technical concepts to business stakeholders. Detail-oriented, organized, and self-motivated. Collaborative team player with a growth mindset. Impact You will play a key role in shaping the company's modern data infrastructure - building scalable pipelines, enabling advanced analytics, and empowering the organization to safely and effectively adopt AI-powered insights across all business functions. Our Tech Stack SQL Server, SSIS, Azure Synapse Python, Airflow, Azure Data Factory Power BI, NetSuite ERP, REST APIs CI/CD (Azure DevOps, GitHub) What We Offer Location: New York, NY (Hybrid work model) Employment Type: Full-time Compensation: Competitive salary based on experience Benefits: Health insurance, 401(k), paid time off Opportunities for professional growth and participation in enterprise AI modernization initiatives
    $90k-123k yearly est. 1d ago
  • Senior Data Engineer

    Godel Terminal

    Data scientist job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From Equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time, they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you: Minimum qualifications: Able to work out of our Manhattan office a minimum of 4 days a week 5+ years of experience in a financial or startup environment 5+ years of experience working on live data as well as historical data 3+ years of experience in Java, Python, and SQL Experience managing multiple production ETL pipelines that reliably store and validate financial data Experience launching, scaling, and improving backend services in cloud environments Experience migrating critical data across different databases Experience owning and improving critical data infrastructure Experience teaching best practices to junior developers Preferred qualifications: 5+ years of experience in a fintech startup 5+ years of experience in Java, Kafka, Python, PostgreSQL 5+ years of experience working with WebSockets like RxStomp or Socket.io 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode 2+ years of experience shipping and optimizing Rust applications Demonstrated experience keeping critical systems online Demonstrated creativity and resourcefulness under pressure Experience with corporate debt / bonds and commodities data Salary range begins at $150,000 and increases with experience Benefits: Health Insurance, Vision, Dental To try the product, go to *************************
    $150k yearly 21h ago
  • Senior Data Engineer

    Quintrix, By Mindlance

    Data scientist job in New York, NY

    Title: Senior Data Engineer Duration: 12-15 months (possibility of conversion) W2 candidates only. Our client is seeking a Senior Data Engineer to join their team in New York (preferred, Downtown WTC) or Boston. This is a long-term contract position with the potential to convert to a full-time employee (FTE) role. The role focuses on overseeing third-party fund accounting administration, client life cycle, valuation automation, and driving data-related initiatives. Key Responsibilities: • Lead the design, development, and optimization of data architecture, modeling, and pipelines to support fund accounting administration transitions to third parties. • Oversee and manage third-party vendors, ensuring seamless integration and efficiency in data processes. • Collaborate with business units (BUs) and stakeholders to gather requirements, refine processes, and implement data solutions. • Build and maintain robust CI/CD pipelines to ensure scalable and reliable data workflows. • Utilize Snowflake and advanced SQL to manage and query large datasets effectively. • Drive data engineering best practices, ensuring high-quality, efficient, and secure data systems. • Communicate complex technical concepts to non-technical stakeholders, ensuring alignment and clarity. Must-Have Qualifications: • Experience: 10+ years in data engineering or related roles. • Technical Expertise: o Advanced proficiency in Python and SQL for data processing and pipeline development. o Experience with additional cloud-based AWS data platforms or tools. o Strong experience in data architecture, data modeling, and CI/CD pipeline implementation. o Hands-on expertise with Snowflake for data warehousing and analytics. • Domain Knowledge: Extensive experience in asset management is mandatory. • Communication: Exceptional verbal and written communication skills, with the ability to engage effectively with business units and stakeholders. Nice-to-Have Qualifications: • Prior experience leading or overseeing third-party vendors in a data-related capacity. • Familiarity with advanced data orchestration tools or frameworks.
    $90k-123k yearly est. 2d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data scientist job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups. In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications. To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 21h ago
  • Azure Data Engineer

    Programmers.Io 3.8 company rating

    Data scientist job in Weehawken, NJ

    · Expert-level skills in writing and optimizing complex SQL · Experience with complex data modelling, ETL design, and using large databases in a business environment · Experience with building data pipelines and applications to stream and process datasets at low latencies · Fluent with Big Data technologies like Spark, Kafka and Hive · Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required · Designing and building data pipelines using API ingestion and streaming ingestion methods · Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential · Experience in developing NoSQL solutions using Azure Cosmos DB is essential · Thorough understanding of Azure and AWS Cloud Infrastructure offerings · Working knowledge of Python is desirable · Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services · Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB · Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance · Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information · Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks · Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making · Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards · Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging Best Regards, Dipendra Gupta Technical Recruiter *****************************
    $92k-132k yearly est. 21h ago
  • Data Engineer

    Gotham Technology Group 4.5 company rating

    Data scientist job in New York, NY

    Our client is seeking a Data Engineer with hands-on experience in Web Scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves. Responsibilities Develop scalable Web Scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries. Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring. Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines. Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift. Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm. Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets. Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users. Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns. Required Qualifications Bachelor's degree in Computer Science, Engineering, Mathematics, or related field. 2-5 years of experience in a similar Data Engineering or Web Scraping role. Capital markets knowledge with familiarity across asset classes and experience supporting trading systems. Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift). Proficiency with modern Web Scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright). Strong Python programming skills and experience with SQL and NoSQL databases. Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus. Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
    $86k-120k yearly est. 1d ago
  • Azure Data Engineer

    Wall Street Consulting Services LLC

    Data scientist job in Warren, NJ

    Job Title: Data Engineer - SQL, Azure, ADF (Commercial Insurance) Experience: 12-20 Years Job Type: Contract Required Skills: SQL, Azure, ADF, Commercial Insurance We are seeking a highly skilled Data Engineer with strong experience in SQL, Azure Data Platform, and Azure Data Factory, preferably within the Insurance domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, integrating data from multiple insurance systems, and enabling analytical and reporting capabilities for underwriting, claims, policy, billing, and risk management teams. Required Skills & Experience Minimum 12+ years of experience in Data Engineering or related roles. Strong expertise in: SQL, T-SQL, PL/SQL Azure Data Factory (ADF) Azure SQL, Synapse, ADLS Data modeling for relational and analytical systems. Hands-on experience with ETL/ELT development and complex pipeline orchestration. Experience with Azure DevOps, Git, CI/CD pipelines, and DataOps practices. Understanding of insurance domain datasets: policy, premium, claims, exposures, brokers, reinsurers, underwriting workflows. Strong analytical and problem-solving skills, with the ability to handle large datasets and complex transformations. Preferred Qualifications Experience with Databricks / PySpark for large-scale transformations. Knowledge of Commercial Property & Casualty (P&C) insurance. Experience integrating data from Guidewire ClaimCenter/PolicyCenter, DuckCreek, or similar platforms. Exposure to ML/AI pipelines for underwriting or claims analytics. Azure certifications such as: DP-203 (Azure Data Engineer) AZ-900, AZ-204, AI-900
    $82k-112k yearly est. 21h ago
  • Data Engineer

    Beaconfire Inc.

    Data scientist job in East Windsor, NJ

    🚀 Junior Data Engineer 📝 E-Verified | Visa Sponsorship Available 🔍 About Us: BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer! If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment. 🌟 🎓 Qualifications We're Looking For: Passion for data and a strong desire to learn and grow. Master's Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field. Intermediate Python skills (Experience with NumPy, Pandas, etc. is a plus!) Experience with relational databases like SQL Server, Oracle, or MySQL. Strong written and verbal communication skills. Ability to work independently and collaboratively within a team. 🛠️ Your Responsibilities: Collaborate with analytics teams to deliver reliable, scalable data solutions. Design and implement ETL/ELT processes to meet business data demands. Perform data extraction, manipulation, and production from database tables. Build utilities, user-defined functions, and frameworks to optimize data flows. Create automated unit tests and participate in integration testing. Troubleshoot and resolve operational and performance-related issues. Work with architecture and engineering teams to implement high-quality solutions and follow best practices. 🌟 Why Join BeaconFire? ✅ E-Verified employer 🌍 Work Visa Sponsorship Available 📈 Career growth in data engineering and BI 🤝 Supportive and collaborative work culture 💻 Exposure to real-world, enterprise-level projects 📩 Ready to launch your career in Data Engineering? Apply now and let's build something amazing together! 🚀
    $82k-112k yearly est. 1d ago
  • Azure Data Engineer

    Sharp Decisions 4.6 company rating

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer Client: Major Japanese Bank Experience Level: Senior (10+ Years) The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices. Key Responsibilities: Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows. Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions. Ensure data security, compliance, lineage, and governance controls. Partner with architecture, data governance, and business teams to deliver high-quality data solutions. Troubleshoot performance issues and improve system efficiency. Required Skills: 10+ years of data engineering experience. Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL. Azure certifications strongly preferred. Strong SQL, Python, and cloud data architecture skills. Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 21h ago
  • Data Engineer

    Drillo.Ai

    Data scientist job in New Providence, NJ

    Job Title: Senior Data Engineer (Python & Snowflake, SQL) Employment Type: Contract Sr. Data Engineer (Python, Snowflake, SQL) The developer should have strong Python, Snowflake, and SQL coding skills. The developer should be able to articulate a few real-time experience scenarios and should have a good aptitude to showcase solutions for real-life problems in Snowflake and Python. The developer should be able to write code in Python for some intermediate-level problems given during the L1 assessment. Lead qualities are needed to guide a team and to own end-to-end support of the project. Around 8 years' experience as a Snowflake Developer in the design and development of data solutions within the Snowflake Data Cloud, leveraging its cloud-based data warehousing capabilities. Responsible for designing and implementing data pipelines, data models, and ETL processes, ensuring efficient and effective data storage, processing, and analysis. Able to write complex SQL queries and Python stored procedure code in Snowflake. Job Description Summary: Data Modelling and Schema Design: Create and maintain well-structured data models and schemas within Snowflake, ensuring data integrity and efficient query performance. ETL/ELT Development: Design and implement ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) processes to load data into Snowflake from various sources. Data Pipeline Management: Build and optimize data pipelines to ingest data into Snowflake, ensuring accurate and timely data flow. SQL Optimization: Write and optimize SQL queries to enhance performance and efficiency within Snowflake. Performance Tuning: Identify and address performance bottlenecks within Snowflake, optimizing query execution and resource allocation. Security and Governance: Implement data security and governance best practices within Snowflake environments, including access control and encryption. Documentation and Maintenance: Maintain documentation for data models, data pipelines, and other Snowflake solutions. Troubleshooting and Support: Troubleshoot and resolve issues within Snowflake, providing technical support to users. Collaboration: Collaborate with data architects, data engineers, and business users to understand requirements and deliver solutions. Other Skills: Experience with data warehousing concepts and data modelling. Hands-on experience in creating stored procedures, functions, tables, cursors. Experience in database testing, data comparison, and data transformation scripting. Capable of troubleshooting common database issues. Hands-on experience in GitLab with an understanding of CI/CD pipelines and DevOps tools. Knowledge of AWS Lambda and Azure Functions.
    $82k-112k yearly est. 3d ago
  • Python Data Engineer

    Tekvana Inc.

    Data scientist job in Iselin, NJ

    Job Title: Data Engineer (Python, Spark, Cloud) Pay: $90,000 per year DOE Term: Contract Work Authorization: US citizens only (may need security clearance in the future) Job Summary: We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting. Key Responsibilities: Build and maintain ETL pipelines using Python and PySpark for batch and streaming data. Develop data ingestion frameworks for structured/unstructured sources. Implement data workflows using Airflow and integrate with Kafka for real-time processing. Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift). Optimize SQL queries and ensure data quality and governance. Collaborate with data architects and analysts to deliver reliable data solutions. Required Skills: Python (3.x) - scripting, API development, automation. Big Data: Spark/PySpark, Hadoop ecosystem. Streaming: Kafka. SQL: Oracle, Teradata, or SQL Server. Cloud: Azure or GCP (BigQuery, Dataflow). Containers: Kubernetes/OpenShift. CI/CD: GitHub, Jenkins. Preferred Skills: Airflow for orchestration. ETL tools (Informatica, Talend). Financial services experience. Education & Experience: Bachelor's in Computer Science or a related field. 3-5 years of experience in data engineering and Python development. Keywords for Visibility: Python, PySpark, Spark, Hadoop, Kafka, Airflow, Azure, GCP, Kubernetes, CI/CD, ETL, Data Lake, Big Data, Cloud Data Engineering. Reply with your profile to this posting and send it to ******************
    $90k yearly 2d ago
  • Senior Data Engineer (Snowflake)

    Epic Placements

    Data scientist job in Parsippany-Troy Hills, NJ

    Senior Data Engineer (Snowflake & Python) 1-Year Contract | $60/hour + Benefit Options Hybrid: On-site a few days per month (local candidates only) Work Authorization Requirement You must be authorized to work for any employer as a W2 employee. This is required for this role. This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered. Overview We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake. Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement. What You'll Do Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake Participate across the full software development lifecycle - planning, requirements, development, testing, and QA Partner closely with engineering and data teams to identify and implement optimal technical solutions Build and maintain high-performance, scalable data pipelines and data warehouse architectures Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions Manage deliverables and priorities effectively in a fast-moving environment Contribute to data governance practices including metadata management and data lineage Support analytics and reporting use cases leveraging advanced SQL and analytical functions Required Skills & Experience 8+ years of experience designing and developing data solutions in an enterprise environment 5+ years of hands-on Snowflake experience Strong hands-on development skills with SQL and Python Proven experience designing and developing data warehouses in Snowflake Ability to diagnose, optimize, and tune SQL queries Experience with Azure data frameworks (e.g., Azure Data Factory) Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar Solid understanding of metadata management and data lineage Hands-on experience with SQL analytical functions Working knowledge of Shell scripting and Java scripting Experience using Git, Confluence, and Jira Strong problem-solving and troubleshooting skills Collaborative mindset with excellent communication skills Nice to Have Experience supporting Pharma industry data Exposure to Omni-channel data environments Why This Opportunity $60/hour W2 on a long-term 1-year contract Benefit options available Hybrid structure with limited on-site requirement High-impact role supporting enterprise data initiatives Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
    $60 hourly 4d ago
  • Senior Data Engineer

    Apexon

    Data scientist job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. Job Description Experienced Data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance Work in tandem with our engineering team to identify and implement the most optimal solutions Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures Able to manage deliverables in fast-paced environments Areas of Expertise At least 10 years of experience designing and developing data solutions in an enterprise environment At least 5+ years' experience on the Snowflake Platform Strong hands-on SQL and Python development Experience with designing and developing data warehouses in Snowflake A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark, Scala Strong hands-on experience with orchestration tools, e.g. Airflow, Informatica, Automic Good understanding of metadata and data lineage Hands-on knowledge of SQL analytical functions Strong knowledge and hands-on experience in Shell scripting, Java Scripting Able to demonstrate experience with software engineering practices including CI/CD, automated testing and performance engineering. Good understanding and exposure to Git, Confluence and Jira Good problem-solving and troubleshooting skills. Team player, collaborative approach and excellent communication skills Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
You can read about our Job Applicant Privacy Policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 3d ago
  • Data Engineer

    Haptiq

    Data scientist job in New York, NY

    Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets. Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage. The Opportunity As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations. Responsibilities and Duties Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms). Ensure consistent data hygiene, normalization, and enrichment across source systems. Develop and maintain data models and data warehouses optimized for analytics and operational reporting. Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights. Own the documentation of data schemas, definitions, lineage, and data quality controls. Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets. Monitor pipeline performance and proactively resolve data discrepancies or failures. Contribute to architectural decisions related to internal data infrastructure and tools. Requirements 3-5 years of experience as a data engineer, analytics engineer, or similar role. Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt). Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday. Proficiency in Python or another scripting language for data manipulation. Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment). Strong understanding of data governance, documentation, and schema management. Excellent communication skills and ability to work cross-functionally. Benefits Flexible work arrangements (including hybrid mode) Great Paid Time Off (PTO) policy Comprehensive benefits package (Medical / Dental / Vision / Disability / Life) Healthcare and Dependent Care Flexible Spending Accounts (FSAs) 401(k) retirement plan Access to HSA-compatible plans Pre-tax commuter benefits Employee Assistance Program (EAP) Opportunities for professional growth and development. A supportive, dynamic, and inclusive work environment. Why Join Us? 
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it. The compensation range for this role is $75,000 to $80,000 USD
    $75k-80k yearly 2d ago
  • Senior Data Engineer

    The Cypress Group 3.9 company rating

    Data scientist job in New York, NY

    Our client is a growing Fintech software company headquartered in New York, NY. They have several hundred employees and are in growth mode. They are currently looking for a Senior Data Engineer w/ 6+ years of overall professional experience. Qualified candidates will have hands-on experience with Python (6 years), SQL (6 years), DBT (3 years), AWS (Lambda, Glue), Airflow and Snowflake (3 years). A BSCS degree and solid CS fundamentals are required. The Senior Data Engineer will work in a collaborative team environment and will be responsible for building, optimizing and scaling ETL data pipelines, DBT models, and data warehousing. Excellent communication and organizational skills are expected. This role features a competitive base salary, equity, 401(k) with company match and many other attractive perks. Please send your resume to ******************* for immediate consideration.
    $98k-129k yearly est. 4d ago

Learn more about data scientist jobs

How much does a data scientist earn in Sayreville, NJ?

The average data scientist in Sayreville, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Sayreville, NJ

$90,000

What are the biggest employers of Data Scientists in Sayreville, NJ?

The biggest employers of Data Scientists in Sayreville, NJ are:
  1. Guardian Life
  2. Capgemini
  3. Ansell
  4. Robert Half