Senior data scientist jobs in New Jersey

- 446 jobs
  • Data Scientist

    Insight Global

    Senior data scientist job in Camden, NJ

    Title: Data Scientist
    Duration: Direct Hire
    Schedule: Hybrid (Mon/Fri WFH, onsite Tues-Thurs)
    Interview Process: 2 rounds, virtual (2nd/final round is a case study)
    Salary Range: $95-120k/yr (with benefits)

    Must haves:
    • 1 yr min professional/post-grad Data Scientist experience, and knowledge across areas such as Machine Learning, NLP, LLMs, etc.
    • Proficiency in Python and SQL for data manipulation and pipeline development
    • Strong communication skills for stakeholder engagement
    • Bachelor's Degree

    Plusses:
    • Master's Degree
    • Azure experience (and/or other MS tools)
    • Experience working with healthcare data, preferably from Epic
    • Strong skills in data visualization, dashboard design, and interpreting complex datasets

    Day to Day: We are seeking a Data Scientist to join our client's analytics team. This role focuses on leveraging advanced analytics techniques to drive clinical and business decision-making. You will work with healthcare data to build predictive models, apply machine learning and NLP methods, and optimize data pipelines. The ideal candidate combines strong technical skills with the ability to communicate insights effectively to stakeholders.

    Key Responsibilities:
    • Develop and implement machine learning models for predictive analytics and clinical decision support (a minimal sketch follows this listing).
    • Apply NLP and LLM techniques to extract insights from structured and unstructured data.
    • Build and optimize data pipelines using Python and SQL for ETL processes.
    • Preprocess and clean datasets to support analytics initiatives.
    • Collaborate with stakeholders to understand data needs and deliver actionable insights.
    • Interpret complex datasets and provide clear, data-driven recommendations.
    $95k-120k yearly 1d ago
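
    The listing above centers on building predictive models from healthcare data in Python. Below is a minimal, hedged scikit-learn sketch of that kind of workflow; the file name, feature columns, and the 30-day readmission target are hypothetical placeholders, not details supplied by the employer.

        # Hedged sketch: a baseline classifier for a hypothetical clinical outcome.
        # "encounters.csv" and its columns are placeholders; assumes numeric features.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        df = pd.read_csv("encounters.csv")            # hypothetical extract
        X = df.drop(columns=["readmitted_30d"])       # hypothetical features
        y = df["readmitted_30d"]                      # hypothetical binary outcome

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=42
        )

        model = Pipeline([
            ("scale", StandardScaler()),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        model.fit(X_train, y_train)
        print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
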
  • Senior Data Scientist

    Entech (4.0 company rating)

    Senior data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD)
    1-Year Contract - Hybrid - Plainfield, NJ

    We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.

    What You'll Do
    • Apply machine learning & advanced analytics to pharma commercial data
    • Deliver insights on market dynamics, physician prescribing, and patient behavior
    • Partner with R&D, medical affairs, and commercial teams to guide strategy
    • Build predictive models for sales effectiveness, adherence, and market forecasting

    What We're Looking For
    • PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
    • 5+ years of pharma or healthcare analytics experience
    • Strong skills in enterprise-class software stacks and cloud computing
    • Deep knowledge of pharma market dynamics & healthcare systems
    • Excellent communication skills to translate data into strategy
    $84k-120k yearly est. 5d ago
  • Data Scientist

    AGM Tech Solutions - A Woman and Latina-Owned IT Staffing Firm - An Inc. 5000 Company

    Senior data scientist job in Parsippany-Troy Hills, NJ

    ***This role is hybrid, three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation.***

    Data Scientist
    • Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.
    • Responsibilities:
      o Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
      o Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows. (A minimal anomaly-detection sketch follows this listing.)
      o Design experiments and validate AI-driven features during the pilot.
      o Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.
    • Required skills:
      o 3+ years applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
      o Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
      o Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.
    • Preferred:
      o Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
    $76k-106k yearly est. 4d ago
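
    This posting describes anomaly detection over SDLC/CI telemetry. Below is a minimal sketch of one possible approach using pandas and scikit-learn's IsolationForest; the telemetry file and its columns are hypothetical, not the client's actual schema.

        # Hedged sketch: flag unusual CI build durations with an IsolationForest.
        # "ci_builds.csv" and its columns (build_id, duration_s, queue_s) are placeholders.
        import pandas as pd
        from sklearn.ensemble import IsolationForest

        builds = pd.read_csv("ci_builds.csv")               # hypothetical telemetry export
        features = builds[["duration_s", "queue_s"]]

        detector = IsolationForest(contamination=0.02, random_state=0)
        builds["anomaly"] = detector.fit_predict(features)  # -1 marks an outlier

        print(builds.loc[builds["anomaly"] == -1, ["build_id", "duration_s"]])
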
  • Biostatistician

    Net2Source (N2S)

    Senior data scientist job in Rahway, NJ

    Join a Global Leader in Workforce Solutions - Net2Source Inc.

    Who We Are: Net2Source Inc. isn't just another staffing company - we're a powerhouse of innovation, connecting top talent with the right opportunities. Recognized for 300% growth in the past three years, we operate in 34 countries with a global team of 5,500+. Our mission? To bridge the talent gap with precision - Right Talent. Right Time. Right Place. Right Price.

    Title: Statistical Scientist
    Duration: 12 Months (Start Date: First Week of January)
    Location: Rahway, NJ (Onsite/Hybrid Only)
    Rate: $65/hr on W2

    Position Summary: We are seeking an experienced Statistical Scientist to support analytical method qualification, validation, and experimental design for the client's scientific and regulatory programs. The successful candidate will work closely with scientists to develop statistically sound protocols, contribute to method robustness and validation studies, and prepare reporting documentation for regulatory readiness. This position requires deep expertise in statistical methodologies and hands-on programming skills in SAS, R, and JMP.

    Key Responsibilities
    • Collaborate with scientists to design experiments, develop study protocols, and establish acceptance criteria for analytical applications.
    • Support analytical method qualification and validation through statistical protocol development, analysis, and reporting.
    • Write memos and technical reports summarizing statistical analyses for internal and regulatory audiences.
    • Assist scientists in assessing protocol deviations and resolving investigation-related issues.
    • Coordinate with the Quality Audit group to finalize statistical reports for BLA (Biologics License Application) readiness.
    • Apply statistical modeling approaches to evaluate assay robustness and method reliability.
    • Support data integrity and ensure compliance with internal processes and regulatory expectations.

    Qualifications
    Education (Required):
    • Ph.D. in Statistics, Biostatistics, Applied Statistics, or related discipline with 3+ years of relevant experience, or
    • M.S. in Statistics/Applied Statistics with 6+ years of relevant experience. (BS/BA candidates will not be considered.)
    Required Skills:
    • Proficiency in SAS, R, and JMP.
    • Demonstrated experience in analytical method qualification and validation, including protocol writing and statistical execution.
    • Strong background in experimental design for analytical applications.
    • Ability to interpret and communicate statistical results clearly in both written and verbal formats.
    Preferred / Nice-to-Have:
    • Experience with mixed-effect modeling.
    • Experience with Bayesian analysis.
    • Proven ability to write statistical software/code to automate routine analyses.
    • Experience presenting complex statistical concepts to leadership.
    • Experience in predictive stability analysis.

    Why Work With Us? At Net2Source, we believe in more than just jobs - we build careers. We champion leadership at all levels, celebrate diverse perspectives, and empower our people to make an impact. Enjoy a collaborative environment where your ideas matter, and your professional growth is our priority.

    Our Commitment to Inclusion & Equity: Net2Source is an equal opportunity employer dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all.

    Awards & Recognition
    • America's Most Honored Businesses (Top 10%)
    • Fastest-Growing Staffing Firm by Staffing Industry Analysts
    • INC 5000 List for Eight Consecutive Years
    • Top 100 by Dallas Business Journal
    • Spirit of Alliance Award by Agile1

    Ready to Level Up Your Career? Click Apply Now and let's make it happen.
    $65 hourly 3d ago
  • Sr Data Modeler with Capital Markets/Custody

    LTIMindtree

    Senior data scientist job in Jersey City, NJ

    LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************

    Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
    Work Location: Jersey City, NJ (Onsite, 5 days/week)

    Job Description: We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain. The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model. A software development background is not required.

    Key Responsibilities
    • Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
    • Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
    • Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model.
    • Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
    • Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
    • Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
    • Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
    • Champion a detail-oriented and documentation-first culture within the data modeling team.

    Qualifications
    • Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
    • Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
    • Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
    • Strong skills in data analysis using SQL and/or other data analysis tools.
    • Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
    • Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
    • Ability to mentor and lead junior team members in data modeling best practices.
    • Passion for detail, documentation, and continuous improvement.
    • Software development background is not required.

    Preferred Skills
    • Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
    • Familiarity with capital markets business processes and data flows.
    • Knowledge of regulatory and compliance requirements in financial data management.
    • Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).

    Benefits and Perks:
    • Comprehensive Medical Plan Covering Medical, Dental, Vision
    • Short Term and Long-Term Disability Coverage
    • 401(k) Plan with Company match
    • Life Insurance
    • Vacation Time, Sick Leave, Paid Holidays
    • Paid Paternity and Maternity Leave

    LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $79k-111k yearly est. 4d ago
  • Sr Data Engineer Python Serverside

    Canyon Associates (4.2 company rating)

    Senior data scientist job in White House Station, NJ

    This is a direct hire, full-time position with a hybrid format (on-site 2 days a week).

    YOU MUST BE A US CITIZEN OR GREEN CARD HOLDER; NO OTHER STATUS TO WORK IN THE US WILL BE PERMITTED. YOU MUST LIVE LOCAL TO THE AREA AND BE ABLE TO DRIVE ONSITE A MIN OF TWO DAYS A WEEK.

    THE TECH STACK WILL BE:
    • 7 years demonstrated server-side development proficiency
    • 5 years demonstrated server-side development proficiency
    • Programming Languages: Python (NumPy, Pandas, Oracle PL/SQL). Other non-interpreted languages like Java, C++, Rust, etc. are a plus. Must be proficient at the intermediate-advanced level of the language (concurrency, memory management, etc.)
    • Design patterns: typical GOF patterns (Factory, Facade, Singleton, etc.) (a minimal Factory sketch follows this listing)
    • Data structures: maps, lists, arrays, etc.
    • SCM: solid Git proficiency, MS Azure DevOps (CI/CD)
    $97k-129k yearly est. 5d ago
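
    The stack above names the classic GOF Factory pattern. A small Python sketch of that pattern is shown below; the Exporter classes are invented purely for illustration.

        # Hedged sketch of the GOF Factory pattern; the Exporter hierarchy is invented.
        import json
        from abc import ABC, abstractmethod

        class Exporter(ABC):
            @abstractmethod
            def export(self, rows: list) -> str: ...

        class CsvExporter(Exporter):
            def export(self, rows):
                header = ",".join(rows[0]) if rows else ""
                body = [",".join(str(v) for v in row.values()) for row in rows]
                return "\n".join([header, *body])

        class JsonExporter(Exporter):
            def export(self, rows):
                return json.dumps(rows)

        def exporter_factory(fmt: str) -> Exporter:
            # The factory hides which concrete class the caller receives.
            registry = {"csv": CsvExporter, "json": JsonExporter}
            return registry[fmt]()

        print(exporter_factory("json").export([{"id": 1, "name": "test"}]))
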
  • Data Analytics Engineer

    Dale Workforce Solutions

    Senior data scientist job in Somerset, NJ

    Client: manufacturing company
    Type: direct hire

    Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets. This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams. This role is on-site five days per week in Somerset, NJ.

    Key Responsibilities

    Power BI Reporting & Administration
    • Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
    • Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
    • Develop and maintain data models to ensure accuracy, consistency, and reliability
    • Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
    • Optimize Power BI solutions for performance, scalability, and ease of use

    ETL & Data Warehousing
    • Design and maintain data warehouse structures, including schema and database layouts
    • Develop and support ETL processes to ensure timely and accurate data ingestion
    • Integrate data from multiple systems while ensuring quality, consistency, and completeness
    • Work closely with database administrators to optimize data warehouse performance
    • Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed

    Training & Documentation
    • Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
    • Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
    • Manage data definitions, lineage documentation, and data cataloging for all enterprise data models

    Project Management
    • Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
    • Collaborate with key business stakeholders to ensure departmental reporting needs are met
    • Record meeting notes in Confluence and document project updates in Jira

    Data Governance
    • Implement and enforce data governance policies to ensure data quality, compliance, and security
    • Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness

    Routine IT Functions
    • Resolve Help Desk tickets related to reporting, dashboards, and BI tools
    • Support general software and hardware installations when needed

    Other Responsibilities
    • Manage email and phone communication professionally and promptly
    • Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
    • Perform additional assigned duties as needed

    Qualifications (Required)
    • Minimum of 3 years of relevant experience
    • Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
    • Experience with cloud-based BI environments (Azure, AWS, etc.)
    • Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
    • Proficiency in SQL for data extraction, manipulation, and transformation
    • Strong knowledge of DAX
    • Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
    • Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
    • Strong analytical, problem-solving, and documentation skills
    • Excellent written and verbal communication abilities
    • High attention to detail and strong self-review practices
    • Effective time management and organizational skills; ability to prioritize workload
    • Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
    $82k-112k yearly est. 5d ago
  • Azure Data Engineer

    Programmers.Io (3.8 company rating)

    Senior data scientist job in Weehawken, NJ

    · Expert-level skills writing and optimizing complex SQL
    · Experience with complex data modelling, ETL design, and using large databases in a business environment
    · Experience with building data pipelines and applications to stream and process datasets at low latencies
    · Fluent with Big Data technologies like Spark, Kafka and Hive
    · Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    · Designing and building of data pipelines using API ingestion and Streaming ingestion methods
    · Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    · Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    · Thorough understanding of Azure and AWS Cloud Infrastructure offerings
    · Working knowledge of Python is desirable
    · Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    · Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    · Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    · Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    · Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    · Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    · Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    · Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best Regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 5d ago
  • Data Engineer

    Ztek Consulting (4.3 company rating)

    Senior data scientist job in Hamilton, NJ

    Key Responsibilities:
    • Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
    • Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
    • Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
    • Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
    • Manage FTP/SFTP file transfers between internal systems and external vendors. (A minimal SFTP automation sketch follows this listing.)
    • Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
    • Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.

    Required Skills & Experience:
    • 10+ years of experience in data engineering or production support within financial services or trading environments.
    • Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
    • Strong Python and SQL programming skills.
    • Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
    • Experience with Git, CI/CD pipelines, and Azure DevOps.
    • Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
    • Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
    • Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
    • Excellent communication, problem-solving, and stakeholder management skills.
    $89k-125k yearly est. 3d ago
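
    This role includes automating SFTP transfers of vendor files. Below is a minimal, hedged sketch using the paramiko library; the host, credentials, and paths are placeholders, and a production job would use key-based auth, logging, and retries.

        # Hedged sketch: pull vendor CSVs over SFTP with paramiko.
        # Host, credentials, and paths are placeholders; real jobs should use key auth.
        import paramiko

        transport = paramiko.Transport(("sftp.vendor.example.com", 22))      # placeholder host
        transport.connect(username="svc_marketdata", password="change-me")   # placeholder creds
        sftp = paramiko.SFTPClient.from_transport(transport)

        try:
            for name in sftp.listdir("/outbound"):                # hypothetical remote dir
                if name.endswith(".csv"):
                    sftp.get(f"/outbound/{name}", f"./landing/{name}")
        finally:
            sftp.close()
            transport.close()
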
  • Senior Data Architect

    ValueMomentum (3.6 company rating)

    Senior data scientist job in Edison, NJ

    • Act as an Enterprise Architect, supporting architecture reviews, design decisions, and strategic planning.
    • Design and implement scalable data warehouse and analytics solutions on AWS and Snowflake.
    • Develop and optimize SQL, ETL/ELT pipelines, and data models to support reporting and analytics.
    • Collaborate with cross-functional teams (data engineering, application development, infrastructure) to align on architecture best practices and ensure consistency across solutions.
    • Evaluate and recommend technologies, tools, and frameworks to improve data processing efficiency and reliability.
    • Provide guidance and mentorship to data engineering teams, enforcing data governance, quality, and security standards.
    • Troubleshoot complex data and performance issues and propose long-term architectural solutions.
    • Support capacity planning, cost optimization, and environment management within AWS/Snowflake ecosystems.

    About ValueMomentum: ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
    $82k-113k yearly est. 2d ago
  • AWS Data engineer with Databricks || USC Only || W2 Only

    Ipivot

    Senior data scientist job in Princeton, NJ

    AWS Data Engineer with Databricks
    Princeton, NJ - Hybrid - Need locals or nearby
    Duration: Long Term
    This position is available only to U.S. citizens.

    Key Responsibilities
    • Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data. (A minimal PySpark ETL sketch follows this listing.)
    • Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
    • Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
    • Document processes and mentor junior team members on best practices.

    Required Qualifications
    • Bachelor's degree in Computer Science, Engineering, or related field.
    • 5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
    • Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
    $82k-112k yearly est. 1d ago
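
    The responsibilities above center on Spark-based ETL over S3 data. Below is a minimal PySpark sketch of such a pipeline (plain Spark rather than a Glue job script); the bucket names and columns are placeholders, not the client's schema.

        # Hedged sketch: a plain-PySpark ETL step (read raw JSON, clean, write Parquet).
        # Bucket names and columns are placeholders, not the client's actual schema.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

        raw = spark.read.json("s3://example-raw-bucket/orders/")      # hypothetical source
        clean = (
            raw.dropDuplicates(["order_id"])
               .withColumn("order_date", F.to_date("order_ts"))
               .filter(F.col("amount") > 0)
        )
        clean.write.mode("overwrite").partitionBy("order_date").parquet(
            "s3://example-curated-bucket/orders/"                     # hypothetical target
        )
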
  • Data Engineer

    Company (3.0 company rating)

    Senior data scientist job in Fort Lee, NJ

    The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.

    Responsibilities
    • Analyze, structure, and interpret raw data.
    • Build and maintain datasets for business use.
    • Design and optimize database tables, schemas, and data structures.
    • Enhance data accuracy, consistency, and overall efficiency.
    • Develop views, functions, and stored procedures.
    • Write efficient SQL queries to support application integration.
    • Create database triggers to support automation processes.
    • Oversee data quality, integrity, and database security.
    • Translate complex data into clear, actionable insights.
    • Collaborate with cross-functional teams on multiple projects.
    • Present data through graphs, infographics, dashboards, and other visualization methods.
    • Define and track KPIs to measure the impact of business decisions.
    • Prepare reports and presentations for management based on analytical findings.
    • Conduct daily system maintenance and troubleshoot issues across all platforms.
    • Perform additional ad hoc analysis and tasks as needed.

    Qualifications
    • Bachelor's Degree in Information Technology or a relevant field.
    • 4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
    • Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
    • Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
    • Excellent written, verbal, and interpersonal communication skills.
    • Ability to manage multiple tasks in a fast-paced and evolving environment.
    • Strong work ethic, professionalism, and integrity.
    • Advanced proficiency in Microsoft Office applications.
    $93k-132k yearly est. 3d ago
  • Data Engineer

    Mastech Digital (4.7 company rating)

    Senior data scientist job in Jersey City, NJ

    Mastech Digital Inc. (NYSE: MHH) is a minority-owned, publicly traded IT staffing and digital transformation services company. Headquartered in Pittsburgh, PA, and established in 1986, we serve clients nationwide through 11 U.S. offices.

    Role: Data Engineer
    Location: Merrimack, NH / Smithfield, RI / Jersey City, NJ
    Duration: Full-Time/W2

    Job Description:
    Must-Haves:
    • Python for running ETL batch jobs
    • Heavy SQL for data analysis, validation, and querying
    • AWS and the ability to move data through the data stages and into the target databases. The Postgres database is the target, so that is required.
    Nice to haves:
    • Snowflake
    • Java for API development (will teach this)
    • Experience in asset management for domain knowledge
    • Production support debugging and processing of vendor data

    The Expertise and Skills You Bring
    • A proven foundation in data engineering - bachelor's degree+ preferred, 10+ years' experience
    • Extensive experience with ETL technologies
    • Design and develop ETL reporting and analytics solutions
    • Knowledge of Data Warehousing methodologies and concepts - preferred
    • Advanced data manipulation languages and frameworks (Java, Python, JSON) - required
    • RDBMS experience (Snowflake, PostgreSQL) - required
    • Knowledge of Cloud platforms and services (AWS - IAM, EC2, S3, Lambda, RDS) - required
    • Designing and developing low to moderately complex data integration solutions - required
    • Experience with DevOps, Continuous Integration and Continuous Delivery (Maven, Jenkins, Stash, Ansible, Docker) - preferred
    • Expert in SQL and Stored Procedures on any relational database
    • Good at debugging, analysis, and production support
    • Application development based on JIRA stories (Agile environment)
    • Demonstrable experience with ETL tools (Informatica, SnapLogic)
    • Experience working with Python in an AWS environment
    • Create, update, and maintain technical documentation for software-based projects and products
    • Solving production issues
    • Interact effectively with business partners to understand business requirements and assist in generation of technical requirements
    • Participate in architecture, technical design, and product implementation discussions
    • Working knowledge of Unix/Linux operating systems and shell scripting
    • Experience with developing sophisticated Continuous Integration & Continuous Delivery (CI/CD) pipelines including software configuration management, test automation, version control, and static code analysis
    • Excellent interpersonal and communication skills
    • Ability to work with global Agile teams
    • Proven ability to deal with ambiguity and work in a fast-paced environment
    • Ability to mentor junior data engineers

    The Value You Deliver
    • Help the team design and build best-in-class data solutions using a very diversified tech stack
    • Strong experience working in large teams and proven technical leadership capabilities
    • Knowledge of enterprise-level implementations like data warehouses and automated solutions
    • Ability to negotiate, influence, and work with business peers and management
    • Ability to develop and drive a strategy as per the needs of the team

    Good to have: Full-stack programming knowledge, hands-on test case/plan preparation within Jira
    $81k-105k yearly est. 4d ago
  • Senior Data Engineer- MDM

    Synechron (4.4 company rating)

    Senior data scientist job in Iselin, NJ

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.

    Our challenge:
    We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.

    Additional Information:
    The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within Iselin, NJ is $135K to $150K/year & benefits (see below).

    Key Responsibilities:
    • Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
    • Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
    • Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
    • Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM, etc.) and provide objective recommendations aligned with business requirements.
    • Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
    • Implement data integration pipelines leveraging modern data engineering tools and practices.
    • Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
    • Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
    • Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
    • Ensure compliance with data governance, data privacy, and security standards.
    • Support CI/CD pipelines for continuous integration and deployment of data solutions.

    Qualifications:
    • 12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
    • Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
    • Strong functional knowledge of reference data sources and domain-specific data standards.
    • Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
    • Familiarity with CI/CD practices, tools, and automation pipelines.
    • Ability to work collaboratively across teams to deliver complex data solutions.
    • Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).

    Preferred Skills:
    • Familiarity with financial data models and regulatory requirements.
    • Experience with Azure cloud platforms.
    • Knowledge of data governance, data quality frameworks, and metadata management.

    We offer:
    • A highly competitive compensation and benefits package
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad
    • 10 days of paid annual leave (plus sick leave and national holidays)
    • Maternity & Paternity leave plans
    • A comprehensive insurance plan including: medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
    • Retirement savings plans
    • A higher education certification policy
    • Commuter benefits (varies by region)
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development
    • On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms
    • A flat and approachable organization
    • A truly diverse, fun-loving and global work culture

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-150k yearly 3d ago
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Senior data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ Years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    • Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    • Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    • Ensure data security, compliance, lineage, and governance controls.
    • Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    • Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    • 10+ years of data engineering experience.
    • Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    • Azure certifications strongly preferred.
    • Strong SQL, Python, and cloud data architecture skills.
    • Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 5d ago
  • Senior Data Engineer (Snowflake)

    Epic Placements

    Senior data scientist job in Parsippany-Troy Hills, NJ

    Senior Data Engineer (Snowflake & Python)
    1-Year Contract | $60/hour + Benefit Options
    Hybrid: On-site a few days per month (local candidates only)

    Work Authorization Requirement: You must be authorized to work for any employer as a W2 employee. This is required for this role. This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.

    Overview: We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake. Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.

    What You'll Do
    • Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake (a minimal Snowflake query sketch follows this listing)
    • Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
    • Partner closely with engineering and data teams to identify and implement optimal technical solutions
    • Build and maintain high-performance, scalable data pipelines and data warehouse architectures
    • Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
    • Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
    • Manage deliverables and priorities effectively in a fast-moving environment
    • Contribute to data governance practices including metadata management and data lineage
    • Support analytics and reporting use cases leveraging advanced SQL and analytical functions

    Required Skills & Experience
    • 8+ years of experience designing and developing data solutions in an enterprise environment
    • 5+ years of hands-on Snowflake experience
    • Strong hands-on development skills with SQL and Python
    • Proven experience designing and developing data warehouses in Snowflake
    • Ability to diagnose, optimize, and tune SQL queries
    • Experience with Azure data frameworks (e.g., Azure Data Factory)
    • Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
    • Solid understanding of metadata management and data lineage
    • Hands-on experience with SQL analytical functions
    • Working knowledge of Shell scripting and Java scripting
    • Experience using Git, Confluence, and Jira
    • Strong problem-solving and troubleshooting skills
    • Collaborative mindset with excellent communication skills

    Nice to Have
    • Experience supporting Pharma industry data
    • Exposure to Omni-channel data environments

    Why This Opportunity
    • $60/hour W2 on a long-term 1-year contract
    • Benefit options available
    • Hybrid structure with limited on-site requirement
    • High-impact role supporting enterprise data initiatives
    • Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp

    This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
    $60 hourly 4d ago
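
    The role above combines Snowflake, Python, and SQL analytical (window) functions. A minimal sketch with the snowflake-connector-python package is shown below; the warehouse, database, table, and columns are placeholders.

        # Hedged sketch: run a windowed (analytical) SQL query against Snowflake.
        # Warehouse, database, table, and columns are placeholders.
        import os
        import snowflake.connector

        conn = snowflake.connector.connect(
            account=os.environ["SNOWFLAKE_ACCOUNT"],
            user=os.environ["SNOWFLAKE_USER"],
            password=os.environ["SNOWFLAKE_PASSWORD"],
            warehouse="ANALYTICS_WH",      # hypothetical warehouse
            database="SALES_DB",           # hypothetical database
            schema="PUBLIC",
        )

        sql = """
            SELECT region,
                   order_month,
                   revenue,
                   SUM(revenue) OVER (PARTITION BY region ORDER BY order_month) AS running_revenue
            FROM monthly_sales
        """
        cur = conn.cursor()
        cur.execute(sql)
        for row in cur.fetchmany(10):
            print(row)
        cur.close()
        conn.close()
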
  • Data Engineer

    The Judge Group (4.7 company rating)

    Senior data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer
    Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
    Nice to Haves: Java, Spark, React JS
    Interview Process: 2 rounds, 2nd will be on site

    You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines (and infrastructure); lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility (to analysts and data scientists); ensure compliance with data governance requirements; and ensure business alignment (ensure data engineering practices align with business goals).

    Required qualifications, capabilities, and skills:
    • Formal training or certification on data engineering concepts and 2+ years applied experience
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
    • Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
    • Extensive experience in AWS, design, implementation, and maintenance of data pipelines using Python and PySpark.
    • Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
    • Proven experience in performance tuning to ensure jobs are running at optimal levels with no performance bottleneck.
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks or Hadoop, a relational data store such as Postgres, Oracle or similar, and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB or similar
    • Advanced proficiency in Cloud Data Warehouses (Snowflake, AWS Redshift)
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
    • Proficiency in Unix scripting, data structures, data serialization formats such as JSON, AVRO, Protobuf, or similar, big-data storage formats such as Parquet, Iceberg, or similar, data processing methodologies such as batch, micro-batching, or stream, one or more data modelling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc., Agile methodology, TDD or BDD, and CI/CD tools.

    Preferred qualifications, capabilities, and skills:
    • Knowledge of data governance and security best practices.
    • Experience in carrying out data analysis to support business insights.
    • Strong Python and Spark
    $79k-111k yearly est. 1d ago
  • Senior Data Engineer

    Apexon

    Senior data scientist job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

    Job Description
    • Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
    • Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
    • Work in tandem with our engineering team to identify and implement the most optimal solutions
    • Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
    • Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
    • Able to manage deliverables in fast-paced environments

    Areas of Expertise
    • At least 10 years of experience designing and developing data solutions in an enterprise environment
    • At least 5+ years' experience on the Snowflake platform
    • Strong hands-on SQL and Python development
    • Experience with designing and developing data warehouses in Snowflake
    • A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark, Scala
    • Strong hands-on experience with orchestration tools, e.g. Airflow, Informatica, Automic
    • Good understanding of metadata and data lineage
    • Hands-on knowledge of SQL analytical functions
    • Strong knowledge and hands-on experience in Shell scripting, Java Scripting
    • Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
    • Good understanding and exposure to Git, Confluence and Jira
    • Good problem solving and troubleshooting skills
    • Team player, collaborative approach and excellent communication skills

    Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.

    You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 3d ago
  • Data Engineer

    NeenOpal Inc.

    Senior data scientist job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.

    Role Description
    This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.

    Key Responsibilities
    • Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
    • Data Integration: Integrate and transform data using industry-standard tools. Experience required with AWS services (AWS Glue, Data Pipeline, Redshift, and S3) and Azure services (Azure Data Factory, Synapse Analytics, and Blob Storage).
    • Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
    • Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
    • Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
    • Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
    • Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.

    Required Skills and Experience
    • Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
    • Integration: Experience integrating data via RESTful / GraphQL APIs.
    • Programming: Proficient in Python for ETL automation and SQL for database management.
    • Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
    • Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
    • Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
    • Authorization: Must have valid work authorization in the United States.

    Salary Range: $65,000-$80,000 per year

    Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.

    Equal Opportunity Employer: NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
    $65k-80k yearly 2d ago
  • Data Scientist

    Signature Science, LLC (4.4 company rating)

    Senior data scientist job in New Jersey

    The primary purpose of this position is to serve as the data scientist with a split portfolio between the Atlantic City office and the Austin chemistry group.

    Essential Duties and Responsibilities:
    • Performs data analytics, specifically data clean-up, data processing, predictive modeling, chemometric statistical modeling and analysis, multivariate data analysis, machine learning, and/or data mining, as related to scientific data. (A minimal multivariate-analysis sketch follows this listing.)
    • Applies technical skills to plan and execute assigned project work, including development of computational models, programming of detection algorithms, and machine learning.
    • Maintains operational capabilities of computation assets as needed by project requirements.
    • Leads meetings with company clients by preparing and presenting meeting materials.
    • Appropriately annotates project-developed computer code through comments and user manuals.
    • Presents technical results through the drafting of technical reports.
    • Presents experimental results and recommended actions at internal project meetings.
    • Supports business development efforts as needed by drafting technical sections of proposals, providing proposal review, assessing levels of effort required to complete proposed work, and brainstorming technical solutions to client problems.
    • Other duties as assigned.

    Required Knowledge, Skills & Abilities:
    • Ability to plan a sequence of experiments to answer complicated technical questions
    • Ability to lead a group of co-workers in execution of a task
    • Software programming proficiency with Java, C, R, Python, and/or MATLAB
    • Working knowledge of statistics as it applies to scientific data
    • Ability to communicate technical information to non-technical audiences
    • Team player with a positive attitude
    • Department of Homeland Security Suitability
    • Department of Defense Secret Clearance
    • Working knowledge of software development practices including Agile development and Git version control
    • Sufficient business knowledge to support proposal efforts

    Education/Experience: The incumbent professional should have a Ph.D. or master's degree in a physical science (preferably chemistry), statistics, or data science and significant experience in computer programming, computational modeling, or software development.

    Certificates and Licenses: None

    Clearance: The ability to obtain a Secret clearance and Department of Homeland Security suitability is required for this position.

    Supervisory Responsibilities: The incumbent professional may oversee junior-level staff members performing tasks.

    Working Conditions/Equipment: The incumbent professional is expected to work and/or be available during regular business hours. He/she should also generally be available via e-mail or phone during non-business hours as needed to address critical issues or emergencies. He/she may be required to travel on behalf of the company up to 25%.

    The above job description is not intended to be an all-inclusive list of duties and standards of the position. Incumbents will follow any other instructions and perform any other related duties, as assigned by their supervisor.
    $72k-99k yearly est. 26d ago
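
    The duties above include chemometric and multivariate analysis of scientific data. Below is a minimal Python sketch of one common step, PCA on a samples-by-channels spectral matrix; the data here is synthetic, generated only for illustration.

        # Hedged sketch: PCA on a samples-by-channels matrix; the data is synthetic.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(50, 200))     # 50 samples x 200 hypothetical channels

        pca = PCA(n_components=3)
        scores = pca.fit_transform(StandardScaler().fit_transform(spectra))
        print("scores shape:", scores.shape)
        print("explained variance ratio:", pca.explained_variance_ratio_)
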

Learn more about senior data scientist jobs

What are the top employers for senior data scientists in NJ?

Top 10 Senior Data Scientist companies in NJ

  1. Humana

  2. Johnson & Johnson

  3. Guardian Life

  4. Harris Blitzer Sports & Entertainment

  5. Cardinal Health

  6. Incedo

  7. Gilead Sciences

  8. 8427-Janssen Cilag Manufacturing Legal Entity

  9. Deloitte

  10. Prudential Financial
