
Data scientist jobs in Bernards, NJ

- 635 jobs
  • Data Scientist

    AGM Tech Solutions - A Woman and Latina-Owned IT Staffing Firm - An Inc. 5000 Company

    Data scientist job in Parsippany-Troy Hills, NJ

    ***This role is hybrid three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation***
    Data Scientist
    • Summary: Provide analytics, telemetry, ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.
    • Responsibilities:
      o Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
      o Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows.
      o Design experiments and validate AI-driven features during the pilot.
      o Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.
    • Required skills:
      o 3+ years applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
      o Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
      o Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.
    • Preferred:
      o Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
    $76k-106k yearly est. 5d ago
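The AGM posting above asks for anomaly detection over SDLC/CI telemetry. As a rough, hedged illustration of that kind of analysis (the metrics and values below are invented, not taken from the posting), a minimal scikit-learn sketch might look like this:

```python
# Minimal sketch: flag anomalous CI/delivery telemetry with Isolation Forest.
# Column names (build_minutes, failed_tests, deploy_count) are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

telemetry = pd.DataFrame({
    "build_minutes": [12, 14, 13, 55, 12, 11, 60, 13],
    "failed_tests":  [0, 1, 0, 20, 0, 2, 25, 1],
    "deploy_count":  [3, 4, 3, 1, 4, 3, 0, 4],
})

model = IsolationForest(contamination=0.25, random_state=0)
telemetry["anomaly"] = model.fit_predict(telemetry)  # -1 = anomalous, 1 = normal

print(telemetry[telemetry["anomaly"] == -1])  # builds worth investigating
```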
  • Senior Data Scientist

    Entech

    Data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD)
    1-Year Contract - Hybrid - Plainfield, NJ
    We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.
    What You'll Do
    Apply machine learning & advanced analytics to pharma commercial data
    Deliver insights on market dynamics, physician prescribing, and patient behavior
    Partner with R&D, medical affairs, and commercial teams to guide strategy
    Build predictive models for sales effectiveness, adherence, and market forecasting
    What We're Looking For
    PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
    5+ years of pharma or healthcare analytics experience
    Strong skills in enterprise-class software stacks and cloud computing
    Deep knowledge of pharma market dynamics & healthcare systems
    Excellent communication skills to translate data into strategy
    $84k-120k yearly est. 1d ago
  • Sr Data Engineer Python Serverside

    Canyon Associates

    Data scientist job in White House Station, NJ

    This is a direct-hire, full-time position with a hybrid format: on-site 2 days a week.
    YOU MUST BE A US CITIZEN OR GREEN CARD HOLDER; NO OTHER STATUS TO WORK IN THE US WILL BE PERMITTED.
    YOU MUST LIVE LOCAL TO THE AREA AND BE ABLE TO DRIVE ONSITE A MINIMUM OF TWO DAYS A WEEK.
    THE TECH STACK WILL BE:
    7 years demonstrated server-side development proficiency
    5 years demonstrated server-side development proficiency
    Programming Languages: Python (NumPy, Pandas, Oracle PL/SQL). Other non-interpreted languages like Java, C++, Rust, etc. are a plus. Must be proficient at the intermediate-advanced level of the language (concurrency, memory management, etc.)
    Design patterns: typical GoF patterns (Factory, Facade, Singleton, etc.)
    Data structures: maps, lists, arrays, etc.
    SCM: solid Git proficiency, MS Azure DevOps (CI/CD)
    $97k-129k yearly est. 1d ago
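The Canyon posting above expects familiarity with typical GoF design patterns such as Factory. A minimal, generic Python sketch of the Factory pattern (the parser classes are hypothetical examples, not part of the role):

```python
# Minimal sketch of the GoF Factory pattern in Python.
# The Parser classes are hypothetical examples, not from the job posting.
from abc import ABC, abstractmethod


class Parser(ABC):
    @abstractmethod
    def parse(self, raw: str) -> dict: ...


class JsonParser(Parser):
    def parse(self, raw: str) -> dict:
        import json
        return json.loads(raw)


class CsvParser(Parser):
    def parse(self, raw: str) -> dict:
        header, values = raw.splitlines()
        return dict(zip(header.split(","), values.split(",")))


def parser_factory(fmt: str) -> Parser:
    """Return a concrete Parser without the caller naming the class."""
    registry = {"json": JsonParser, "csv": CsvParser}
    return registry[fmt]()


print(parser_factory("csv").parse("id,name\n1,alice"))  # {'id': '1', 'name': 'alice'}
```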
  • Azure Data Engineer

    Programmers.Io

    Data scientist job in Weehawken, NJ

    · Expert-level skills writing and optimizing complex SQL
    · Experience with complex data modelling, ETL design, and using large databases in a business environment
    · Experience with building data pipelines and applications to stream and process datasets at low latencies
    · Fluent with Big Data technologies like Spark, Kafka, and Hive
    · Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    · Designing and building data pipelines using API ingestion and streaming ingestion methods
    · Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    · Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    · Thorough understanding of Azure and AWS cloud infrastructure offerings
    · Working knowledge of Python is desirable
    · Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    · Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    · Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    · Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    · Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    · Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    · Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    · Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
    Best Regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 1d ago
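The Programmers.Io posting above calls for streaming ingestion with Spark and Kafka at low latency. A minimal PySpark Structured Streaming sketch, assuming placeholder broker, topic, and storage paths (none of these come from the posting):

```python
# Minimal sketch: stream events from Kafka into a data lake with PySpark.
# Broker, topic, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events-topic")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/mnt/datalake/raw/events")
    .option("checkpointLocation", "/mnt/datalake/checkpoints/events")
    .start()
)
query.awaitTermination()
```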
  • AWS Data engineer with Databricks || USC Only || W2 Only

    Ipivot

    Data scientist job in Princeton, NJ

    AWS Data Engineer with Databricks
    Princeton, NJ - Hybrid - Need Locals or Nearby
    Duration: Long Term. This role is available only to U.S. citizens.
    Key Responsibilities
    Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data.
    Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
    Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
    Document processes and mentor junior team members on best practices.
    Required Qualifications
    Bachelor's degree in Computer Science, Engineering, or a related field.
    5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
    Familiarity with big data tools like Kafka and Hadoop, and with data warehousing concepts.
    $82k-112k yearly est. 2d ago
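The Ipivot posting above centers on ETL/ELT pipelines built with Databricks/Spark over S3 data. A minimal PySpark batch sketch under assumed bucket and column names (nothing here is from the posting):

```python
# Minimal sketch: batch ETL with PySpark - read raw JSON from S3,
# clean it, and write partitioned Parquet back out.
# Bucket names and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", to_date(col("order_ts")))
    .filter(col("amount") > 0)
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```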
  • Biostatistician

    Net2Source (N2S)

    Data scientist job in Rahway, NJ

    Join a Global Leader in Workforce Solutions - Net2Source Inc.
    Who We Are
    Net2Source Inc. isn't just another staffing company - we're a powerhouse of innovation, connecting top talent with the right opportunities. Recognized for 300% growth in the past three years, we operate in 34 countries with a global team of 5,500+. Our mission? To bridge the talent gap with precision - Right Talent. Right Time. Right Place. Right Price.
    Title: Statistical Scientist
    Duration: 12 Months (Start Date: First Week of January)
    Location: Rahway, NJ (Onsite/Hybrid Only)
    Rate: $65/hr on W2
    Position Summary
    We are seeking an experienced Statistical Scientist to support analytical method qualification, validation, and experimental design for Client's scientific and regulatory programs. The successful candidate will work closely with scientists to develop statistically sound protocols, contribute to method robustness and validation studies, and prepare reporting documentation for regulatory readiness. This position requires deep expertise in statistical methodologies and hands-on programming skills in SAS, R, and JMP.
    Key Responsibilities
    • Collaborate with scientists to design experiments, develop study protocols, and establish acceptance criteria for analytical applications.
    • Support analytical method qualification and validation through statistical protocol development, analysis, and reporting.
    • Write memos and technical reports summarizing statistical analyses for internal and regulatory audiences.
    • Assist scientists in assessing protocol deviations and resolving investigation-related issues.
    • Coordinate with the Quality Audit group to finalize statistical reports for BLA (Biologics License Application) readiness.
    • Apply statistical modeling approaches to evaluate assay robustness and method reliability.
    • Support data integrity and ensure compliance with internal processes and regulatory expectations.
    Qualifications
    Education (Required):
    • Ph.D. in Statistics, Biostatistics, Applied Statistics, or related discipline with 3+ years of relevant experience, or
    • M.S. in Statistics/Applied Statistics with 6+ years of relevant experience. (BS/BA candidates will not be considered.)
    Required Skills:
    • Proficiency in SAS, R, and JMP.
    • Demonstrated experience in analytical method qualification and validation, including protocol writing and statistical execution.
    • Strong background in experimental design for analytical applications.
    • Ability to interpret and communicate statistical results clearly in both written and verbal formats.
    Preferred / Nice-to-Have:
    • Experience with mixed-effect modeling.
    • Experience with Bayesian analysis.
    • Proven ability to write statistical software/code to automate routine analyses.
    • Experience presenting complex statistical concepts to leadership.
    • Experience in predictive stability analysis.
    Why Work With Us?
    At Net2Source, we believe in more than just jobs - we build careers. We champion leadership at all levels, celebrate diverse perspectives, and empower our people to make an impact. Enjoy a collaborative environment where your ideas matter, and your professional growth is our priority.
    Our Commitment to Inclusion & Equity
    Net2Source is an equal opportunity employer dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all.
    Awards & Recognition
    • America's Most Honored Businesses (Top 10%)
    • Fastest-Growing Staffing Firm by Staffing Industry Analysts
    • INC 5000 List for Eight Consecutive Years
    • Top 100 by Dallas Business Journal
    • Spirit of Alliance Award by Agile1
    Ready to Level Up Your Career? Click Apply Now and let's make it happen.
    $65 hourly 4d ago
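The Net2Source posting above lists mixed-effect modeling as a preferred skill for assay robustness work. A minimal statsmodels sketch of a random-intercept model on fabricated assay data (the columns and values are invented for illustration):

```python
# Minimal sketch: a random-intercept mixed-effects model with statsmodels.
# The assay dataset below is fabricated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_runs, n_reps = 6, 5
df = pd.DataFrame({
    "run": np.repeat([f"run{i}" for i in range(n_runs)], n_reps),
    "concentration": np.tile(np.linspace(1, 5, n_reps), n_runs),
})
run_effect = rng.normal(0, 0.3, n_runs)
df["response"] = (
    2.0 + 0.8 * df["concentration"]
    + np.repeat(run_effect, n_reps)
    + rng.normal(0, 0.1, len(df))
)

# Fixed effect of concentration, random intercept for each assay run.
model = smf.mixedlm("response ~ concentration", df, groups=df["run"]).fit()
print(model.summary())
```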
  • Senior Data Engineer (Snowflake)

    Epic Placements

    Data scientist job in Parsippany-Troy Hills, NJ

    Senior Data Engineer (Snowflake & Python)
    1-Year Contract | $60/hour + Benefit Options
    Hybrid: On-site a few days per month (local candidates only)
    Work Authorization Requirement
    You must be authorized to work for any employer as a W2 employee. This is required for this role. This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.
    Overview
    We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake. Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.
    What You'll Do
    Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
    Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
    Partner closely with engineering and data teams to identify and implement optimal technical solutions
    Build and maintain high-performance, scalable data pipelines and data warehouse architectures
    Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
    Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
    Manage deliverables and priorities effectively in a fast-moving environment
    Contribute to data governance practices including metadata management and data lineage
    Support analytics and reporting use cases leveraging advanced SQL and analytical functions
    Required Skills & Experience
    8+ years of experience designing and developing data solutions in an enterprise environment
    5+ years of hands-on Snowflake experience
    Strong hands-on development skills with SQL and Python
    Proven experience designing and developing data warehouses in Snowflake
    Ability to diagnose, optimize, and tune SQL queries
    Experience with Azure data frameworks (e.g., Azure Data Factory)
    Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
    Solid understanding of metadata management and data lineage
    Hands-on experience with SQL analytical functions
    Working knowledge of Shell scripting and Java scripting
    Experience using Git, Confluence, and Jira
    Strong problem-solving and troubleshooting skills
    Collaborative mindset with excellent communication skills
    Nice to Have
    Experience supporting Pharma industry data
    Exposure to Omni-channel data environments
    Why This Opportunity
    $60/hour W2 on a long-term 1-year contract
    Benefit options available
    Hybrid structure with limited on-site requirement
    High-impact role supporting enterprise data initiatives
    Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp
    This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
    $60 hourly 5d ago
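The Epic Placements posting above emphasizes Snowflake development from Python with SQL analytical (window) functions. A minimal sketch using the snowflake-connector-python package; the connection parameters, table, and columns are placeholders, not details from the posting:

```python
# Minimal sketch: run a window-function query against Snowflake from Python.
# Connection parameters and the ORDERS table/columns are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

query = """
    SELECT
        customer_id,
        order_date,
        amount,
        SUM(amount) OVER (
            PARTITION BY customer_id
            ORDER BY order_date
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS running_total
    FROM orders
"""

cur = conn.cursor()
cur.execute(query)
for customer_id, order_date, amount, running_total in cur:
    print(customer_id, order_date, amount, running_total)
cur.close()
conn.close()
```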
  • Azure Data Engineer

    Sharp Decisions

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ Years)
    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
    Key Responsibilities:
    Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    Ensure data security, compliance, lineage, and governance controls.
    Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    Troubleshoot performance issues and improve system efficiency.
    Required Skills:
    10+ years of data engineering experience.
    Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    Azure certifications strongly preferred.
    Strong SQL, Python, and cloud data architecture skills.
    Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 1d ago
  • Senior Data Engineer

    Apexon

    Data scientist job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX - and our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
    Job Description
    Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
    Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
    Work in tandem with our engineering team to identify and implement the most optimal solutions
    Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
    Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
    Able to manage deliverables in fast-paced environments
    Areas of Expertise
    At least 10 years of experience designing and developing data solutions in an enterprise environment
    At least 5 years of experience on the Snowflake platform
    Strong hands-on SQL and Python development
    Experience with designing and developing data warehouses in Snowflake
    A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark and Scala
    Strong hands-on experience with orchestration tools, e.g., Airflow, Informatica, Automic
    Good understanding of metadata and data lineage
    Hands-on knowledge of SQL analytical functions
    Strong knowledge and hands-on experience in shell scripting and Java scripting
    Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
    Good understanding of and exposure to Git, Confluence, and Jira
    Good problem-solving and troubleshooting skills
    Team player with a collaborative approach and excellent communication skills
    Our Commitment to Diversity & Inclusion:
    Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.
    You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 4d ago
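The Apexon posting above asks for hands-on experience with orchestration tools such as Airflow. A minimal Airflow DAG sketch with invented task logic (this is illustrative only, not Apexon's pipeline):

```python
# Minimal sketch: a daily two-task Airflow DAG (extract then load).
# DAG id, task logic, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def load():
    print("load curated data into the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```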
  • Data Engineer

    The Judge Group

    Data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
    Skillset: Data Engineer
    Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
    Nice to Haves: Java, Spark, React JS
    Interview Process: 2 rounds; the 2nd will be on site
    You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
    Job responsibilities:
    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines (and infrastructure); leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility (to analysts and data scientists); ensures compliance with data governance requirements; and ensures business alignment (data engineering practices align with business goals).
    Required qualifications, capabilities, and skills:
    • Formal training or certification on data engineering concepts and 2+ years of applied experience
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and a working understanding of NoSQL databases
    • Experience with statistical data analysis and the ability to determine appropriate tools and data patterns to perform analysis
    • Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark
    • Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional)
    • Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
    • Advanced proficiency in a cloud data warehouse (Snowflake, AWS Redshift)
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar
    • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or stream; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc.; Agile methodology; TDD or BDD; and CI/CD tools
    Preferred qualifications, capabilities, and skills:
    • Knowledge of data governance and security best practices
    • Experience in carrying out data analysis to support business insights
    • Strong Python and Spark
    $79k-111k yearly est. 2d ago
  • Data Engineer

    Neenopal Inc.

    Data scientist job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
    Role Description
    This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
    Key Responsibilities
    Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
    Data Integration: Integrate and transform data using industry-standard tools. Experience required with AWS services (AWS Glue, Data Pipeline, Redshift, and S3) and Azure services (Azure Data Factory, Synapse Analytics, and Blob Storage).
    Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
    Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
    Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
    Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
    Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
    Required Skills and Experience
    Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
    Integration: Experience integrating data via RESTful / GraphQL APIs.
    Programming: Proficient in Python for ETL automation and SQL for database management.
    Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
    Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
    Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
    Authorization: Must have valid work authorization in the United States.
    Salary Range: $65,000 - $80,000 per year
    Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
    Equal Opportunity Employer
    NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
    $65k-80k yearly 3d ago
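The NeenOpal posting above calls for integrating data via RESTful APIs and automating ETL in Python. A minimal ingestion sketch; the endpoint, pagination scheme, and SQLite target (standing in for a real warehouse) are all hypothetical:

```python
# Minimal sketch: pull records from a REST API and land them in a SQL table.
# The endpoint, pagination scheme, and SQLite target are hypothetical;
# a production pipeline would point at a real API and warehouse.
import pandas as pd
import requests
from sqlalchemy import create_engine

BASE_URL = "https://api.example.com/v1/customers"
engine = create_engine("sqlite:///staging.db")  # stand-in for Redshift/Snowflake

records, page = [], 1
while True:
    resp = requests.get(BASE_URL, params={"page": page, "per_page": 100}, timeout=30)
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    records.extend(batch)
    page += 1

df = pd.DataFrame(records).drop_duplicates(subset=["id"])
df.to_sql("stg_customers", engine, if_exists="replace", index=False)
print(f"loaded {len(df)} rows into stg_customers")
```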
  • Senior Data Engineer - MDM

    Synechron

    Data scientist job in Iselin, NJ

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
    Our challenge:
    We are seeking a highly skilled and experienced Senior Data Engineer specializing in Master Data Management (MDM) to join our data team. The ideal candidate will have a strong background in designing, implementing, and managing end-to-end MDM solutions, preferably within the financial sector. You will be responsible for architecting robust data platforms, evaluating MDM tools, and aligning data strategies to meet business needs.
    Additional Information
    The base salary for this position will vary based on geography and other factors. In accordance with the law, the base salary for this role if filled within Iselin, NJ is $135K to $150K/year & benefits (see below).
    Key Responsibilities:
    Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
    Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
    Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
    Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM, etc.) and provide objective recommendations aligned with business requirements.
    Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
    Implement data integration pipelines leveraging modern data engineering tools and practices.
    Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
    Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
    Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
    Ensure compliance with data governance, data privacy, and security standards.
    Support CI/CD pipelines for continuous integration and deployment of data solutions.
    Qualifications:
    12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
    Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
    Strong functional knowledge of reference data sources and domain-specific data standards.
    Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
    Familiarity with CI/CD practices, tools, and automation pipelines.
    Ability to work collaboratively across teams to deliver complex data solutions.
    Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
    Preferred Skills:
    Familiarity with financial data models and regulatory requirements.
    Experience with Azure cloud platforms.
    Knowledge of data governance, data quality frameworks, and metadata management.
    We offer:
    A highly competitive compensation and benefits package
    A multinational organization with 58 offices in 21 countries and the possibility to work abroad
    10 days of paid annual leave (plus sick leave and national holidays)
    Maternity & paternity leave plans
    A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region)
    Retirement savings plans
    A higher education certification policy
    Commuter benefits (varies by region)
    Extensive training opportunities, focused on skills, substantive knowledge, and personal development
    On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses
    Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups
    Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms
    A flat and approachable organization
    A truly diverse, fun-loving and global work culture
    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $135k-150k yearly 3d ago
  • Sr Data Modeler with Capital Markets/ Custody

    LTIMindtree

    Data scientist job in Jersey City, NJ

    LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************
    Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
    Work Location: Jersey City, NJ (Onsite, 5 days/week)
    Job Description:
    We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain. The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model. A software development background is not required.
    Key Responsibilities
    Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
    Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
    Enhance or streamline the logical data model, ensuring it is fit for purpose, scalable, and aligned with business needs before conversion to a physical model.
    Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
    Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
    Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
    Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
    Champion a detail-oriented, documentation-first culture within the data modeling team.
    Qualifications
    Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
    Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
    Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
    Strong skills in data analysis using SQL and/or other data analysis tools.
    Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
    Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
    Ability to mentor and lead junior team members in data modeling best practices.
    Passion for detail, documentation, and continuous improvement.
    A software development background is not required.
    Preferred Skills
    Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
    Familiarity with capital markets business processes and data flows.
    Knowledge of regulatory and compliance requirements in financial data management.
    Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).
    Benefits and Perks:
    Comprehensive medical plan covering medical, dental, vision
    Short-term and long-term disability coverage
    401(k) plan with company match
    Life insurance
    Vacation time, sick leave, paid holidays
    Paid paternity and maternity leave
    LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $79k-111k yearly est. 5d ago
  • Data Scientist Lead [Multiple Positions Available]

    JPMC

    Data scientist job in Jersey City, NJ

    Duties: Responsible for identifying key metrics and conducting in-depth analyses to guide business and inform strategy decisions. Work with cross-functional teams including Product, Sales, Marketing, Account Management, and Risk to help upsell and cross-sell products and to help improve our merchants' experience. Partner with leaders across the business of Sales, Marketing, Finance and Account Management to deliver high-impact analytics. Identify and define a business challenge and craft a strategic analysis plan to uncover the necessary insights and solutions. Collaborate with leaders, various internal teams and stakeholders across the company to develop insights and analytics solutions for both internal and external clients. Examine diverse data sources and develop statistical models to pinpoint trends and key factors, supporting decision-making in product development, sales strategies, risk management, and operational processes. Build data visualization by leveraging existing tools or developing new ones, as well as providing ongoing enhancements to business dashboards. Implement processes and build tools to make data access, extraction, and analysis more efficient. Build and grow relationships with internal teams to proactively identify business needs and address them using analytics.
    QUALIFICATIONS:
    Minimum education and experience required: Master's degree in Business Analytics, Data Science, Machine Learning, or a related field of study plus 4 years (48 months) of experience in the job offered or as Data Scientist, Decision Analytics Associate, or related occupation.
    Skills Required: This position requires four (4) years of experience with the following: Using Python to conduct complex analyses to solve unstructured problems by building predictive models using the libraries NumPy, Pandas, Scikit-learn, Scipy, and Statsmodels; predictive modeling techniques, including Logistic Regression and Decision Trees; unsupervised learning using K-Means clustering and the elbow method; model interpretability via SHAP; Microsoft Excel (VBA, Vlookup, Index Match, Pivot Tables).
    This position requires two (2) years of experience with the following: Customer Churn Predictive Models with supervised learning techniques leveraging XGBoost, Random Forest, and the time series forecasting methods ARIMA, Prophet, and LSTM using the libraries TensorFlow, Keras, and Fbprophet; anomaly detection methods, including Isolation Forest and One-Class SVM; Statistical Analysis and Hypothesis Testing capabilities, including ANOVA and TURF Analysis; handling banking-related datasets, including Customer Complaints Data and Payments Data; Financial Scenario Analysis and Risk Modeling using Monte Carlo simulations and scenario analysis; dashboard creation and automation using Looker, Tableau, and Power BI; advanced SQL, including writing CTEs, Window Functions, and Stored Procedures; data pipelines leveraging Cron Jobs, Shell Scripts, and Python Automation; log parsing and text processing using PERL; data integration with Java backends via REST APIs; KPI tracking and reporting automation by integrating Python, SQL, and BI tools; Jupyter Notebooks for data analysis and visualization; streaming and deployment with Apache Kafka and Docker.
    We offer a competitive total rewards package including base salary determined based on the role, experience, skill set, and location. For those in eligible roles, discretionary incentive compensation may be awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. In addition, please visit: ********************************************
    Job Location: 545 Washington Blvd, Jersey City, NJ 07310. Full-Time.
    Salary: $173,945 to $210,000 per year.
    $173.9k-210k yearly Auto-Apply 2d ago
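The JPMC posting above describes customer churn prediction with XGBoost and model interpretability via SHAP. A minimal sketch of that workflow on fabricated data (features and labels are synthetic, purely for illustration):

```python
# Minimal sketch: churn prediction with XGBoost plus SHAP-based interpretability.
# The feature matrix is randomly generated; real work would use merchant/customer data.
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n = 2000
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),
    "monthly_volume": rng.gamma(2.0, 500.0, n),
    "support_tickets": rng.poisson(1.5, n),
})
# Synthetic churn signal: short tenure and many tickets raise churn probability.
logit = -1.5 - 0.04 * X["tenure_months"] + 0.6 * X["support_tickets"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP values explain which features drive each churn prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```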
  • Data Scientist

    Flutter Entertainment PLC

    Data scientist job in Jersey City, NJ

    :" Our roster has an opening with your name on it FanDuel is looking for a Data Scientist, to drive our Sportsbook customer generosity distribution and bonus spend optimization. In this role you will be responsible for customer segmentation, modeling, automation, reporting and data analysis across all areas of Sportsbook generosity distribution. Using your numerical and technology capabilities you will play a key role in bonus spend research, optimization and communication of results to CRM and Commercial senior management. This is a great opportunity for somebody who wants to work in an environment where curiosity is rewarded, and people are empowered to make decisions. The ideal candidate will have a passion for continual learning and developing innovative solutions to business problems. THE GAME PLAN Everyone on our team has a part to play * Take on data science projects from start to finish, including working with stakeholders to agree project KPIs, and reporting on progress and outcomes throughout projects at agreed cadence * Deliver high-quality analysis, customer segmentation and predictive models which help drive incremental revenue through optimal automated bonus spend distribution * Improve our understanding of customer behaviors, efficiency of promotions, and effectiveness of bonus spend * Through idea generation and exploratory analysis, develop new and innovative ways to turn behaviors into features for use in analysis, segmentation and predictive modeling * Create, modify, and evolve models which solve business problems, and show demonstrable improvements to department KPIs * Demonstrate accurate insights and predictions using easy to interpret visualizations and clear explanations of results, recommendations, methods, biases and limitations * Work with Data Engineering and Analytics Engineer to optimize data quality and data modeling pipelines * Take responsibility to learn the necessary skills and techniques, to develop ability to work with independence, integrity and accuracy * Collaborate with key stakeholders to help in identifying areas for improvement, and support their strategic objectives * Assist in coaching, developing and performance managing less experienced team members * Contribute new ideas for data science projects which have demonstrable potential to improve revenue performance or customer experience THE STATS What we're looking for in our next teammate * Minimum of 3+ years of previous Data Science or Analytics Experience * Minimum 1+ years building and deploying complex machine learning models with real world applications * Have a developed technical skill set which facilitates high-quality, efficient & comprehensive quantitative analysis * Proficient in SQL and experience of manipulating large datasets in Microsoft Excel * Proficient in statistical or programming software and techniques (e.g. in Python, R, C#, VBA, etc.) * Strong communication and presentation skills, experienced in explaining technical projects to non-technical stakeholders * Experience working with data visualization tools like Tableau an advantage * Undergraduate degree in a numerate or technical discipline * Interest in sports is a plus * Interest or experience in the Sports betting industry is an advantage ABOUT FANDUEL FanDuel Group is the premier mobile gaming company in the United States. 
FanDuel Group consists of a portfolio of leading brands across mobile wagering including: America's #1 Sportsbook, FanDuel Sportsbook; its leading iGaming platform, FanDuel Casino; the industry's unquestioned leader in horse racing and advance-deposit wagering, FanDuel Racing; and its daily fantasy sports product. In addition, FanDuel Group operates FanDuel TV, its broadly distributed linear cable television network and FanDuel TV+, its leading direct-to-consumer OTT platform. FanDuel Group has a presence across all 50 states and Puerto Rico. The company is based in New York with US offices in Los Angeles, Atlanta, and Jersey City, as well as global offices in Canada and Scotland. The company's affiliates have offices worldwide, including in Ireland, Portugal, Romania, and Australia. FanDuel Group is a subsidiary of Flutter Entertainment, the world's largest sports betting and gaming operator with a portfolio of globally recognized brands and traded on the New York Stock Exchange (NYSE: FLUT). PLAYER BENEFITS We treat our team right We offer amazing benefits above and beyond the basics. We have an array of health plans to choose from (some as low as $0 per paycheck) that include programs for fertility and family planning, mental health support, and fitness benefits. We offer generous paid time off (PTO & sick leave), annual bonus and long-term incentive opportunities (based on performance), 401k with up to a 5% match, commuter benefits , pet insurance, and more - check out all our benefits here: FanDuel Total Rewards. *Benefits differ across location, role, and level. FanDuel is an equal opportunities employer and we believe, as one of our principles states, "We are One Team!". As such, we are committed to equal employment opportunity regardless of race, color, ethnicity, ancestry, religion, creed, sex, national origin, sexual orientation, age, citizenship status, marital status, disability, gender identity, gender expression, veteran status, or another other characteristic protected by state, local or federal law. We believe FanDuel is strongest and best able to compete if all employees feel valued, respected, and included. The applicable salary range for this position is $116,000 - $145,000 USD, which is dependent on a variety of factors including relevant experience, location, business needs and market demand. This role may offer the following benefits: medical, vision, and dental insurance; life insurance; disability insurance; a 401(k) matching program; among other employee benefits. This role may also be eligible for short-term or long-term incentive compensation, including, but not limited to, cash bonuses and stock program participation. This role includes paid personal time off and 14 paid company holidays. FanDuel offers paid sick time in accordance with all applicable state and federal laws. #LI-Hybrid "}
    $116k-145k yearly 6d ago
  • Principal Data Scientist

    Maximus

    Data scientist job in Newark, NJ

    Description & Requirements We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team. You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes. This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., leaning in on existing or future projects and providing solutioning to capture new work.) This position requires occasional travel to the DC area for client meetings. Essential Duties and Responsibilities: - Make deep dives into the data, pulling out objective insights for business leaders. - Initiate, craft, and lead advanced analyses of operational data. - Provide a strong voice for the importance of data-driven decision making. - Provide expertise to others in data wrangling and analysis. - Convert complex data into visually appealing presentations. - Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners. - Understand the importance of automation and look to implement and initiate automated solutions where appropriate. - Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects. - Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects. - Guide operational partners on product performance and solution improvement/maturity options. - Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization. - Learn new skills in advanced analytics/AI/ML tools, techniques, and languages. - Mentor more junior data analysts/data scientists as needed. - Apply strategic approach to lead projects from start to finish; Job-Specific Minimum Requirements: - Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation. - Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital. - Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning. - Contribute to the development of mathematically rigorous process improvement procedures. - Maintain current knowledge and evaluation of the AI technology landscape and emerging. developments and their applicability for use in production/operational environments. Minimum Requirements - Bachelor's degree in related field required. - 10-12 years of relevant professional experience required. 
Job-Specific Minimum Requirements: - 10+ years of relevant Software Development + AI / ML / DS experience. - Professional Programming experience (e.g. Python, R, etc.). - Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML. - Experience with API programming. - Experience with Linux. - Experience with Statistics. - Experience with Classical Machine Learning. - Experience working as a contributor on a team. Preferred Skills and Qualifications: - Masters or BS in quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.). - Experience developing machine learning or signal processing algorithms: - Ability to leverage mathematical principles to model new and novel behaviors. - Ability to leverage statistics to identify true signals from noise or clutter. - Experience working as an individual contributor in AI. - Use of state-of-the-art technology to solve operational problems in AI and Machine Learning. - Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles. - Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions. - Ability to build reference implementations of operational AI & Advanced Analytics processing solutions. Background Investigations: - IRS MBI - Eligibility #techjobs #VeteransPage EEO Statement Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics. Pay Transparency Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances. Accommodations Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process-including accessing job postings, completing assessments, or participating in interviews,-please contact People Operations at **************************. Minimum Salary $ 156,740.00 Maximum Salary $ 234,960.00
    $79k-113k yearly est. Easy Apply 9d ago
  • Data Scientist

    Implify

    Data scientist job in Newark, NJ

    Implify, Inc is a global IT solutions and services firm. Since its inception, Implify, Inc has been providing best-quality and cost-effective IT solutions to Fortune 1000 companies, mid-range companies and upcoming companies via its onsite, offshore and in-house service models. IMPLIFY is an IT consulting services and software development firm dedicated to business success through long-term relationships with our clients and staff. IMPLIFY has built a dynamic, profitable, service-oriented enterprise, and is positioned to successfully respond to trends and changes in the information technology industry.
    Job Description
    Data Scientist
    Location: Newark, NJ
    Interview process: Phone and In Person (face to face)
    As a Data Scientist, you solve complex challenges and identify new opportunities using a combination of analytical expertise, business acumen, strategic thinking, and project and relationship management skills. This is an exciting opportunity to be a part of a new strategic initiative; this position is located in Newark, N.J., a quick, easy 15-minute train ride from New York Penn Station.
    RESPONSIBILITIES
    • Integrate and mine large data sets, connecting data from disparate sources to identify insights and patterns using predictive and prescriptive analytics and machine learning techniques
    • Conduct intermediate and advanced statistical analysis, such as linear regression, ANOVA, time-series analysis, classification models, neural networks, and decision trees, as well as analysis of unstructured data (e.g., social media listening, digital footprints, speech analytics)
    • Prepare and present written and verbal reports, findings, and presentations to key stakeholders, distilling complex statistical information into easy-to-understand business language
    • Apply knowledge of U.S. businesses and corporate groups and relevant industry knowledge to analysis and insights
    • Manage project budgets and timelines, ensuring on-time and on-budget completion
    QUALIFICATIONS
    • Experience analyzing large data sets using statistical software, such as SAS, R, Python, and SPSS, to discover new business insights
    • Excel and PowerPoint a must. SQL programming and experience with at least one DBMS such as IBM DB2, Oracle, SQL Server or Sybase are required. Java strongly desired
    • Prior experience with building models, analyzing unstructured data, and/or machine learning
    • Relevant academic experience and work experience in Statistics, with exposure to data structures and data visualization
    • Master's degree in Mathematics, Statistics, Engineering, Computer Science, or a quantitative discipline plus a minimum of 2-3 years of work experience, or bachelor's degree plus 5 years of work experience
    • Well-developed written and oral communication skills, with the ability to present and explain data to business managers
    • Strong project management skills / experience managing projects, budgets, and schedules to successful completion
    • Some prior exposure to the financial services or insurance industry desired
    Additional Information
    All your information will be kept confidential according to EEO guidelines.
    $76k-107k yearly est. 17h ago
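The responsibilities above mention linear regression and ANOVA among the core analyses. The following is a minimal sketch of that workflow in Python with statsmodels on entirely synthetic data; the column names (spend, region, revenue) and coefficients are invented for illustration and do not come from the posting.

```python
# Hypothetical regression + ANOVA example; data and variable names are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "spend": rng.gamma(2.0, 50.0, size=300),                  # continuous predictor
    "region": rng.choice(["NE", "SE", "MW", "W"], size=300),  # categorical factor
})
region_effect = df["region"].map({"NE": 5, "SE": 0, "MW": -3, "W": 2})
df["revenue"] = 20 + 0.8 * df["spend"] + region_effect + rng.normal(0, 10, size=300)

# Linear regression with a categorical factor, followed by a type-II ANOVA table.
model = smf.ols("revenue ~ spend + C(region)", data=df).fit()
print(model.summary().tables[1])
print(sm.stats.anova_lm(model, typ=2))
```

The same formula interface extends to interaction terms and to classification via logistic regression, which is why it is a common starting point for the kind of analysis described in the listing.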
  • Data Scientist

    Zone It Solutions

    Data scientist job in Parsippany-Troy Hills, NJ

    Job Description: We are looking for a talented Data Scientist skilled in Python and SQL. In this role, you will analyze large datasets, develop predictive models, and derive actionable insights that will guide business decisions (a minimal Python/SQL modeling sketch follows this listing).

    Requirements:
    • Proven experience as a Data Scientist or in a similar role, with a strong focus on data analysis and modeling.
    • Proficiency in programming languages, especially Python, and strong SQL skills for database management and querying.
    • Experience with statistical analysis techniques and data visualization tools (e.g., Tableau, Matplotlib, Seaborn).
    • Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, TensorFlow).
    • Strong analytical skills and the ability to work with large datasets to extract meaningful information.
    • Experience in data preprocessing, feature engineering, and model evaluation.
    • Excellent problem-solving abilities and strong communication skills to present findings effectively.
    • A degree in Computer Science, Mathematics, Statistics, or a related field is preferred.

    Benefits / About Us: We specialize in Digital, ERP, and larger IT services. We offer flexible, efficient, and collaborative solutions to any organization that requires IT experts. Our agile, agnostic, and flexible solutions will help you source the IT expertise you need. If you are looking for new opportunities, send your profile to ***************************, and follow our LinkedIn page for new job opportunities and more. Zone IT Solutions is an equal opportunity employer, and our recruitment process focuses on essential skills and abilities. We encourage applications from a diverse array of backgrounds, including individuals of various ethnicities, cultures, and linguistic backgrounds, as well as those with disabilities.
    $76k-106k yearly est. Easy Apply 20d ago
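The listing above centers on Python plus SQL for predictive modeling, feature preparation, and model evaluation. Below is a minimal, hypothetical sketch of that loop using an in-memory SQLite table, pandas, and a scikit-learn pipeline; the customers table, its columns, and the churn target are assumptions made for illustration only.

```python
# Minimal Python + SQL modeling loop on a stand-in SQLite database.
import sqlite3
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# In practice this would be a production warehouse; here we fabricate a table.
conn = sqlite3.connect(":memory:")
rng = np.random.default_rng(7)
pd.DataFrame({
    "tenure_months": rng.integers(1, 60, size=500),
    "monthly_spend": rng.gamma(2.0, 30.0, size=500),
    "churned": rng.integers(0, 2, size=500),
}).to_sql("customers", conn, index=False)

# Pull features with SQL, then evaluate a simple scaled logistic-regression pipeline.
df = pd.read_sql("SELECT tenure_months, monthly_spend, churned FROM customers", conn)
X, y = df[["tenure_months", "monthly_spend"]], df["churned"]
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
print("CV accuracy:", round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```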
  • Principal Predictive Modeler/Data Scientist - Insurance Analytics - NJ #2811

    Right Talent Right Now

    Data scientist job in Jersey City, NJ

    Title: Principal Predictive Modeler/Data Scientist - Insurance Analytics - NJ #2811

    The company is seeking a Principal Predictive Modeler / Data Scientist. This is a senior-level technical position in a rapidly growing unit that designs, creates, and offers innovative data-driven products to the property and casualty insurance industry, as well as to life insurance and parts of the credit industry. You will apply your highly developed analytical skills to all aspects of the insurance value chain, ranging from pricing models, fraud detection, process triaging, and financial risk models to a variety of other analytics solutions. You will also communicate with customers, external partners, and internal departments while effectively leading complex analytics projects.

    Responsibilities:
    • Independently executes data analysis and modeling projects, from project/sample design, business review meetings with internal and external clients to derive requirements and deliverables, and reception and processing of data, through analysis and modeling, final reports and presentations, communication of results, and sales support.
    • Utilizes advanced statistical techniques to create high-performing predictive models and creative analyses that address business objectives and client needs; demonstrates to internal and external customers how analysis can be implemented to maximize business strategies, including cost-benefit analyses. (A minimal claim-frequency GLM sketch follows this listing.)
    • Provides technical sales support, which may include strategic consulting, needs assessments, and the preparation and presentation of analytical proposals.
    • Tests new statistical analysis methods, software, and data sources for continual improvement of quantitative solutions.
    • Creates clear, easy-to-understand reports and/or PowerPoint decks for client meetings or third-party collaborations; verbally presents analysis ideas, progress, and results to internal managers, external partners, and customers.
    • Communicates with internal groups on data specifications and with IT engineers on model/algorithm implementation.
    • Provides high-quality ongoing customer and sales support: uncovering opportunities, answering questions, resolving problems, and building solutions.
    • Assures compliance with regulatory and privacy requirements during the design and implementation of modeling and analysis projects.
    • Shares knowledge within the analytics group.

    Required qualifications:
    • Graduate-level degree with a concentration in a quantitative discipline such as statistics, computer science, mathematics, economics, or operations research, OR a bachelor's degree with CAS Associate or Fellow status or substantial exam progress.
    • Strong verbal and written communication skills, listening and teamwork skills, and effective presentation skills. This is essential, since the role has extensive exposure to different internal groups (data, IT, product, and sales) as well as third-party partners and customers (insurance companies).
    • 8+ years of relevant work experience, including 3 years of statistical modeling and data mining in a P&C insurance context (insurance company or insurance consulting firm) using large and complex datasets. Experience with personal lines rating plan modeling is a strong plus; life or health insurance experience is a bonus.
    • Expertise in statistical modeling techniques such as linear regression, logistic regression, GLMs, tree models, cluster analysis, principal components, and feature creation and validation.
    • Programming experience with SAS (STAT, macros, EM), R, and other statistical software (CART, Emblem, SPSS, Matlab). Greenplum and UNIX experience is a plus.
    • Aptitude for performing multiple tasks and dealing with changing deadline requirements, including knowing when to escalate issues; maintains a focused, flexible, organized, and proactive manner.
    • Proficiency in Microsoft Office (Excel, Word, PowerPoint): not just software use, but proficiency in creating effective, visually appealing PowerPoint presentations and well-structured, error-free, readable spreadsheets.

    Bottom-line requirements (notes required with candidate submittal):
    1. Graduate degree with a concentration in a quantitative discipline, or a bachelor's degree with CAS Associate or Fellow status.
    2. 8+ years of relevant work experience, including 3 years of statistical modeling and data mining in a property and casualty insurance context (insurance company or insurance consulting firm).
    3. Expertise in statistical modeling techniques such as linear regression, logistic regression, GLMs, tree models, cluster analysis, principal components, and feature creation and validation.
    4. Programming experience with SAS (STAT, macros, EM), R, and other statistical software (CART, Emblem, SPSS, Matlab). Greenplum and UNIX experience is a plus.

    Additional Information: All your information will be kept confidential according to EEO guidelines.
    $76k-107k yearly est. 17h ago
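The modeling expertise requested above includes GLMs for insurance applications such as pricing. The snippet below sketches a claim-frequency GLM (Poisson family, log link, exposure offset) in Python with statsmodels on synthetic policy data; the rating factors, rates, and exposures are invented for illustration, and the posting itself names SAS/R rather than this exact stack.

```python
# Hedged illustration of a claim-frequency GLM on synthetic policy data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "driver_age": rng.integers(18, 80, size=n),
    "territory": rng.choice(["urban", "suburban", "rural"], size=n),
    "exposure": rng.uniform(0.25, 1.0, size=n),   # earned car-years
})
# Fabricated "true" frequency: declines with age, varies by territory.
base_rate = 0.10 * np.exp(-0.01 * (df["driver_age"] - 18))
territory_factor = df["territory"].map({"urban": 1.4, "suburban": 1.0, "rural": 0.8})
df["claim_count"] = rng.poisson(base_rate * territory_factor * df["exposure"])

# Poisson GLM with log link; exposure enters as an offset so coefficients act as rate relativities.
freq_model = smf.glm(
    "claim_count ~ driver_age + C(territory)",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),
).fit()
print(freq_model.summary())
```

Exponentiating the fitted coefficients gives the multiplicative relativities that typically feed a rating plan, which is why the GLM-with-offset form is standard in P&C frequency modeling.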
  • Data Scientist Associate

    Rantodac

    Data scientist job in Jersey City, NJ

    Roles and Responsibilities

    Introduction: As a Data Scientist at Rantodac, you will help transform our clients' data into tangible business value by analyzing information, communicating outcomes, and collaborating on product development. You will work with best-in-class open-source and visual tools, along with the most flexible and scalable deployment options. Whether investigating patient trends or weather patterns, you will work to solve real-world problems for the industries transforming how we live.

    Your Role and Responsibilities: As an Advisory Data Scientist, you will demonstrate expertise in advanced people analytics. We use the latest advanced analytics technologies, tools, and techniques to extract insights from our data. We work collaboratively with other teams within the company and apply these insights to make complex and strategic decisions for the benefit of Rantodac. As a Data Scientist, you'll team with some of the best minds in the industry to create innovative, world-class solutions focused on business needs. You'll do this by designing and developing analytical approaches to solve our clients' business problems.

    Responsibilities:
    • Work with clients to define the key business problems to be solved while developing, maintaining, and leveraging key client relationships.
    • Identify, gather, and analyze complex, multi-dimensional datasets using a variety of tools.
    • Design advanced analytics solutions and services in the area(s) of optimization (linear, mixed-integer, constraint programming), simulation, or predictive analytics. (A minimal linear-programming sketch follows this listing.)
    • Formulate mathematical approaches to solve those problems, potentially using ILOG CPLEX, Structured Query Language (SQL), Apache Spark, the Statistical Package for the Social Sciences (SPSS), Python, or R.
    • Use industry technologies, tools, and data mining frameworks for data analytics, including data visualization, to analyze data and draw conclusions.
    • Communicate data insights to key stakeholders and secure their buy-in.
    • Integrate the analytics into existing business workflows and measure business outcomes (quantitatively and/or qualitatively) on a periodic basis.
    • Develop junior team members and lead project tasks.
    • Maintain knowledge and understanding of current and emerging trends within the analytics industry. A good level of business acumen and business domain knowledge is highly preferred.

    Required Technical and Professional Expertise:
    • A minimum of 0-2 years of experience in determining mathematical approaches to solve problems, designing sampling plans, and gathering, analyzing, and presenting data.
    • Proficiency in designing, building, and managing solutions using, for example, ILOG CPLEX, Python, SQL, Apache Spark, SPSS, or R.
    • Work experience in building predictive models using various ML algorithms, drawing insights from model output, communicating results to key stakeholders, integrating the analytics into existing business workflows, and measuring business outcomes.
    • Expertise in statistical analysis and in deploying the results of the analysis.
    • Experience in data collection, data mining, and text mining.

    Preferred Technical and Professional Expertise:
    • A minimum of 0-3 years of experience in determining mathematical approaches to solve problems, designing sampling plans, and gathering, analyzing, and presenting data.
    • Exposure to unstructured data sets and natural language processing.
    • Proven business acumen and business domain knowledge is highly preferred.
    $76k-107k yearly est. 60d+ ago
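The responsibilities above include optimization (linear, mixed-integer, constraint programming), with ILOG CPLEX named as one possible solver. The sketch below uses SciPy's linprog instead so it runs without a commercial solver; the products, prices, and resource limits are made up for illustration and are not part of the posting.

```python
# Tiny production-planning LP: choose quantities of two products to maximize profit
# subject to machine-hour and labor-hour limits. Uses SciPy's HiGHS backend.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2  ->  linprog minimizes, so negate the objective.
c = [-40, -30]

A_ub = [
    [2, 1],   # machine hours consumed per unit of each product
    [1, 3],   # labor hours consumed per unit of each product
]
b_ub = [100, 90]  # available machine hours, available labor hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("Optimal plan:", res.x.round(2), "profit:", round(-res.fun, 2))
```

A mixed-integer or constraint-programming variant of the same problem would swap in a solver such as CPLEX or another MIP engine, but the formulation pattern (objective, constraints, bounds) stays the same.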

Learn more about data scientist jobs

How much does a data scientist earn in Bernards, NJ?

The average data scientist in Bernards, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Bernards, NJ

$90,000

What are the biggest employers of Data Scientists in Bernards, NJ?

The biggest employers of Data Scientists in Bernards, NJ are:
  1. InvestCloud
  2. Valley National Bank
  3. Nestlé