
Data scientist jobs in Guttenberg, NJ

- 909 jobs
  • Senior Data Scientist (Senior Consultant)

    Guidehouse (3.7 company rating)

    Data scientist job in New York, NY

    Job Family: Data Science Consulting
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    About our AI and Data Capability Team: Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services, from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, and artificial intelligence, as well as intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.

    What You Will Do:
    • Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
    • Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
    • Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
    • Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
    • Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
    • Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.
    What You Will Need:
    • US citizenship is required.
    • Bachelor's degree is required.
    • A minimum of three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive and user-friendly dashboards and visualizations.
    • Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
    • Strong foundational knowledge and experience in statistics, probability, and experimental design.
    • Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
    • Experience applying data governance concepts and techniques to assure greater data quality and reliability.
    • The curiosity and creativity to uncover hidden patterns and opportunities.
    • Strong communication skills to bridge technical and business worlds.

    What Would Be Nice To Have:
    • Hands-on experience with Python, SQL, and modern ML frameworks.
    • Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
    • Expertise in Python or Java for data processing.
    • Demonstrated work experience within the public sector.
    • Ability to support business development, including RFP/RFQ/RFI responses involving data science and analytics.

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
    Benefits include:
    • Medical, Rx, Dental & Vision Insurance
    • Personal and Family Sick Time & Company Paid Holidays
    • Position may be eligible for a discretionary variable incentive bonus
    • Parental Leave and Adoption Assistance
    • 401(k) Retirement Plan
    • Basic Life & Supplemental Life
    • Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
    • Short-Term & Long-Term Disability
    • Student Loan PayDown
    • Tuition Reimbursement, Personal Development & Learning Opportunities
    • Skills Development & Certifications
    • Employee Referral Program
    • Corporate Sponsored Events & Community Outreach
    • Emergency Back-Up Childcare Program
    • Mobility Stipend

    About Guidehouse
    Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event.
Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
    $113k-188k yearly Auto-Apply 1d ago
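The data cleaning and preprocessing duties this posting describes (normalizing inconsistent labels, imputing missing values) can be sketched in a few lines of pandas. The dataset and column names below are hypothetical, purely for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical raw data with the kinds of problems the posting mentions:
# inconsistent category labels, missing values, and an outlier-prone field.
raw = pd.DataFrame({
    "region": ["NY", "ny ", "NJ", None, "NY"],
    "revenue": [120.0, np.nan, 95.0, 88.0, 10_000.0],
})

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Normalize categorical labels: strip whitespace, uppercase, fill missing.
    df["region"] = df["region"].str.strip().str.upper().fillna("UNKNOWN")
    # Impute missing numeric values with the median, which is robust to outliers.
    df["revenue"] = df["revenue"].fillna(df["revenue"].median())
    return df

cleaned = clean(raw)
```

The median-based imputation is one reasonable default; in practice the choice of imputation strategy depends on how the downstream analysis treats missingness.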
  • Machine Learning Engineer / Data Scientist / GenAI

    Amtex Systems Inc. (4.0 company rating)

    Data scientist job in New York, NY

    NYC, NY / Hybrid | 12+ month project - Leveraging Llama to extract cybersecurity insights from unstructured data in their ticketing system.

    Must have strong experience with: Llama, Python, Hadoop, MCP, Machine Learning (ML).

    They need a strong developer using Llama and Hadoop (this is where the data sits), with experience with MCP. They have various ways to pull the data out of their tickets but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines.

    Thanks and Regards!
    Lavkesh Dwivedi ************************
    Amtex System Inc.
    28 Liberty Street, 6th Floor | New York, NY - 10005
    ************ ********************
    $78k-104k yearly est. 3d ago
  • Data Scientist

    Marlabs LLC (4.1 company rating)

    Data scientist job in Parsippany-Troy Hills, NJ

    Data Scientist - Parsippany, NJ (Hybrid)

    Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.

    Responsibilities:
    • Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
    • Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows.
    • Design experiments and validate AI-driven features during the pilot.
    • Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.

    Required skills:
    • Applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
    • Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
    • Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.

    Preferred:
    • Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
    $72k-98k yearly est. 3d ago
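Anomaly detection over SDLC telemetry, as mentioned in this posting, can take many forms; a minimal sketch is a z-score detector over CI run durations. The data and threshold below are made up for illustration, not taken from the posting:

```python
import statistics

def flag_anomalies(durations, threshold=2.0):
    """Return indices of CI-pipeline runs whose duration deviates from the
    mean of the series by more than `threshold` standard deviations."""
    mean = statistics.fmean(durations)
    stdev = statistics.stdev(durations)
    return [i for i, d in enumerate(durations)
            if abs(d - mean) / stdev > threshold]

# Minutes per CI run; the 40-minute run at index 4 is the outlier.
runs = [12, 11, 13, 12, 40, 12, 11, 13]
anomalies = flag_anomalies(runs)  # → [4]
```

Production systems would typically use rolling windows or model-based detectors rather than a global z-score, but the flagging logic is the same shape.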
  • Data Engineer

    DL Software Inc. (3.3 company rating)

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description
    This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications
    • Strong proficiency in data engineering and data modeling
    • Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    • Strong Python background
    • Expertise in Extract, Transform, Load (ETL) processes and tools
    • Experience in designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 1d ago
  • Senior Data Engineer

    Godel Terminal

    Data scientist job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance. We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    • Able to work out of our Manhattan office a minimum of 4 days a week
    • 5+ years of experience in a financial or startup environment
    • 5+ years of experience working on live data as well as historical data
    • 3+ years of experience in Java, Python, and SQL
    • Experience managing multiple production ETL pipelines that reliably store and validate financial data
    • Experience launching, scaling, and improving backend services in cloud environments
    • Experience migrating critical data across different databases
    • Experience owning and improving critical data infrastructure
    • Experience teaching best practices to junior developers

    Preferred qualifications:
    • 5+ years of experience in a fintech startup
    • 5+ years of experience in Java, Kafka, Python, PostgreSQL
    • 5+ years of experience working with WebSockets (e.g., RxStomp or Socket.io)
    • 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    • 2+ years of experience shipping and optimizing Rust applications
    • Demonstrated experience keeping critical systems online
    • Demonstrated creativity and resourcefulness under pressure
    • Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health Insurance, Vision, Dental
    To try the product, go to *************************
    $150k yearly 5d ago
  • Azure Data Engineer

    Programmers.Io (3.8 company rating)

    Data scientist job in Weehawken, NJ

    • Expert-level skills writing and optimizing complex SQL
    • Experience with complex data modelling, ETL design, and using large databases in a business environment
    • Experience with building data pipelines and applications to stream and process datasets at low latencies
    • Fluent with Big Data technologies like Spark, Kafka, and Hive
    • Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    • Designing and building data pipelines using API ingestion and streaming ingestion methods
    • Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential
    • Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    • Thorough understanding of Azure and AWS cloud infrastructure offerings
    • Working knowledge of Python is desirable
    • Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    • Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    • Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    • Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    • Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    • Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    • Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    • Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best Regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 5d ago
  • Data Engineer

    Company (3.0 company rating)

    Data scientist job in Fort Lee, NJ

    The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.

    Responsibilities
    • Analyze, structure, and interpret raw data.
    • Build and maintain datasets for business use.
    • Design and optimize database tables, schemas, and data structures.
    • Enhance data accuracy, consistency, and overall efficiency.
    • Develop views, functions, and stored procedures.
    • Write efficient SQL queries to support application integration.
    • Create database triggers to support automation processes.
    • Oversee data quality, integrity, and database security.
    • Translate complex data into clear, actionable insights.
    • Collaborate with cross-functional teams on multiple projects.
    • Present data through graphs, infographics, dashboards, and other visualization methods.
    • Define and track KPIs to measure the impact of business decisions.
    • Prepare reports and presentations for management based on analytical findings.
    • Conduct daily system maintenance and troubleshoot issues across all platforms.
    • Perform additional ad hoc analysis and tasks as needed.

    Qualifications
    • Bachelor's degree in Information Technology or a relevant field.
    • 4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
    • Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
    • Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
    • Excellent written, verbal, and interpersonal communication skills.
    • Ability to manage multiple tasks in a fast-paced and evolving environment.
    • Strong work ethic, professionalism, and integrity.
    • Advanced proficiency in Microsoft Office applications.
    $93k-132k yearly est. 3d ago
  • Market Data Engineer

    Harrington Starr

    Data scientist job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do:
    • Own large-scale, time-sensitive market data capture and normalization pipelines
    • Improve internal data formats and downstream datasets used by research and quantitative teams
    • Partner closely with infrastructure to ensure reliability of packet-capture systems
    • Build robust validation, QA, and monitoring frameworks for new market data sources
    • Provide production support, troubleshoot issues, and drive quick, effective resolutions

    What You Bring:
    • Experience building or maintaining large-scale ETL pipelines
    • Strong proficiency in Python and Bash, with familiarity in C++
    • Solid understanding of networking fundamentals
    • Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
    • Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)

    Bonus Skills:
    • Experience working with binary market data protocols (ITCH, MDP3, etc.)
    • Understanding of high-performance filesystems and columnar storage formats
    $90k-123k yearly est. 1d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data scientist job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups. In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications. To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 5d ago
  • Data Engineer (Web Scraping technologies)

    Gotham Technology Group (4.5 company rating)

    Data scientist job in New York, NY

    Title: Data Engineer (Web Scraping technologies)
    Duration: FTE/Perm
    Salary: $125-190k plus bonus

    Responsibilities:
    • Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
    • Web scraping request management, including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrapes, and validation and entitlement to users
    • Fielding questions from users about the scrapes and websites
    • Coordinating with Compliance on approvals and TOU reviews
    • Some experience building data pipelines on the AWS platform utilizing existing tools like Cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
    • Normalizing/standardizing vendor data and firm data for firm consumption
    • Implement data quality checks to ensure reliability and accuracy of scraped data
    • Coordinate with internal teams on delivery, access, requests, and support
    • Promote data engineering best practices

    Required Skills and Qualifications:
    • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
    • 2-5 years of experience in a similar role
    • Prior buy-side experience is strongly preferred (multi-strat/hedge funds)
    • Capital markets experience is necessary, with good working knowledge of reference data across asset classes and experience with trading systems
    • AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
    • Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright, etc.)
    • Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools, and analytics tools
    • Familiarity with time series data and common market data sources (Bloomberg, Refinitiv, etc.)
    • Familiarity with modern DevOps practices and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
    • Strong communication skills to work with stakeholders across technology, investment, and operations teams
    $86k-120k yearly est. 1d ago
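A minimal sketch of the scraping-plus-validation workflow this posting describes, using BeautifulSoup against a static HTML snippet (a real scrape would fetch pages over HTTP and, as the posting notes, go through Compliance/TOU review first); the page structure and selectors are invented:

```python
from bs4 import BeautifulSoup

# A static page stands in for a scraped site so the sketch runs offline.
html = """
<table id="filings">
  <tr><td class="ticker">ACME</td><td class="price">101.5</td></tr>
  <tr><td class="ticker">GLBX</td><td class="price">not available</td></tr>
</table>
"""

def extract_prices(page: str) -> list[dict]:
    soup = BeautifulSoup(page, "html.parser")
    records = []
    for row in soup.select("#filings tr"):
        ticker = row.select_one(".ticker").get_text(strip=True)
        price_text = row.select_one(".price").get_text(strip=True)
        # Data-quality check: drop rows whose price does not parse as a number.
        try:
            records.append({"ticker": ticker, "price": float(price_text)})
        except ValueError:
            continue
    return records

prices = extract_prices(html)
```

Rejecting unparseable rows at extraction time is one simple form of the "data quality checks" the posting asks for; a fuller pipeline would also log and report the rejects.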
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ Years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    • Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    • Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    • Ensure data security, compliance, lineage, and governance controls.
    • Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    • Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    • 10+ years of data engineering experience.
    • Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    • Azure certifications strongly preferred.
    • Strong SQL, Python, and cloud data architecture skills.
    • Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 5d ago
  • Sr. Azure Data Engineer

    Synechron (4.4 company rating)

    Data scientist job in New York, NY

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and has 58 offices in 21 countries within key global markets.

    Our challenge
    We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the financial/banking domain.

    Additional Information*
    The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k-$140k/year & benefits (see below).

    The Role
    Responsibilities:
    • Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
    • Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
    • Help voluntarily and proactively, supporting team members and peers to deliver their tasks and ensure end-to-end delivery.
    • Evaluate technical performance challenges and recommend tuning solutions.
    • Hands-on Data Service Engineer experience designing, developing, and maintaining our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python.

    Requirements:
    • Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    • Strong expertise in distributed data processing and streaming architectures.
    • Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management.
    • Proficiency in Python scripting and programming for data manipulation and automation.
    • Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
    • Knowledge of SQL, data modelling, and ETL/ELT processes.
    • Understanding of cloud platforms (AWS, Azure, GCP) is a plus.

    Domain knowledge in any of the below areas:
    • Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
    • Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
    • Funding Support, Planning & Analysis, Regulatory Reporting & Compliance.
    • Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.

    We offer:
    • A highly competitive compensation and benefits package.
    • A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    • 10 days of paid annual leave (plus sick leave and national holidays).
    • Maternity & paternity leave plans.
    • A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    • Retirement savings plans.
    • A higher education certification policy.
    • Commuter benefits (varies by region).
    • Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    • On-demand Udemy for Business for all Synechron employees, with free access to more than 5000 curated courses.
    • Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups.
    • Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    • A flat and approachable organization.
    • A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $130k-140k yearly 3d ago
  • Data Engineer

    Neenopal Inc.

    Data scientist job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.

    Role Description
    This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.

    Key Responsibilities
    • Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
    • Data Integration: Integrate and transform data using industry-standard tools. Experience required with AWS services (AWS Glue, Data Pipeline, Redshift, and S3) and Azure services (Azure Data Factory, Synapse Analytics, and Blob Storage).
    • Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
    • Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
    • Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
    • Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
    • Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.

    Required Skills and Experience
    • Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
    • Integration: Experience integrating data via RESTful/GraphQL APIs.
    • Programming: Proficient in Python for ETL automation and SQL for database management.
    • Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
    • Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
    • Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
    • Authorization: Must have valid work authorization in the United States.

    Salary Range: $65,000-$80,000 per year

    Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.

    Equal Opportunity Employer
    NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
    $65k-80k yearly 2d ago
  • Senior Data Engineer

    Apexon

    Data scientist job in New Providence, NJ

Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

Job Description:
Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems.
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance.
Work in tandem with our engineering team to identify and implement the most optimal solutions.
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design.
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures.
Able to manage deliverables in fast-paced environments.

Areas of Expertise:
At least 10 years of experience designing and developing data solutions in enterprise environments.
At least 5 years' experience on the Snowflake platform.
Strong hands-on SQL and Python development.
Experience designing and developing data warehouses in Snowflake.
A minimum of three years' experience in developing production-ready
data ingestion and processing pipelines using Spark and Scala.
Strong hands-on experience with orchestration tools (e.g., Airflow, Informatica, Automic).
Good understanding of metadata and data lineage.
Hands-on knowledge of SQL analytical functions.
Strong knowledge and hands-on experience in shell scripting and JavaScript.
Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering.
Good understanding of and exposure to Git, Confluence, and Jira.
Good problem-solving and troubleshooting skills.
Team player with a collaborative approach and excellent communication skills.

Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
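"SQL analytical functions" in the skills list above means window functions such as ROW_NUMBER() and SUM(...) OVER, which add computed columns without collapsing rows the way GROUP BY does. A small illustration run through SQLite (window functions require SQLite 3.25+), with made-up trade rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, trade_day INTEGER, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)", [
    ("A", 1, 100), ("A", 2, 50), ("B", 1, 200), ("B", 2, 25),
])

# ROW_NUMBER ranks trades within each account; SUM(...) OVER computes a
# per-account running total ordered by day. Both are analytical (window)
# functions: every input row survives, each gaining extra columns.
rows = conn.execute("""
    SELECT account, trade_day, qty,
           ROW_NUMBER() OVER (PARTITION BY account ORDER BY trade_day) AS rn,
           SUM(qty)     OVER (PARTITION BY account ORDER BY trade_day) AS running_qty
    FROM trades
    ORDER BY account, trade_day
""").fetchall()
for row in rows:
    print(row)
```

The same syntax carries over to Snowflake, which supports a much larger set of analytical functions (LAG, LEAD, QUALIFY, and so on).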
    $82k-112k yearly est. 3d ago
  • Sr Data Modeler with Capital Markets/ Custody

LTIMindtree

    Data scientist job in Jersey City, NJ

LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************

Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
Work Location: Jersey City, NJ (Onsite, 5 days/week)

Job Description: We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain. The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model. A software development background is not required.

Key Responsibilities:
Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model. Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model. Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality. Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity. Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage. Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation. Champion a detail-oriented and documentation-first culture within the data modeling team.

Qualifications:
Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services. Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics). Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures. Strong skills in data analysis using SQL and/or other data analysis tools. Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation. Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation. Ability to mentor and lead junior team members in data modeling best practices. Passion for detail, documentation, and continuous improvement. A software development background is not required.

Preferred Skills:
Experience with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
Familiarity with capital markets business processes and data flows. Knowledge of regulatory and compliance requirements in financial data management. Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).

Benefits and Perks:
Comprehensive medical plan covering medical, dental, and vision
Short-term and long-term disability coverage
401(k) plan with company match
Life insurance
Vacation time, sick leave, paid holidays
Paid paternity and maternity leave

LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
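The logical-to-physical transformation this role leads (schema design, indexing, referential integrity) can be sketched on a toy pair of entities. The Instrument/Trade entities, column names, and index below are illustrative assumptions for the sketch, not the client's actual model, and SQLite stands in for the target database:

```python
import sqlite3

# Two logical entities (Instrument, Trade) and their one-to-many
# relationship become tables, a foreign key, and an index chosen
# for the dominant query pattern.
DDL = """
CREATE TABLE instrument (
    instrument_id INTEGER PRIMARY KEY,
    isin          TEXT NOT NULL UNIQUE,  -- logical attribute: identifier
    asset_class   TEXT NOT NULL          -- e.g. equity, bond, fund
);
CREATE TABLE trade (
    trade_id      INTEGER PRIMARY KEY,
    instrument_id INTEGER NOT NULL REFERENCES instrument(instrument_id),
    trade_date    TEXT NOT NULL,
    quantity      REAL NOT NULL
);
-- Physical-design choice: index the FK plus date to support
-- "all trades for an instrument over a period" lookups.
CREATE INDEX ix_trade_instrument_date ON trade (instrument_id, trade_date);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the relationship
conn.executescript(DDL)
conn.execute("INSERT INTO instrument VALUES (1, 'US0378331005', 'equity')")
conn.execute("INSERT INTO trade VALUES (1, 1, '2024-01-02', 100.0)")
print(conn.execute("SELECT COUNT(*) FROM trade").fetchone()[0])
```

The documentation deliverables named in the posting (entity/attribute definitions, source-to-target mappings, lineage) would describe exactly these decisions: which logical attributes became which columns, and why each index exists.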
    $79k-111k yearly est. 4d ago
  • Staff Data Scientist

Recursion Pharmaceuticals 4.2 company rating

    Data scientist job in New York, NY

Your work will change lives. Including your own. Please note: Our offices will be closed for our annual winter break from December 22, 2025, to January 2, 2026. Our response to your application will be delayed. The Impact You'll Make: As a member of Recursion's AI-driven drug discovery initiatives, you will be at the forefront of reimagining how biological knowledge is generated, stored, accessed, and reasoned upon by LLMs. You will play a key role in developing the biological reasoning infrastructure, connecting large-scale data and codebases with dynamic, agent-driven AI systems. You will be responsible for defining the architecture that grounds our agents in biological truth. This involves integrating biomedical resources to enable AI systems to reason effectively and selecting the most appropriate data retrieval strategies to support those insights. This is a highly collaborative role: you will partner with machine learning engineers, biologists, chemists, and platform teams to build the connective tissue that allows our AI agents to reason like a scientist. The ideal candidate possesses deep expertise in both core bioinformatics/cheminformatics libraries and modern GenAI frameworks (including RAG and MCP), a strong architectural vision, and the ability to translate high-potential prototypes into scalable production workflows. In this role, you will: * Architect and maintain robust infrastructure to keep critical internal and external biological resources (e.g., ChEMBL, Ensembl, Reactome, proprietary assays) up-to-date and accessible to reasoning agents. * Design sophisticated context retrieval strategies, choosing the most effective approach for each biological use case, whether working with structured, entity-focused data, unstructured RAG, or graph-based representations. 
* Integrate established bioinformatics/cheminformatics libraries into a GenAI ecosystem, creating interfaces (such as via MCP) that allow agents to autonomously query and manipulate biological data. * Pilot methods for tool use by LLMs, enabling the system to perform complex tasks like pathway analysis on the fly rather than relying solely on memorized weights. * Develop scalable, production-grade systems that serve as the backbone for Recursion's automated scientific reasoning capabilities. * Collaborate cross-functionally with Recursion's core biology, chemistry, data science and engineering teams to ensure our biological data and the reasoning engines are accurately reflecting the complexity of disease biology and drug discovery. * Present technical trade-offs (e.g., graph vs. vector) to leadership and stakeholders in a clear, compelling way that aligns technical reality with product vision. The Team You'll Join You'll join a bold, agile team of scientists and engineers dedicated to building comprehensive biological maps by integrating Recursion's in-house datasets, patient data, and external knowledge layers to enable sophisticated agent-based reasoning. Within this cross-functional team, you will design and maintain the biological context and data structures that allow agents to reason accurately and efficiently. You'll collaborate closely with wet-lab biologists and core platform engineers to develop systems that are not only technically robust but also scientifically rigorous. The ideal candidate is curious about emerging AI technologies, passionate about making biological data both machine-readable and machine-understandable, and brings a strong foundation in systems biology, biomedical data analysis, and agentic AI systems. 
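The context-retrieval strategies described above can be illustrated with the simplest possible RAG retrieval step: rank documents against a query by cosine similarity, then hand the best match to the model as context. This toy uses bag-of-words vectors and an invented three-document corpus; a production system would use learned embeddings and a vector database:

```python
import math
from collections import Counter

# Invented corpus keyed by resource name (for illustration only).
corpus = {
    "chembl":   "bioactivity data for drug like small molecules and targets",
    "reactome": "curated pathway database of human biological pathways",
    "ensembl":  "genome annotation genes transcripts and variants",
}

def embed(text):
    # Toy "embedding": a sparse bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm

def retrieve(query, k=1):
    # Rank documents by similarity to the query; return the top k names.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(corpus[d])), reverse=True)
    return ranked[:k]

# The retrieved snippet would then be placed in the LLM's context window.
print(retrieve("which pathways involve this human gene"))
```

The graph-based alternative mentioned in the posting replaces this similarity ranking with traversal of typed relationships; the trade-off between the two is exactly the kind of architectural decision the role owns.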
The Experience You'll Need * PhD in a relevant field (Bioinformatics, Cheminformatics, Computational Biology, Computer Science, Systems Biology) with 5+ years of industry experience, or MS in a relevant field with 7+ years of experience, focusing on biological data representation and retrieval. * Proficiency in utilizing major public biological databases (NCBI, Ensembl, STRING, GO) and using standard bioinformatics/cheminformatics toolkits (e.g., RDKit, samtools, Biopython). * Strong skills in designing and maintaining automated data pipelines that support continuous ingestion, transformation, and refresh of biological data without manual intervention. * Ability to work with knowledge graph data models and query languages (e.g., RDF, SPARQL, OWL) and translate graph-structured data into relational or other non-graph representations, with a strong judgment in evaluating trade-offs between different approaches. * Competence in building and operating GenAI stacks, including RAG systems, vector databases, and optimization of context windows for large-scale LLM deployments. * Hands-on expertise with agentic AI frameworks (e.g., MCP, Google ADK, LangChain, AutoGPT) and familiarity with leading LLMs (e.g., Google Gemini/Gemma) in agentic workflows, including benchmarking and evaluating agent performance on bioinformatics/cheminformatics tasks such as structure prediction, target identification, and pathway mapping. * Strong Python skills and adherence to software engineering best practices, including CI/CD, Git-based version control, and modular design. * Excellent cross-functional communication skills, ability to clearly explain complex architectural decisions to both scientific domain experts and technical stakeholders. Nice to Have * Strong background in machine learning and deep learning, including hands-on experience with foundation models and modern neural architectures. * Fine-tuning LLMs on scientific corpora for domain-specific reasoning. 
* Integrating LLMs with experimental or proprietary assay data in live scientific workflows. * Background in drug discovery and target identification. * Meaningful contributions to open-source libraries, research codebases, or community-driven tools. Working Location & Compensation: This is an office-based, hybrid role in either our Salt Lake City, UT or New York City, NY offices. Employees are expected to work in the office at least 50% of the time. At Recursion, we believe that every employee should be compensated fairly. Based on the skill and level of experience required for this role, the estimated current annual base range for this role is $200,600 - $238,400. You will also be eligible for an annual bonus and equity compensation, as well as a comprehensive benefits package. #LI-DNI The Values We Hope You Share: * We act boldly with integrity. We are unconstrained in our thinking, take calculated risks, and push boundaries, but never at the expense of ethics, science, or trust. * We care deeply and engage directly. Caring means holding a deep sense of responsibility and respect - showing up, speaking honestly, and taking action. * We learn actively and adapt rapidly. Progress comes from doing. We experiment, test, and refine, embracing iteration over perfection. * We move with urgency because patients are waiting. Speed isn't about rushing but about moving the needle every day. * We take ownership and accountability. Through ownership and accountability, we enable trust and autonomy-leaders take accountability for decisive action, and teams own outcomes together. * We are One Recursion. True cross-functional collaboration is about trust, clarity, humility, and impact. Through sharing, we can be greater than the sum of our individual capabilities. Our values underpin the employee experience at Recursion. 
They are the character and personality of the company demonstrated through how we communicate, support one another, spend our time, make decisions, and celebrate collectively. More About Recursion Recursion (NASDAQ: RXRX) is a clinical stage TechBio company leading the space by decoding biology to radically improve lives. Enabling its mission is the Recursion OS, a platform built across diverse technologies that continuously generate one of the world's largest proprietary biological and chemical datasets. Recursion leverages sophisticated machine-learning algorithms to distill from its dataset a collection of trillions of searchable relationships across biology and chemistry unconstrained by human bias. By commanding massive experimental scale - up to millions of wet lab experiments weekly - and massive computational scale - owning and operating one of the most powerful supercomputers in the world, Recursion is uniting technology, biology and chemistry to advance the future of medicine. Recursion is headquartered in Salt Lake City, where it is a founding member of BioHive, the Utah life sciences industry collective. Recursion also has offices in Toronto, Montréal, New York, London, Oxford area, and the San Francisco Bay area. Learn more at ****************** or connect on X (formerly Twitter) and LinkedIn. Recursion is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other characteristic protected under applicable federal, state, local, or provincial human rights legislation. Accommodations are available on request for candidates taking part in all aspects of the selection process. Recruitment & Staffing Agencies: Recursion Pharmaceuticals and its affiliate companies do not accept resumes from any source other than candidates. 
The submission of resumes by recruitment or staffing agencies to Recursion or its employees is strictly prohibited unless contacted directly by Recursion's internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Recursion, and Recursion will not owe any referral or other fees. Our team will communicate directly with candidates who are not represented by an agent or intermediary unless otherwise agreed to prior to interviewing for the job.
    $200.6k-238.4k yearly Auto-Apply 3d ago
• Principal Data Scientist: Product to Market (P2M) Optimization

The Gap 4.4 company rating

    Data scientist job in New York, NY

    About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials. This simple idea-that we all deserve to belong, and on our own terms-is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team. About the Role Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization. The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels. What You'll Do * Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges. * Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals. 
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment. * Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility. * Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly. * Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights. * Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success. * Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP. * Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality. Who You Are * Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline. * 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains. * Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes. * Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation. 
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib. * Hands-on experience with enterprise platforms such as Databricks and cloud environments. * Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions. * Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences. * Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations. * Experience managing code quality, CI/CD pipelines, and GitHub-based workflows. Preferred Qualifications * Experience shaping and executing multi-year analytics strategies in retail or supply chain domains. * Proven ability to balance long-term innovation with short-term deliverables. * Background in agile product development and stakeholder alignment for enterprise-scale initiatives. Benefits at Gap Inc. * Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees. * One of the most competitive Paid Time Off plans in the industry.* * Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.* * Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.* * Employee stock purchase plan.* * Medical, dental, vision and life insurance.* * See more of the benefits we offer. *For eligible employees. Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. 
We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity. Salary Range: $201,700 - $267,300 USD Employee pay will vary based on factors such as qualifications, experience, skill level, competencies and work location. We will meet minimum wage or minimum of the pay range (whichever is higher) based on city, county and state requirements.
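The operations-research core of this role (e.g., mixed-integer programming for assortment optimization) reduces, in its simplest form, to selecting items under a capacity constraint. Below is a brute-force toy instance with invented margins and shelf-space numbers; at real scale this formulation would be handed to a solver such as Gurobi, OR-Tools, or Pyomo (all named in the posting) rather than enumerated:

```python
from itertools import combinations

# Toy assortment instance: maximize total margin subject to a shelf-space
# capacity. All numbers are invented for illustration.
products = {  # name: (margin, shelf_space)
    "A": (10, 4),
    "B": (7, 3),
    "C": (5, 2),
    "D": (3, 1),
}
CAPACITY = 7

def best_assortment():
    # Enumerate all 2^n subsets (fine for a toy; a MIP solver handles
    # the real, exponentially larger search space via branch-and-bound).
    best_value, best_set = 0, frozenset()
    names = list(products)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            space = sum(products[p][1] for p in subset)
            value = sum(products[p][0] for p in subset)
            if space <= CAPACITY and value > best_value:
                best_value, best_set = value, frozenset(subset)
    return best_value, best_set

value, chosen = best_assortment()
print(value, sorted(chosen))
```

The MIP statement of the same problem introduces a binary decision variable per product and makes the capacity check a linear constraint, which is what lets industrial solvers scale to thousands of SKUs.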
    $88k-128k yearly est. 28d ago
  • Data Scientist, User Operations

OpenAI 4.2 company rating

    Data scientist job in New York, NY

    About the Team OpenAI's User Operations organization is building the data and intelligence layer behind AI-assisted operations - the systems that decide when automation should help users, when humans should step in, and how both improve over time. Our flagship platform is transforming customer support into a model for “agent-first” operations across OpenAI. About the Role As a Data Scientist on User Operations, you'll design the models, metrics, and experimentation frameworks that power OpenAI's human-AI collaboration loop. You'll build systems that measure quality, optimize automation, and turn operational data into insights that improve product and user experience at scale. You'll partner closely with Support Automation Engineering, Product, and Data Engineering to ensure our data systems are production-grade, trusted, and impactful. This role is based in San Francisco or New York City. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees. Why it matters Every conversation users have with OpenAI products produces signals about how humans and AI interact. User Ops Data Science turns those signals into insights that shape how we support users today and design agentic systems for tomorrow. This is a unique opportunity to help define how AI collaboration at scale is measured and improved inside OpenAI. In this role, you will: Build and own metrics, classifiers, and data pipelines that determine automation eligibility, effectiveness, and guardrails. Design and evaluate experiments that quantify the impact of automation and AI systems on user outcomes like resolution quality and satisfaction. Develop predictive and statistical models that improve how OpenAI's support systems automate, measure, and learn from user interactions. Partner with engineering and product teams to create feedback loops that continuously improve our AI agents and knowledge systems. 
Translate complex data into clear, actionable insights for leadership and cross-functional stakeholders. Develop and socialize dashboards, applications, and other ways of enabling the team and company to answer product data questions in a self-serve way. Contribute to establishing data science standards and best practices in an AI-native operations environment. Partner with other data scientists across the company to share knowledge and continually synthesize learnings across the organization. You might thrive in this role if you have: 10+ years of experience in data science roles within product or technology organizations. Expertise in statistics and causal inference, applied in both experimentation and observational studies. Expert-level SQL and proficiency in Python for analytics, modeling, and experimentation. Proven experience designing and interpreting experiments and making statistically sound recommendations. Experience building data systems or pipelines that power production workflows or ML-based decisioning. Experience developing and extracting insights from business intelligence tools, such as Mode, Tableau, and Looker. Strategic and impact-driven mindset, capable of translating complex business problems into actionable frameworks. Ability to build relationships with diverse stakeholders and cultivate strong partnerships. Strong communication skills, including the ability to bridge technical and non-technical stakeholders, translate complex data into stories for non-technical partners, and collaborate across various functions to ensure business impact. Ability to operate effectively in a fast-moving, ambiguous environment with limited structure. Nice-to-haves: Familiarity with large language models or AI-assisted operations platforms. Experience in operational automation or customer support analytics. Background in experimentation infrastructure or human-AI interaction systems. 
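The experiment-evaluation work described above often comes down to comparisons like a two-proportion z-test on resolution rates with automation on versus off. A sketch with invented counts; real analyses would also address power, multiple testing, and sequential monitoring:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: control resolves 400/1000 conversations;
# treatment (automation enabled) resolves 460/1000.
z, p = two_proportion_ztest(400, 1000, 460, 1000)
print(round(z, 2), round(p, 4))
```

A statistically sound recommendation would pair this test with a pre-registered minimum detectable effect and a fixed analysis time, since peeking at the p-value mid-experiment inflates false positives.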
About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations. 
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
    $88k-129k yearly est. Auto-Apply 48d ago
  • Data Scientist, Product Analytics

    Airtable 4.2company rating

    Data scientist job in New York, NY

    Airtable is the no-code app platform that empowers people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done. Airtable is seeking a product-focused Data Scientist to join our Analytics & Data Science team. In this high-impact role, you'll partner closely with product development teams to transform raw user data into actionable insights that drive growth for Airtable's self-serve business. You'll own critical data pipelines, design and analyze experiments, build dashboards, and deliver strategic insights that inform executive decision-making. This is a unique opportunity to shape the future of a data-driven, AI-native SaaS company and scale analytics best practices across the organization.

    What you'll do
    * Own and maintain core product data pipelines across dbt, Looker, and Omni, ensuring reliability, scalability, and minimal downtime
    * Build and refine dashboards that deliver self-serve, real-time insights for high-priority product areas
    * Lead the development and delivery of company-wide strategic insights that connect user behavior patterns and inform executive decisions
    * Partner with product and engineering teams to define tracking requirements, implement instrumentation, validate data, and deliver launch-specific dashboards or reports
    * Establish trusted partnerships with product managers, engineers, analysts, and leadership as the go-to resource for product data insights and technical guidance
    * Collaborate with leadership to define the analytics roadmap, prioritize high-impact initiatives, and assess resource needs for scaling product analytics capabilities
    * Mentor junior team members and cross-functional partners on analytics best practices and data interpretation; create documentation and training materials to scale institutional knowledge
    * Support end-to-end analytics for all product launches, including tracking implementation, validation, and post-launch reporting with documented impact measurements
    * Deliver comprehensive strategic analyses or experiments that connect user behavior patterns and identify new growth opportunities
    * Lead or participate in cross-functional projects where data science contributions directly influence product or strategy decisions
    * Migrate engineering team dashboards to Omni or Databricks, enabling self-serve analytics

    Who you are
    * Bachelor's degree in computer science, data science, mathematics/statistics, or a related field
    * 6+ years of experience as a data scientist, data analyst, or data engineer
    * Experience supporting product development teams and driving product growth insights
    * Background in SaaS, consumer tech, or data-driven product environments preferred
    * Expert in SQL and modern data modeling (e.g., dbt, Databricks, Snowflake, BigQuery); sets standards and mentors others on best practices
    * Deep experience with BI tools and modeling (e.g., Looker, Omni, Hex, Tableau, Mode)
    * Proficient with experimentation platforms and statistical libraries (e.g., Eppo, Optimizely, LaunchDarkly, scipy, statsmodels)
    * Proven ability to apply AI/ML tools, from core libraries (scikit-learn, PyTorch, TensorFlow) to GenAI platforms (ChatGPT, Claude, Gemini) and AI-assisted development (Cursor, GitHub Copilot)
    * Strong statistical foundation; designs and scales experimentation practices that influence product strategy and culture
    * Translates ambiguous business questions into structured analyses, guiding teams toward actionable insights
    * Provides thought leadership on user funnels, retention, and growth analytics
    * Ensures data quality, reliability, and consistency across critical business reporting and analytics workflows
    * Experience at an AI-native company, with exposure to building or scaling products powered by AI
    * Knowledge of product analytics tracking frameworks (e.g., Segment, Amplitude, Mixpanel, GA4) and expertise in event taxonomy design
    * Strong documentation and knowledge-sharing skills; adept at creating technical guides, playbooks, and resources that scale team effectiveness
    * Models curiosity, creativity, and a learner's mindset; thrives in ambiguity and inspires others to do the same
    * Crafts compelling narratives with data, aligning stakeholders at all levels and driving clarity in decision-making

    Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive. We welcome people of different backgrounds, experiences, abilities, and perspectives. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or any characteristic protected by applicable federal and state laws, regulations and ordinances. Learn more about your EEO rights as an applicant.

    VEVRAA-Federal Contractor

    If you have a medical condition, disability, or religious belief/practice which inhibits your ability to participate in any part of the application or interview process, please complete our Accommodations Request Form and let us know how we may assist you. Airtable is committed to participating in the interactive process and providing reasonable accommodations to qualified applicants.

    Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience. Our total compensation package also includes the opportunity to receive benefits, restricted stock units, and may include incentive compensation. To learn more about our comprehensive benefit offerings, please check out Life at Airtable.
    For work locations in the San Francisco Bay Area, Seattle, New York City, and Los Angeles, the base salary range for this role is: $205,200-$266,300 USD. For all other work locations (including remote), the base salary range for this role is: $185,300-$240,000 USD.

    Please see our Privacy Notice for details regarding Airtable's collection and use of personal information relating to the application and recruitment process by clicking here.

    🔒 Stay Safe from Job Scams
    All official Airtable communication will come from an @airtable.com email address. We will never ask you to share sensitive information or purchase equipment during the hiring process. If in doubt, contact us at ***************. Learn more about avoiding job scams here.
    $205.2k-266.3k yearly Auto-Apply 4d ago
  • Staff Data Scientist, Personalization & Shopping

    Pinterest 4.6company rating

    Data scientist job in New York, NY

    Millions of people around the world come to our platform to find creative ideas, dream about new possibilities and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product. Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences and embrace the flexibility to do your best work. Creating a career you love? It's Possible.

    Pinterest is the world's leading visual search and discovery platform, serving over 500 million monthly active users globally on their journey from inspiration to action. At Pinterest, Shopping is a strategic initiative that aims to help Pinners take action by surfacing the most relevant content, at the right time, in the best user-friendly way. We do this through a combination of innovative product interfaces and sophisticated recommendation systems.

    We are looking for a Staff Data Scientist with experience in machine learning and causal inference to help advance Shopping at Pinterest. In your role you will develop methods and models to explain why certain content is being promoted (or not) for a Pinner. You will work in a highly collaborative and cross-functional environment, and be responsible for partnering with Product Managers and Machine Learning Engineers. You are expected to develop a deep understanding of our recommendation system, and to generate insights and robust methodologies to answer the "why". The results of your work will influence our development teams and drive product innovation.

    What you'll do:
    * Ensure that our recommendation systems produce trustworthy, high-quality outputs to maximize our Pinners' shopping experience.
    * Develop robust frameworks, combining online and offline methods, to comprehensively understand the outputs of our recommendations.
    * Bring scientific rigor and statistical methods to the challenges of product creation, development and improvement with an appreciation for the behaviors of our Pinners.
    * Work cross-functionally to build relationships, proactively communicate key insights, and collaborate closely with product managers, engineers, designers, and researchers to help build the next experiences on Pinterest.
    * Relentlessly focus on impact, whether through influencing product strategy, advancing our north star metrics, or improving a critical process.
    * Mentor and up-level junior data scientists on the team.

    What we're looking for:
    * 7+ years of experience analyzing data in a fast-paced, data-driven environment with proven ability to apply scientific methods to solve real-world problems on web-scale data.
    * Strong interest and experience in recommendation systems and causal inference.
    * Strong quantitative programming (Python/R) and data manipulation skills (SQL/Spark).
    * Ability to work independently and drive your own projects.
    * Excellent written and verbal communication skills, with the ability to explain learnings to both technical and non-technical partners.
    * A team player eager to partner with cross-functional partners to quickly turn insights into actions.
    * Bachelor's/Master's degree in a relevant field such as Computer Science, or equivalent experience.

    In-Office Requirement Statement:
    * We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection.
    * This role will need to be in the office for in-person collaboration 1-2 times/quarter and therefore can be situated anywhere in the country.

    Relocation Statement:
    * This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.

    #LI-REMOTE #LI-NM4

    At Pinterest we believe the workplace should be equitable, inclusive, and inspiring for every employee. In an effort to provide greater transparency, we are sharing the base salary range for this position. The position is also eligible for equity. Final salary is based on a number of factors including location, travel, relevant prior experience, or particular skills and expertise. Information regarding the culture at Pinterest and benefits available for this position can be found here.

    US based applicants only: $164,695-$339,078 USD

    Our Commitment to Inclusion: Pinterest is an equal opportunity employer and makes employment decisions on the basis of merit. We want to have the best qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, color, ancestry, national origin, religion or religious creed, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, age, marital status, status as a protected veteran, physical or mental disability, medical condition, genetic information or characteristics (or those of a family member) or any other consideration made unlawful by applicable federal, state or local laws. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you require a medical or religious accommodation during the job application process, please complete this form for support.
    $104k-140k yearly est. Auto-Apply 60d+ ago

Learn more about data scientist jobs

How much does a data scientist earn in Guttenberg, NJ?

The average data scientist in Guttenberg, NJ earns between $65,000 and $124,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Guttenberg, NJ

$90,000

What are the biggest employers of Data Scientists in Guttenberg, NJ?

The biggest employers of Data Scientists in Guttenberg, NJ are:
  1. JPMC
  2. JPMorgan Chase & Co.
  3. Google
  4. Amazon
  5. Meta
  6. Unity Technologies
  7. Walmart
  8. Varo
  9. McKinsey & Company Inc
  10. MIRAGE SYSTEMS