
Data scientist jobs in Oyster Bay, NY

- 786 jobs
  • Senior Data Scientist (Senior Consultant)

    Guidehouse · 3.7 company rating

    Data scientist job in New York, NY

    Job Family: Data Science Consulting
    Travel Required: Up to 10%
    Clearance Required: Ability to Obtain Public Trust

    About our AI and Data Capability Team
    Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services: from data architecture and storage, to data engineering and querying, to data visualization and dashboarding, to predictive analytics, machine learning, and artificial intelligence, as well as intelligent automation. Our services enable our clients to define their information strategy, enable mission-critical insights and data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.

    What You Will Do:
    - Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
    - Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
    - Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
    - Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
    - Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
    - Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.

    What You Will Need:
    - US citizenship is required.
    - Bachelor's degree is required.
    - Minimum three (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive and user-friendly dashboards and visualizations.
    - Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
    - Strong foundational knowledge and experience in statistics, probability, and experimental design.
    - Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
    - Experience applying data governance concepts and techniques to assure greater data quality and reliability.
    - The curiosity and creativity to uncover hidden patterns and opportunities.
    - Strong communication skills to bridge technical and business worlds.

    What Would Be Nice To Have:
    - Hands-on experience with Python, SQL, and modern ML frameworks.
    - Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
    - Expertise in Python or Java for data processing.
    - Demonstrated work experience within the public sector.
    - Ability to support business development, including RFP/RFQ/RFI responses involving data science and analytics.

    The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.

    What We Offer: Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
    Benefits include:
    - Medical, Rx, Dental & Vision Insurance
    - Personal and Family Sick Time & Company Paid Holidays
    - Position may be eligible for a discretionary variable incentive bonus
    - Parental Leave and Adoption Assistance
    - 401(k) Retirement Plan
    - Basic Life & Supplemental Life
    - Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
    - Short-Term & Long-Term Disability
    - Student Loan PayDown
    - Tuition Reimbursement, Personal Development & Learning Opportunities
    - Skills Development & Certifications
    - Employee Referral Program
    - Corporate Sponsored Events & Community Outreach
    - Emergency Back-Up Childcare Program
    - Mobility Stipend

    About Guidehouse
    Guidehouse is an Equal Opportunity Employer: Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event.

    Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
    $113k-188k yearly 2d ago
  • Data Engineer

    Brooksource · 4.1 company rating

    Data scientist job in New York, NY

    Data Engineer - Data Migration Project
    6-Month Contract (ASAP Start)
    Hybrid - Manhattan, NY (3 days/week)

    We are seeking a Data Engineer to support a critical data migration initiative for a leading sports entertainment and gaming company headquartered in Manhattan, NY. This role will focus on transitioning existing data workflows and analytics pipelines from Amazon Redshift to Databricks, optimizing performance and ensuring seamless integration across operational reporting systems. The ideal candidate will have strong SQL and Python skills, experience working with Salesforce data, and a background in data engineering, ETL, or analytics pipeline optimization. This is a hybrid role requiring collaboration with cross-functional analytics, engineering, and operations teams to enhance data reliability and scalability.

    Minimum Qualifications:
    - Advanced proficiency in SQL, Python, and SOQL
    - Hands-on experience with Databricks, Redshift, Salesforce, and DataGrip
    - Experience building and optimizing ETL workflows and pipelines
    - Familiarity with Tableau for analytics and visualization
    - Strong understanding of data migration and transformation best practices
    - Ability to identify and resolve discrepancies between data environments
    - Excellent analytical, troubleshooting, and communication skills

    Responsibilities:
    - Modify and migrate existing workflows and pipelines from Redshift to Databricks.
    - Rebuild data preprocessing structures that prepare Salesforce data for Tableau dashboards and ad hoc analytics.
    - Identify and map Redshift data sources to their Databricks equivalents, accounting for any structural or data differences.
    - Optimize and consolidate 200+ artifacts to improve efficiency and reduce redundancy.
    - Implement Databricks-specific improvements to leverage platform capabilities and enhance workflow performance.
    - Collaborate with analytics and engineering teams to ensure data alignment across business reporting systems.
    - Apply a "build from scratch" mindset to design scalable, modernized workflows rather than direct lift-and-shift migrations.
    - Identify dependencies on data sources not yet migrated and assist in prioritization efforts with the engineering team.

    What's in it for you?
    - Opportunity to lead a high-impact data migration initiative at a top-tier gaming and entertainment organization.
    - Exposure to modern data platforms and architecture, including Databricks and advanced analytics workflows.
    - Collaborative environment with visibility across analytics, operations, and engineering functions.
    - Ability to contribute to the foundation of scalable, efficient, and data-driven decision-making processes.

    EEO Statement: Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
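    A recurring task in listings like the one above is mapping Redshift sources to their Databricks equivalents and accounting for structural differences. A minimal sketch of that comparison step in plain Python; the table columns below are hypothetical, and a real migration would pull schemas from both platforms' catalogs rather than hard-coding them:

```python
def diff_schemas(redshift_cols, databricks_cols):
    """Compare a Redshift table's columns against its Databricks equivalent,
    reporting columns that were dropped, added, or carried over."""
    rs, db = set(redshift_cols), set(databricks_cols)
    return {
        "missing_in_databricks": sorted(rs - db),
        "new_in_databricks": sorted(db - rs),
        "common": sorted(rs & db),
    }

# Hypothetical schemas for one migrated table (illustrative names only)
redshift = ["account_id", "created_at", "sf_opportunity_id"]
databricks = ["account_id", "created_at", "opportunity_id"]
report = diff_schemas(redshift, databricks)
```

    Running the diff over every mapped table gives a checklist of discrepancies to resolve before cutting reports over to the new platform.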
    $101k-140k yearly est. 2d ago
  • Senior Data Engineer

    Godel Terminal

    Data scientist job in New York, NY

    Godel Terminal is a cutting-edge financial platform that puts the world's financial data at your fingertips. From equities and SEC filings to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.

    We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets. Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: "How on earth does this work so fast?" If that sounds like a team you want to be part of, here is what we need from you:

    Minimum qualifications:
    - Able to work out of our Manhattan office a minimum of 4 days a week
    - 5+ years of experience in a financial or startup environment
    - 5+ years of experience working on live data as well as historical data
    - 3+ years of experience in Java, Python, and SQL
    - Experience managing multiple production ETL pipelines that reliably store and validate financial data
    - Experience launching, scaling, and improving backend services in cloud environments
    - Experience migrating critical data across different databases
    - Experience owning and improving critical data infrastructure
    - Experience teaching best practices to junior developers

    Preferred qualifications:
    - 5+ years of experience in a fintech startup
    - 5+ years of experience in Java, Kafka, Python, PostgreSQL
    - 5+ years of experience working with WebSocket libraries like RxStomp or Socket.io
    - 5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
    - 2+ years of experience shipping and optimizing Rust applications
    - Demonstrated experience keeping critical systems online
    - Demonstrated creativity and resourcefulness under pressure
    - Experience with corporate debt/bonds and commodities data

    Salary range begins at $150,000 and increases with experience.
    Benefits: Health Insurance, Vision, Dental
    To try the product, go to *************************
    $150k yearly 1d ago
  • Data Engineer

    DL Software Inc. · 3.3 company rating

    Data scientist job in New York, NY

    DL Software produces Godel, a financial information and trading terminal.

    Role Description
    This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.

    Qualifications
    - Strong proficiency in data engineering and data modeling
    - Mandatory: strong experience in global financial instruments, including equities, fixed income, options, and exotic asset classes
    - Strong Python background
    - Expertise in Extract, Transform, Load (ETL) processes and tools
    - Experience in designing, managing, and optimizing data warehousing solutions
    $91k-123k yearly est. 2d ago
  • Data Engineer

    Company · 3.0 company rating

    Data scientist job in Fort Lee, NJ

    The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.

    Responsibilities
    - Analyze, structure, and interpret raw data.
    - Build and maintain datasets for business use.
    - Design and optimize database tables, schemas, and data structures.
    - Enhance data accuracy, consistency, and overall efficiency.
    - Develop views, functions, and stored procedures.
    - Write efficient SQL queries to support application integration.
    - Create database triggers to support automation processes.
    - Oversee data quality, integrity, and database security.
    - Translate complex data into clear, actionable insights.
    - Collaborate with cross-functional teams on multiple projects.
    - Present data through graphs, infographics, dashboards, and other visualization methods.
    - Define and track KPIs to measure the impact of business decisions.
    - Prepare reports and presentations for management based on analytical findings.
    - Conduct daily system maintenance and troubleshoot issues across all platforms.
    - Perform additional ad hoc analysis and tasks as needed.

    Qualifications
    - Bachelor's degree in Information Technology or a relevant field.
    - 4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
    - Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
    - Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
    - Excellent written, verbal, and interpersonal communication skills.
    - Ability to manage multiple tasks in a fast-paced and evolving environment.
    - Strong work ethic, professionalism, and integrity.
    - Advanced proficiency in Microsoft Office applications.
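    Several of the responsibilities above center on views, triggers, and aggregate queries. A small, self-contained illustration using Python's built-in sqlite3 module; the posting's environment is MS SQL Server, so T-SQL syntax would differ slightly, and the table names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE audit_log (order_id INTEGER, note TEXT);

-- Trigger: the kind of insert-time automation the role describes
CREATE TRIGGER trg_order_insert AFTER INSERT ON orders
BEGIN
    INSERT INTO audit_log VALUES (NEW.id, 'order created');
END;

-- View: aggregate reporting with COUNT and SUM
CREATE VIEW order_summary AS
    SELECT COUNT(*) AS n, SUM(amount) AS total FROM orders;
""")

cur.execute("INSERT INTO orders (amount) VALUES (19.5)")
cur.execute("INSERT INTO orders (amount) VALUES (5.5)")
n, total = cur.execute("SELECT n, total FROM order_summary").fetchone()
audit_count = cur.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0]
```

    The trigger fires once per insert, so the audit table ends up with one row per order, and the view always reflects the current totals without a separate reporting query.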
    $93k-132k yearly est. 4d ago
  • Senior Data Engineer - Investment & Portfolio Data (PE / Alternatives)

    81 North

    Data scientist job in New York, NY

    About the Opportunity
    Our client is a global alternative investment firm in a high-growth phase, investing heavily in modernizing its enterprise data platform. With multiple investment strategies and operations across several geographies, the firm is building a scalable, front-to-back investment data environment to support portfolio management, performance reporting, and executive decision-making. This is a hands-on, senior individual contributor role for an engineer who has worked close to investment teams and understands financial and portfolio data, not just generic SaaS analytics.

    Who This Role Is For
    This role is ideal for data engineers who have experience in or alongside Private Equity, Hedge Funds, Asset Management, or Capital Markets environments and are comfortable owning complex financial data pipelines end-to-end. This is not a traditional BI, marketing, or consumer data role. Candidates coming purely from ad-tech, healthcare, or non-financial SaaS backgrounds may not find this a fit.

    What You'll Be Doing
    - Design, build, and maintain scalable data pipelines supporting investment, portfolio, and fund-level data
    - Partner closely with technology leadership and investment stakeholders to translate business and investment use cases into technical solutions
    - Contribute to the buildout of a modern data lake / lakehouse architecture (medallion-style or similar)
    - Integrate data across the full investment lifecycle, including deal and transaction data, portfolio company metrics, and fund performance, AUM, and reporting data
    - Ensure data quality, lineage, and reliability across multiple strategies and entities
    - Operate as a senior, hands-on engineer: designing, building, and troubleshooting in the weeds when needed

    Required Experience
    - 7+ years of experience as a Data Engineer or in a similar role
    - Strong background supporting financial services data, ideally within Private Equity, Hedge Funds, Asset Management, or Investment Banking / Capital Markets
    - Experience working with complex, multi-entity datasets tied to investments, portfolios, or funds
    - Strong SQL skills and experience building production-grade data pipelines
    - Experience with modern cloud data platforms and architectures
    - Comfortable working in a fast-moving, evolving environment with senior stakeholders

    Nice to Have
    - Experience in environments similar to global PE firms, hedge funds, or institutional asset managers
    - Exposure to front-to-back investment data (from source systems through reporting)
    - Experience with Microsoft-centric data stacks (e.g., Azure, Fabric) or comparable cloud platforms
    - Familiarity with performance, valuation, or risk-related datasets

    Work Environment & Compensation
    - Hybrid role with regular collaboration in the New York office
    - Competitive compensation aligned with senior financial services engineering talent
    - Opportunity to help shape a firm-wide data platform during a critical growth phase
    $90k-123k yearly est. 4d ago
  • Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco

    Saragossa

    Data scientist job in New York, NY

    Are you a data engineer who loves building systems that power real impact in the world? A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.

    In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.

    To thrive here, you should bring strong problem-solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
    $90k-123k yearly est. 1d ago
  • Market Data Engineer

    Harrington Starr

    Data scientist job in New York, NY

    🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment

    I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.

    What You'll Do
    - Own large-scale, time-sensitive market data capture and normalization pipelines
    - Improve internal data formats and downstream datasets used by research and quantitative teams
    - Partner closely with infrastructure to ensure reliability of packet-capture systems
    - Build robust validation, QA, and monitoring frameworks for new market data sources
    - Provide production support, troubleshoot issues, and drive quick, effective resolutions

    What You Bring
    - Experience building or maintaining large-scale ETL pipelines
    - Strong proficiency in Python and Bash, with familiarity in C++
    - Solid understanding of networking fundamentals
    - Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
    - Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)

    Bonus Skills
    - Experience working with binary market data protocols (ITCH, MDP3, etc.)
    - Understanding of high-performance filesystems and columnar storage formats
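    Normalizing captured market data, as described above, typically means decoding fixed-layout binary messages into a common internal format. A toy sketch using Python's struct module; the message layout below is invented for illustration and is not a real exchange protocol such as ITCH or MDP3:

```python
import struct

# Hypothetical fixed-width binary quote message (network byte order):
#   uint64 sequence number, 8-byte padded symbol,
#   uint32 price in 1e-4 units, uint32 size
FMT = "!Q8sII"

def normalize(raw: bytes) -> dict:
    """Decode one raw message into a normalized tick record."""
    seq, sym, px, sz = struct.unpack(FMT, raw)
    return {
        "seq": seq,
        "symbol": sym.rstrip(b"\x00").decode(),  # strip null padding
        "price": px / 10_000,                    # fixed-point -> decimal
        "size": sz,
    }

# Round-trip one synthetic message through the decoder
msg = struct.pack(FMT, 1, b"AAPL", 1891234, 100)
tick = normalize(msg)
```

    Real capture pipelines layer sequence-number tracking, session handling, and per-venue quirks on top of this decode step, but the shape (raw bytes in, normalized records out) is the same.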
    $90k-123k yearly est. 2d ago
  • Azure Data Engineer

    Programmers.Io · 3.8 company rating

    Data scientist job in Weehawken, NJ

    - Expert-level skills writing and optimizing complex SQL
    - Experience with complex data modelling, ETL design, and using large databases in a business environment
    - Experience with building data pipelines and applications to stream and process datasets at low latencies
    - Fluent with Big Data technologies like Spark, Kafka, and Hive
    - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    - Designing and building data pipelines using API ingestion and streaming ingestion methods
    - Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    - Experience in developing NoSQL solutions using Azure Cosmos DB is essential
    - Thorough understanding of Azure and AWS cloud infrastructure offerings
    - Working knowledge of Python is desirable
    - Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    - Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    - Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    - Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    - Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    - Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    - Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    - Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best Regards,
    Dipendra Gupta
    Technical Recruiter
    *****************************
    $92k-132k yearly est. 1d ago
  • Data Engineer

    Gotham Technology Group · 4.5 company rating

    Data scientist job in New York, NY

    Our client is seeking a Data Engineer with hands-on experience in web scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves.

    Responsibilities
    - Develop scalable web scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries.
    - Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring.
    - Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines.
    - Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift.
    - Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm.
    - Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets.
    - Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users.
    - Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns.

    Required Qualifications
    - Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field.
    - 2-5 years of experience in a similar Data Engineering or Web Scraping role.
    - Capital markets knowledge with familiarity across asset classes and experience supporting trading systems.
    - Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift).
    - Proficiency with modern web scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright).
    - Strong Python programming skills and experience with SQL and NoSQL databases.
    - Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus.
    - Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
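    The extraction step of a scrape, in miniature: the frameworks named above (Scrapy, BeautifulSoup, Selenium, Playwright) handle this robustly at scale, but the core idea can be shown with Python's standard-library HTMLParser. The HTML snippet is invented for illustration:

```python
from html.parser import HTMLParser

class PriceTableParser(HTMLParser):
    """Collect the text of each <td> cell, grouped by <tr> row.
    A stand-in for what a scraping framework's selectors would do."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.rows, self.current = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.current:
            self.rows.append(self.current)  # close out the row
            self.current = []

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.current.append(data.strip())

html = ("<table><tr><td>AAPL</td><td>189.12</td></tr>"
        "<tr><td>MSFT</td><td>404.50</td></tr></table>")
p = PriceTableParser()
p.feed(html)
```

    After parsing, `p.rows` holds structured records ready for the validation and storage stages the posting describes; a production scrape would add fetching, retries, and schema checks around this core.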
    $86k-120k yearly est. 2d ago
  • Lead Data Engineer with Banking

    Synechron · 4.4 company rating

    Data scientist job in New York, NY

    We are
    At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+ and 58 offices in 21 countries within key global markets.

    Our challenge
    We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform.

    Additional Information
    The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $140k/year & benefits (see below).

    The Role
    Responsibilities:
    - Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python.
    - Lead the implementation of distributed data processing and real-time streaming solutions.
    - Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency.
    - Develop and automate data workflows and transformations using Python scripting.
    - Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
    - Monitor, troubleshoot, and optimize data pipelines and platform performance.
    - Ensure data quality, governance, and security standards are upheld.
    - Guide and mentor junior team members and foster best practices in data engineering.

    Requirements:
    - Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
    - Strong expertise in distributed data processing frameworks and streaming architectures.
    - Hands-on experience with the Snowflake data warehouse platform, including data ingestion, performance tuning, and management.
    - Proficiency in Python for data manipulation, automation, and scripting.
    - Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams.
    - Solid understanding of SQL, data modeling, and ETL/ELT processes.
    - Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous.
    - Strong troubleshooting skills and ability to optimize data workflows.
    - Excellent communication and collaboration skills.

    Preferred, but not required:
    - Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
    - Experience with containerization (Docker, Kubernetes) is a plus.
    - Knowledge of data security best practices and GDPR compliance.
    - Certifications related to cloud platforms or data engineering preferred.

    We offer:
    - A highly competitive compensation and benefits package.
    - A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
    - 10 days of paid annual leave (plus sick leave and national holidays).
    - Maternity & paternity leave plans.
    - A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
    - Retirement savings plans.
    - A higher education certification policy.
    - Commuter benefits (varies by region).
    - Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
    - On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
    - Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellence (CoE) groups.
    - Cutting-edge projects at the world's leading tier-one banks, financial institutions, and insurance firms.
    - A flat and approachable organization.
    - A truly diverse, fun-loving, and global work culture.

    SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
    Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture by promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
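    A common shape for the Kafka-to-Snowflake pipelines this listing describes is micro-batching: buffer streamed events and flush them to the warehouse in chunks. A stripped-down sketch with the consumer and warehouse writer stubbed out; a real pipeline would use a Kafka client library and the Snowflake connector, and the names here are illustrative:

```python
def batch_and_load(events, flush, batch_size=3):
    """Buffer incoming events and call `flush` once per full batch.
    `events` stands in for a Kafka consumer loop; `flush` stands in
    for a warehouse bulk-load call."""
    buf = []
    for ev in events:
        buf.append(ev)
        if len(buf) >= batch_size:
            flush(buf)
            buf = []
    if buf:            # flush the final partial batch on shutdown
        flush(buf)

# Simulate seven streamed events with a list-collecting "warehouse"
loads = []
batch_and_load(range(7), loads.append, batch_size=3)
```

    Bulk loads amortize per-write overhead on the warehouse side, which is why streaming-to-warehouse pipelines usually batch rather than insert row by row; the trade-off is added latency up to one batch interval.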
    $135k-140k yearly 3d ago
  • C++ Market Data Engineer

    TBG | The Bachrach Group

    Data scientist job in Stamford, CT

    We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making. What You'll Do: Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design Ensure reliable data delivery with failover, gap recovery, and replay mechanisms Collaborate with researchers and engineers to align data formats for trading and simulation Instrument and test systems for continuous performance improvements What We're Looking For: 3+ years of C++ development experience (low-latency, high-throughput systems) Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH) Strong knowledge of concurrency, memory models, and compiler optimizations Python scripting skills for testing and automation Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
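The gap-recovery duty described above hinges on per-channel sequence-number bookkeeping: detect missed packets and request a replay for exactly that range. A minimal sketch of the logic (shown in Python for brevity; a production feed handler would implement this in C++ on the hot path, as the posting describes):

```python
class GapTracker:
    """Track feed sequence numbers and report gaps needing replay.

    Toy sketch of per-channel bookkeeping only; a real handler also
    handles sequence wraparound, timeouts, and out-of-order buffering.
    """

    def __init__(self, first_seq: int = 1):
        self.next_seq = first_seq  # next sequence number we expect

    def on_packet(self, seq: int) -> range:
        """Return the range of missed sequence numbers (empty if in order)."""
        missed = range(self.next_seq, seq) if seq > self.next_seq else range(0)
        if seq >= self.next_seq:
            self.next_seq = seq + 1
        return missed
```

A late packet arriving after its range was already requested simply reports no new gap, which is why replayed packets do not retrigger recovery.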
    $84k-114k yearly est. 6d ago
  • Distinguished Data Engineer - Card Data

    Capital One 4.7company rating

    Data scientist job in New York, NY

    Distinguished Data Engineers are individual contributors who strive to be diverse in thought so we visualize the problem space. At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate and provide the most innovative solutions across organizational boundaries. Distinguished Engineers will significantly impact our trajectory and devise clear roadmaps to deliver next generation technology solutions. About the Team: Capital One is seeking a Distinguished Data Engineer to work in our Credit Card Technology Data Engineering Team and build the future of financial services. We are a fast-paced, mission-driven group responsible for managing and leveraging petabytes of sensitive, real-time and batch data that powers everything from fraud detection models and personalized reward systems to regulatory compliance reporting. As a leader in Data Engineering, you won't just move data; you'll architect high-availability data platforms that directly influence millions of customer experiences and secure billions in transactions daily. You'll own critical data domains end-to-end, working cross-functionally with ML Scientists, Product Managers, and Business Analysts to solve complex, high-stakes problems with cutting-edge cloud technologies (like Snowflake, Kafka, and AWS). If you thrive on technical challenges, demand data integrity, and want your work to have a clear, measurable impact on the bank's core profitability and security, this is your team. This leader must have the ability to attract and recruit the industry's best talent, and simultaneously have the technical chops to ensure that we build compelling, customer-oriented solutions in an iterative methodology. Success in the role requires an innovative mind, a proven track record of delivering next generation software and data products, rigorous analytical skills, and a passion for delivering customer value through automation, machine learning and predictive analytics. 
Our Distinguished Engineers Are: Deep technical experts and thought leaders that help accelerate adoption of the very best engineering practices, while maintaining knowledge on industry innovations, trends and practices Visionaries, collaborating on Capital One's toughest issues, to deliver on business needs that directly impact the lives of our customers and associates Role models and mentors, helping to coach and strengthen the technical expertise and know-how of our engineering and product community Evangelists, both internally and externally, helping to elevate the Distinguished Engineering community and establish themselves as a go-to resource on given technologies and technology-enabled capabilities Responsibilities: Build awareness, increase knowledge and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in Strike the right balance between lending expertise and providing an inclusive environment where others' ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team Promote a culture of engineering excellence, using opportunities to reuse and innersource solutions where possible Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization Operate as a trusted advisor for a specific technology, platform or capability domain, helping to shape use cases and implementation in a unified manner Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to bolster Capital One's Tech talent Basic Qualifications: Bachelor's Degree At least 7 years of experience in data engineering At least 3 years of experience in data architecture At least 2 years of experience building applications in AWS Preferred Qualifications: Master's Degree 9+ years of experience in data engineering 3+ years of data modeling experience 2+ years of experience with ontology standards for 
defining a domain 2+ years of experience using Python, SQL or Scala 1+ year of experience deploying machine learning models 3+ years of experience implementing big data processing solutions on AWS Capital One will consider sponsoring a new qualified applicant for employment authorization for this position The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked. Chicago, IL: $239,900 - $273,800 for Distinguished Data Engineer McLean, VA: $263,900 - $301,200 for Distinguished Data Engineer New York, NY: $287,800 - $328,500 for Distinguished Data Engineer Richmond, VA: $239,900 - $273,800 for Distinguished Data Engineer San Francisco, CA: $287,800 - $328,500 for Distinguished Data Engineer Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies please. 
Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
    $86k-111k yearly est. 17h ago
  • Staff Data Scientist

    Recursion 4.2company rating

    Data scientist job in Saltaire, NY

    Your work will change lives. Including your own. Please note: Our offices will be closed for our annual winter break from December 22, 2025, to January 2, 2026. Our response to your application will be delayed. The Impact You'll Make As a member of Recursion's AI-driven drug discovery initiatives, you will be at the forefront of reimagining how biological knowledge is generated, stored, accessed, and reasoned upon by LLMs. You will play a key role in developing the biological reasoning infrastructure, connecting large-scale data and codebases with dynamic, agent-driven AI systems. You will be responsible for defining the architecture that grounds our agents in biological truth. This involves integrating biomedical resources to enable AI systems to reason effectively and selecting the most appropriate data retrieval strategies to support those insights. This is a highly collaborative role: you will partner with machine learning engineers, biologists, chemists, and platform teams to build the connective tissue that allows our AI agents to reason like a scientist. The ideal candidate possesses deep expertise in both core bioinformatics/cheminformatics libraries and modern GenAI frameworks (including RAG and MCP), a strong architectural vision, and the ability to translate high-potential prototypes into scalable production workflows. In this role, you will: Architect and maintain robust infrastructure to keep critical internal and external biological resources (e.g., ChEMBL, Ensembl, Reactome, proprietary assays) up-to-date and accessible to reasoning agents. Design sophisticated context retrieval strategies, choosing the most effective approach for each biological use case, whether working with structured, entity-focused data, unstructured RAG, or graph-based representations. 
Integrate established bioinformatics/cheminformatics libraries into a GenAI ecosystem, creating interfaces (such as via MCP) that allow agents to autonomously query and manipulate biological data. Pilot methods for tool use by LLMs, enabling the system to perform complex tasks like pathway analysis on the fly rather than relying solely on memorized weights. Develop scalable, production-grade systems that serve as the backbone for Recursion's automated scientific reasoning capabilities. Collaborate cross-functionally with Recursion's core biology, chemistry, data science and engineering teams to ensure our biological data and the reasoning engines are accurately reflecting the complexity of disease biology and drug discovery. Present technical trade-offs (e.g., graph vs. vector) to leadership and stakeholders in a clear, compelling way that aligns technical reality with product vision. The Team You'll Join You'll join a bold, agile team of scientists and engineers dedicated to building comprehensive biological maps by integrating Recursion's in-house datasets, patient data, and external knowledge layers to enable sophisticated agent-based reasoning. Within this cross-functional team, you will design and maintain the biological context and data structures that allow agents to reason accurately and efficiently. You'll collaborate closely with wet-lab biologists and core platform engineers to develop systems that are not only technically robust but also scientifically rigorous. The ideal candidate is curious about emerging AI technologies, passionate about making biological data both machine-readable and machine-understandable, and brings a strong foundation in systems biology, biomedical data analysis, and agentic AI systems. 
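The retrieval strategies described above can be illustrated with the simplest case: embedding-similarity lookup, the core of a RAG pipeline. The gene IDs and vectors below are toy values chosen for the example; a production system would use learned embeddings and a vector database rather than a brute-force scan:

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list, corpus: list, k: int = 2) -> list:
    """Return the ids of the k corpus entries most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["id"] for item in ranked[:k]]
```

The retrieved entries would then be formatted into the agent's context window, which is where the context-budget optimization mentioned in the posting comes in.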
The Experience You'll Need PhD in a relevant field (Bioinformatics, Cheminformatics, Computational Biology, Computer Science, Systems Biology) with 5+ years of industry experience, or MS in a relevant field with 7+ years of experience, focusing on biological data representation and retrieval. Proficiency in utilizing major public biological databases (NCBI, Ensembl, STRING, GO) and using standard bioinformatics/cheminformatics toolkits (e.g., RDKit, samtools, Biopython). Strong skills in designing and maintaining automated data pipelines that support continuous ingestion, transformation, and refresh of biological data without manual intervention. Ability to work with knowledge graph data models and query languages (e.g., RDF, SPARQL, OWL) and translate graph-structured data into relational or other non-graph representations, with a strong judgment in evaluating trade-offs between different approaches. Competence in building and operating GenAI stacks, including RAG systems, vector databases, and optimization of context windows for large-scale LLM deployments. Hands-on expertise with agentic AI frameworks (e.g., MCP, Google ADK, LangChain, AutoGPT) and familiarity with leading LLMs (e.g., Google Gemini/Gemma) in agentic workflows, including benchmarking and evaluating agent performance on bioinformatics/cheminformatics tasks such as structure prediction, target identification, and pathway mapping. Strong Python skills and adherence to software engineering best practices, including CI/CD, Git-based version control, and modular design. Excellent cross-functional communication skills, ability to clearly explain complex architectural decisions to both scientific domain experts and technical stakeholders. Nice to Have Strong background in machine learning and deep learning, including hands-on experience with foundation models and modern neural architectures. Fine-tuning LLMs on scientific corpora for domain-specific reasoning. 
Integrating LLMs with experimental or proprietary assay data in live scientific workflows. Background in drug discovery and target identification. Meaningful contributions to open-source libraries, research codebases, or community-driven tools. Working Location & Compensation: This is an office-based, hybrid role in either our Salt Lake City, UT or New York City, NY offices. Employees are expected to work in the office at least 50% of the time. At Recursion, we believe that every employee should be compensated fairly. Based on the skill and level of experience required for this role, the estimated current annual base range for this role is $200,600 - $238,400. You will also be eligible for an annual bonus and equity compensation, as well as a comprehensive benefits package. #LI-DNI The Values We Hope You Share: We act boldly with integrity. We are unconstrained in our thinking, take calculated risks, and push boundaries, but never at the expense of ethics, science, or trust. We care deeply and engage directly. Caring means holding a deep sense of responsibility and respect - showing up, speaking honestly, and taking action. We learn actively and adapt rapidly. Progress comes from doing. We experiment, test, and refine, embracing iteration over perfection. We move with urgency because patients are waiting. Speed isn't about rushing but about moving the needle every day. We take ownership and accountability. Through ownership and accountability, we enable trust and autonomy-leaders take accountability for decisive action, and teams own outcomes together. We are One Recursion. True cross-functional collaboration is about trust, clarity, humility, and impact. Through sharing, we can be greater than the sum of our individual capabilities. Our values underpin the employee experience at Recursion. They are the character and personality of the company demonstrated through how we communicate, support one another, spend our time, make decisions, and celebrate collectively. 
More About Recursion Recursion (NASDAQ: RXRX) is a clinical stage TechBio company leading the space by decoding biology to radically improve lives. Enabling its mission is the Recursion OS, a platform built across diverse technologies that continuously generate one of the world's largest proprietary biological and chemical datasets. Recursion leverages sophisticated machine-learning algorithms to distill from its dataset a collection of trillions of searchable relationships across biology and chemistry unconstrained by human bias. By commanding massive experimental scale - up to millions of wet lab experiments weekly - and massive computational scale - owning and operating one of the most powerful supercomputers in the world, Recursion is uniting technology, biology and chemistry to advance the future of medicine. Recursion is headquartered in Salt Lake City, where it is a founding member of BioHive, the Utah life sciences industry collective. Recursion also has offices in Toronto, Montréal, New York, London, Oxford area, and the San Francisco Bay area. Learn more at ****************** or connect on X (formerly Twitter) and LinkedIn. Recursion is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, or any other characteristic protected under applicable federal, state, local, or provincial human rights legislation. Accommodations are available on request for candidates taking part in all aspects of the selection process. Recruitment & Staffing Agencies: Recursion Pharmaceuticals and its affiliate companies do not accept resumes from any source other than candidates. The submission of resumes by recruitment or staffing agencies to Recursion or its employees is strictly prohibited unless contacted directly by Recursion's internal Talent Acquisition team. 
Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Recursion, and Recursion will not owe any referral or other fees. Our team will communicate directly with candidates who are not represented by an agent or intermediary unless otherwise agreed to prior to interviewing for the job.
    $200.6k-238.4k yearly 1d ago
  • Data Scientist, User Operations

    Openai 4.2company rating

    Data scientist job in New York, NY

    About the Team OpenAI's User Operations organization is building the data and intelligence layer behind AI-assisted operations - the systems that decide when automation should help users, when humans should step in, and how both improve over time. Our flagship platform is transforming customer support into a model for "agent-first" operations across OpenAI. About the Role As a Data Scientist on User Operations, you'll design the models, metrics, and experimentation frameworks that power OpenAI's human-AI collaboration loop. You'll build systems that measure quality, optimize automation, and turn operational data into insights that improve product and user experience at scale. You'll partner closely with Support Automation Engineering, Product, and Data Engineering to ensure our data systems are production-grade, trusted, and impactful. This role is based in San Francisco or New York City. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees. Why it matters Every conversation users have with OpenAI products produces signals about how humans and AI interact. User Ops Data Science turns those signals into insights that shape how we support users today and design agentic systems for tomorrow. This is a unique opportunity to help define how AI collaboration at scale is measured and improved inside OpenAI. In this role, you will: * Build and own metrics, classifiers, and data pipelines that determine automation eligibility, effectiveness, and guardrails. * Design and evaluate experiments that quantify the impact of automation and AI systems on user outcomes like resolution quality and satisfaction. * Develop predictive and statistical models that improve how OpenAI's support systems automate, measure, and learn from user interactions. * Partner with engineering and product teams to create feedback loops that continuously improve our AI agents and knowledge systems. 
* Translate complex data into clear, actionable insights for leadership and cross-functional stakeholders. * Develop and socialize dashboards, applications, and other ways of enabling the team and company to answer product data questions in a self-serve way * Contribute to establishing data science standards and best practices in an AI-native operations environment. * Partner with other data scientists across the company to share knowledge and continually synthesize learnings across the organization You might thrive in this role if you have: * 10+ years of experience in data science roles within product or technology organizations. * Expertise in statistics and causal inference, applied in both experimentation and observational causal inference studies. * Expert-level SQL and proficiency in Python for analytics, modeling, and experimentation. * Proven experience designing and interpreting experiments and making statistically sound recommendations. * Experience building data systems or pipelines that power production workflows or ML-based decisioning. * Experience developing and extracting insights from business intelligence tools, such as Mode, Tableau, and Looker. * Strategic and impact-driven mindset, capable of translating complex business problems into actionable frameworks. * Ability to build relationships with diverse stakeholders and cultivate strong partnerships. * Strong communication skills, including the ability to bridge technical and non-technical stakeholders and collaborate across various functions to ensure business impact. * Ability to operate effectively in a fast-moving, ambiguous environment with limited structure. * Strong communication skills and the ability to translate complex data into stories for non-technical partners. Nice-to-haves: * Familiarity with large language models or AI-assisted operations platforms. * Experience in operational automation or customer support analytics. 
* Background in experimentation infrastructure or human-AI interaction systems. About OpenAI OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic. For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement. Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. 
In addition, job duties require access to secure and protected information technology systems and related data security obligations. To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link. OpenAI Global Applicant Privacy Policy At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
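The experiment-evaluation work described in this role often reduces to comparing an outcome rate (say, resolution quality) between an automated cohort and a control. A minimal sketch of a pooled two-proportion z-test; the counts are invented, and a real analysis would also consider power, multiple testing, and variance-reduction techniques:

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """Z statistic for the difference between two rates, using pooled SE."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

For example, 480/1000 resolutions in treatment versus 440/1000 in control yields z ≈ 1.79, significant at the 5% level for a one-sided test (critical value 1.645) but not for a two-sided one.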
    $88k-129k yearly est. 18d ago
  • Principal Data Scientist : Product to Market (P2M) Optimization

    The Gap 4.4company rating

    Data scientist job in New York, NY

    About Gap Inc. Our brands bridge the gaps we see in the world. Old Navy democratizes style to ensure everyone has access to quality fashion at every price point. Athleta unleashes the potential of every woman, regardless of body size, age or ethnicity. Banana Republic believes in sustainable luxury for all. And Gap inspires the world to bring individuality to modern, responsibly made essentials. This simple idea-that we all deserve to belong, and on our own terms-is core to who we are as a company and how we make decisions. Our team is made up of thousands of people across the globe who take risks, think big, and do good for our customers, communities, and the planet. Ready to learn fast, create with audacity and lead boldly? Join our team. About the Role Gap Inc. is seeking a Principal Data Scientist with deep expertise in operations research and machine learning to lead the design and deployment of advanced analytics solutions across the Product-to-Market (P2M) space. This role focuses on driving enterprise-scale impact through optimization and data science initiatives spanning pricing, inventory, and assortment optimization. The Principal Data Scientist serves as a senior technical and strategic thought partner, defining solution architectures, influencing product and business decisions, and ensuring that analytical solutions are both technically rigorous and operationally viable. The ideal candidate can lead end-to-end solutioning independently, manage ambiguity and complex stakeholder dynamics, and communicate technical and business risk effectively across teams and leadership levels. What You'll Do * Lead the framing, design, and delivery of advanced optimization and machine learning solutions for high-impact retail supply chain challenges. * Partner with product, engineering, and business leaders to define analytics roadmaps, influence strategic priorities, and align technical investments with business goals. 
* Provide technical leadership to other data scientists through mentorship, design reviews, and shared best practices in solution design and production deployment. * Evaluate and communicate solution risks proactively, grounding recommendations in realistic assessments of data, system readiness, and operational feasibility. * Evaluate, quantify, and communicate the business impact of deployed solutions using statistical and causal inference methods, ensuring benefit realization is measured rigorously and credibly. * Serve as a trusted advisor by effectively managing stakeholder expectations, influencing decision-making, and translating analytical outcomes into actionable business insights. * Drive cross-functional collaboration by working closely with engineering, product management, and business partners to ensure model deployment and adoption success. * Quantify business benefits from deployed solutions using rigorous statistical and causal inference methods, ensuring that model outcomes translate into measurable value * Design and implement robust, scalable solutions using Python, SQL, and PySpark on enterprise data platforms such as Databricks and GCP. * Contribute to the development of enterprise standards for reproducible research, model governance, and analytics quality. Who You Are * Master's or Ph.D. in Operations Research, Operations Management, Industrial Engineering, Applied Mathematics, or a closely related quantitative discipline. * 10+ years of experience developing, deploying, and scaling optimization and data science solutions in retail, supply chain, or similar complex domains. * Proven track record of delivering production-grade analytical solutions that have influenced business strategy and delivered measurable outcomes. * Strong expertise in operations research methods, including linear, nonlinear, and mixed-integer programming, stochastic modeling, and simulation. 
* Deep technical proficiency in Python, SQL, and PySpark, with experience in optimization and ML libraries such as Pyomo, Gurobi, OR-Tools, scikit-learn, and MLlib. * Hands-on experience with enterprise platforms such as Databricks and cloud environments * Demonstrated ability to assess, communicate, and mitigate risk across analytical, technical, and business dimensions. * Excellent communication and storytelling skills, with a proven ability to convey complex analytical concepts to technical and non-technical audiences. * Strong collaboration and influence skills, with experience leading cross-functional teams in matrixed organizations. * Experience managing code quality, CI/CD pipelines, and GitHub-based workflows. Preferred Qualifications * Experience shaping and executing multi-year analytics strategies in retail or supply chain domains. * Proven ability to balance long-term innovation with short-term deliverables. * Background in agile product development and stakeholder alignment for enterprise-scale initiatives. Benefits at Gap Inc. * Merchandise discount for our brands: 50% off regular-priced merchandise at Old Navy, Gap, Banana Republic and Athleta, and 30% off at Outlet for all employees. * One of the most competitive Paid Time Off plans in the industry.* * Employees can take up to five "on the clock" hours each month to volunteer at a charity of their choice.* * Extensive 401(k) plan with company matching for contributions up to four percent of an employee's base pay.* * Employee stock purchase plan.* * Medical, dental, vision and life insurance.* * See more of the benefits we offer. * For eligible employees Gap Inc. is an equal-opportunity employer and is committed to providing a workplace free from harassment and discrimination. We are committed to recruiting, hiring, training and promoting qualified people of all backgrounds, and make all employment decisions without regard to any protected status. 
We have received numerous awards for our long-held commitment to equality and will continue to foster a diverse and inclusive environment of belonging. In 2022, we were recognized by Forbes as one of the World's Best Employers and one of the Best Employers for Diversity. Salary Range: $201,700 - $267,300 USD Employee pay will vary based on factors such as qualifications, experience, skill level, competencies and work location. We will meet minimum wage or minimum of the pay range (whichever is higher) based on city, county and state requirements.
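Assortment and inventory problems like those in the posting above are typically posed as mixed-integer programs in solvers such as Pyomo, Gurobi, or OR-Tools. As a self-contained illustration of the same select-within-a-budget structure, here is a toy 0/1 knapsack solved by dynamic programming; the SKUs, space units, and margins are invented:

```python
def best_assortment(items: list, capacity: int) -> tuple:
    """Pick SKUs maximizing total margin within a space budget (0/1 knapsack).

    items: list of (sku, space_units, expected_margin) tuples.
    Returns (best_margin, chosen_skus). Toy stand-in for the MIP
    formulations a production assortment model would use.
    """
    # dp[c] = (best margin, chosen skus) using at most c units of space
    dp = [(0.0, [])] * (capacity + 1)
    for sku, space, margin in items:
        # iterate capacity downward so each SKU is used at most once
        for c in range(capacity, space - 1, -1):
            cand = (dp[c - space][0] + margin, dp[c - space][1] + [sku])
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return dp[capacity]
```

Real instances add constraints (store sets, vendor minimums, substitution effects) that push the problem beyond dynamic programming and into the MIP tooling named in the qualifications.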
    $88k-128k yearly est. 29d ago
  • Data Scientist, GTM Analytics

    Airtable 4.2company rating

    Data scientist job in New York, NY

    Airtable is the no-code app platform that empowers people closest to the work to accelerate their most critical business processes. More than 500,000 organizations, including 80% of the Fortune 100, rely on Airtable to transform how work gets done. Our data team's mission is to fuel Airtable's growth and operations. We are a strategic enabler, building high-quality, customer-centric data products and solutions. We are looking for a Data Scientist to work directly with Airtable's business stakeholders. Your data products will be instrumental in accelerating the efficiency of Customer Engagement (CE) organizations, including sales, CSG, and revenue operations teams. This role offers the opportunity to significantly impact Airtable's strategy and go-to-market execution, providing you with a platform to deploy your data skills in a way that directly contributes to our company's growth and success. What you'll do Champion AI-Driven Data Products with Scalability: Design and implement ML models and AI solutions that equip the CE team with actionable insights and recommendations. Build scalable data pipelines and automated workflows following MLOps best practices. Support Key Business Processes: Independently provide strategic insights, repeatable frameworks, and thought partnership to support key CE business processes such as territory carving, annual planning, pricing optimization, and performance attribution. Strategic Analysis: Drive in-depth analysis to ensure accuracy and relevance. Influence business stakeholders through compelling storytelling with data. Tackle ambiguous problems to uncover business value with minimal oversight. 
Develop Executive Dashboards: Design, build, and maintain high-quality dashboards and BI tools. Partner with the Revenue Operations team to efficiently enable the many roles across the CE team with data products. Strong Communication Skills: Effectively communicate the “so-what” of an analysis, illustrating how insights can be leveraged to drive business impact across the organization. Who you are Education: Bachelor's degree in a quantitative discipline (Math, Statistics, Operations Research, Economics, Engineering, or CS); MS/MBA preferred. Industry Experience: 4+ years of experience as a data scientist / analytics engineer in high-growth B2B SaaS, preferably supporting sales, CSG, or other go-to-market stakeholders. Demonstrated business acumen with a deep understanding of Enterprise Sales strategies (sales pipeline, forecast models, sales capacity, sales segmentation, quota planning), CSG strategies (customer churn risk models, performance attribution), and Enterprise financial metrics (ACV, ARR, NDR). Familiarity with CRM platforms (e.g., Salesforce). Technical Proficiency: 6+ years of experience working with SQL in modern data platforms such as Databricks, Snowflake, Redshift, or BigQuery. 6+ years of experience working with Python or R for analytics or data science projects. 6+ years of experience building business-facing dashboards and data models using modern BI tools like Looker, Tableau, etc. Proficient-level experience developing automated solutions to collect, transform, and clean data from various sources using tools such as dbt and Fivetran. Proficient knowledge of data science models, such as regression, classification, clustering, time series analysis, and experiment design. Hands-on experience with batch LLM pipelines is preferred. Excellent communication skills to present findings to both technical and non-technical audiences. Passion for thriving in a dynamic environment: being flexible and willing to jump in and do whatever it takes to be successful. 
Airtable is an equal opportunity employer. We embrace diversity and strive to create a workplace where everyone has an equal opportunity to thrive. We welcome people of different backgrounds, experiences, abilities, and perspectives. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or any characteristic protected by applicable federal and state laws, regulations and ordinances. Learn more about your EEO rights as an applicant. VEVRAA-Federal Contractor If you have a medical condition, disability, or religious belief/practice which inhibits your ability to participate in any part of the application or interview process, please complete our Accommodations Request Form and let us know how we may assist you. Airtable is committed to participating in the interactive process and providing reasonable accommodations to qualified applicants. Compensation awarded to successful candidates will vary based on their work location, relevant skills, and experience. Our total compensation package also includes the opportunity to receive benefits, restricted stock units, and may include incentive compensation. To learn more about our comprehensive benefit offerings, please check out Life at Airtable. For work locations in the San Francisco Bay Area, Seattle, New York City, and Los Angeles, the base salary range for this role is: $179,500-$221,500 USD. For all other work locations (including remote), the base salary range for this role is: $161,500-$199,300 USD. Please see our Privacy Notice for details regarding Airtable's collection and use of personal information relating to the application and recruitment process by clicking here. 🔒 Stay Safe from Job Scams All official Airtable communication will come from an @airtable.com email address. 
We will never ask you to share sensitive information or purchase equipment during the hiring process. If in doubt, contact us at ***************. Learn more about avoiding job scams here.
    $179.5k-221.5k yearly 5d ago
  • Network Planning Data Scientist (Manager)

    Atlas Air 4.9company rating

    Data scientist job in White Plains, NY

    Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes. This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency. Responsibilities * Analyze and Monitor Network Performance * Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements. * Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness. * Modeling & Optimization * Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions. * Support the design, testing, and implementation of software optimization tools to enhance operational decision-making. * Reporting & Governance * Develop periodic performance and reliability reports for customers, assisting in presentation creation * Produce regular and ad hoc reports to monitor compliance with established operating parameters. * Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets. * Cross-Functional Collaboration * Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution. 
* Provide insights that inform schedule adjustments, fleet utilization, and contingency planning. * Innovation & Continuous Improvement * Identify opportunities to streamline workflows and automate recurring analyses. * Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility. Qualifications * Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools, reporting tools, and dashboards (e.g., Tableau, Power BI) * Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science, or a related quantitative discipline, or equivalent work experience. * 5+ years of experience in strategy, operations planning, finance, or continuous improvement, ideally with airline network planning * Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation. * Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment. * Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders. * A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement. * Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation, or other complex network operations Salary Range: $131,500 - $177,500 The financial offer within the stated range will be based on multiple factors, including but not limited to location, relevant experience/level, and skill set. The Company is an Equal Opportunity Employer. 
It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected by applicable federal, state and local laws. If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ****************************************** To view our Pay Transparency Statement, please click here: Pay Transparency Statement "Know Your Rights: Workplace Discrimination is Illegal" Poster The "EEO Is The Law" Poster
    $131.5k-177.5k yearly 8d ago
  • Cloud Data Engineer

    Gotham Technology Group 4.5company rating

    Data scientist job in New York, NY

    Title: Enterprise Data Management - Data Cloud, Senior Developer I Duration: FTE/Permanent Salary: 130-165k The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools. Responsibilities: Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures. Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications. Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools. Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency. Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation. Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications. Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access. Champion data engineering standards, best practices, and governance across the enterprise. 
Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning. Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions. Qualifications: Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent. 10+ years in data engineering, cloud platform development, or analytics engineering. Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies. Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers). Preferred: Experience containerizing data workloads with Docker and Kubernetes. Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios. Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products. Bonus: Background in data mesh designs and automated data access systems. Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow. Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines. Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling. Bonus: Exposure to BI tools (e.g., Tableau, Power BI), though not central to the role. Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
    $86k-120k yearly est. 2d ago
  • Sr. Azure Data Engineer

    Synechron 4.4company rating

    Data scientist job in New York, NY

    We are At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets. Our challenge We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the Financial/Banking domain. Additional Information* The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below). The Role Responsibilities: Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance. Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake. Proactively volunteer help and support team members and peers in delivering their tasks to ensure end-to-end delivery. Evaluate technical performance challenges and recommend tuning solutions. 
Hands-on experience designing, developing, and maintaining our Reference Data System utilizing modern data technologies including Kafka, Snowflake, and Python. Requirements: Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python. Strong expertise in distributed data processing and streaming architectures. Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management. Proficiency in Python scripting and programming for data manipulation and automation. Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams). Knowledge of SQL, data modeling, and ETL/ELT processes. Understanding of cloud platforms (AWS, Azure, GCP) is a plus. Domain knowledge in any of the below areas: Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.). Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes. Funding Support, Planning & Analysis, Regulatory Reporting & Compliance. Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management. We offer: A highly competitive compensation and benefits package. A multinational organization with 58 offices in 21 countries and the possibility to work abroad. 10 days of paid annual leave (plus sick leave and national holidays). Maternity & paternity leave plans. A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region). Retirement savings plans. A higher education certification policy. Commuter benefits (varies by region). Extensive training opportunities, focused on skills, substantive knowledge, and personal development. On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses. 
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Centers of Excellence (CoE) groups. Cutting-edge projects at the world's leading tier-one banks, financial institutions and insurance firms. A flat and approachable organization. A truly diverse, fun-loving, and global work culture. SYNECHRON'S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
    $130k-140k yearly 4d ago

Learn more about data scientist jobs

How much does a data scientist earn in Oyster Bay, NY?

The average data scientist in Oyster Bay, NY earns between $72,000 and $136,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Oyster Bay, NY

$99,000

What are the biggest employers of Data Scientists in Oyster Bay, NY?

The biggest employers of Data Scientists in Oyster Bay, NY are:
  1. The NPD Group