Senior Data Scientist Agentic AI
New York, NY jobs
My name is Bill Stevens, and I have a new three-month-plus contract-to-hire Senior Data Scientist, Agentic AI opportunity available with a major firm with offices in Midtown Manhattan on the West Side and in Holmdel, New Jersey. Please review my specification below; I am available at any time to speak with you, so please feel free to call me. The work-week schedule will be hybrid: three days a week in either of the firm's offices and two days remote. The onsite work location will be determined by the candidate.
The ideal candidate should possess a green card or U.S. citizenship. No visa entanglements and no H1-B holding-company submittals.
The firm's Data & AI team spearheads a culture of intelligence and automation across the enterprise, creating business value from advanced data and AI solutions. The team includes data scientists, engineers, analysts, and product leaders working together to deliver AI-driven products that power growth, improve risk management, and elevate customer experience.
The firm created the Data Science Lab (DSL) in response to emerging technologies, evolving consumer needs, and rapid advances in AI. The DSL expedites the transition to data-driven decision making and fosters innovation by rapidly testing, scaling, and operationalizing state-of-the-art AI.
We are seeking a Senior Data Scientist Engineer, Agentic AI who is an experienced individual contributor with deep expertise in AI/ML and a track record of turning advanced research into practical, impactful enterprise solutions. This role focuses on building, deploying, and scaling agentic AI systems, large language models, and intelligent automation solutions that reshape how the firm operates, serves customers, and drives growth. You'll collaborate directly with senior executives on high-visibility projects to bring next-generation AI to life across the firm's products and services.
Key Responsibilities:
Design and deploy Agentic AI solutions to automate complex business workflows, enhance decision-making, and improve customer and employee experiences.
Operationalize cutting-edge LLMs and generative AI to process and understand unstructured data such as contracts, claims, medical records, and customer interactions.
Build autonomous agents and multi-step reasoning systems that integrate with the firm's core platforms to deliver measurable business impact.
Partner with data engineers and AIOps teams to ensure AI models are production-ready, scalable, and robust, from prototype to enterprise deployment.
Translate research in agentic AI, reinforcement learning, and reasoning into practical solutions that support underwriting, claims automation, customer servicing, and risk assessment.
Collaborate with product owners, engineers, and business leaders to define use cases, design solutions, and measure ROI.
Contribute to the Data Science Lab by establishing repeatable frameworks for developing, testing, and deploying agentic AI solutions.
Mentor junior data scientists and contribute to the standardization of AI/ML practices, tools, and frameworks across the firm.
You are:
Passionate about pushing the frontier of AI while applying it to solve real-world business problems.
Excited by the potential of agentic AI, autonomous systems, and LLM-based solutions to transform industries.
A hands-on builder who thrives on seeing AI solutions move from proof-of-concept to real-world deployment.
Comfortable working in multi-disciplinary teams and engaging with senior business leaders to align AI solutions with enterprise goals.
You have:
PhD with 2+ years of experience, or a Master's degree with 4+ years of experience, in Statistics, Computer Science, Engineering, Applied Mathematics, or a related field
3+ years of hands-on AI modeling/development experience
Strong theoretical foundations in probability & statistics
Strong programming skills in Python, including PyTorch, TensorFlow, and LangGraph
Solid background in machine learning algorithms, optimization, and statistical modeling
Excellent communication skills and the ability to collaborate cross-functionally with Product, Engineering, and other disciplines at both the leadership and hands-on levels
Excellent analytical and problem-solving abilities with superb attention to detail
Proven experience providing technical leadership and mentoring to data scientists, with strong management skills and the ability to monitor and track performance for enterprise success
This position pays $150.00 per hour on a W-2 hourly basis or $175.00 per hour on a Corp-to-Corp basis. The Corp-to-Corp rate is for independent contractors only, not third-party firms. No visa entanglements and no H1-B holding companies.
The interview process will include an initial phone or virtual interview screening.
Please let me know your interest in this position, availability to interview and start for this position along with a copy of your recent resume or please feel free to call me at any time with any questions.
Regards
Bill Stevens
Senior Technical Recruiter
PRI Technology
Denville, New Jersey 07834
**************
******************************
Machine Learning Engineer / Data Scientist / GenAI
New York, NY jobs
NYC NY / Hybrid
12+ Months
Project - Leveraging Llama to extract cybersecurity insights out of unstructured data from their ticketing system.
Must have strong experience with:
Llama
Python
Hadoop
MCP
Machine Learning (ML)
They need a strong developer using Llama and Hadoop (this is where the data sits), with experience in MCP. They have various ways to pull the data out of their tickets but want someone who can come in, make recommendations on the best way to do it, and then get it done. They have tight timelines.
Thanks and Regards!
Lavkesh Dwivedi
************************
Amtex System Inc.
28 Liberty Street, 6th Floor | New York, NY - 10005
************
********************
Data Engineer
New York, NY jobs
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools
Experience in designing, managing, and optimizing Data Warehousing solutions
Lead Data Engineer
New York, NY jobs
Job title: Lead Software Engineer
Duration: Full-time/Contract-to-Hire
Role description:
The successful candidate will be a key member of the HR Technology team, responsible for developing and maintaining global HR applications with a primary focus on HR Analytics ecosystem. This role combines technical expertise with HR domain knowledge to deliver robust data solutions that enable advanced analytics and data science initiatives.
Key Responsibilities:
Manage and support HR business applications, including problem resolution and issue ownership
Design and develop ETL/ELT layer for HR data integration and ensure data quality and consistency
Provide architecture solutions for Data Modeling, Data Warehousing, and Data Governance
Develop and maintain data ingestion processes using Informatica, Python, and related technologies
Support data analytics and data science initiatives with optimized data structures and AI/ML tools
Manage vendor products and their integrations with internal/external applications
Gather requirements and translate functional needs into technical specifications
Perform QA testing and impact analysis across the BI ecosystem
Maintain system documentation and knowledge repositories
Provide technical guidance and manage stakeholder communications
Required Skills & Experience:
Bachelor's degree in Computer Science or Engineering with 4+ years of delivery and maintenance work experience in the Data and Analytics space.
Strong hands-on experience with data management, data warehouse/data lake design, data modeling, ETL Tools, advanced SQL and Python programming.
Exposure to AI & ML technologies and experience tuning models and building LLM integrations.
Experience conducting Exploratory Data Analysis (EDA) to identify trends and patterns and report key metrics.
Extensive database development experience in MS SQL Server/ Oracle and SQL scripting.
Demonstrable working knowledge of CI/CD pipeline tools, primarily GitLab and Jenkins
Proficiency in using collaboration tools like Confluence, SharePoint, JIRA
Analytical skills to model business functions, processes and dataflow within or between systems.
Strong problem-solving skills to debug complex, time-critical production incidents.
Good interpersonal skills to engage with senior stakeholders in functional business units and IT teams.
Experience with Cloud Data Lake technologies such as Snowflake and knowledge of HR data model would be a plus.
Data Engineer (Web Scraping technologies)
New York, NY jobs
Title: Data Engineer (Web Scraping technologies)
Duration: FTE/Perm
Salary: 125-190k plus bonus
Responsibilities:
Utilize AI models, code, libraries, or applications to enable a scalable web-scraping capability
Manage web-scraping requests end to end, including intake, assessment, site access, scraping tooling, storage of scraped data, validation, and entitlement to users
Field questions from users about the scrapes and websites
Coordinate with Compliance on approvals and Terms of Use (TOU) reviews
Build data pipelines on the AWS platform utilizing existing tools like Cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
Normalize and standardize vendor data and firm data for firm-wide consumption
Implement data quality checks to ensure the reliability and accuracy of scraped data
Coordinate with internal teams on delivery, access, requests, and support
Promote data engineering best practices
Required Skills and Qualifications:
Bachelor's degree in computer science, Engineering, Mathematics or related field
2-5 years of experience in a similar role
Prior buy side experience is strongly preferred (Multi-Strat/Hedge Funds)
Capital markets experience is necessary with good working knowledge of reference data across asset classes and experience with trading systems
AWS cloud experience with common services (S3, Lambda, Cron, EventBridge, etc.)
Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright etc.)
Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools and analytics tools
Familiarity with time series data and common market data sources (Bloomberg, Refinitiv etc.)
Familiarity with modern Dev Ops practices and infrastructure-as-code tools (e.g. Terraform, CloudFormation)
Strong communication skills to work with stakeholders across technology, investment, and operations teams.
Cloud Data Engineer
New York, NY jobs
Title: Enterprise Data Management - Data Cloud, Senior Developer I
Duration: FTE/Permanent
Salary: 130-165k
The Data Engineering team oversees the organization's central data infrastructure, which powers enterprise-wide data products and advanced analytics capabilities in the investment management sector. We are seeking a senior cloud data engineer to spearhead the architecture, development, and rollout of scalable, reusable data pipelines and products, emphasizing the creation of semantic data layers to support business users and AI-enhanced analytics. The ideal candidate will work hand-in-hand with business and technical groups to convert intricate data needs into efficient, cloud-native solutions using cutting-edge data engineering techniques and automation tools.
Responsibilities:
Collaborate with business and technical stakeholders to collect requirements, pinpoint data challenges, and develop reliable data pipeline and product architectures.
Design, build, and manage scalable data pipelines and semantic layers using platforms like Snowflake, dbt, and similar cloud tools, prioritizing modularity for broad analytics and AI applications.
Create semantic layers that facilitate self-service analytics, sophisticated reporting, and integration with AI-based data analysis tools.
Build and refine ETL/ELT processes with contemporary data technologies (e.g., dbt, Python, Snowflake) to achieve top-tier reliability, scalability, and efficiency.
Incorporate and automate AI analytics features atop semantic layers and data products to enable novel insights and process automation.
Refine data models (including relational, dimensional, and semantic types) to bolster complex analytics and AI applications.
Advance the data platform's architecture, incorporating data mesh concepts and automated centralized data access.
Champion data engineering standards, best practices, and governance across the enterprise.
Establish CI/CD workflows and protocols for data assets to enable seamless deployment, monitoring, and versioning.
Partner across Data Governance, Platform Engineering, and AI groups to produce transformative data solutions.
Qualifications:
Bachelor's or Master's in Computer Science, Information Systems, Engineering, or equivalent.
10+ years in data engineering, cloud platform development, or analytics engineering.
Extensive hands-on work designing and tuning data pipelines, semantic layers, and cloud-native data solutions, ideally with tools like Snowflake, dbt, or comparable technologies.
Expert-level SQL and Python skills, plus deep familiarity with data tools such as Spark, Airflow, and cloud services (e.g., Snowflake, major hyperscalers).
Preferred: Experience containerizing data workloads with Docker and Kubernetes.
Track record architecting semantic layers, ETL/ELT flows, and cloud integrations for AI/analytics scenarios.
Knowledge of semantic modeling, data structures (relational/dimensional/semantic), and enabling AI via data products.
Bonus: Background in data mesh designs and automated data access systems.
Skilled in dev tools like Azure DevOps equivalents, Git-based version control, and orchestration platforms like Airflow.
Strong organizational skills, precision, and adaptability in fast-paced settings with tight deadlines.
Proven self-starter who thrives independently and collaboratively, with a commitment to ongoing tech upskilling.
Bonus: Exposure to BI tools (e.g., Tableau, Power BI), though not central to the role.
Familiarity with investment operations systems (e.g., order management or portfolio accounting platforms).
Data Engineer
New York, NY jobs
Our client is seeking a Data Engineer with hands-on experience in Web Scraping technologies to help build and scale a new scraping capability within their Data Engineering team. This role will work directly with Technology, Operations, and Compliance to source, structure, and deliver alternative data from websites, APIs, files, and internal systems. This is a unique opportunity to shape a new service offering and grow into a senior engineering role as the platform evolves.
Responsibilities
Develop scalable Web Scraping solutions using AI-assisted tools, Python frameworks, and modern scraping libraries.
Manage the full lifecycle of scraping requests, including intake, feasibility assessment, site access evaluation, extraction approach, data storage, validation, entitlement, and ongoing monitoring.
Coordinate with Compliance to review Terms of Use, secure approvals, and ensure all scrapes adhere to regulatory and internal policy guidelines.
Build and support AWS-based data pipelines using tools such as Cron, Glue, EventBridge, Lambda, Python ETL, and Redshift.
Normalize and standardize raw, vendor, and internal datasets for consistent consumption across the firm.
Implement data quality checks and monitoring to ensure the reliability, historical continuity, and operational stability of scraped datasets.
Provide operational support, troubleshoot issues, respond to inquiries about scrape behavior or data anomalies, and maintain strong communication with users.
Promote data engineering best practices, including automation, documentation, repeatable workflows, and scalable design patterns.
Required Qualifications
Bachelor's degree in Computer Science, Engineering, Mathematics, or related field.
2-5 years of experience in a similar Data Engineering or Web Scraping role.
Capital markets knowledge with familiarity across asset classes and experience supporting trading systems.
Strong hands-on experience with AWS services (S3, Lambda, EventBridge, Cron, Glue, Redshift).
Proficiency with modern Web Scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright).
Strong Python programming skills and experience with SQL and NoSQL databases.
Familiarity with market data and time series datasets (Bloomberg, Refinitiv) is a plus.
Experience with DevOps/IaC tooling such as Terraform or CloudFormation is desirable.
Lead Data Engineer with Banking
New York, NY jobs
We are
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications and more. Over the last 20+ years, our company has been honored with multiple employer awards, recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 14,500+, and has 58 offices in 21 countries within key global markets.
Our challenge
We are seeking an experienced Lead Data Engineer to spearhead our data infrastructure initiatives. The ideal candidate will have a strong background in building scalable data pipelines, with hands-on expertise in Kafka, Snowflake, and Python. As a key technical leader, you will design and maintain robust streaming and batch data architectures, optimize data loads in Snowflake, and drive automation and best practices across our data platform.
Additional Information*
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York, NY is $135k - $140k/year & benefits (see below).
The Role
Responsibilities:
Design, develop, and maintain reliable, scalable data pipelines leveraging Kafka, Snowflake, and Python.
Lead the implementation of distributed data processing and real-time streaming solutions.
Manage Snowflake data warehouse environments, including data loading, tuning, and optimization for performance and cost-efficiency.
Develop and automate data workflows and transformations using Python scripting.
Collaborate with data scientists, analysts, and stakeholders to translate business requirements into technical solutions.
Monitor, troubleshoot, and optimize data pipelines and platform performance.
Ensure data quality, governance, and security standards are upheld.
Guide and mentor junior team members and foster best practices in data engineering.
Requirements:
Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
Strong expertise in distributed data processing frameworks and streaming architectures.
Hands-on experience with Snowflake data warehouse platform, including data ingestion, performance tuning, and management.
Proficiency in Python for data manipulation, automation, and scripting.
Familiarity with Kafka ecosystem tools such as Confluent, Kafka Connect, and Kafka Streams.
Solid understanding of SQL, data modeling, and ETL/ELT processes.
Knowledge of cloud platforms (AWS, Azure, GCP) is advantageous.
Strong troubleshooting skills and ability to optimize data workflows.
Excellent communication and collaboration skills.
Preferred, but not required:
Bachelor's or Master's degree in Computer Science, Information Systems, or related field.
Experience with containerization (Docker, Kubernetes) is a plus.
Knowledge of data security best practices and GDPR compliance.
Certifications related to cloud platforms or data engineering preferred.
We offer:
A highly competitive compensation and benefits package.
A multinational organization with 58 offices in 21 countries and the possibility to work abroad.
10 days of paid annual leave (plus sick leave and national holidays).
Maternity & paternity leave plans.
A comprehensive insurance plan including medical, dental, vision, life insurance, and long-/short-term disability (plans vary by region).
Retirement savings plans.
A higher education certification policy.
Commuter benefits (varies by region).
Extensive training opportunities, focused on skills, substantive knowledge, and personal development.
On-demand Udemy for Business for all Synechron employees with free access to more than 5000 curated courses.
Coaching opportunities with experienced colleagues from our Financial Innovation Labs (FinLabs) and Center of Excellences (CoE) groups.
Cutting edge projects at the world's leading tier-one banks, financial institutions and insurance firms.
A flat and approachable organization.
A truly diverse, fun-loving, and global work culture.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference' is committed to fostering an inclusive culture - promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Sr. Azure Data Engineer
New York, NY jobs
Our challenge
We are looking for a candidate who will be responsible for designing, implementing, and managing data solutions on the Azure platform in the Financial/Banking domain.
Additional Information*
The base salary for this position will vary based on geography and other factors. In accordance with law, the base salary for this role if filled within New York City, NY is $130k - $140k/year & benefits (see below).
The Role
Responsibilities:
Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
Proactively support team members and peers in delivering their tasks to ensure end-to-end delivery.
Evaluate technical performance challenges and recommend tuning solutions.
Design, develop, and maintain the Reference Data System utilizing modern data technologies, including Kafka, Snowflake, and Python.
Requirements:
Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
Strong expertise in distributed data processing and streaming architectures.
Experience with Snowflake data warehouse platform: data loading, performance tuning, and management.
Proficiency in Python scripting and programming for data manipulation and automation.
Familiarity with Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
Knowledge of SQL, data modeling, and ETL/ELT processes.
Understanding of cloud platforms (AWS, Azure, GCP) is a plus.
Domain knowledge in any of the below areas:
Trade Processing, Settlement, Reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
Funding Support, Planning & Analysis, Regulatory reporting & Compliance.
Knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.
Senior Data Scientist/Research Economist
New York, NY jobs
Applicants in San Francisco: Qualified applicants with arrest or conviction records will be considered for employment in accordance with the San Francisco Fair Chance Ordinance for Employers and the California Fair Chance Act. Note: By applying to this position you will have an opportunity to share your preferred working location from the following: New York, NY, USA; San Francisco, CA, USA.
Minimum qualifications:
Master's degree in Economics, a related field (e.g., Statistics, Data Science, Public Policy, Business, Finance), or equivalent practical experience.
4 years of experience using analytics to solve product or business problems, economic research, coding (e.g., Python, R, SQL), querying databases or statistical analysis.
Preferred qualifications:
PhD in Economics or a related field with a focus on labor economics, policy evaluation, industrial organization, or applied econometrics.
6 years of experience using analytics to solve product or business problems, economic research, coding (e.g., Python, R, SQL), querying databases or statistical analysis.
Experience conducting research or other innovative analyses, involving novel methodologies or data sources, working with text data or datasets.
Familiarity with AI tools and technologies, with an understanding of their broader economic impact.
Ability to communicate complex technical analyses to non-technical stakeholders and executive management.
About the job
Google's AI and Economy program is a high-priority, cross-functional initiative focused on producing research, engaging top academics and policymakers, and building new data products to understand AI's economic impact. We are expanding our ambition to establish Google as a leading voice in the public AI and Economy conversation.
In this role, you will join the Economics team to design and deliver a marquee project on measuring and communicating economically meaningful AI usage in Google products and its implications for the economy, and to support our research efforts.
The US base salary range for this full-time position is $166,000-$244,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities
Design and conduct research on the economic impacts of AI technologies by pioneering novel datasets and empirical methodologies.
Develop new conceptual frameworks and taxonomies to systematize ways in which users and businesses engage with AI products, drawing as needed on existing economic research and input from academic advisors.
Communicate the research findings externally through high-impact research publications, media articles, and stakeholder presentations.
Position Google at the forefront of economic AI research by building strategic partnerships with external academic partners, global policy institutions and think tanks, and leverage these partnerships to improve our research and amplify its impact on evidence-based policymaking.
Maintain effective working relationships with partners across Google, including GDM, Research, Public Policy, Search, Google Trends, Behavioral Economics, to leverage inputs from other teams and communicate research findings to support product strategy decisions.
Senior Product Data Scientist, Geo Developer and Sustainability
New York, NY jobs
Google | Mountain View, CA, USA; New York, NY, USA
**Mid** Experience driving progress, solving problems, and mentoring more junior team members; deeper expertise and applied knowledge within relevant area.
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: **Mountain View, CA, USA; New York, NY, USA**.
**Minimum qualifications:**
+ Bachelor's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
+ 8 years of work experience using analytics to solve product or business problems, performing statistical analysis, and coding (e.g., Python, R, SQL), or 5 years of work experience with a Master's degree.
**Preferred qualifications:**
+ Master's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
**About the job**
Help serve Google's worldwide user base of more than a billion people. Data Scientists provide quantitative support, market understanding and a strategic perspective to our partners throughout the organization. As a data-loving member of the team, you serve as an analytics expert for your partners, using numbers to help them make better decisions. You will weave stories with meaningful insight from data. You'll make critical recommendations for your fellow Googlers in Engineering and Product Management. You relish tallying up the numbers one minute and communicating your findings to a team leader the next.
The Geo Developer and Sustainability team's mission is to apply data science to enable Geo to drive developer growth and planetary sustainability. We enable data-driven decision making across the Geo Makers organization (made up of the Geo Developer and Geo Sustainability areas) to influence strategy, drive impact, and unlock sustainable product growth. We do this by delivering insights that help identify and prioritize strategic bets that unlock sustainable user growth and business impact; acting as thought partners for organizational leadership to support delivery of in-year product commitments and initiatives across strategic areas; and steering the organization toward a data-driven operating cadence by building self-serve data solutions in close partnership with our data engineering platform team.
The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
**Responsibilities**
+ Design, develop and enhance metrics to provide a comprehensive view of developer behavior across developer engagement surfaces.
+ Own and operate high-visibility organizational KR metrics: set data-driven goals, develop forecasting models to predict growth, assess the impact of high-priority growth initiatives, and develop a long-term plan for enhancing Geo Developer's attribution methodologies.
+ Collaborate on investigative projects and uncover insights that drive product enhancements and improve user experience (e.g., long-tail developer pricing strategy to maximize engagement, maps and immersive product deep dives).
+ Utilize technology to address new problem areas with segmentation analysis, recommender systems and GenAI applications.
+ Build self-serve tooling and dashboarding to enable data-driven decision making across the organization at scale.
Information collected and processed as part of your Google Careers profile, and any job applications you choose to submit, is subject to Google's Applicant and Candidate Privacy Policy (./privacy-policy).
Google is proud to be an equal opportunity and affirmative action employer. We are committed to building a workforce that is representative of the users we serve, creating a culture of belonging, and providing an equal employment opportunity regardless of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), expecting or parents-to-be, criminal histories consistent with legal requirements, or any other basis protected by law. See also Google's EEO Policy, Know your rights: workplace discrimination is illegal, Belonging at Google, and How we hire.
If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Google is a global company and, in order to facilitate efficient collaboration and communication globally, English proficiency is a requirement for all roles unless stated otherwise in the job posting.
To all recruitment agencies: Google does not accept agency resumes. Please do not forward resumes to our jobs alias, Google employees, or any other organization location. Google is not responsible for any fees related to unsolicited resumes.
Senior Data Scientist, Product, Ads Privacy and Safety
New York, NY jobs
Google | Kirkland, WA, USA; Mountain View, CA, USA; New York, NY, USA; Pittsburgh, PA, USA
**Mid** Experience driving progress, solving problems, and mentoring more junior team members; deeper expertise and applied knowledge within relevant area.
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: **Kirkland, WA, USA; Mountain View, CA, USA; New York, NY, USA; Pittsburgh, PA, USA**.
**Minimum qualifications:**
+ Bachelor's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
+ 8 years of experience using analytics to solve product or business problems, performing statistical analysis, and coding (e.g., Python, R, SQL) or 5 years of experience with a Master's degree.
**Preferred qualifications:**
+ Master's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
+ Knowledge of financial forecasting, scenario analysis and risk assessment for Ads.
+ Familiarity with global privacy regulations (e.g., General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), Digital Markets Act (DMA)) and their implications relevant to technology companies.
**About the job**
The Ads Privacy and Safety team (APaS) is dedicated to fostering trust and transparency within the Google Ads ecosystem. This involves ensuring safety and respect for users, advertisers, and publishers by combating invalid traffic, promoting privacy-respecting business generation practices that empower user control, and advancing content understanding through human and machine intelligence.
The APaS Data Science team plays a crucial role in safeguarding the integrity of Google's advertising platform. By focusing on data-driven objectivity, accountability, and user-centricity, this team develops unbiased frameworks to measure business health and deliver impact assessments across key areas like risk, business, and user trust. They proactively counter threats by enabling precise measurement and ensuring the focus remains on the right problems. Through close partnerships across APaS, the team provides continuous measurement and influences strategic decisions with objective insights.
As a Senior Data Scientist, you will join our Ads privacy and regulations team. In this crucial role, you will drive data-driven decision-making to ensure regulatory compliance and unlock growth opportunities within Google Ads, safeguarding billions in business while enhancing user trust. You will be a key player in navigating the complex landscape of privacy laws and regulations, developing quantitative models and frameworks that enable Google Ads to adapt and grow. You will be responsible for analyzing the impact of evolving regulations, quantifying risks and opportunities, and generating actionable insights that inform product development, policy adjustments, and using user preference and consented signals effectively for Ads targeting.
The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.
**Responsibilities**
+ Partner with cross-functional teams and deliver data-driven insights to stakeholders across Ads, focusing on advertiser, publisher, and user trust and experience.
+ Develop and implement quantitative frameworks to assess the impact of Ads safety, traffic quality, user privacy and regulatory compliance on Ads business, user experience, and product capabilities.
+ Identify areas for optimization in response to evolving trends in the Ads industry and develop models to improve product features against business impact and new threats.
+ Build and automate reports, and iteratively build and prototype dashboards that provide insights at scale, solving for analytical needs.
+ Deliver effective presentations of findings and recommendations to multiple levels of leadership, creating visual displays of quantitative information.
Senior Data Scientist, GeminiApp, Ecosystems
New York, NY jobs
About Us
Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence.
We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
The Role
+ Our team, GeminiApp, is on a mission to build a universal AI assistant that will empower billions of people. We are creating a personal, proactive, and powerful life assistant that will be used multiple times a day to increase productivity and creativity by 10 to 100-fold. Our work is shaping how humanity interacts with AI at scale.
+ As a Data Scientist on the GeminiApp team, you are a key partner and co-creator in our product strategy. You will be instrumental in building a uniquely proactive and powerful assistant by ensuring our strategic decisions are grounded in data. This is a high-impact role for a data scientist who is excited about working in a fast-paced, innovative environment and who is passionate about building user-centered experiences that will redefine our relationship with technology.
+ As part of the Ecosystem Data Science team, you will use data to produce insights on emerging trends across all of GeminiApp and our competitors. Your work will be highly visible and highly impactful: this team's output regularly influences decision-making at the VP+ levels.
Key responsibilities:
+ Translate ambiguous questions into well-defined problems
+ Analyze large complex datasets to produce concise, actionable insights
+ Communicate findings and recommendations to executive stakeholders, including visualizing data in a clear, compelling way
+ Develop, implement, and track top-level product and business metrics
+ Dive into metric developments and changes, and identify key drivers and root causes
+ Automate currently manual metric reporting flows and outputs
+ Build and deploy statistical/ML models to understand our users and product capabilities
+ Partner with product, engineering, and UX to develop data-driven product insights and strategies
+ Champion data-driven culture by feeding user engagement insights back into models
About You
In order to set you up for success as a Data Scientist at Google DeepMind, we look for the following skills and experience:
+ Bachelor's degree in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.
+ 5 years of experience with analysis applications (e.g., extracting insights, performing statistical analysis, or solving business problems) and coding (e.g., Python, R, SQL), or 2 years of experience with a Master's degree.
+ 2 years of work experience identifying opportunities for business/product improvement and then defining/measuring the success of those initiatives.
In addition, the following would be an advantage:
+ Proven experience in identifying data-driven opportunities for business/product improvement and defining/measuring the success of those initiatives
+ Proven experience in setting up, maintaining, and reporting on top-line product performance and business metrics
+ Strong communication, writing, and presentation skills
+ Past experience on a performance or growth data science or similar team
+ Experience with experimental design and analysis
+ Experience working with large and messy datasets to solve ambiguous business problems
+ Ability to self-start and self-direct work in an unstructured, fast-paced, challenging environment
+ A bias for action, creative problem-solving, and the ability to work effectively across functions and PAs
Why You'll Love Working Here
Impact: You'll have a direct and meaningful impact on a product designed to empower billions of people and be one of the greatest forces for good in the world.
Growth: We're a fast-growing team within Google, and you'll have the opportunity to evolve quickly to meet changing user needs.
Team & Culture: You'll work with a talented and passionate team of people who are excited about what they do and have fun doing it.
The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits.
Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Application deadline: November 14, 2025
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact.
We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law.
If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Data Scientist, Product
New York, NY jobs
Job Description
Mirage is the leading AI short-form video company. We're building full-stack foundation models and products that redefine video creation, production and editing. Over 20 million creators and businesses use Mirage's products to reach their full creative and commercial potential.
We are a rapidly growing team of ambitious, experienced, and devoted engineers, researchers, designers, marketers, and operators based in NYC. As an early member of our team, you'll have an opportunity to have an outsized impact on our products and our company's culture.
Our Products
Captions
Mirage Studio
Our Technology
AI Research @ Mirage
Mirage Model Announcement
Seeing Voices (white-paper)
Press Coverage
TechCrunch
Lenny's Podcast
Forbes AI 50
Fast Company
Our Investors
We're very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.
**Please note that all of our roles require you to be in person at our NYC HQ (located in Union Square). We do not work with third-party recruiting agencies; please do not contact us.**
About the Role:
As a member of our Data Science staff, you'll own end-to-end measurement by defining north-star metrics, architecting rigorous experiments, and converting insights into roadmap-shaping decisions. You'll partner with Product, Design, and Engineering to elevate instrumentation and data quality and deliver quantified recommendations that improve activation, engagement, retention, and revenue.
Key Responsibilities:
Lead analysis and exploration to uncover trends in product and user behavior, and prioritize the highest-leverage opportunities
Design and evaluate rigorous A/B tests: set success criteria, power, and guardrails, and deliver clear post-experiment readouts
Collaborate with cross-functional leaders to define core metrics and establish data-driven goals for product experimentation
Develop and maintain analytical models and reproducible pipelines that support full-funnel analytics
Apply statistical techniques to identify patterns and opportunities in large datasets, and quantify expected impact and risk
Create executive-ready visualizations and dashboards that drive alignment and action among stakeholders
Embed with designers and engineers to guide instrumentation and ensure data-informed decisions throughout the product lifecycle
Provide recommendations to improve user experience, conversion, and overall product performance, and track realized impact
Stay current on industry trends and emerging technologies in analytics, and introduce pragmatic improvements to tools and methods
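The A/B testing responsibility above turns on setting power correctly before launch. The standard two-proportion sample-size calculation can be sketched in a few lines (the function name and numbers are illustrative, not Mirage's internal tooling):

```python
from math import ceil
from statistics import NormalDist

def ab_sample_size(p_base, mde, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-sided two-proportion
    z-test. p_base is the baseline conversion rate; mde is the absolute
    minimum detectable effect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_alt = p_base + mde
    variance = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return ceil((z_alpha + z_power) ** 2 * variance / mde ** 2)

# Detecting a 1-point lift on a 10% baseline needs roughly 15k users per arm
print(ab_sample_size(0.10, 0.01))
```

The same calculation run in reverse gives the minimum detectable effect for a fixed traffic budget, which is often the more realistic planning question for smaller products.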
Requirements:
5+ years as a data scientist or in a similar role with a focus on product analytics
High proficiency in SQL and either Python or R for data manipulation, analysis, and modeling
Expertise in statistical analysis, hypothesis testing, and experimental design for A/B testing and product experimentation
In-depth knowledge of data visualization tools and techniques to effectively communicate insights and storytelling
Strong analytical thinking and problem-solving skills, with the ability to translate complex data into actionable recommendations
Strong written and verbal communication skills and the ability to partner with cross-functional teams to build a great business
Bonus Points:
Master's or PhD in a quantitative field (Statistics, Mathematics, Computer Science, or related)
Full-funnel analytics experience, including data collection, cleaning, transformation, analysis, and performance measurement
Familiarity with modern data warehousing and large-scale data processing; comfortable working with very large datasets and distributed computing frameworks
Benefits:
Comprehensive medical, dental, and vision plans
401K with employer match
Commuter Benefits
Catered lunch multiple days per week
Dinner stipend every night if you're working late and want a bite!
Grubhub subscription
Health & Wellness Perks (Talkspace, Kindbody, One Medical subscription, HealthAdvocate, Teladoc)
Multiple team offsites per year with team events every month
Generous PTO policy
Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Please note benefits apply to full time employees only.
Compensation Range: $160K - $220K
Senior Applied Scientist - Observability Data Platform
New York, NY jobs
The Observability Data Platform (ODP) powers the core of Datadog's telemetry systems, handling exabytes of multimodal observability data. As AI agents become first-class consumers of telemetry, ODP is evolving to meet their demands - scaling with explosive data growth, exposing new query mechanisms, rethinking how telemetry is stored, transformed, and served, and enforcing guardrails that ensure security and reliability.
Our team's new focus is to build an intelligent control plane for production systems. This involves moving beyond passive monitoring to create a platform where AI agents can safely and effectively take action in live environments. To achieve this, we are integrating techniques from symbolic reasoning, formal methods, and generative AI.
We are looking for an experienced Senior Applied Scientist with a background that spans systems engineering, AI, and formal reasoning. You have expertise in areas like causal modeling, generative simulation, runtime verification, or reinforcement learning, and are motivated to apply these skills to build reliable systems.
You will join the team behind Datadog's most ambitious projects: evolving observability infrastructure for stochastic, self-improving systems.
At Datadog, we place value in our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.
What You'll Do:
Design and prototype intelligent systems for AI-native observability, including cost-aware agent orchestration, adaptive query execution, and self-optimizing system components.
Apply reinforcement learning, search, or hybrid approaches to infrastructure-level decision-making, such as autoscaling, scheduling, or load shaping.
Collaborate with AI researchers and platform engineers to design experimentation loops and verifiers that guide LLM outputs using runtime metrics and formal models.
Explore emerging paradigms like AI compilers, “programming after code,” and runtime-aware prompt engineering to inform Datadog's infrastructure and product design.
Help define the direction of BitsEvolve - Datadog's optimization agent that uses LLMs and evolutionary search to discover code improvements, optimize GPU kernels, and tune configurations to improve performance.
Partner with product teams and platform stakeholders to ensure scientific advances translate into measurable improvements in cost, performance, and observability depth.
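The idea of applying planning or learning techniques to autoscaling can be made concrete with a toy example. The sketch below runs value iteration (a planning cousin of reinforcement learning) over a five-state autoscaling MDP; the states, load, and reward weights are all invented for illustration and have nothing to do with Datadog's actual systems:

```python
# Toy autoscaling MDP: states are replica counts 1..5; actions scale
# down, hold, or scale up. The reward penalizes dropped load heavily
# and running replicas lightly. All constants are illustrative.
STATES = range(1, 6)
ACTIONS = (-1, 0, 1)
GAMMA = 0.9
LOAD = 3  # fixed demand, measured in "replicas worth" of load

def step(replicas, action):
    nxt = min(5, max(1, replicas + action))
    dropped = max(0, LOAD - nxt)          # unmet demand
    reward = -10 * dropped - nxt          # SLO penalty plus machine cost
    return nxt, reward

def value_iteration(iters=200):
    v = {s: 0.0 for s in STATES}
    for _ in range(iters):
        v = {s: max(step(s, a)[1] + GAMMA * v[step(s, a)[0]] for a in ACTIONS)
             for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * v[step(s, a)[0]])
              for s in STATES}
    return v, policy

v, policy = value_iteration()
print(policy)  # scale up below target, hold at target, scale down above
```

A production system would learn the transition and reward structure from runtime metrics rather than hard-coding it, and would sit behind the verifiers and guardrails described above before taking any live action.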
Who You Are:
You have a BS/MS/PhD in a scientific field or equivalent experience
You have 8+ years of experience in systems engineering, database internals, or infrastructure research, including hands-on experience in a production environment
You have a strong software engineering foundation, ideally in C++, Rust, Go, or Python, and are comfortable writing performant, maintainable code
You have deep expertise in at least one of the following areas: query optimization, data center scheduling, compiler design, reinforcement learning, or distributed systems design
You have experience applying search, planning, or learning techniques to solve real-world optimization problems
You are excited by systems that learn, adapt, and improve over time using feedback from runtime metrics and human-defined objectives
You are hypothesis-driven and enjoy designing experiments and evaluation loops, whether through simulations, benchmarks, or live systems
You thrive in ambiguity, enjoy reading papers and building prototypes, and want to help shape the future of infrastructure in the AI era
You enjoy collaborating across research, engineering, and product to bring scientific insights to practical outcomes
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you're passionate about technology and want to grow your skills, we encourage you to apply.
Benefits and Growth:
Get to build tools for software engineers, just like yourself. And use the tools we build to accelerate our development.
Have a lot of influence on product direction and impact on the business.
Work with skilled, knowledgeable, and kind teammates who are happy to teach and learn
Competitive global benefits
Continuous professional development
Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.
Datadog offers a competitive salary and equity package, and may include variable compensation. Actual compensation is based on factors such as the candidate's skills, qualifications, and experience. In addition, Datadog offers a wide range of best in class, comprehensive and inclusive employee benefits for this role including healthcare, dental, parental planning, and mental health benefits, a 401(k) plan and match, paid time off, fitness reimbursements, and a discounted employee stock purchase plan.
The reasonably estimated yearly salary for this role at Datadog is $187,000-$240,000 USD.
About Datadog:
Datadog (NASDAQ: DDOG) is a global SaaS business, delivering a rare combination of growth and profitability. We are on a mission to break down silos and solve complexity in the cloud age by enabling digital transformation, cloud migration, and infrastructure monitoring of our customers' entire technology stacks. Built by engineers, for engineers, Datadog is used by organizations of all sizes across a wide range of industries. Together, we champion professional development, diversity of thought, innovation, and work excellence to empower continuous growth. Join the pack and become part of a collaborative, pragmatic, and thoughtful people-first community where we solve tough problems, take smart risks, and celebrate one another. Learn more about #DatadogLife on Instagram, LinkedIn, and Datadog Learning Center.
Equal Opportunity at Datadog:
Datadog is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and other characteristics protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Here are our Candidate Legal Notices for your reference.
Datadog endeavors to make our Careers Page accessible to all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please complete this form. This form is for accommodation requests only and cannot be used to inquire about the status of applications.
Privacy and AI Guidelines:
Any information you submit to Datadog as part of your application will be processed in accordance with Datadog's Applicant and Candidate Privacy Notice. For information on our AI policy, please visit Interviewing at Datadog AI Guidelines.
Senior Customer Data Scientist
New York, NY jobs
Datadog helps developers, data teams, and business users ship and learn fast, combining observability, security, feature flagging, and experimentation into one unified platform. Our tools power innovation across the modern enterprise, helping organizations deliver faster, more reliable digital experiences while continuously learning from their data. We're growing rapidly and building new capabilities that put Datadog at the forefront of experimentation, inference, and product development tooling.
About the Team
Eppo, now part of Datadog, is building the experimentation platform that powers the world's most innovative companies, including Coinbase, Perplexity, and DraftKings. Our team of experimenters helps organizations run trustworthy, high-velocity experiments that drive real business outcomes. We bring cutting-edge causal inference methods to bear across a wide range of experiment types and literacy levels, from simple A/B tests to advanced techniques like holdouts, bandits, and synthetic controls.
As part of the Eppo Solutions team within Datadog, you'll work closely with customers to help them succeed with experimentation, guiding them through design, interpretation, and operational best practices, and ensuring that experimentation becomes a core capability within their business. You'll be at the forefront of shaping how the world's leading companies run experiments at scale. At Datadog, you'll join a collaborative, growth-oriented environment with competitive salary, equity, and benefits, where your expertise in data science and experimentation directly contributes to the success of our customers and our business.
At Datadog, we place value in our office culture - the relationships and collaboration it builds, and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.
What You'll Do
Serve as a technical and analytical partner to customers evaluating Datadog's experimentation capabilities during pre-sales engagements.
Support proofs of concept by helping customers configure metrics, validate data pipelines, and design statistically sound experiments.
Educate customers on best practices in experiment design, metric definition, and interpretation, spanning product A/B tests, marketing lifecycle tests, pricing and packaging, and AI/ML experimentation.
Provide post-sales enablement and training, ensuring customers adopt strong analytical and experimental habits within Eppo and Datadog.
Collaborate with Product and Engineering to represent customer feedback and influence product direction.
Partner with Sales and Solutions leadership to identify opportunities, overcome objections, and build champions within customer organizations.
Contribute to internal documentation, playbooks, and presentations that raise the bar for data-driven experimentation across the company.
Who You Are
You have 4+ years of professional data science experience with a strong foundation in statistics, inference, and experimental design.
You have run many A/B experiments across multiple companies or products, and are comfortable applying advanced methods such as holdouts, bandits, synthetic controls, or geolift tests.
You're fluent in SQL and experienced enough with data engineering concepts to diagnose issues in customer data warehouses and pipelines.
You thrive in customer-facing settings: able to communicate complex analytical ideas clearly, handle objections thoughtfully, and build trust with both technical and non-technical audiences.
You are an active listener, able to sense what's said and what's not said in conversations, and you adjust your approach accordingly.
You enjoy teaching, simplifying, and elevating others' analytical capabilities.
You are motivated by impact: you want your work to influence real business outcomes like customer success, win rates, and revenue growth.
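Much of the statistical toolkit described above reduces to disciplined hypothesis testing. As a minimal illustration, here is a two-sided two-proportion z-test for a conversion-rate A/B experiment, built directly from the standard normal CDF; all counts below are hypothetical, not from any real experiment.

```python
# Minimal sketch: two-sided two-proportion z-test for an A/B experiment.
# Counts are invented for illustration.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, z, two-sided p-value) for conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

lift, z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
```

A production analysis would layer on the practices the role calls for (power analysis, sequential or CUPED-style variance reduction, guardrail metrics), but the core significance check is this small.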
Preferred Qualifications
Experience supporting enterprise customers in a solutions, pre-sales, or consulting capacity.
Familiarity with tools like dbt, Airflow, or similar analytics orchestration systems.
Prior exposure to B2B SaaS or cloud infrastructure analytics environments.
Demonstrated thought leadership in experimentation: talks, posts, or internal education initiatives.
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you're passionate about technology and want to grow your skills, we encourage you to apply.
Benefits & Growth
New hire stock equity (RSUs) and employee stock purchase plan (ESPP)
Continuous professional development, product training, and career pathing
Intra-departmental mentor and buddy program for in-house networking
An inclusive company culture, with the ability to join our Community Guilds
Access to Inclusion Talks, our internal panel discussions
Free, global Spring Health benefits for employees and dependents age 6+
Competitive global benefits
Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.
Datadog offers a competitive salary and equity package, and may include variable compensation. Actual compensation is based on factors such as the candidate's skills, qualifications, and experience. In addition, Datadog offers a wide range of best-in-class, comprehensive, and inclusive employee benefits for this role, including healthcare, dental, parental planning, and mental health benefits, a 401(k) plan and match, paid time off, fitness reimbursements, and a discounted employee stock purchase plan.
The reasonably estimated yearly salary for this role at Datadog is: $187,000 to $240,000 USD
About Datadog:
Datadog (NASDAQ: DDOG) is a global SaaS business, delivering a rare combination of growth and profitability. We are on a mission to break down silos and solve complexity in the cloud age by enabling digital transformation, cloud migration, and infrastructure monitoring of our customers' entire technology stacks. Built by engineers, for engineers, Datadog is used by organizations of all sizes across a wide range of industries. Together, we champion professional development, diversity of thought, innovation, and work excellence to empower continuous growth. Join the pack and become part of a collaborative, pragmatic, and thoughtful people-first community where we solve tough problems, take smart risks, and celebrate one another. Learn more about #DatadogLife on Instagram, LinkedIn, and Datadog Learning Center.
Equal Opportunity at Datadog:
Datadog is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and other characteristics protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Here are our Candidate Legal Notices for your reference.
Sr Data Scientist
New York jobs
Infosys is seeking a Data Scientist / Gen AI Lead Consultant with Generative AI, Agentic AI, Machine Learning (ML), AI, and Python experience. The ideal candidate has prior experience in end-to-end implementation of Gen AI and Agentic AI based solutions, fine-tuning large language models, and building Machine Learning models; this includes identifying the right problem, designing the optimum solution, implementing with best-in-class practices, and deploying the models to production. The role works in alignment with the data strategy at various clients, using multiple technologies and platforms.
Required Qualifications:
Bachelor's degree or foreign equivalent; three years of progressive experience in the specialty may be considered in lieu of every year of education.
At least 7 years of Information Technology experience
At least 4 years of hands-on experience in GenAI / Agentic AI and data science with machine learning
Strong proficiency in Python programming.
Experience deploying Gen AI applications with one of the agent frameworks such as LangGraph, AutoGen, or CrewAI.
Experience deploying the Gen AI stacks/services provided by platforms such as AWS, GCP, Azure, or IBM Watson
Experience in Generative AI, working with multiple large language models, and implementing advanced RAG-based solutions.
Experience processing/ingesting unstructured data from PDFs, HTML, image files, audio-to-text transcription, etc.
Experience with data gathering, data quality, system architecture, coding best practices
Hands-on experience with Vector Databases (such as FAISS, Pinecone, Weaviate, or Azure AI Search).
Experience with Lean / Agile development methodologies
This position may require travel and will involve close coordination with offshore teams
This position is located in Bridgewater, NJ; Sunnyvale, CA; Austin, TX; Raleigh, NC; Richardson, TX; Tempe, AZ; Phoenix, AZ; Charlotte, NC; Houston, TX; Denver, CO; Hartford, CT; New York, NY; Palm Beach, FL; Tampa, FL; or Alpharetta, GA. Candidates must be located in, or willing to relocate to, one of these locations.
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time
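The RAG requirement above centers on embedding lookup against a vector store. Here is a minimal, dependency-free sketch of the retrieval step, using plain cosine similarity in place of a production vector database such as FAISS or Pinecone; the toy embeddings and chunk names are invented for illustration, and a real system would embed text with a model rather than hand-code vectors.

```python
# Sketch of RAG retrieval: rank stored chunks by cosine similarity to the
# query embedding and return the top-k as context for the LLM prompt.
# Vectors and chunk names below are toy values, not real embeddings.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def retrieve(query_vec, store, k=2):
    """store: list of (chunk_text, embedding); returns the top-k chunk texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

store = [
    ("refund policy chunk",   [0.9, 0.1, 0.0]),
    ("shipping times chunk",  [0.1, 0.9, 0.0]),
    ("warranty terms chunk",  [0.7, 0.2, 0.1]),
]
top = retrieve([1.0, 0.0, 0.0], store, k=2)
```

The retrieved chunks would then be concatenated into the prompt; FAISS or Azure AI Search replaces the `sorted` call with an approximate-nearest-neighbor index at scale.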
Preferred Data Scientist Qualifications:
4 years of hands-on experience with more than one programming language: Python, R, Scala, Java, SQL
Hands-on experience with CI/CD pipelines and DevOps tools like Jenkins, GitHub Actions, or Terraform.
Proficiency in NoSQL and SQL databases (PostgreSQL, MongoDB, CosmosDB, DynamoDB).
Deep learning experience with CNNs, RNNs, LSTMs, and the latest research trends
Experience in Python AI/ML frameworks such as TensorFlow, PyTorch, or LangChain.
Strong understanding and experience of LLM fine-tuning, local deployment of open-source models
Proficiency in building RESTful APIs using FastAPI, Flask, or Django.
Experience with model evaluation tools such as DeepEval, FMEval, RAGAS, or Bedrock model evaluation.
Experience with perception (e.g., computer vision), time-series data, and text analysis
Big Data experience strongly preferred: HDFS, Hive, Spark, Scala
Data visualization tools such as Tableau; query languages such as SQL and Hive
Good applied statistics skills, such as distributions, statistical testing, regression, etc.
The job entails sitting and working at a computer for extended periods of time. The candidate should be able to communicate by telephone, email, or face to face. Travel may be required per the job requirements.
The estimated annual compensation range for candidates in the locations below is:
Sunnyvale, CA; Bridgewater, NJ; New York, NY; Denver, CO: $103,500 to $188,888
Along with competitive pay, as a full-time Infosys employee, you are also eligible for the following benefits:
Medical/Dental/Vision/Life Insurance
Long-term/Short-term Disability
Health and Dependent Care Reimbursement Accounts
Insurance (Accident, Critical Illness, Hospital Indemnity, Legal)
401(k) plan and contributions dependent on salary level
Paid holidays plus Paid Time Off
About Us
Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.
Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
Senior Data Scientist - AI Systems for Business Teams
New York, NY jobs
We build ML-powered systems that help Datadog's customer-facing teams increase revenue and make smarter decisions. Partners include, but are not limited to, Sales, GTM Strategy and Ops, Customer Onboarding for trial-to-paid conversion, Customer Success, Marketing, and Product Management. Our work turns models into durable products that integrate with the tools these teams use every day.
At Datadog, we place value in our office culture - the relationships and collaboration it builds and the creativity it brings to the table. We operate as a hybrid workplace to ensure our Datadogs can create a work-life harmony that best fits them.
What You'll Do:
Design, build, and productionize machine learning systems for revenue-focused use cases such as lead and account scoring, customer onboarding conversion patterns, win/loss signal mining, feature adoption clustering, and recommendations.
Own projects end to end: problem framing, data sourcing, feature engineering, experimentation, offline and online evaluation, deployment, monitoring, and iteration.
Define and uphold production-readiness standards: versioned training data, reproducible pipelines, evaluation gates, model and data quality checks, rollback plans, and SLAs.
Instrument and monitor models in production: drift detection, retraining triggers, performance dashboards, alerting, and post-launch reviews.
Integrate model outputs into business workflows and systems such as Salesforce, Marketo, Customer Success tooling, customer onboarding systems, product analytics surfaces, and team portals.
Partner with data engineering and platform teams to use scalable infrastructure for training, serving, scheduling, lineage, and access control.
Contribute to shared libraries, patterns, and documentation that raise the bar for ML delivery across the org.
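One concrete form of the drift detection mentioned above is the Population Stability Index (PSI) between the training-time score distribution and live scores. The sketch below is a minimal illustration only: the bucket edges, toy score lists, and the common 0.2 alert threshold are illustrative conventions, not a prescribed Datadog method.

```python
# Sketch of score-drift monitoring via the Population Stability Index (PSI).
# expected = scores at training time, actual = recent production scores.
from math import log

def psi(expected, actual, edges):
    """PSI over shared bucket edges; higher values mean more drift."""
    def frac(scores, lo, hi):
        n = sum(lo <= s < hi for s in scores)
        return max(n / len(scores), 1e-6)        # floor avoids log(0)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * log(a / e)
    return total

train = [0.1, 0.2, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8]   # toy training scores
live  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]   # toy live scores, shifted up
edges = [0.0, 0.25, 0.5, 0.75, 1.01]
drift = psi(train, live, edges)
# rule of thumb: PSI above ~0.2 often triggers a retraining review
```

In a production setup this check would run on a schedule, feed a dashboard, and gate the retraining trigger rather than a one-off comparison.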
Who You Are:
6+ years of hands-on experience in applied machine learning or data science, including ownership of production ML systems.
Strong Python skills and familiarity with common ML and data tooling; experience with platforms such as Airflow, dbt, Snowflake, Spark, or similar.
Architected and shipped reliable models/services with CI/CD and automated tests; data/feature versioning; canary/shadow releases and safe rollbacks; clear SLOs; monitoring and alerting for drift, latency, and accuracy; retraining pipelines; incident runbooks and on-call practices; and compliance/governance best practices.
Depth across the ML lifecycle: dataset design, disciplined experimentation, offline and online evaluation, A/B testing, observability, and safe rollout practices.
Experience integrating model outputs into business systems and measuring impact with business KPIs.
Comfortable working with both technical and non-technical partners; able to turn ambiguous problems into scoped, testable solutions.
A product mindset focused on reliability, usability, and measurable outcomes.
Bonus: experience writing back to systems like Salesforce or Marketo, or supporting Sales, Customer Success, or customer onboarding conversion workflows.
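Canary and shadow releases, referenced in the qualifications above, typically begin with deterministic traffic splitting so each entity consistently sees one model version. This is a hypothetical sketch under that assumption: the entity-ID format and the 5% fraction are invented, and rollout is widened or rolled back by changing a single knob.

```python
# Hypothetical canary-routing sketch: hash an entity ID into [0, 1] so the
# same account always lands in the same arm, then compare to the fraction.
import hashlib

CANARY_FRACTION = 0.05   # 5% of traffic to the candidate model (illustrative)

def route(entity_id: str, canary_fraction: float = CANARY_FRACTION) -> str:
    """Deterministically assign an entity to 'canary' or 'stable'."""
    digest = hashlib.sha256(entity_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF    # roughly uniform in [0, 1]
    return "canary" if bucket < canary_fraction else "stable"

assignments = {aid: route(aid) for aid in ("acct-001", "acct-002", "acct-003")}
```

Because assignment is a pure function of the ID, rollback needs no state migration: setting the fraction to zero instantly returns all traffic to the stable model.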
Datadog values people from all walks of life. We understand not everyone will meet all the above qualifications on day one. That's okay. If you're passionate about technology and want to grow your skills, we encourage you to apply.
Benefits and Growth:
New hire stock equity (RSUs) and employee stock purchase plan (ESPP)
Continuous professional development, product training, and career pathing
Intradepartmental mentor and buddy program for in-house networking
An inclusive company culture, ability to join our Community Guilds (Datadog employee resource groups)
Access to Inclusion Talks, our internal panel discussions
Free, global mental health benefits for employees and dependents age 6+
Competitive global benefits
Benefits and Growth listed above may vary based on the country of your employment and the nature of your employment with Datadog.