Data Scientist
Data scientist job in Port Washington, NY
The world around us is changing. Retail is in a state of major transition, and consumers have more options than ever. As a leading provider of global information and advisory services, NPD is at the forefront of helping our clients, the world's biggest brands and retailers, understand and profit from these changes.
Since 1966, we have been helping businesses track industry trends and understand their customers in order to get the right products in the right places for the right people at the right prices. We serve more than a dozen industries including consumer electronics, entertainment, fashion, food / foodservice, toys, video games, and more.
We aim to lead manufacturers and retailers in collaborating through the effective use of our information in line reviews and joint business planning.
Job Description
NPD Group is looking for a principal data scientist to work on both the engineering and analytics sides of data science. This position resides in the Global Data Quality Management group. He or she will combine the skills to create new prototypes with the creativity and thoroughness to interrogate the most challenging questions about data quality, which is at the center of NPD's value creation for our clients. This is a leadership position and requires a superior ability to quickly gather information and requirements from stakeholders, formulate solutions, and implement them within the data quality groups. The role will interact frequently with senior leadership teams and will direct and manage the work of junior data scientists. Qualified candidates will have a strong academic background in mathematics, statistics, computer science, operations research, economics, or another highly quantitative discipline; a passion for data science and machine learning; and experience with big data architectures and methods.
Overall Responsibilities:
Drive the creation of new data quality management capabilities that will bring significant operational efficiency.
Conceptualize, analyze, and develop actionable recommendations and best practices in data quality processes.
Work with key stakeholders and understand their needs to develop new or improve existing solutions around data quality.
Manage data analysis to develop fact-based recommendations for innovation projects.
Work with cross-functional teams to develop ideas and execute business plans.
Remain current on new developments in data quality
Qualifications
8+ years' experience in modeling and predictive analytics, including experience working with recommendation engines
Excellent problem-solving skills with the ability to design algorithms spanning data profiling, clustering, anomaly detection, and predictive modeling (a brief illustrative sketch follows this list)
Strong skills in statistical analysis, with advanced data management and statistical programming abilities in SAS, R, and other languages
Familiarity with Agile methodology
Ability to work cross-functionally in a highly matrix driven organization under ambiguous circumstances
Broad understanding of and experience with recommendation engines
Personal qualities desired: creativity, tenacity, curiosity, and passion for deep technical excellence
Advanced degree in a quantitative field (Statistics, Mathematics, Economics, etc.), PhD highly preferred.
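For illustration, the sketch below shows one way the anomaly-detection side of this work might look in practice, using scikit-learn's IsolationForest to surface records worth manual review. The dataset, column names, and contamination rate are invented for the example and do not reflect NPD systems or data.

```python
# Hypothetical sketch: flag anomalous point-of-sale records for data-quality review.
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_suspect_rows(df: pd.DataFrame, features: list[str]) -> pd.DataFrame:
    """Score each record with an isolation forest; -1 marks likely anomalies."""
    model = IsolationForest(contamination=0.4, random_state=42)  # tune to the expected anomaly rate
    scored = df.copy()
    scored["quality_flag"] = model.fit_predict(scored[features])
    return scored[scored["quality_flag"] == -1]

sales = pd.DataFrame({
    "units_sold": [120, 115, 130, 9800, 125],        # one implausible spike
    "avg_price": [19.99, 20.49, 19.79, 19.99, 2.05],  # one implausible price
    "store_count": [48, 47, 50, 49, 48],
})
print(flag_suspect_rows(sales, ["units_sold", "avg_price", "store_count"]))
```

In a real data-quality pipeline, flagged rows would typically feed a review queue rather than be dropped automatically.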
Additional Information
NPD is an Equal Employment Opportunity/Affirmative Action Employer.
Junior Data Scientist
Data scientist job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery (a minimal example follows this list).
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
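As a minimal illustration of the model-execution loop described above, the sketch below trains and evaluates a scikit-learn classifier on synthetic high-dimensional data standing in for molecular measurements. Everything here is simulated; real BrainEx datasets and task labels would replace make_classification.

```python
# Simulated stand-in for a high-dimensional molecular dataset (e.g., expression profiles).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=500, n_features=2000, n_informative=50, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Regularized linear model: a sensible baseline when features far outnumber samples.
clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
clf.fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```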
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
Director, ERM - Actuary or Data Scientist
Data scientist job in Greenwich, CT
Company Details
"Our Company provides a state of predictability which allows brokers and agents to act with confidence." Founded in 1967, W. R. Berkley Corporation has grown from a small investment management firm into one of the largest commercial lines property and casualty insurers in the United States. Along the way, we've been listed on the New York Stock Exchange, become a Fortune 500 company, joined the S&P 500, and seen our gross written premiums exceed $10 billion. Today the Berkley brand comprises more than 60 businesses worldwide and is divided into two segments: Insurance, and Reinsurance and Monoline Excess. Led by our Executive Chairman, founder, and largest shareholder, William R. Berkley, and our President and Chief Executive Officer, W. Robert Berkley, Jr., W.R. Berkley Corporation is well-positioned to respond to opportunities for future growth.
The Company is an equal employment opportunity employer.
Responsibilities
* Please provide a one-page resume when applying.
Enterprise Risk Management (ERM) Team
Our key risk management aim is to maximize Berkley's return on capital over the long term for an acceptable level of risk. This requires regular interaction with senior management both in corporate and our business units. The ERM team comprises ERM actuaries and catastrophe modelers responsible for identification, quantification and reporting on insurance, investment, credit and operational risks. The ERM team is a corporate function at Berkley's headquarters in Greenwich, CT.
The Role
The successful candidate will collaborate with other ERM team members on a variety of projects with a focus on exposure management and catastrophe modeling for casualty (re)insurance. The candidate is expected to demonstrate expertise in data and analytics and be capable of presenting data-driven insights to senior executives.
Key responsibilities include:
Casualty Accumulation and Catastrophe Modeling
• Lead the continuous enhancement of the casualty data ETL process
• Analyze and visualize casualty accumulations by insureds, lines and industries to generate actionable insight for business leaders
• Collaborate with data engineers to resolve complex data challenges and implement scalable solutions
• Support the development of casualty catastrophe scenarios by researching historical events and emerging risks
• Model complex casualty reinsurance protections
Risk Process Automation and Group Reporting
• Lead AI-driven initiatives aimed at automating key risk processes and projects
• Contribute to Group-level ERM reports, including deliverables to senior executives, rating agencies and regulators
Qualifications
• Minimum of 5 years of experience in P&C (re)insurance, with a focus on casualty
• Proficiency in R/Python and Excel
• Strong verbal and written communication skills
• Proven ability to manage multiple projects and meet deadlines in a dynamic environment
Education Requirement
• Minimum of Bachelor's degree required (preferably in STEM)
• ACAS/FCAS is a plus
Sponsorship Details
Sponsorship Offered for this Role
Data Scientist
Data scientist job in Rye, NY
Responsibilities
The Data Scientist will work within the Enterprise Analytics and Data Science team to become an expert at understanding all NYBCe data domains and participate in delivering high-quality, high-velocity data products. This includes developing, training, and monitoring high-fidelity models and analyses to optimize products and processes as well as test the effectiveness of different courses of action.
Candidates must be able to report into one of the following NYBCe locations: New York City, NY; Kansas City, Missouri; St. Paul, Minnesota; Providence, RI; and Newark, DE.
Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
Partner with the data engineering and analytics team to define measurement approaches, quantify and evaluate success, develop KPIs, and guide tracking efforts for ongoing measurement and reporting.
Tackle complex and ambiguous analysis projects: provide clarity and leverage tools to answer complicated business questions.
Use a variety of modeling techniques to increase and optimize donor, customer, and employee experiences, revenue generation, ad targeting, and other business outcomes.
Perform detailed data/process analysis and create supporting visualizations and reports.
Coordinate with different functional teams to implement models and monitor outcomes.
Develop processes and tools to monitor and analyze model performance and data accuracy.
Ensure data quality and data hygiene across enterprise reporting platforms, focusing on consistent, timely, and accurate data. Run regular tests and discrepancy checks to ensure continuity, accuracy, and quality of data (a brief example follows this list).
Develop reporting, visualizations, dashboards, and analytics products for all departments across the enterprise.
Identify opportunities to improve reporting and analytics processes and identify opportunities to automate tasks.
Any related duties as assigned.
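As a small illustration of the discrepancy checks described above, the sketch below compares a metric between two reporting sources and flags rows that diverge beyond a tolerance. The table and column names are hypothetical, not NYBCe systems.

```python
import pandas as pd

def discrepancy_report(warehouse: pd.DataFrame, dashboard: pd.DataFrame,
                       key: str, metric: str, tolerance: float = 0.01) -> pd.DataFrame:
    """Join two sources on a shared key and flag rows whose metric values
    diverge by more than the allowed relative tolerance."""
    merged = warehouse.merge(dashboard, on=key, suffixes=("_wh", "_dash"))
    rel_diff = (merged[f"{metric}_wh"] - merged[f"{metric}_dash"]).abs() / merged[f"{metric}_wh"].abs()
    return merged[rel_diff > tolerance]

wh = pd.DataFrame({"site_id": [1, 2, 3], "donations": [510, 420, 388]})
dash = pd.DataFrame({"site_id": [1, 2, 3], "donations": [510, 431, 388]})
print(discrepancy_report(wh, dash, key="site_id", metric="donations"))  # flags site 2
```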
Qualifications
Education:
Bachelor's Degree in Information Technology, Computer Science, or a related area.
Master's or Ph.D. in Statistics, Mathematics, Computer Science, or another quantitative field.
Experience:
4+ years of industry experience in data science, business analytics, statistics, business intelligence, or comparable data analysis role, including data warehousing and business intelligence tools, techniques, and technology.
4+ years of hands-on experience with Python and/or R for statistical data analysis is required.
Experience with Tableau, R/R-Shiny, or Python libraries (Matplotlib, Seaborn, ggplot…) to create impactful reports, visualizations, and interactive dashboards is required.
4+ years of experience manipulating data sets and building statistical models
Knowledge:
Experience using statistical computer languages (R, Python, SQL, etc.) to extract, manipulate and draw insights from large data sets.
Knowledge in statistical and data mining techniques such as GLM/Regression, Random Forest, Boosting, Trees, text mining, and social network analysis.
Experience analyzing data from 3rd party providers: Google Analytics, Facebook, etc.
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests, proper usage, etc.).
Skills:
Must be analytical, a critical thinker, and a problem solver.
Detail-oriented with strong organization and time management skills.
Cultural competency and the ability to communicate effectively in a culturally sensitive manner with both individuals and groups from diverse backgrounds.
Abilities:
Strong problem-solving skills with an emphasis on product development.
Ability to work independently or as part of a team
Ability to interact with customers one to one or in large groups
Excellent oral, written and presentation skills are mandatory
Ability to build receiving feedback into the development process and to seek consistent, constructive feedback.
Ability to embrace accountability and ownership.
A combination of education, training, and experience will be considered to meet the total required years of experience.
For applicants who will perform this position in New York City or Westchester County, the proposed annual salary is $110,000.00 to $120,000.00 per year. For applicants who will perform this position outside of New York City or Westchester County, salary will reflect local market rates and be commensurate with the applicant's skills, job-related knowledge, and experience.
Position reporting into our New York location: $110,000 - $120,000
Positions reporting into our Rhode Island location: $105,000 - $115,000
Positions reporting into our Minnesota location: $105,000 - $115,000
Position reporting into our Delaware location: $100,000 - $110,000
Positions reporting into our Nebraska location: $100,000 - $110,000
Positions reporting into our Missouri location: $100,000 - $110,000
Unless otherwise specified, all posted opportunities are located in the New York or Greater Tri-State office locations.
Overview
Founded in 1964, New York Blood Center Enterprises (NYBCe) has provided more than 60 years of lifesaving research, innovation, and impact. NYBCe is one of the largest nonprofit blood centers, spanning 17+ states and serving 75 million people. NYBCe operates Blood Bank of Delmarva, Community Blood Center of Kansas City, Connecticut Blood Center, Memorial Blood Centers, Nebraska Community Blood Bank, New Jersey Blood Services, New York Blood Center, and Rhode Island Blood Center, delivering one million blood products to 400+ U.S. hospitals annually. NYBCe additionally delivers cellular therapies, specialty pharmacy, and medical services to 200+ research, academic, and biopharmaceutical organizations. NYBCe's Lindsley F. Kimball Research Institute is a leader in hematology and transfusion medicine research, dedicated to the study, prevention, treatment and cure of bloodborne and blood-related diseases. NYBC serves as a vital community lifeline dedicated to helping patients and advancing global public health. To learn more, visit nybc.org. Connect with us on Facebook, X, Instagram, and LinkedIn.
Data Scientist
Data scientist job in West Haven, CT
The Data Scientist, Northeast Program Evaluation Center (NEPEC) is organizationally part of the VA Central Office (VACO) Office of Mental Health and Suicide Prevention (OMHSP). This position is in direct communication with OMHSP, as well as the Director and Associate Directors of NEPEC. The primary purpose of the Data Scientist is to evaluate VA specialized mental health systems.
Duties include but are not limited to:
* Conceptualize and conduct program evaluation projects involving large healthcare data sets using knowledge about multiple fields of research that bridge diverse disciplines.
* Conduct projects that require a versatile skill set within the realm of data analytics, modeling, optimization, governance and visualization, applying statistical methods and techniques to enhance data analysis.
* Develop and maintain electronic program monitoring tools for wide dissemination of data inside and outside VA, including to VA Central Office, the Office of Informatics and Analytics, professional organizations, and the public.
* Use multiple software packages and operating system environments suitable for working with, manipulating, and analyzing large datasets.
* Create and maintain systems and databases designed to ensure that projects comply with administrative policies, procedures, and the data security guidelines and policies of the VA Office of Information Technology (OIT) Oversight and Compliance.
* Use SQL Server Management Studio, SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) to create and maintain reports that support NEPEC program evaluation projects.
* Provide NEPEC staff with technical training, both orally and in writing, as deemed necessary by the Director.
* Participate in national initiatives involving data syndication and data interoperability, and collaborate with internal and external subject matter and data experts on data innovation and migration projects.
Work Schedule: Monday-Friday, 8:00am-4:30pm
Compressed/Flexible: Authorized
Telework: Ad-Hoc is Authorized
Remote: Not Authorized
Virtual: Not Authorized.
Position Description/PD#: Data Scientist/PD03739-A
Relocation/Recruitment Incentives: Not Authorized
Financial Disclosure Report: Not required
Notifications:
* This position is not a Bargaining Unit position.
* Selectee may be required to work at any VACT Healthcare System campus, as needed.
* May be required to attend conference at other facilities or other VA sites on occasion.
* The incumbent may be required to travel to other VA campuses and CBOCs.
* This position is in the Competitive Service.
Senior Data Scientist/Analytics Specialist
Data scientist job in Greenwich, CT
About the Role You will deliver insights and predictive models that directly impact investment, portfolio, and operational decisions. As the lead scientist in the new data squad, you'll combine analytics and machine learning to generate high-value business outcomes.
Key Responsibilities
* Deliver analytics dashboards and insights to stakeholders.
* Build prototypes of predictive and forecasting models for business use cases.
* Partner with product managers and business leaders to define priorities.
* Translate complex analytics into clear, actionable business recommendations.
* Establish standards for model development, validation, and monitoring.
* Mentor junior data scientists and analysts.
Requirements
* 5-8+ years in data science or advanced analytics roles.
* Strong skills in SQL, visualization, and a general-purpose programming language (Python, R, etc.).
* Experience developing and deploying predictive models into production.
* Excellent communication and storytelling skills for non-technical stakeholders.
* Familiarity with MLOps and model lifecycle management is a plus.
Associate Data Scientist - Spring 2026 Master's level graduates
Data scientist job in Stamford, CT
The Insights and Product Analytics (IPA) organization is responsible for the business performance of all research products and performs analysis related to all aspects of Gartner's Business and Technology Insights (BTI) business unit. This includes client value drivers: research content, client interaction, and insights into conferences and events. IPA also supports the BTI organization by enabling and performing analysis ranging from client retention analytics and associate performance analytics to budget and financial analysis (in partnership with the finance organization) and client demand sensing. We power fact-based decision making by providing data, insights, and analytic tools to continuously improve our business, operationally and strategically. We're committed to attracting the most creative, talented, and motivated students for our Associate Data Scientist and Data Scientist roles.
What you'll do:
● Execute large-scale, high-impact data modeling projects, with responsibility for designing, developing, validating, socializing, operationalizing, and maintaining data-driven analytics that provide business insights to increase operational efficiency and customer value.
● Provide ad hoc modeling and analytical insights to inform strategic and operational initiatives.
● Conduct all phases of the analytics process, including: understanding business issues, proposing technical solutions, data wrangling, data cleaning, data analysis, feature engineering, model selection, model development, model validation, model operationalization, presentation of results and insights, model implementation, and model documentation.
● Convert "top-down" business initiative requirements into actionable data analytics projects as well as conceiving and proposing "bottom-up" analytics innovations.
● Communicate technical solutions and results to business stakeholders.
● Partner with business stakeholders, IT, Project Management and lead the design and delivery of innovative analytics solutions.
● Inject the most applicable technology, including Machine Learning, Artificial Intelligence, Generative AI, Natural Language Processing, and Statistical Modelling.
Job Requirements:
● Education: Master's degree or Bachelor's Degree with at least 2 years of related experience required.
* Degrees in Engineering, Statistics, Computer Science, Mathematics, Applied Mathematics, Data Science, or a related field preferred.
● Previous experience must include data modeling experience in a business environment.
● In-depth knowledge of statistical principles and their application in modeling and data analysis.
● Experience developing and applying descriptive, predictive, prescriptive models.
● In-depth NLP knowledge and application experience.
● Expertise in Python, SQL, and Spark. Basic skills in Power BI, Excel, and PowerPoint.
● Experience with multiple modeling techniques such as: Time Series, Random Forests, Clustering, Neural Networks, Generalized Linear Models, Optimization, DOE, Dimensionality Reduction.
● Experience with churn analysis, profiling, recommendation systems.
● MLOps experience (implementing models/algorithms into production systems).
● Ability to work in a fast-paced environment and deliver against milestones.
● Excellent communication skills in technical and business domains.
● Proven ability to influence key stakeholders and leaders.
● Work authorization: This role requires U.S. work authorization
#LI-DNI
Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world.
Our mission relies on expert analysis and bold ideas to deliver actionable, objective business and technology insights, helping enterprise leaders and their teams succeed with their mission-critical priorities.
Since our founding in 1979, we've grown to 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.
What makes Gartner a great place to work?
Our vast, virtually untapped market potential offers limitless opportunities - opportunities that may not even exist right now - for you to grow professionally and flourish personally. How far you go is driven by your passion and performance.
We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients.
Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.
What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers.
In our hybrid work environment, we provide the flexibility and support for you to thrive - working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring.
Ready to grow your career with Gartner? Join us.
Gartner believes in fair and equitable pay. A reasonable estimate of the base salary range for this role is 88,000 USD - 116,000 USD. Please note that actual salaries may vary within the range, or be above or below the range, based on factors including, but not limited to, education, training, experience, professional achievement, business need, and location. In addition to base salary, employees will participate in either an annual bonus plan based on company and individual performance, or a role-based, uncapped sales incentive plan. Our talent acquisition team will provide the specific opportunity on our bonus or incentive programs to eligible candidates. We also offer market leading benefit programs including generous PTO, a 401k match up to $7,200 per year, the opportunity to purchase company stock at a discount, and more.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.
Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at ***************** or by sending an email to ApplicantAccommodations@gartner.com.
Job Requisition ID:103575
By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: *************************************************
For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
EXCLUSIVE: Chief Actuary - Reserving - North America
Data scientist job in Stamford, CT
EXCLUSIVE! Highly visible Regional Chief Actuary opportunity with a multinational insurance leader, offering the chance to lead as Appointed Actuary for U.S. legal entities, sign SAOs, and serve as a trusted advisor to the Global Chief Actuary. This influential role leads actuarial strategy, reserve governance, valuation, and financial reporting while ensuring regulatory compliance across North America.
With extensive interaction among worldwide leaders and business partners, the Chief Actuary fosters collaboration across diverse regions by leveraging both cultural and technical expertise.
Seeking a relationship-oriented ACAS/FCAS with deep Reserving and Casualty market expertise to guide and inspire a high-performing actuarial team with confidence and integrity.
Base salary up to $315K plus a robust benefits package.
Reinsurance Actuary (Director or Managing Director Level)
Data scientist job in Stamford, CT
Howden Re is the global reinsurance broker and risk, capital & strategic advisor focused on relentless innovation & superior analytics for top client service.
About Role
This is a mid-level position residing within the Actuarial team. We expect this person to work successfully across the Analytics, Actuarial, and Broking functions, providing the full suite of actuarial work in support of reinsurance placements for clients. You will be joining an experienced analytics team that produces quality solutions in a collegial, casual, and results-driven environment.
Responsibilities | Support:
Traditional loss ratio (LR) analysis, experience/exposure rating, stochastic modelling, etc.
Present analyses in clear terms appropriate to the audience
Provide value-added service to clients as needed
Market research and development & assist senior actuaries with industry studies
A high priority will be the development & programming of various tools to aid in streamlining workflow and helping Howden Re fully utilize data
Interpersonal | Communication | Teamwork:
Willingness to be part of Howden Re's “team first” culture
Keen ability to take initiative
Sets effective priorities and handles multiple projects under tight timeframes
Responds constructively to different viewpoints, changing priorities, new conditions
Works well in teams with colleagues of various backgrounds
Shares knowledge, opinions and insights in a constructive manner
Offers to help others without prompting, & assists others in learning
Qualifications:
ACAS or FCAS required
Bachelor's degree from a reputable university; advanced degree a huge plus
7-15 years of experience in the (re)insurance industry
Able to apply advanced mathematical / actuarial concepts and techniques
Skilled in using Microsoft Excel
Software experience with R, VBA, Python
Proven track record of hard work, client success, and innovation
Legally authorized to work in the United States
The expected base salary range for this role is $225,000-300,000. The base salary range is based on level of relevant experience and location and does not include other types of compensation such as discretionary bonus or benefits.
Data Solutions - Summer 2026 Intern
Data scientist job in Stamford, CT
Join the fintech powerhouse redefining how the world invests in private markets. iCapital is a global leader in alternative investments, trusted by financial advisors, wealth managers, asset managers, and industry innovators worldwide. With $999.73 billion in assets serviced globally (including $272.1 billion in alternative platform assets), we empower over 3,000 wealth management firms and 118,000 financial professionals to deliver cutting-edge alternative investment solutions.
This summer, become part of a dynamic team where your ideas matter. Make a meaningful impact, accelerate your professional growth, and help push the boundaries of what's possible at the intersection of technology and finance.
Key features of our Summer 2026 Internship:
Become a key member of the iCapital team, driving initiatives, contributing to projects, and potentially jumpstarting your career with us after graduation.
Immerse yourself in an inclusive company culture where we create a sense of belonging for everyone.
Gain exclusive access to the AltsEdge Certificate Program, our award-winning alternative investments education curriculum for wealth managers.
Attend recurring iLearn seminars and platform demos where you will learn the latest about our products.
Participate in an intern team project, culminating in an end-of-summer presentation to a panel of senior executives.
Join senior executive speaker seminars that provide career development, guidance, and access to the leaders at iCapital.
About the role:
The Data Solutions department provides a reporting service that leverages top-tier third-party reporting tools to assist UHNW clients in identifying opportunities and risks within their portfolios. Through collaborations with leading technology platforms, we curate reports that offer insightful, consolidated, real-time views of all assets and liabilities, detailing what they are, who holds them, how ownership is divided, how they're invested, and how they're performing. These reports are strategically designed to uncover opportunities and highlight financial risks.
Learn and leverage financial reporting and data aggregation tools:
Conduct account level reconciliation.
Provide accurate and timely statements and data entry.
Work with internal teams to resolve data issues.
Generate ad hoc reports as needed.
Work with the team to prioritize individual and communal work to ensure all projects are completed on time and to detailed specifications.
Valued qualities and key skills:
Highly inquisitive, collaborative, and a creative problem solver
Possess foundational knowledge of and/or genuine interest in the financial markets
Able to thrive in a fast-paced environment
Able to adapt to new responsibilities and manage competing priorities
Technologically proficient in Microsoft Office (Excel, PowerPoint)
Strong verbal and written communication skills
What we offer:
Outings with iCapital team members and fellow interns to build connections and grow your network.
Corporate culture and volunteer activities in support of the communities where we live and work.
Rooftop Happy Hours showcasing our impressive views of NYC.
Eligibility:
A rising junior or senior in a U.S. college/university bachelor's degree program
Must be available to work the duration of the program from June 8th through August 7th to be eligible
Committed to working five days a week in the Stamford office for the entire duration of the internship
Authorized to work in the United States*
*We are unable to offer any type of employment-based immigration sponsorship for this program
Pay Rate: $31.00/hour + relocation stipend and transportation stipend
iCapital in the Press:
We are innovating at the intersection of technology and investment opportunity, but don't take our word for it. Here's what others are saying about us:
Two consecutive years on the CNBC World's Top Fintech Companies list
Two consecutive years listed in Top 100 Fastest Growing Financial Services Companies
Four-time winner of the Money Management Institute/Barron's Solutions Provider of the Year
For additional information on iCapital, please visit **************************************** Twitter: @icapitalnetwork | LinkedIn: ***************************************************** | Awards Disclaimer: ****************************************/recognition/
Senior Data Engineer
Data scientist job in Farmingdale, NY
D'Addario & Company is the world's largest manufacturer and distributor of musical instrument accessories. As a U.S.-based manufacturing leader, we pride ourselves on high-automation machinery, cutting-edge technology, and a deep commitment to environmentally sustainable practices. Most importantly, we're proud of our diverse team of individuals who embody our core values-family, curiosity, passion, candor, and responsibility-and bring them to life every day.
D'Addario is seeking a Senior Data Engineer to help architect, build, and optimize the next generation of our global data infrastructure. In this role, you'll design and maintain production-grade data pipelines, support AI and machine learning initiatives, and serve as a technical mentor within a growing Business Intelligence team. You'll work closely with the Global Director of BI to deliver scalable solutions that power insights and innovation across the organization. This position is ideal for someone who thrives on solving complex data challenges, enjoys bringing structure to large datasets, and is passionate about enabling smarter decision-making through data.
This is a hybrid role and will require the candidate to work on-site in the Farmingdale office three days a week.
At D'Addario, we don't just offer a job-we offer a career with one of the most iconic names in the music industry. We're passionate about innovation, craftsmanship, and creating a workplace where diverse backgrounds, perspectives, and ideas thrive. We're eager to connect with individuals who bring fresh thinking and a collaborative spirit. If you're ready to make an impact, we'd love to hear how you'll add value to our team.
Some Perks & Benefits of Working at D'Addario:
Competitive compensation package
Health, vision, and dental insurance
12 weeks of fully paid parental leave
Fertility and family-building benefits
401(k) retirement plan with generous employer contributions
Career pathing and professional development via LinkedIn Learning
Paid Time Off (PTO) and flexible sick day policy
12 Paid Holidays
Life and AD&D Insurance
Enhanced Short-Term Disability Insurance
Employee Assistance Program (EAP)
Tuition Reimbursement
Discounts on D'Addario products and merchandise
Company jam nights, artist performances, holiday parties, and special events
A passionate, talented team that loves what they do!
Responsibilities
Build & Optimize Pipelines: Design, implement, and maintain robust, high-performance data pipelines to support analytical models within Microsoft Fabric and our data environment.
Data Integration: Connect and harmonize new data sources, including ERP, e-commerce platforms, and external APIs.
Mentorship & Standards: Guide junior BI team members, lead code reviews, and establish coding, documentation, and testing best practices.
AI/ML Enablement: Partner on machine learning and AI projects from proof-of-concept through deployment, supporting predictive and prescriptive analytics into production workflows.
Advanced Analytics Development: Team up with analysts to prepare data products and predictive models using Python, PySpark, and modern ML frameworks.
Collaboration: Work with stakeholders across Sales, Marketing, Operations, and Product to translate business requirements and align priorities.
Technical Leadership: Drive data engineering excellence through continuous improvement, quality assurance, and innovation in data architecture and governance.
Qualifications
5+ years of experience building and maintaining production-grade data pipelines.
Advanced programming skills in Python, PySpark, and SQL.
Strong background in data modeling and scalable analytics.
Experience deploying machine learning models and data products in production environments.
Solid understanding of cloud data platforms (Azure preferred).
Bachelor's degree in Computer Science, Engineering, Data Science, or equivalent experience.
Clear communicator with the ability to simplify complex technical concepts.
Proven leadership in mentoring and developing technical talent.
Highly organized, self-directed, and comfortable in fast-paced environments.
Passion for using data to drive innovation and business impact.
The base salary range for this role is commensurate with experience: $140k to $165k per year
#LI-HYBRID
Data Engineer
Data scientist job in Greenwich, CT
Interactive Brokers Group, Inc. (Nasdaq: IBKR) is a global financial services company headquartered in Greenwich, CT, USA, with offices in over 15 countries. We have been at the forefront of financial innovation for over four decades, known for our cutting-edge technology and client commitment.
IBKR affiliates provide global electronic brokerage services around the clock on stocks, options, futures, currencies, bonds, and funds to clients in over 200 countries and territories. We serve individual investors and institutions, including financial advisors, hedge funds and introducing brokers. Our advanced technology, competitive pricing, and global market access help our clients make the most of their investments.
Barron's has recognized Interactive Brokers as the #1 online broker for six consecutive years. Join our dynamic, multi-national team and be a part of a company that simplifies and enhances financial opportunities using state-of-the-art technology.
This is a hybrid role (3 days in the office/2 days remote).
Job Summary:
Develop our comprehensive data processing pipeline transforming on-premises Kafka streams into both actionable business insights and regulatory compliance reports through AWS cloud services (S3, Glue, Athena, EMR). Design robust ETL processes and build automated, scalable data solutions aligned with our zero-maintenance vision, delivering high-quality outputs for both business decision-making and regulatory requirements.
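As a simplified illustration of the pipeline this summary describes, the PySpark sketch below reads Kafka-derived order events staged in S3, aggregates them, and writes analytics-ready Parquet. Bucket names and columns are hypothetical; a production job would run on AWS Glue or EMR with the appropriate IAM roles, schemas, and job bookmarks.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("order-analytics").getOrCreate()

# Hypothetical staging location for events already landed from Kafka.
orders = spark.read.parquet("s3://example-staging/orders/")

daily = (
    orders
    .withColumn("trade_date", F.to_date("event_time"))
    .groupBy("trade_date", "instrument")
    .agg(F.count("*").alias("order_count"),
         F.sum("quantity").alias("total_quantity"))
)

# Partitioned output keeps Athena queries cheap and supports idempotent re-runs.
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-analytics/daily_order_summary/"
)
```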
About your team:
We are the Realtime Order Analytics and Reporting team, a dynamic group focused on transforming financial transaction data into valuable business intelligence and regulatory reporting. Our team:
Works with cutting-edge technologies, including AWS cloud services and real-time data processing
Operates in a collaborative environment where innovation and ideas are encouraged
Maintains a balance between technical excellence and business impact
Values automation and efficiency in all our solutions
Fosters continuous learning and professional development
Plays a critical role in supporting business decision-making and ensuring regulatory compliance
Embraces agile methodologies to deliver high-quality solutions efficiently
We're looking for someone who shares our passion for data engineering and wants to make a significant impact by turning complex financial data into actionable insights.
What will be your responsibilities within IBKR:
Designing, developing, and maintaining ETL workflows using AWS services
Processing data from Kafka streams and S3 storage to generate insights
Implementing data transformation logic using Python, PySpark, and PyAthena
Creating and optimizing data models for both analytical and regulatory reporting needs
Building automated data quality checks and monitoring systems
Developing and maintaining documentation for data pipelines and processes
Troubleshooting and resolving data pipeline issues
Contributing to architectural decisions for data infrastructure
Ensuring data solutions meet performance, security, and compliance requirements
Continuously improving our data systems for scalability and reduced maintenance
Which skills are required:
Bachelor's or master's degree in Computer Science or a related field
3+ years of professional software engineering experience in Python, PySpark and PyAthena
3+ years of professional experience in Python as a primary language (non-scripting)
Extensive experience in Pandas or NumPy
Experience with ETL processes and data warehousing concepts
Familiarity with cloud technologies, particularly AWS (S3, Glue, Athena, EMR)
Experience using ELK Stack (Elasticsearch, Logstash, Kibana)
Thorough understanding of databases and SQL
1+ years of professional experience with Linux operating systems
An analytical mind and business acumen
Strong communication skills
Good to have:
Experience with financial markets or the brokerage industry
Experience with business intelligence tools, especially Tableau
Experience with version control systems (e.g., Git, BitBucket)
Experience with CI/CD Practices and Tools
To be successful in this position, you will have the following:
Self-motivated and able to handle tasks with minimal supervision.
Superb analytical and problem-solving skills.
Excellent collaboration and communication (Verbal and written) skills.
Outstanding organizational and time management skills.
Company Benefits & Perks
Competitive salary, annual performance-based bonus and stock grant
Retirement plan 401(k) with competitive company match
Excellent health and wellness benefits, including medical, dental, and vision benefits, and a company-paid medical healthcare premium.
Wellness screenings and assessments, health coaches and counseling services through an Employee Assistance Program (EAP)
Paid time off and a generous parental leave policy
Daily company lunch allowance provided, and a fully stocked kitchen with healthy options for breakfast and snacks
Corporate events, including team outings, dinners, volunteer activities and company sports teams
Education reimbursement and learning opportunities
Modern offices with multi-monitor setups
Data Engineer
Data scientist job in New Hyde Park, NY
Job Description
Data is pivotal to our goal of frequent launch and rapid iteration. We're recruiting a Data Engineer at iRocket to build pipelines, analytics, and tools that support propulsion test, launch operations, manufacturing, and vehicle performance.
The Role
Design and build data pipelines for test stands, manufacturing machines, launch telemetry, and operations systems.
Develop dashboards, real-time monitoring, data-driven anomaly detection, performance trending, and predictive maintenance tools (a small anomaly-detection sketch follows this list).
Work with engineers across propulsion, manufacturing, and operations to translate data needs into data products.
Maintain data architecture, ETL processes, cloud/edge-data systems, and analytics tooling.
Support A/B testing, performance metrics, and feed insights back into design/manufacturing cycles.
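As a small illustration of the anomaly-detection tooling mentioned in the list above, the sketch below flags telemetry samples that drift beyond a rolling z-score threshold. The channel name, file, window, and threshold are hypothetical.

```python
import pandas as pd

def rolling_anomalies(series: pd.Series, window: int = 50, z: float = 4.0) -> pd.Series:
    """Boolean mask of points more than z rolling standard deviations from the rolling mean."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    return (series - mean).abs() > z * std

telemetry = pd.read_csv("chamber_pressure.csv")  # hypothetical test-stand export
telemetry["anomaly"] = rolling_anomalies(telemetry["pressure_psi"])
print(telemetry[telemetry["anomaly"]])
```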
Requirements
Bachelor's degree in Computer Science, Data Engineering, or related technical field.
2+ years of experience building data pipelines, ETL/ELT workflows, and analytics systems.
Proficient in Python, SQL, cloud data platforms (AWS, GCP, Azure), streaming/real-time analytics, and dashboarding (e.g., Tableau, PowerBI).
Strong ability to work cross-functionally and deliver data products to engineering and operations teams.
Strong communication, documentation, and a curiosity-driven mindset.
Benefits
Health Care Plan (Medical, Dental & Vision)
Retirement Plan (401k, IRA)
Life Insurance (Basic, Voluntary & AD&D)
Paid Time Off (Vacation, Sick & Public Holidays)
Family Leave (Maternity, Paternity)
Short Term & Long Term Disability
Wellness Resources
P&C Commercial Insurance Data Analytics Intern - Genesis
Data scientist job in Stamford, CT
Shape Your Future With Us
Genesis Management and Insurance Services Corporation (Genesis) is a premier alternative risk transfer provider, offering innovative solutions for the unique needs of public entity and education clients. Genesis takes pride in being a long-term thought partner and provider of insurance and reinsurance to public sector, K-12 and higher education self-insured individual risks, pools and trusts for over 30 years.
Genesis is a wholly-owned subsidiary of General Re Corporation, a subsidiary of Berkshire Hathaway Inc. General Re Corporation is a holding company for global reinsurance and related operations with more than 2,000 employees worldwide. Our first-class financial security receives the highest financial strength ratings.
Genesis currently offers an excellent opportunity for a P&C Commercial Insurance Data Analytics Intern based in our Stamford office. This opportunity is available for Summer 2026 (July-August). This is a hybrid role.
Role Description
Join Genesis' Actuarial Pricing Unit for an immersive 8-week internship during Summer 2026. This program is designed to provide hands-on experience in actuarial pricing, data analytics, and research. Interns will work on real-world projects that combine technical skills with critical thinking to support pricing strategies and risk assessment.
You will:
* Gain exposure to actuarial concepts, insurance industry practices, and pricing methodologies.
* Work with advanced tools and technologies, including R, SQL, Excel, and cloud-based data platforms.
* Collect, clean, and structure data for analysis and modeling.
* Perform exploratory analysis to identify trends and support decision-making (a brief trend-analysis sketch follows this section).
* Conduct research to evaluate industry developments and their impact on pricing.
* Document processes and communicate findings clearly to technical and non-technical audiences.
This internship is ideal for students who are analytical, detail-oriented, and eager to apply data-driven approaches to solve complex business challenges. You'll develop practical skills in data engineering, quantitative analysis, and research while collaborating with experienced professionals in a dynamic environment.
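As a brief illustration of the exploratory trend work described above, the sketch below fits a log-linear severity trend in Python (which the posting notes may be acceptable in place of R). The claims data and column names are invented for the example.

```python
import numpy as np
import pandas as pd

claims = pd.DataFrame({
    "accident_year": [2019, 2020, 2021, 2022, 2023],
    "avg_severity": [14200, 15100, 16450, 17800, 19350],
})

# Log-linear fit: exp(slope) - 1 approximates the annual severity trend rate.
slope, _ = np.polyfit(claims["accident_year"], np.log(claims["avg_severity"]), 1)
print(f"Estimated annual severity trend: {np.exp(slope) - 1:.1%}")
```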
Role Qualifications and Experience
Required Skill Set
* Technical Skills -
* Experience with R and advanced skills in Excel.
* Familiar with SQL and cloud-based data warehouses (e.g., Google BigQuery).
* Special consideration for Postgres or spatial analytics.
* Alternative data analysis and modeling tools like Python may be acceptable.
* Data Collection & Engineering - Familiarity with gathering raw data, cleaning it, standardizing formats, and building structured datasets.
* Research Skills - Ability to search, evaluate, and synthesize information from diverse online sources.
* Organization & Documentation - Strong ability to organize information, track data sources, and document the research process.
* Analytical & Quantitative Skills - Comfort with exploratory analysis, identifying trends, and supporting basic modeling work.
* Critical Thinking - Ability to connect data insights with social, legal, and environmental developments.
* Communication Skills - Capability to clearly explain findings to audiences with limited technical or subject-matter background.
Salary Range
$22.00 - $25.00 per hour
The base salary range posted represents a broad range of salaries around the US and is subject to many factors including but not limited to credentials, education, experience, geographic location, job responsibilities, performance, skills and/or training.
Our Corporate Headquarters Address
General Reinsurance Corporation
400 Atlantic Street, 9th Floor
Stamford, CT 06901 (US)
At General Re Corporation, we celebrate diversity and are committed to creating an inclusive environment for all employees. It is General Re Corporation's continuing policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, sex (including childbirth or related medical conditions), religion, national origin or ancestry, age, past or present disability, marital status, liability for service in the armed forces, veterans' status, citizenship, sexual orientation, gender identity, or any other characteristic protected by applicable law. In addition, Gen Re provides reasonable accommodation for qualified individuals with disabilities in accordance with the Americans with Disabilities Act.
C++ Market Data Engineer (USA)
Data scientist job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
* Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
* Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
* Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
* Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
* Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
* Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
* Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
* Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
Tech Lead, Data & Inference Engineer
Data scientist job in Greenwich, CT
Job Description
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with the Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above-market base + bonus + equity
Roles & Responsibilities
Lead the design, development, and scaling of an end-to-end data platform, from ingestion to insights, ensuring that data is fast, reliable, and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party APIs into trusted, low-latency systems.
Take full ownership of reliability, cost, and service-level objectives, including 99.9% uptime, minutes-level latency, and optimized cost per terabyte. Conduct root-cause analysis and deliver lasting fixes.
Operate inference pipelines that enrich, score, and quality-check data using large language models and retrieval-augmented generation; manage versioning, caching, and evaluation loops (a minimal sketch follows this list).
Work across teams to deliver data as a product through clear data contracts, ownership models, lifecycle processes, and usage-based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical build-versus-buy decisions.
Scale integration with APIs and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases.
Mentor engineers, review code, and raise the overall technical standard across teams. Promote data-driven best practices throughout the organization.
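To make the inference-pipeline responsibility concrete, here is a minimal, hedged sketch of an enrichment step with caching and prompt versioning. `call_llm`, the in-memory cache, and the field names are hypothetical stand-ins, not the client's actual stack.

```python
# Hedged sketch of an LLM enrichment step with caching and versioning.
# The cache key ties results to both the input and the prompt version, so
# re-running with a new prompt invalidates stale enrichments.
import hashlib
import json

PROMPT_VERSION = "v3"          # hypothetical prompt/version identifier
_cache: dict[str, dict] = {}   # stand-in for Redis or a warehouse-backed cache

def call_llm(prompt: str) -> dict:
    """Placeholder for a real LLM client call; returns a structured enrichment."""
    return {"industry": "software", "confidence": 0.9}

def enrich_record(record: dict) -> dict:
    payload = json.dumps(record, sort_keys=True)
    key = hashlib.sha256(f"{PROMPT_VERSION}:{payload}".encode()).hexdigest()
    if key not in _cache:                      # cache miss: pay for one LLM call
        _cache[key] = call_llm(f"Classify this company: {payload}")
    return {**record, "enrichment": _cache[key], "prompt_version": PROMPT_VERSION}

print(enrich_record({"domain": "example.com", "name": "Example Inc."}))
```

An evaluation loop would periodically replay a labeled sample through the current prompt version and compare scores before promoting it, which is one way to keep enrichment quality measurable.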
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: the ability to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
Senior Data Engineer
Data scientist job in Farmingdale, NY
D'Addario & Company is the world's largest manufacturer and distributor of musical instrument accessories. As a U.S.-based manufacturing leader, we pride ourselves on high-automation machinery, cutting-edge technology, and a deep commitment to environmentally sustainable practices. Most importantly, we're proud of our diverse team of individuals who embody our core values of family, curiosity, passion, candor, and responsibility, and bring them to life every day.
D'Addario is seeking a Senior Data Engineer to help architect, build, and optimize the next generation of our global data infrastructure. In this role, you'll design and maintain production-grade data pipelines, support AI and machine learning initiatives, and serve as a technical mentor within a growing Business Intelligence team. You'll work closely with the Global Director of BI to deliver scalable solutions that power insights and innovation across the organization. This position is ideal for someone who thrives on solving complex data challenges, enjoys bringing structure to large datasets, and is passionate about enabling smarter decision-making through data.
This is a hybrid role and will require the candidate to work on-site in the Farmingdale office three days a week.
At D'Addario, we don't just offer a job; we offer a career with one of the most iconic names in the music industry. We're passionate about innovation, craftsmanship, and creating a workplace where diverse backgrounds, perspectives, and ideas thrive. We're eager to connect with individuals who bring fresh thinking and a collaborative spirit. If you're ready to make an impact, we'd love to hear how you'll add value to our team.
Some Perks & Benefits of Working at D'Addario:
Competitive compensation package
Health, vision, and dental insurance
12 weeks of fully paid parental leave
Fertility and family-building benefits
401(k) retirement plan with generous employer contributions
Career pathing and professional development via LinkedIn Learning
Paid Time Off (PTO) and flexible sick day policy
12 Paid Holidays
Life and AD&D Insurance
Enhanced Short-Term Disability Insurance
Employee Assistance Program (EAP)
Tuition Reimbursement
Discounts on D'Addario products and merchandise
Company jam nights, artist performances, holiday parties, and special events
A passionate, talented team that loves what they do!
Responsibilities
Build & Optimize Pipelines: Design, implement, and maintain robust, high-performance data pipelines to support analytical models within Microsoft Fabric and our data environment.
Data Integration: Connect and harmonize new data sources, including ERP, e-commerce platforms, and external APIs (see the sketch after this list).
Mentorship & Standards: Guide junior BI team members, lead code reviews, and establish coding, documentation, and testing best practices.
AI/ML Enablement: Partner on machine learning and AI projects from proof-of-concept through deployment, moving predictive and prescriptive analytics into production workflows.
Advanced Analytics Development: Team up with analysts to prepare data products and predictive models using Python, PySpark, and modern ML frameworks.
Collaboration: Work with stakeholders across Sales, Marketing, Operations, and Product to translate business requirements and align priorities.
Technical Leadership: Drive data engineering excellence through continuous improvement, quality assurance, and innovation in data architecture and governance.
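As a flavor of the integration work described above, here is a minimal PySpark sketch that harmonizes two order sources into one analytics table. The paths, table names, and columns are illustrative assumptions, not D'Addario's actual schema.

```python
# Minimal PySpark sketch of a harmonization step: joining a hypothetical ERP
# extract with a hypothetical e-commerce feed into one unified orders table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("harmonize_orders").getOrCreate()

erp = spark.read.parquet("/lake/raw/erp_orders")    # hypothetical ERP extract
ecom = spark.read.parquet("/lake/raw/ecom_orders")  # hypothetical e-commerce feed

# Normalize both sources to a shared schema before unioning.
erp_norm = erp.select(
    F.col("order_no").alias("order_id"),
    F.col("sku"),
    F.col("qty").cast("int").alias("quantity"),
    F.lit("erp").alias("source"),
)
ecom_norm = ecom.select(
    F.col("id").alias("order_id"),
    F.col("sku"),
    F.col("quantity").cast("int").alias("quantity"),
    F.lit("ecom").alias("source"),
)

orders = erp_norm.unionByName(ecom_norm)
orders.write.mode("overwrite").saveAsTable("analytics.orders_unified")
```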
Qualifications
5+ years of experience building and maintaining production-grade data pipelines.
Advanced programming skills in Python, PySpark, and SQL.
Strong background in data modeling and scalable analytics.
Experience deploying machine learning models and data products in production environments.
Solid understanding of cloud data platforms (Azure preferred).
Bachelor's degree in Computer Science, Engineering, Data Science, or equivalent experience.
Clear communicator with the ability to simplify complex technical concepts.
Proven leadership in mentoring and developing technical talent.
Highly organized, self-directed, and comfortable in fast-paced environments.
Passion for using data to drive innovation and business impact.
The base salary range for this role is $140k to $165k per year, commensurate with experience.
Data Engineer
Data scientist job in New Haven, CT
Bexorg is transforming drug discovery by restoring molecular activity in postmortem human brains. Our groundbreaking BrainEx platform enables direct experimentation on functionally preserved human brain tissue, generating massive, high-fidelity molecular datasets that power AI-driven drug discovery for CNS diseases. We are seeking a Data Engineer to help harness this unprecedented data. In this onsite, mid-level role, you will design and optimize the pipelines and cloud infrastructure that turn terabytes of raw experimental data into actionable insights, driving our mission to revolutionize treatments for central nervous system disorders.
The Job:
Data Ingestion & Pipeline Management: Manage and optimize massive data ingestion pipelines from cutting-edge experimental devices, ensuring reliable, real-time capture of complex molecular data.
Cloud Data Architecture: Organize and structure large datasets in Google Cloud Platform, using tools like BigQuery and cloud storage to build a scalable data warehouse for fast querying and analysis of brain data (a minimal loading sketch follows this list).
Large-Scale Data Processing: Design and implement robust ETL/ELT processes to handle petabyte-scale data, emphasizing speed, scalability, and data integrity at each step.
Internal Data Services: Work closely with our software and analytics teams to expose processed data and insights to internal web applications. Build appropriate APIs or data access layers so that scientists and engineers can seamlessly visualize and interact with the data through our web platform.
Internal Experiment Services: Work with our life-science teams to define data-entry protocols that ensure seamless metadata integration and association with experimental data.
Infrastructure Innovation: Recommend and implement cloud infrastructure improvements (such as streaming technologies, distributed processing frameworks, and automation tools) that will future-proof our data pipeline. You will continually assess new technologies and best practices to increase throughput, reduce latency, and support our rapid growth in data volume.
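Here is a minimal sketch of a batch load into BigQuery, in the spirit of the ingestion and warehousing work described above. The project, dataset, table, and bucket names are hypothetical; the calls use the standard google-cloud-bigquery client.

```python
# Hedged sketch of a batch ingestion step into BigQuery. All resource names
# are illustrative assumptions, not Bexorg's actual project layout.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "bexorg-data.molecular.raw_readings"          # hypothetical table
uri = "gs://bexorg-ingest/brainex/2024-06-01/*.parquet"  # hypothetical bucket

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load completes (raises on failure)

table = client.get_table(table_id)
print(f"{table.num_rows} rows now in {table_id}")
```

At petabyte scale, loads like this would typically be orchestrated per experiment or per day and paired with streaming ingestion for the real-time capture described above.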
Qualifications and Skills:
Experience with Google Cloud: Hands-on experience with Google Cloud services (especially BigQuery and related data tools) for managing and analyzing large datasets. You've designed or maintained data systems in a cloud environment and understand how to leverage GCP for big data workloads.
Data Engineering Background: 3+ years of experience in data engineering or a similar role. Proven ability to build and maintain data pipelines dealing with petabyte-scale data. Proficiency in programming (e.g., Python, Java, or Scala) and SQL for developing data processing jobs and queries.
Scalability & Performance Mindset: Familiarity with distributed systems or big data frameworks and a track record of optimizing data workflows for speed and scalability. You can architect solutions that handle exponential data growth without sacrificing performance.
Biology Domain Insight: Exposure to biology or experience working with scientific data (e.g. genomics, bioinformatics, neuroscience) is a strong plus. While deep domain expertise isn't required, you should be excited to learn about our experimental data and comfortable discussing requirements with biologists.
Problem-Solving & Collaboration: Excellent problem-solving skills, attention to detail, and a proactive attitude in tackling technical challenges. Ability to work closely with cross-functional teams (scientists, software engineers, data scientists) and communicate complex data systems in clear, approachable terms.
Passion for the Mission: A strong desire to apply your skills to transform drug discovery. You are inspired by Bexorg's mission and eager to build the data backbone of a platform that could unlock new therapies for CNS diseases.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
Data Platform Engineer (USA)
Data scientist job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a highly motivated and technically rigorous Data Platform Engineer to help modernize our foundational data infrastructure. As a Data Platform Engineer, you will be at the center of building the systems that ensure the quality, reliability, and discoverability of mission-critical data. Your work will directly impact the data operators and downstream consumers by creating robust tools, monitoring, and workflows that ensure accuracy, validity, and timeliness of data across the firm.
Responsibilities
* Architect and maintain core components of the Data Platform with a strong focus on reliability and scalability.
* Build and maintain tools to manage data feeds, monitor validity, and ensure data timeliness.
* Design and implement event-based data orchestration pipelines.
* Evaluate and integrate data quality and observability tools via POCs and MVPs.
* Stand up a data catalog system to improve data discoverability and lineage tracking.
* Collaborate closely with infrastructure teams to support operational excellence and platform uptime.
* Write and maintain data quality checks to validate real-time and batch data.
* Validate incoming real-time data using custom Python-based validators (a minimal example follows this list).
* Ensure low-level data correctness and integrity, especially in high-precision environments.
* Build robust and extensible systems that will be used by data operators to ensure the health of our data ecosystem.
* Own the foundational systems used by analysts and engineers alike to trust and explore our datasets.
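As a flavor of the Python-based validators mentioned above, here is a minimal, composable-check sketch. The field names and rules are illustrative assumptions, not Trexquant's actual schema.

```python
# Hedged sketch of composable data-quality checks applied to incoming records.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def check_required_fields(record: dict) -> CheckResult:
    missing = [f for f in ("symbol", "price", "ts") if f not in record]
    return CheckResult("required_fields", not missing, f"missing: {missing}")

def check_price_positive(record: dict) -> CheckResult:
    ok = isinstance(record.get("price"), (int, float)) and record["price"] > 0
    return CheckResult("price_positive", ok, f"price={record.get('price')}")

VALIDATORS: list[Callable[[dict], CheckResult]] = [
    check_required_fields,
    check_price_positive,
]

def validate(record: dict) -> list[CheckResult]:
    """Run every validator; callers decide whether failures quarantine the record."""
    return [v(record) for v in VALIDATORS]

for result in validate({"symbol": "AAPL", "price": -1, "ts": 1718000000}):
    print(result)
```

In a platform setting, results like these would feed the monitoring and alerting tools described above rather than being printed.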