
Data scientist jobs in Greenwich, CT

- 110 jobs
  • Network Planning Data Scientist (Manager)

    Atlas Air Worldwide Holdings (4.9 company rating)

    Data scientist job in White Plains, NY

    Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance; build models and reports to govern network operating parameters; and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes. This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.

    Responsibilities:
    * Analyze and Monitor Network Performance: Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements. Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
    * Modeling & Optimization: Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions. Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
    * Reporting & Governance: Develop periodic performance and reliability reports for customers, assisting in presentation creation. Produce regular and ad hoc reports to monitor compliance with established operating parameters. Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
    * Cross-Functional Collaboration: Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution. Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
    * Innovation & Continuous Improvement: Identify opportunities to streamline workflows and automate recurring analyses. Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.

    Qualifications:
    * Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
    * Bachelor's degree in Industrial Engineering, Operations Research, Applied Mathematics, Data Science, or a related quantitative discipline, or equivalent work experience
    * 5+ years of experience in strategy, operations planning, finance, or continuous improvement, ideally with airline network planning
    * Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation
    * Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment
    * Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders
    * A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement
    * Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation, or other complex network operations

    Salary Range: $131,500 - $177,500. Financial offer within the stated range will be based on multiple factors including but not limited to location, relevant experience/level, and skillset. The Company is an Equal Opportunity Employer.
    It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity, or any other characteristic or status protected by applicable federal, state, and local laws. If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ****************************************** To view our Pay Transparency Statement, please click here: Pay Transparency Statement "Know Your Rights: Workplace Discrimination is Illegal" Poster The "EEO Is The Law" Poster
    $131.5k-177.5k yearly Auto-Apply 60d+ ago
  • Summer Intern - Clinical Data Sciences (M.S.)

    Boehringer Ingelheim (4.6 company rating)

    Data scientist job in Ridgefield, CT

    Boehringer Ingelheim is currently seeking a talented and innovative Intern to join our Biostatistics and Data Science department located at our Ridgefield facility. As an intern, you will work alongside a Clinical Data Scientist on projects such as clinical trial data analysis, designing data visualization tools, and/or big data processing utilizing statistical programming languages such as SAS and R. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and the opportunity to directly contribute to the company's success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees.

    **Duties & Responsibilities**
    - Develop, review, and validate SAS or R programs to generate analysis datasets, tables, figures, and listings to support trial or project data analysis
    - Follow Good Programming Practices with documentation when developing programs
    - Understand and follow company SOPs and How-to Guides where applicable
    - Support the review of analysis dataset specifications based on the case report form, statistical analysis plan, and display template, in accordance with industry standards (CDISC)
    - Develop interactive web-based, data-driven applications deployed through R Shiny
    - Present your project/assignment work at the end of the program

    **Requirements**
    + Must be an Undergraduate, Graduate, or Professional Student in good academic standing.
    + Must have completed 12 credit hours within a related major and/or other related coursework.
    + Overall cumulative GPA (from last completed quarter) must be at least 3.000 (on a 4.0 scale) or better (no rounding up).
    + Major must be related to the field of internship.

    **Eligibility Requirements:**
    + Must be legally authorized to work in the United States without restriction.
    + Must be willing to take a drug test and post-offer physical (if required).
    + Must be 18 years of age or older.
    + This position will require individuals to be fully vaccinated against COVID-19 or have an approved medical or religious accommodation.

    **Desired Skills, Experience and Abilities**
    - Must be a Master of Science graduate student in fields related to Bio/statistics, Computer Science (software programming language focused), Data Science, or a related degree program
    - Demonstrated proficiency using SAS, R and/or Python
    - Good written and oral communication skills in the English language

    **Who We Are:**
    At Boehringer Ingelheim we create value through innovation with one clear goal: to improve the lives of patients. We develop breakthrough therapies and innovative healthcare solutions in areas of unmet medical need for both humans and animals. As a family-owned company we focus on long-term performance. We are powered by 50,000 employees globally who nurture a diverse, collaborative and inclusive culture. Learning and development for all employees is key because your growth is our growth. Want to learn more? Visit boehringer-ingelheim.com and join us in our effort to make more health. Boehringer Ingelheim, including Boehringer Ingelheim Pharmaceuticals, Inc., Boehringer Ingelheim USA, Boehringer Ingelheim Animal Health USA Inc., Boehringer Ingelheim Animal Health Puerto Rico LLC and Boehringer Ingelheim Fremont, Inc. is an equal opportunity and affirmative action employer committed to a culturally diverse workforce.
All qualified applicants will receive consideration for employment without regard to race; color; creed; religion; national origin; age; ancestry; citizenship status, marital, domestic partnership or civil union status; gender, gender identity or expression; affectional or sexual orientation; pregnancy, childbirth or related medical condition; physical or psychiatric disability; veteran or military status; domestic violence victim status; genetic information (including the refusal to submit to genetic testing) or any other characteristic protected by applicable federal, state or local law. All qualified applicants will receive consideration for employment without regard to a person's actual or perceived race, including natural hairstyles, hair texture and protective hairstyles; color; creed; religion; national origin; age; ancestry; citizenship status, marital status; gender, gender identity or expression; sexual orientation, mental, physical or intellectual disability, veteran status; pregnancy, childbirth or related medical condition; genetic information (including the refusal to submit to genetic testing) or any other class or characteristic protected by applicable law.
    $87k-120k yearly est. 4d ago
  • EXCLUSIVE: Chief Actuary - Reserving - North America

    Ezra Penland

    Data scientist job in Stamford, CT

    EXCLUSIVE! Highly visible Regional Chief Actuary opportunity with a multinational insurance leader, offering the chance to lead as Appointed Actuary for U.S. legal entities, sign SAOs, and serve as a trusted advisor to the Global Chief Actuary. This influential role leads actuarial strategy, reserve governance, valuation, and financial reporting while ensuring regulatory compliance across North America. With extensive interaction among worldwide leaders and business partners, the Chief Actuary fosters collaboration across diverse regions by leveraging both cultural and technical expertise. Seeking a relationship-oriented ACAS/FCAS with deep Reserving and Casualty market expertise to guide and inspire a high-performing actuarial team with confidence and integrity. Base salary up to $315K plus a robust benefits package.
    $86k-133k yearly est. 60d+ ago
  • Reinsurance Actuary (Director or Managing Director Level)

    Hyperiongrp

    Data scientist job in Stamford, CT

    Howden Re is the global reinsurance broker and risk, capital & strategic advisor focused on relentless innovation & superior analytics for top client service.

    About the Role: This is a mid-level position and will reside within the Actuarial team. We expect this person to work successfully across Analytics, Actuarial, and Broking functions, providing the full suite of actuarial work in support of reinsurance placements for clients. You will be joining an experienced analytics team that produces quality solutions in a collegial, casual, and results-driven environment.

    Responsibilities | Support:
    * Traditional LR analysis, experience/exposure rating, stochastic modelling, etc.
    * Present analyses in clear terms appropriate to the audience
    * Provide value-added service to clients as needed
    * Market research and development & assist senior actuaries with industry studies
    * A high priority will be the development & programming of various tools to aid in streamlining workflow and helping Howden Re fully utilize data

    Interpersonal | Communication | Teamwork:
    * Willingness to be part of Howden Re's "team first" culture
    * Keen ability to take initiative
    * Sets effective priorities and handles multiple projects under tight timeframes
    * Responds constructively to different viewpoints, changing priorities, new conditions
    * Works well in teams with colleagues of various backgrounds
    * Shares knowledge, opinions and insights in a constructive manner
    * Offers to help others without prompting, & assists others in learning

    Qualifications:
    * ACAS or FCAS required
    * Bachelor's degree from a reputable university; advanced degree a huge plus
    * 7-15 years of experience in the (re)insurance industry
    * Able to apply advanced mathematical/actuarial concepts and techniques
    * Skilled in using Microsoft Excel
    * Software experience with R, VBA, Python
    * Proven track record of hard work, client success, and innovation
    * Legally authorized to work in the United States

    The expected base salary range for this role is $225,000-300,000. The base salary range is based on level of relevant experience and location and does not include other types of compensation such as discretionary bonus or benefits.
    $86k-133k yearly est. Auto-Apply 60d+ ago
  • Reinsurance Actuary (Director or Managing Director Level)

    Howden Group Holdings Ltd.

    Data scientist job in Stamford, CT

    Howden Re is the global reinsurance broker and risk, capital & strategic advisor focused on relentless innovation & superior analytics for top client service.

    About the Role: This is a mid-level position and will reside within the Actuarial team. We expect this person to work successfully across Analytics, Actuarial, and Broking functions, providing the full suite of actuarial work in support of reinsurance placements for clients. You will be joining an experienced analytics team that produces quality solutions in a collegial, casual, and results-driven environment.

    Responsibilities | Support:
    * Traditional LR analysis, experience/exposure rating, stochastic modelling, etc.
    * Present analyses in clear terms appropriate to the audience
    * Provide value-added service to clients as needed
    * Market research and development & assist senior actuaries with industry studies
    * A high priority will be the development & programming of various tools to aid in streamlining workflow and helping Howden Re fully utilize data

    Interpersonal | Communication | Teamwork:
    * Willingness to be part of Howden Re's "team first" culture
    * Keen ability to take initiative
    * Sets effective priorities and handles multiple projects under tight timeframes
    * Responds constructively to different viewpoints, changing priorities, new conditions
    * Works well in teams with colleagues of various backgrounds
    * Shares knowledge, opinions and insights in a constructive manner
    * Offers to help others without prompting, & assists others in learning

    Qualifications:
    * ACAS or FCAS required
    * Bachelor's degree from a reputable university; advanced degree a huge plus
    * 7-15 years of experience in the (re)insurance industry
    * Able to apply advanced mathematical/actuarial concepts and techniques
    * Skilled in using Microsoft Excel
    * Software experience with R, VBA, Python
    * Proven track record of hard work, client success, and innovation
    * Legally authorized to work in the United States

    The expected base salary range for this role is $225,000-300,000. The base salary range is based on level of relevant experience and location and does not include other types of compensation such as discretionary bonus or benefits.
    $86k-133k yearly est. Auto-Apply 60d+ ago
  • Data Solutions - Summer 2026 Intern

    Icapital Network (3.8 company rating)

    Data scientist job in Stamford, CT

    Join the fintech powerhouse redefining how the world invests in private markets. iCapital is a global leader in alternative investments, trusted by financial advisors, wealth managers, asset managers, and industry innovators worldwide. With $999.73 billion in assets serviced globally, including $272.1 billion in alternative platform assets, we empower over 3,000 wealth management firms and 118,000 financial professionals to deliver cutting-edge alternative investment solutions. This summer, become part of a dynamic team where your ideas matter. Make a meaningful impact, accelerate your professional growth, and help push the boundaries of what's possible at the intersection of technology and finance.

    Key features of our Summer 2026 Internship:
    * Become a key member of the iCapital team, driving initiatives, contributing to projects, and potentially jumpstarting your career with us after graduation.
    * Immerse yourself in an inclusive company culture where we create a sense of belonging for everyone.
    * Gain exclusive access to the AltsEdge Certificate Program, our award-winning alternative investments education curriculum for wealth managers.
    * Attend recurring iLearn seminars and platform demos where you will learn the latest about our products.
    * Participate in an intern team project, culminating in an end-of-summer presentation to a panel of senior executives.
    * Join senior executive speaker seminars that provide career development, guidance, and access to the leaders at iCapital.

    About the role: The Data Solutions department provides a reporting service that leverages top-tier third-party reporting tools to assist UHNW clients in identifying opportunities and risks within their portfolios. Through collaborations with leading technology platforms, we curate reports that offer insightful, consolidated, real-time views of all assets and liabilities, detailing what they are, who holds them, how ownership is divided, how they're invested, and how they're performing. These reports are strategically designed to uncover opportunities and highlight financial risks.

    Learn and leverage financial reporting and data aggregation tools:
    * Conduct account-level reconciliation.
    * Provide accurate and timely statements and data entry.
    * Work with internal teams to resolve data issues.
    * Generate ad hoc reports as needed.
    * Work with the team to prioritize individual and communal work to ensure all projects are completed on time and to detailed specifications.

    Valued qualities and key skills:
    * Highly inquisitive, collaborative, and a creative problem solver
    * Possess foundational knowledge of and/or genuine interest in the financial markets
    * Able to thrive in a fast-paced environment
    * Able to adapt to new responsibilities and manage competing priorities
    * Technologically proficient in Microsoft Office (Excel, PowerPoint)
    * Strong verbal and written communication skills

    What we offer:
    * Outings with iCapital team members and fellow interns to build connections and grow your network.
    * Corporate culture and volunteer activities in support of the communities where we live and work.
    * Rooftop Happy Hours showcasing our impressive views of NYC.

    Eligibility:
    * A rising junior or senior in a U.S. college/university bachelor's degree program
    * Must be available to work the duration of the program from June 8th through August 7th to be eligible
    * Committed to working five days a week in the Stamford office for the entire duration of the internship
    * Authorized to work in the United States* (*We are unable to offer any type of employment-based immigration sponsorship for this program)

    Pay Rate: $31.00/hour + relocation stipend and transportation stipend

    iCapital in the Press: We are innovating at the intersection of technology and investment opportunity, but don't take our word for it. Here's what others are saying about us:
    * Two consecutive years on the CNBC World's Top Fintech Companies list
    * Two consecutive years listed in Top 100 Fastest Growing Financial Services Companies
    * Four-time winner of the Money Management Institute/Barron's Solutions Provider of the Year

    For additional information on iCapital, please visit **************************************** Twitter: @icapitalnetwork | LinkedIn: ***************************************************** | Awards Disclaimer: ****************************************/recognition/
    $31 hourly Auto-Apply 11d ago
  • P&C Commercial Insurance Data Analytics Intern - Genesis

    General Re Corporation (4.8 company rating)

    Data scientist job in Stamford, CT

    Shape Your Future With Us. Genesis Management and Insurance Services Corporation (Genesis) is a premier alternative risk transfer provider, offering innovative solutions for the unique needs of public entity and education clients. Genesis takes pride in being a long-term thought partner and provider of insurance and reinsurance to public sector, K-12, and higher education self-insured individual risks, pools, and trusts for over 30 years. Genesis is a wholly owned subsidiary of General Re Corporation, a subsidiary of Berkshire Hathaway Inc. General Re Corporation is a holding company for global reinsurance and related operations with more than 2,000 employees worldwide. Our first-class financial security receives the highest financial strength ratings. Genesis currently offers an excellent opportunity for a P&C Commercial Insurance Data Analytics Intern based in our Stamford office. This opportunity is available for Summer 2026 (July-August). This is a hybrid role.

    Role Description: Join Genesis' Actuarial Pricing Unit for an immersive 8-week internship during Summer 2026. This program is designed to provide hands-on experience in actuarial pricing, data analytics, and research. Interns will work on real-world projects that combine technical skills with critical thinking to support pricing strategies and risk assessment. You will:
    * Gain exposure to actuarial concepts, insurance industry practices, and pricing methodologies.
    * Work with advanced tools and technologies, including R, SQL, Excel, and cloud-based data platforms.
    * Collect, clean, and structure data for analysis and modeling.
    * Perform exploratory analysis to identify trends and support decision-making.
    * Conduct research to evaluate industry developments and their impact on pricing.
    * Document processes and communicate findings clearly to technical and non-technical audiences.

    This internship is ideal for students who are analytical, detail-oriented, and eager to apply data-driven approaches to solve complex business challenges. You'll develop practical skills in data engineering, quantitative analysis, and research while collaborating with experienced professionals in a dynamic environment.

    Role Qualifications and Experience - Required Skill Set:
    * Technical Skills: Experience with R and advanced skills in Excel. Familiarity with SQL and cloud-based data warehouses (e.g., Google BigQuery). Special consideration for Postgres or spatial analytics. Alternative data analysis and modeling tools like Python may be acceptable.
    * Data Collection & Engineering: Familiarity with gathering raw data, cleaning it, standardizing formats, and building structured datasets.
    * Research Skills: Ability to search, evaluate, and synthesize information from diverse online sources.
    * Organization & Documentation: Strong ability to organize information, track data sources, and document the research process.
    * Analytical & Quantitative Skills: Comfort with exploratory analysis, identifying trends, and supporting basic modeling work.
    * Critical Thinking: Ability to connect data insights with social, legal, and environmental developments.
    * Communication Skills: Capability to clearly explain findings to audiences with limited technical or subject-matter background.

    Salary Range: $22.00 - $25.00 per hour. The salary range posted represents a broad range of salaries around the US and is subject to many factors including but not limited to credentials, education, experience, geographic location, job responsibilities, performance, skills and/or training.

    Our Corporate Headquarters Address: General Reinsurance Corporation, 400 Atlantic Street, 9th Floor, Stamford, CT 06901 (US)

    At General Re Corporation, we celebrate diversity and are committed to creating an inclusive environment for all employees. It is General Re Corporation's continuing policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, sex (including childbirth or related medical conditions), religion, national origin or ancestry, age, past or present disability, marital status, liability for service in the armed forces, veterans' status, citizenship, sexual orientation, gender identity, or any other characteristic protected by applicable law. In addition, Gen Re provides reasonable accommodation for qualified individuals with disabilities in accordance with the Americans with Disabilities Act.
    $22-25 hourly 18d ago
  • Data Engineer

    Innovative Rocket Technologies Inc. (4.3 company rating)

    Data scientist job in New Hyde Park, NY

    Job Description: Data is pivotal to our goal of frequent launch and rapid iteration. We're recruiting a Data Engineer at iRocket to build pipelines, analytics, and tools that support propulsion test, launch operations, manufacturing, and vehicle performance.

    The Role:
    * Design and build data pipelines for test stands, manufacturing machines, launch telemetry, and operations systems.
    * Develop dashboards, real-time monitoring, data-driven anomaly detection, performance trending, and predictive maintenance tools.
    * Work with engineers across propulsion, manufacturing, and operations to translate data needs into data products.
    * Maintain data architecture, ETL processes, cloud/edge data systems, and analytics tooling.
    * Support A/B testing and performance metrics, and feed insights back into design/manufacturing cycles.

    Requirements:
    * Bachelor's degree in Computer Science, Data Engineering, or a related technical field.
    * 2+ years of experience building data pipelines, ETL/ELT workflows, and analytics systems.
    * Proficient in Python, SQL, cloud data platforms (AWS, GCP, Azure), streaming/real-time analytics, and dashboarding (e.g., Tableau, Power BI).
    * Strong ability to work cross-functionally and deliver data products to engineering and operations teams.
    * Strong communication, documentation, and a curiosity-driven mindset.

    Benefits: Health Care Plan (Medical, Dental & Vision); Retirement Plan (401k, IRA); Life Insurance (Basic, Voluntary & AD&D); Paid Time Off (Vacation, Sick & Public Holidays); Family Leave (Maternity, Paternity); Short-Term & Long-Term Disability; Wellness Resources
    $102k-146k yearly est. 3d ago
  • Data Engineer (AI, ML, and Data Science)

    Consumer Reports

    Data scientist job in Yonkers, NY

    Job Description

    WHO WE ARE: Consumer Reports is an independent, nonprofit organization dedicated to a fair and just marketplace for all. CR is known for our rigorous testing and trusted ratings on thousands of products and services. We report extensively on consumer trends and challenges, and survey millions of people in the U.S. each year. We leverage our evidence-based approach to advocate for consumer rights, working with policymakers and companies to find solutions for safer products and fair practices. Our mission starts with you. We offer medical benefits that start on your first day as a CR employee, including behavioral health coverage, family planning, and a generous 401K match. Learn more about how CR advocates on behalf of our employees.

    OVERVIEW: Data powers everything we do at CR, and it's the foundation for our AI and machine learning efforts that are transforming how we serve consumers. The Data Engineer (AI/ML & Data Science) will play a critical role in building the data infrastructure that powers advanced AI applications, machine learning models, and analytics systems across CR. Reporting to the Associate Director, AI/ML & Data Science, you will design and maintain robust data pipelines and services that support experimentation, model training, and AI application deployment. If you're passionate about solving complex data challenges, working with cutting-edge AI technologies, and enabling impactful, data-driven products that support CR's mission, this is the role for you. This is a hybrid position. This position is not eligible for sponsorship or relocation assistance.

    How You'll Make An Impact: As a mission-based organization, CR and our Software team are pursuing an AI strategy that will drive value for our customers, give our employees superpowers, and address AI harms in the digital marketplace. We're looking for an AI/ML engineer to help us execute on our multi-year roadmap around generative AI. As a Data Engineer (AI/ML & Data Science) you will:
    * Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data to support AI/ML model and application development, evaluation, and monitoring.
    * Build and optimize data processing workflows in Databricks, AWS SageMaker, or similar cloud platforms.
    * Collaborate with AI/ML engineers to deliver clean, reliable datasets for model training and inference.
    * Implement data quality, observability, and lineage tracking within the ML lifecycle.
    * Develop data APIs/microservices to power AI applications and reporting/analytics dashboards.
    * Support the deployment of AI/ML applications by building and maintaining feature stores and data pipelines optimized for production workloads.
    * Ensure adherence to CR's data governance, security, and compliance standards across all AI and data workflows.
    * Work with Product, Engineering, and other stakeholders to define project requirements and deliverables.
    * Integrate data from multiple internal and external systems, including APIs, third-party datasets, and cloud storage.

    ABOUT YOU - You'll Be Highly Rated If:
    * You have the experience. You have 3+ years of experience designing and developing data pipelines, data models/schemas, APIs, or services for analytics or ML workloads.
    * You have the education. You've earned a Bachelor's degree in Computer Science, Engineering, or a related field.
    * You have programming skills. You are skilled in Python and SQL, and have experience with PySpark on large-scale datasets.
    * You have experience with data orchestration tools such as Airflow, dbt, and Prefect, plus CI/CD pipelines for data delivery.
    * You have experience with data and AI/ML platforms such as Databricks, AWS SageMaker, or similar.
    * You have experience working with Kubernetes on cloud platforms like AWS, GCP, or Azure.

    You'll Be One of Our Top Picks If:
    * You are passionate about automation and continuous improvement.
    * You have excellent documentation and technical communication skills.
    * You are an analytical thinker with troubleshooting abilities.
    * You are self-driven and proactive in solving infrastructure bottlenecks.

    FAIR PAY AND A JUST WORKPLACE: At Consumer Reports, we are committed to fair, transparent pay and we strive to provide competitive, market-informed compensation. The target salary range for this position is $100K-$120K. It is anticipated that most qualified candidates will fall near the middle of this range. Compensation for the successful candidate will be informed by the candidate's particular combination of knowledge, skills, competencies, and experience. We have three locations: Yonkers, NY; Washington, DC; and Colchester, CT. We are registered to do business in and can only hire from the following states and federal district: Arizona, California, Connecticut, Illinois, Maryland, Massachusetts, Michigan, New Hampshire, New Jersey, New York, Texas, Vermont, Virginia, and Washington, DC. Consumer Reports is an equal opportunity employer and does not discriminate in employment on the basis of actual or perceived race, color, creed, religion, age, national origin, ancestry, citizenship status, sex or gender (including pregnancy, childbirth, related medical conditions or lactation), gender identity and expression (including transgender status), sexual orientation, marital status, military service or veteran status, protected medical condition as defined by applicable state or local law, disability, genetic information, or any other basis protected by applicable federal, state or local laws. Consumer Reports will provide you with any reasonable assistance or accommodation for any part of the application and hiring process.
    $100k-120k yearly 16d ago
  • Tech Lead, Data & Inference Engineer

    Catalyst Labs

    Data scientist job in Stamford, CT

Our Client A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts. About Us Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations. We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems. Location: San Francisco Work type: Full Time. Compensation: above-market base + bonus + equity Roles & Responsibilities Lead the design, development, and scaling of an end-to-end data platform, from ingestion to insights, ensuring that data is fast, reliable, and ready for business use. Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party application programming interfaces into trusted, low-latency systems. 
Take full ownership of reliability, cost, and service level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions. Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops. Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes, and usage-based decision making. Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical decisions on whether to build internally or buy externally. Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases. Mentor engineers, review code, and raise the overall technical standard across teams. Promote data-driven best practices throughout the organization. Qualifications Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics. Excellent written and verbal communication; proactive and collaborative mindset. Comfortable in hybrid or distributed environments with strong ownership and accountability. A founder-level bias for action: identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes. Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly. Core Experience 6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design. Expert SQL (query optimization on large datasets) and Python skills. 
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect). Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability. Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure). Bonus: Strong Node.js skills for faster onboarding and system integration. Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
    $84k-114k yearly est. 34d ago
  • Sr. Data Engineer

    A.M. Best 4.4company rating

    Data scientist job in Waldwick, NJ

* Flexible and hybrid work arrangements * Paid time off/Paid company holidays * Medical plan options/prescription drug plan * Dental plan/vision plan options * Flexible spending and health savings accounts * 401(k) retirement savings plan with a Roth savings option and company matching contributions * Educational assistance program Overview The Sr. Data Engineer requires expertise in DataOps and is responsible for guaranteeing dependable, scalable, and effective data processes and solutions by combining components, functions and skills from data engineering, software engineering and IT Operations. This role is critical to AM Best's goal of developing, maintaining and scaling a sophisticated platform for data and analytics. Responsibilities Create and support scalable data solutions: * Build high performance data systems including databases, APIs, and data integration pipelines * Implement a metadata-driven architecture and infrastructure as code approach to automate and simplify the design, deployment, and management of data systems * Facilitate the adoption of data and software engineering best practices, including code review, testing, and continuous integration and delivery (CI/CD) * Develop and establish a data governance framework * Continuously monitor process performance and implement efficiency improvements, including fine-tuning existing ETL processes, optimizing queries, or refactoring code * Assess and make optimal use of cloud platforms and technologies, especially Azure and Databricks, to enhance system architecture * Implement data quality checks and build processes to identify and resolve data issues * Create and maintain documentation for data architecture, standards, and best practices * Contribute designs, code, tooling, testing, and operational support Increase operational efficiency: * Identify opportunities for process optimization and automation to enhance data operations efficiency Leadership and innovation: * Provide technical 
leadership to the data engineering team and actively lead design discussions * Ensure that the new data infrastructure remains modern and efficient by staying knowledgeable of the latest tools, technologies, and methodologies in the data and analytics space Qualifications * 7+ years of experience working as a Data Engineer, Data Architect, Database Developer or similar roles * Bachelor's Degree in Computer Science, Data Science, Engineering or related field Skills * 7+ years of experience in programming languages such as Python, Scala, Java, Rust or similar * 7+ years of experience in Oracle OLTP database design and development * 7+ years of experience in building ETL / ELT data pipelines using Apache Spark, Airflow, dbt or similar * 7+ years of experience in developing APIs that include REST, SOAP, RPC, GraphQL or similar * 7+ years of experience in OLAP / data warehouse design and development using dimensional (Kimball star-schema), data vault or Inmon methodologies * 5+ years of experience in NoSQL database design and development, preferably on MongoDB or similar * 5+ years of experience working with message broker platforms like RabbitMQ, Apache Kafka, Red Hat AMQ or similar * 5+ years of experience working with Azure or AWS cloud-native technologies / services * Working knowledge of CI/CD automation tools or services like Jenkins, Ansible, Chef, Puppet or similar * Working knowledge of containerized platforms or services like Docker, Kubernetes or similar * Must be hands-on, have a passion for data and problem solving, and an innovative mindset to help deliver results across projects Pluses: * Experience working with the Databricks data intelligence platform * Experience working with business intelligence tools / platforms like PowerBI, Cognos, Tableau or similar * Experience working with caching databases: Memcached, Redis or similar * Experience working with search databases and platforms: Elasticsearch, Apache Solr or similar * Experience working with data modeling tools: ERwin, ERStudio, Navicat, SqlDBM or similar * Knowledge of data governance tools: Collibra, Alation or similar * Knowledge of master data management tools: Informatica, TIBCO, Riversand, Ataccama or similar
    $99k-138k yearly est. Auto-Apply 60d+ ago
  • C++ Market Data Engineer (USA)

    Trexquant 4.0company rating

    Data scientist job in Stamford, CT

    Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run. Responsibilities * Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE). * Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate. * Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems. * Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance. * Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages. * Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning. * Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading. * Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
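The gap-recovery responsibility in the listing above can be illustrated with a minimal sketch. The production handlers would be in modern C++ as the role requires; this only shows the detection logic, and the message sequence and `GapDetector` name are hypothetical:

```python
# Minimal sketch of multicast gap detection: each venue message carries a
# monotonically increasing sequence number, so a jump past the expected value
# means packets were lost and a retransmission/replay request should be issued.
class GapDetector:
    def __init__(self):
        self.expected = None   # next sequence number we expect to see
        self.gaps = []         # (first_missing, last_missing) ranges

    def on_message(self, seq: int) -> None:
        if self.expected is not None and seq > self.expected:
            # Messages expected..seq-1 never arrived: record the gap so a
            # recovery request can be sent before normal processing resumes.
            self.gaps.append((self.expected, seq - 1))
        self.expected = seq + 1

detector = GapDetector()
for seq in [1, 2, 3, 6, 7, 10]:
    detector.on_message(seq)
print(detector.gaps)  # gaps at 4-5 and 8-9
```

A real feed handler layers venue-specific recovery protocols, out-of-order arbitration between A/B feeds, and replay on top of this basic check.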
    $95k-136k yearly est. 5d ago
  • Network Planning Data Scientist (Manager)

    Atlas Air 4.9company rating

    Data scientist job in White Plains, NY

    Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes. This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency. Responsibilities * Analyze and Monitor Network Performance * Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements. * Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness. * Modeling & Optimization * Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions. * Support the design, testing, and implementation of software optimization tools to enhance operational decision-making. * Reporting & Governance * Develop periodic performance and reliability reports for customers, assisting in presentation creation * Produce regular and ad hoc reports to monitor compliance with established operating parameters. * Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets. * Cross-Functional Collaboration * Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution. 
* Provide insights that inform schedule adjustments, fleet utilization, and contingency planning. * Innovation & Continuous Improvement * Identify opportunities to streamline workflows and automate recurring analyses. * Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility. Qualifications * Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools, reporting tools, and dashboards (e.g., Tableau, Power BI) * Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science or related quantitative discipline, or equivalent work experience. * 5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning * Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation. * Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment. * Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders. * A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement. * Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations Salary Range: $131,500 - $177,500 Financial offer within the stated range will be based on multiple factors, including but not limited to location, relevant experience/level and skillset. The Company is an Equal Opportunity Employer. 
It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected by applicable federal, state and local laws. If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ****************************************** To view our Pay Transparency Statement, please click here: Pay Transparency Statement "Know Your Rights: Workplace Discrimination is Illegal" Poster The "EEO Is The Law" Poster
    $131.5k-177.5k yearly Auto-Apply 6d ago
  • Data Platform Engineer / Sr. Data Platform Engineer

    Boehringer Ingelheim 4.6company rating

    Data scientist job in Ridgefield, CT

**Compensation Data** This position offers a base salary typically between $115,000 and $181,000+. This position may be eligible for a role-specific variable or performance-based bonus and/or other compensation elements. For an overview of our benefits please click here. (***************************************************************** **Description** We built Boehringer Ingelheim's Data Platform to maximize the value of our data and foster a more data-driven culture and mindset. As part of the Platform Support and Onboarding team, you play a key role in ensuring our users get started working on top of our platform and have a smooth experience along their data-intensive software delivery journey. As an employee of Boehringer Ingelheim, you will actively contribute to the discovery, development and delivery of our products to our patients and customers. Our global presence provides opportunity for all employees to collaborate internationally, offering visibility and opportunity to directly contribute to the company's success. We realize that our strength and competitive advantage lie with our people. We support our employees in a number of ways to foster a healthy working environment, meaningful work, diversity and inclusion, mobility, networking and work-life balance. Our competitive compensation and benefit programs reflect Boehringer Ingelheim's high regard for our employees. **Duties & Responsibilities** + Ensure proper onboarding into our Platform offering. You explain our platform features to new delivery teams and advise on how to operate them. You assess the maturity of our new teams. 
+ Collaborate with different product teams, helping them refine their technical architecture and enabling the teams to deliver their products in the best possible way on top of our Data Platform + Give solution architects advice on how to accommodate their architecture into our Platform offering + Support our delivery teams and user community in resolving incidents, problems, and queries related to our Platform offering + Partner with other IT functions to unblock Platform-related issues impacting our product delivery teams. + Contribute to co-creating specific features of our Platform offering, if necessary. + Know our wider platform offering, related to software engineering, data integration, analytics and AI, and guide potential users and stakeholders to our subject matter experts. + Provide consultancy to potential users and product delivery teams on how to leverage our Platform offering + Connect with the Data Platform Development team to share customer feedback, which helps improve user experience and our platform's technology backbone + Contribute to content-creation and presentation activities to support customer awareness, understanding, and excitement around our product. **Requirements** Data Platform Engineer An Associate's degree in Computer Science or MIS with a minimum of 4 years of experience; or a Bachelor's degree in Computer Science, MIS, or a related field with a minimum of 2 years of experience; or a Master's degree in Computer Science or MIS with a minimum of 1 year of experience; or a minimum of 4 years of relevant Business or IT experience. Sr. Data Platform Engineer + Expert level of technical understanding and demonstrated knowledge for their area of responsibility. + Strong technical design and architecture skills, and project management experience. + Strong foundation in software development life cycle methodology. 
+ An Associate's degree in Computer Science, MIS or a related field with a minimum of 11 years of experience; or a Bachelor's degree in Computer Science, MIS, or a related field with a minimum of 9 years of experience; or a Master's degree in Computer Science, MIS, or a related field with a minimum of 7 years of experience; or a minimum of 11 years of relevant Business or IT experience. Minimum of 6 years of programming experience preferred. Additional: + At least 5 years of experience in a similar role + At least 5 years of experience designing, documenting and deploying cloud architectures using different technologies (AWS, Snowflake, dbt, Iceberg...) + At least 5 years of hands-on software development, operations, data lakes, integration with data catalogues... + Master's degree in Computer Science, Information Technology, or equivalent experience + Advanced knowledge of data platform related technologies such as data-related AWS, Snowflake, dbt... + Knowledge of a programming language (Python, JavaScript, Go, Java, etc.) and/or scripting, Infrastructure as Code, etc. + Good command of engineering practices such as code refactoring, design patterns, design-driven development, CI/CD, building highly scalable applications and APIs, and security + Understanding of cloud networks (VPC, security groups) or experience in Cloud security (firewalls, encryption, IAM, roles) is a plus + Experience with migration projects (data or software) is a plus + Ability to explain complex technical concepts in a simple, user-friendly manner + Ability to work effectively within an international and intercultural environment + Excellent communication skills + Experience in training content development or as a trainer is a plus + Willingness and readiness to travel occasionally. 
**_Eligibility Requirements:_** + _Must be legally authorized to work in the United States without restriction._ + _Must be willing to take a drug test and post-offer physical (if required)_ + _Must be 18 years of age or older_ All qualified applicants will receive consideration for employment without regard to a person's actual or perceived race, including natural hairstyles, hair texture and protective hairstyles; color; creed; religion; national origin; age; ancestry; citizenship status, marital status; gender, gender identity or expression; sexual orientation, mental, physical or intellectual disability, veteran status; pregnancy, childbirth or related medical condition; genetic information (including the refusal to submit to genetic testing) or any other class or characteristic protected by applicable law.
    $115k-181k yearly 60d+ ago
  • Data Engineer

    A.M. Best 4.4company rating

    Data scientist job in Waldwick, NJ

* Flexible and hybrid work arrangements * Paid time off/Paid company holidays * Medical plan options/prescription drug plan * Dental plan/vision plan options * Flexible spending and health savings accounts * 401(k) retirement savings plan with a Roth savings option and company matching contributions * Educational assistance program Overview The Data Engineer will be responsible for designing, building, and optimizing scalable data solutions to support a wide range of business needs. This role requires a strong ability to work both independently and collaboratively in a fast-paced, agile environment. The ideal candidate will engage with cross-functional teams to gather data requirements, propose enhancements to existing data pipelines and structures, and ensure the reliability and efficiency of data processes. Responsibilities * Develop and maintain scripts and tools using Python, PowerShell, and R * Design, write, and optimize SQL queries for performance and scalability * Help modernize 'legacy' solutions to realign with our current code base and tech stack * Assist in redevelopment, improvement and ongoing maintenance of existing data, analytics, and reporting solutions * Ensure accurate and efficient data integration of diverse data sources and formats * Enhance and support database functions and procedures * Optimize data access and data processing workflows for performance, scalability, and efficiency * Implement data quality checks and validations to ensure the accuracy, consistency, and completeness of data * Identify and resolve performance bottlenecks, investigate and troubleshoot data related issues, and provide solutions to address defects * Seamlessly transition between production support and development tasks based on business needs * Deploy and manage code utilizing engineering best practices in non-prod and prod environments Qualifications * Bachelor's Degree in computer science, data science, software engineering, information systems, or related quantitative field. * Minimum 4 years of experience working as a Python Developer, Solutions Engineer, Data Engineer, or similar roles. Skills * 4+ years of solid continuous experience in Python * 3+ years of solid experience writing SQL and PL/SQL code * 3+ years of experience working with relational databases (solid understanding of Oracle preferred) * 2+ years of experience scripting with PowerShell * Experience programming in R * Experience with web application frameworks including Shiny, Dash, Streamlit * Experience with CI/CD utilizing git/Azure DevOps * Knowledge of alternative storage formats including Parquet/Arrow/Avro * Ability to collaborate within and across teams of different technical knowledge to support delivery of solutions * Expert problem-solving skills, including debugging skills, enabling the determination of sources of issues in unfamiliar code or systems Pluses, but not required: Any work experience in the following: ETL / ELT tools: Spark, Kafka, Azure Data Factory (ADF) NoSQL databases: MongoDB, Cosmos DB, DocumentDB or similar Languages: SAS, Java, Scala, .Net
    $99k-138k yearly est. Auto-Apply 60d+ ago
  • Data Platform Engineer (USA)

    Trexquant 4.0company rating

    Data scientist job in Stamford, CT

    Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a highly motivated and technically rigorous Data Platform Engineer to help modernize our foundational data infrastructure. As a Data Platform Engineer, you will be at the center of building the systems that ensure the quality, reliability, and discoverability of mission-critical data. Your work will directly impact the data operators and downstream consumers by creating robust tools, monitoring, and workflows that ensure accuracy, validity, and timeliness of data across the firm. Responsibilities * Architect and maintain core components of the Data Platform with a strong focus on reliability and scalability. * Build and maintain tools to manage data feeds, monitor validity, and ensure data timeliness. * Design and implement event-based data orchestration pipelines. * Evaluate and integrate data quality and observability tools via POCs and MVPs. * Stand up a data catalog system to improve data discoverability and lineage tracking. * Collaborate closely with infrastructure teams to support operational excellence and platform uptime. * Write and maintain data quality checks to validate real-time and batch data. * Validate incoming real-time data using custom Python-based validators. * Ensure low-level data correctness and integrity, especially in high-precision environments. * Build robust and extensible systems that will be used by data operators to ensure the health of our data ecosystem. * Own the foundational systems used by analysts and engineers alike to trust and explore our datasets.
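The "custom Python-based validators" the listing mentions might, in minimal sketch form, look like the following. Field names, checks, and thresholds are invented for illustration:

```python
# Hypothetical record validator for an incoming real-time feed: each check
# returns an error string or None, and a record passes only if every check passes.
def check_required(record, fields=("symbol", "price", "timestamp")):
    missing = [f for f in fields if record.get(f) is None]
    return f"missing fields: {missing}" if missing else None

def check_price(record):
    price = record.get("price")
    if isinstance(price, (int, float)) and price > 0:
        return None
    return f"invalid price: {price!r}"

def validate(record, checks=(check_required, check_price)):
    """Return the list of failures; an empty list means the record is valid."""
    return [err for err in (c(record) for c in checks) if err]

good = {"symbol": "ABC", "price": 101.5, "timestamp": 1700000000}
bad = {"symbol": "ABC", "price": -3, "timestamp": None}
print(validate(good))  # []
print(validate(bad))   # two failures: missing timestamp, invalid price
```

In a platform like the one described, checks of this shape would run inside orchestrated pipelines, with failures routed to monitoring rather than printed.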
    $95k-136k yearly est. 5d ago
  • Tech Lead, Data & Inference Engineer

    Catalyst Labs

    Data scientist job in Goldens Bridge, NY

Job Description Our Client A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts. About Us Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations. We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems. Location: San Francisco Work type: Full Time. Compensation: above-market base + bonus + equity Roles & Responsibilities Lead the design, development, and scaling of an end-to-end data platform, from ingestion to insights, ensuring that data is fast, reliable, and ready for business use. Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party application programming interfaces into trusted, low-latency systems. 
Take full ownership of reliability, cost, and service level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions. Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops. Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes, and usage-based decision making. Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical decisions on whether to build internally or buy externally. Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases. Mentor engineers, review code, and raise the overall technical standard across teams. Promote data-driven best practices throughout the organization. Qualifications Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics. Excellent written and verbal communication; proactive and collaborative mindset. Comfortable in hybrid or distributed environments with strong ownership and accountability. A founder-level bias for action: identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes. Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly. Core Experience 6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design. Expert SQL (query optimization on large datasets) and Python skills. 
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect). Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability. Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure). Bonus: Strong Node.js skills for faster onboarding and system integration. Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
    $90k-123k yearly est. 3d ago
  • Data Engineer - Python Programmer

    A.M. Best 4.4company rating

    Data scientist job in Waldwick, NJ

    * Flexible and hybrid work arrangements
    * Paid time off/paid company holidays
    * Medical plan options/prescription drug plan
    * Dental plan/vision plan options
    * Flexible spending and health savings accounts
    * 401(k) retirement savings plan with a Roth savings option and company matching contributions
    * Educational assistance program

    Overview
    The Data Engineer is responsible for designing, building, and optimizing scalable data solutions to support a wide range of business needs. This role requires a strong ability to work both independently and collaboratively in a fast-paced, agile environment. The ideal candidate will engage with cross-functional teams to gather data requirements, propose enhancements to existing data pipelines and structures, and ensure the reliability and efficiency of data processes.

    Responsibilities
    * Continued build-out and improvement of the current data stores (Oracle, Ceph, and possibly MongoDB), including data access and persistence
    * Prior knowledge of and experience with database concepts such as SQL tuning, indexes, views, and stored procedures
    * Proficiency in fundamental algorithms and data structures; server-side Python processes using concurrency patterns with asyncio, multi-processing, and threading; comfortable working with NumPy, Pandas, Python collections, etc.
    * API development using REST; strong working knowledge of FastAPI, with a primary focus on mastering the REST protocol; experience with gRPC and socket-based communication is a valuable plus
    * Mastery of the typical software development life cycle and deployment processes; experience with Git, MS Azure DevOps, Artifactory, etc.; must be comfortable building CI/CD pipelines
    * Experience developing applications and managing systems in Red Hat Enterprise Linux (RHEL) environments

    Qualifications
    * Associate's degree preferred, with 5 to 7 years of demonstrated server-side development proficiency
    * Bachelor's degree preferred, with 3 to 5 years of demonstrated server-side development proficiency

    Skills
    * Programming languages: Python (NumPy, Pandas, Oracle PL/SQL); other compiled languages such as Java, C++, or Rust are a plus; must be proficient at an intermediate-to-advanced level (concurrency, memory management, etc.)
    * Design patterns: typical GoF patterns (Factory, Facade, Singleton, etc.)
    * Data structures: maps, lists, arrays, etc.
    * SCM: solid Git proficiency; MS Azure DevOps (CI/CD)
    * SQL: proficiency with Oracle indexes, SQL tuning, views, stored procedures, and functions
    * OS: most development is on Red Hat Linux, but should be comfortable with Windows; some Unix shell scripting may be needed from time to time
    * API development: proficiency with HTTP REST required; gRPC/sockets a plus
    * Comfortable working iteratively in a dynamic, flexible environment
    * Strong self-management skills with a focus on timely completion of tasks
    * Ability to converse with technical and non-technical groups
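    The asyncio, threading, and executor patterns this listing asks about can be illustrated with a minimal, hypothetical sketch; the function names and workloads below are invented stand-ins, not part of the actual role:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def blocking_query(n: int) -> int:
    # Stand-in for a blocking call (e.g., an Oracle query); it runs in a
    # worker thread so it doesn't stall the event loop.
    return n * 10

async def fetch_record(n: int) -> int:
    # Stand-in for a natively async I/O call.
    await asyncio.sleep(0.01)
    return n + 1

async def main() -> list[int]:
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as pool:
        # Run the async fetches concurrently on the event loop...
        fetched = await asyncio.gather(*(fetch_record(i) for i in range(3)))
        # ...then fan the blocking work out to the thread pool.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, blocking_query, v) for v in fetched)
        )
    return list(results)

if __name__ == "__main__":
    print(asyncio.run(main()))  # [10, 20, 30]
```

    The same `run_in_executor` pattern works with a `ProcessPoolExecutor` when the blocking work is CPU-bound rather than I/O-bound.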
    $99k-138k yearly est. 40d ago
  • Data Platform Engineer (USA)

    Trexquant Investment 4.0company rating

    Data scientist job in Stamford, CT

    Job Description
    Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a highly motivated and technically rigorous Data Platform Engineer to help modernize our foundational data infrastructure. As a Data Platform Engineer, you will be at the center of building the systems that ensure the quality, reliability, and discoverability of mission-critical data. Your work will directly impact data operators and downstream consumers by creating robust tools, monitoring, and workflows that ensure the accuracy, validity, and timeliness of data across the firm.

    Responsibilities
    * Architect and maintain core components of the Data Platform with a strong focus on reliability and scalability.
    * Build and maintain tools to manage data feeds, monitor validity, and ensure data timeliness.
    * Design and implement event-based data orchestration pipelines.
    * Evaluate and integrate data quality and observability tools via POCs and MVPs.
    * Stand up a data catalog system to improve data discoverability and lineage tracking.
    * Collaborate closely with infrastructure teams to support operational excellence and platform uptime.
    * Write and maintain data quality checks to validate real-time and batch data, including custom Python-based validators for incoming real-time data.
    * Ensure low-level data correctness and integrity, especially in high-precision environments.
    * Build robust and extensible systems that data operators will use to ensure the health of our data ecosystem.
    * Own the foundational systems used by analysts and engineers alike to trust and explore our datasets.

    Requirements
    * A Bachelor's degree in Computer Science or a related field; advanced degree preferred.
    * 3+ years of hands-on experience with Python in a data engineering or backend development context.
    * Experience with distributed data systems (e.g., Spark, Kafka, Airflow).
    * Proven experience running POCs and evaluating data quality and data platform tools.
    * Demonstrated interest and experience in low-level data reliability, correctness, and observability.
    * Familiarity with systems-level thinking and the principles of data operations in production.
    * Background in high-performance computing or real-time data processing is a plus.
    * Prior experience in a quantitative or financial setting is highly desirable.

    Benefits
    * Competitive salary, plus bonus based on individual and company performance.
    * Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
    * PPO health, dental, and vision insurance premiums fully covered for you and your dependents.
    * Pre-tax commuter benefits, making your commute smoother.

    Trexquant is an Equal Opportunity Employer.
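    The "custom Python-based validators" this listing describes might look like the following minimal sketch; the `Tick` schema, field names, and staleness threshold are invented for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    price: float
    ts: float  # event timestamp, epoch seconds

def validate_tick(tick: Tick, now: float, max_age_s: float = 5.0) -> list[str]:
    """Return the names of failed checks (an empty list means valid)."""
    errors = []
    if not tick.symbol:
        errors.append("missing_symbol")
    if not math.isfinite(tick.price) or tick.price <= 0:
        errors.append("bad_price")        # correctness check
    if now - tick.ts > max_age_s:
        errors.append("stale")            # timeliness check
    return errors
```

    Returning a list of failed check names, rather than raising on the first failure, lets a monitoring pipeline count and alert on each check independently.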
    $95k-136k yearly est. 11d ago
  • Tech Lead, Data & Inference Engineer

    Catalyst Labs

    Data scientist job in Goldens Bridge, NY

    Our Client
    A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google, and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend, and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.

    About Us
    Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations. We collaborate directly with founders, CTOs, and heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.

    Location: San Francisco
    Work type: Full Time
    Compensation: above-market base + bonus + equity

    Roles & Responsibilities
    * Lead the design, development, and scaling of an end-to-end data platform, from ingestion to insights, ensuring that data is fast, reliable, and ready for business use.
    * Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party APIs into trusted, low-latency systems.
    * Take full ownership of reliability, cost, and service-level objectives, including achieving 99.9% uptime, maintaining minutes-level latency, and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
    * Operate inference pipelines that enhance and enrich data, including enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops.
    * Work across teams to deliver data as a product through clear data contracts, ownership models, lifecycle processes, and usage-based decision making.
    * Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade-offs, and reversibility while making practical build-versus-buy decisions.
    * Scale integration with APIs and internal services while ensuring data consistency, high data quality, and support for both real-time and batch-oriented use cases.
    * Mentor engineers, review code, and raise the overall technical standard across teams. Promote data-driven best practices throughout the organization.

    Qualifications
    * Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
    * Excellent written and verbal communication; proactive and collaborative mindset.
    * Comfortable in hybrid or distributed environments with strong ownership and accountability.
    * A founder-level bias for action: identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
    * Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.

    Core Experience
    * 6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
    * Expert SQL (query optimization on large datasets) and Python skills.
    * Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
    * Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
    * Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
    * Bonus: strong Node.js skills for faster onboarding and system integration.
    * Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
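    The "data contracts" responsibility this listing mentions can be sketched in a few lines of Python; the dataset name, fields, and types below are hypothetical, chosen only to illustrate the idea of a producer/consumer agreement enforced in code:

```python
# Hypothetical contract: required fields and types for an "audiences" dataset.
CONTRACT: dict[str, type] = {
    "account_id": str,
    "match_rate": float,
    "channel": str,
}

def violates_contract(record: dict) -> list[str]:
    """Return contract violations for one record (empty list = conforming)."""
    problems = []
    for field, expected_type in CONTRACT.items():
        if field not in record:
            problems.append(f"missing:{field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"type:{field}")
    return problems
```

    In practice such checks usually run at pipeline boundaries (e.g., as dbt tests or pre-load validation), so a producer cannot silently break downstream consumers.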
    $90k-123k yearly est. 34d ago

Learn more about data scientist jobs

How much does a data scientist earn in Greenwich, CT?

The average data scientist in Greenwich, CT earns between $64,000 and $122,000 annually. This compares to the national average data scientist range of $75,000 to $148,000.

Average data scientist salary in Greenwich, CT

$88,000

What are the biggest employers of Data Scientists in Greenwich, CT?

The biggest employers of Data Scientists in Greenwich, CT are:
  1. Atlas Air
  2. Gartner
  3. W. R. Berkley
  4. New York Blood Center