*Primary Responsibilities*
· Design and implement responsive, accessible, and user-friendly interfaces using React and modern JavaScript/TypeScript practices.
· Collaborate with UX/UI designers, clinicians, and backend engineers to translate complex medical workflows into clear, efficient, and compliant user experiences.
· Integrate front-end applications with backend services, APIs, and healthcare data standards (e.g., FHIR, HL7).
· Maintain high standards of security and data privacy, ensuring compliance with HIPAA and healthcare industry requirements.
· Participate in code reviews, ensuring maintainability, performance, and adherence to coding best practices.
· Implement unit, integration, and end-to-end tests to safeguard against regressions and ensure software reliability.
· Optimize front-end performance for large datasets and real-time clinical applications.
· Stay current with front-end trends and emerging healthcare technology standards to continuously improve user experience.
· Collaborate in Agile development teams, contributing to sprint planning, stand-ups, and retrospectives.
· Provide feedback and contribute to design system improvements and reusable UI component libraries.
*Technical Skills*
· At least 2 years of front-end development experience, preferably in healthcare or other regulated industries.
· Strong proficiency in React (Hooks, Context API, state management) and TypeScript/JavaScript (ES6+).
· Experience with HTML5, CSS3, modern UI frameworks, and responsive design.
· Familiarity with RESTful APIs and front-end/backend integration.
· Experience with testing frameworks (e.g., Jest, React Testing Library, Cypress).
· Proficiency with Git, CI/CD workflows, and agile development practices.
· Bonus: Experience with data visualization libraries (e.g., D3.js, Recharts, Chart.js) for healthcare dashboards.
*Soft Skills*
· Strong collaboration skills with designers, product managers, and clinicians.
· Excellent written and verbal communication, able to explain technical solutions to non-technical healthcare stakeholders.
· Detail-oriented, with an emphasis on accuracy and usability in safety-critical applications.
· Self-driven, adaptable, and passionate about building software that improves healthcare outcomes.
*Qualifications*
· Bachelor's or Master's degree in Computer Science, Software Engineering, or related field.
· Healthcare technology experience (EHR/EMR, medication management systems, or clinical decision support tools) is highly desirable.
· Commitment to continuous learning, professional development, and regulatory awareness in the healthcare domain.
*We are an EOE*
Job Type: Full-time
Pay: $80,000.00 - $120,000.00 per year
Benefits:
* 401(k)
* Dental insurance
* Health insurance
* Paid time off
* Vision insurance
Education:
* Bachelor's (Required)
Ability to Commute:
* Woodbury, NY 11797 (Required)
Work Location: In person
Senior Software Development Engineer in Test
Interactive Brokers Group, Inc.
Data engineer job in Greenwich, CT
Interactive Brokers Group, Inc. (Nasdaq: IBKR) is a global financial services company headquartered in Greenwich, CT, USA, with offices in over 15 countries. We have been at the forefront of financial innovation for over four decades, known for our cutting-edge technology and client commitment.
IBKR affiliates provide global electronic brokerage services around the clock on stocks, options, futures, currencies, bonds, and funds to clients in over 200 countries and territories. We serve individual investors and institutions, including financial advisors, hedge funds, and introducing brokers. Our advanced technology, competitive pricing, and global market access help our clients make the most of their investments.
Barron's has recognized Interactive Brokers as the #1 online broker for six consecutive years. Join our dynamic, multi-national team and be a part of a company that simplifies and enhances financial opportunities using state-of-the-art technology.
This is a hybrid role (3 days in the office/2 days remote).
About Your Team:
The Tools Engineering team provides world-class tools across the firm (developers, quality engineers, and traders) to solve productivity issues and offer easy ways to build environments at runtime. The team also provides an automated testing framework that allows end users (developers/QA) to write functional integration test cases using simple scripts, mocking/stubbing various inputs and outputs to Interactive Brokers' front-office trading systems when required.
What will be your responsibilities within IBKR:
We seek a self-driven, self-motivated software developer with expertise in Python programming.
The ideal candidate will be able to design and develop solutions based on end users' requirements and needs.
Consistently deliver on timelines with the highest quality of work.
The candidate should be able to troubleshoot problems related to the Linux operating system and trading systems, both independently and in collaboration with other team members.
The candidate should have a proven problem-solving track record.
Which Skills Are Required:
Overall, 7-10+ years of experience in the financial industry, specifically in front-office trading, is a must.
10+ years of experience with Python programming language is a must.
Must have a deep understanding of FIX protocol.
Strong domain knowledge of financial asset classes (stocks, options), market data concepts, and FIX connectivity.
Subject-matter expert in building efficient and scalable automation frameworks using Pytest.
Good understanding of the Linux operating system.
Good understanding of the Git version control system.
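The Pytest-based automation frameworks this role calls for typically combine fixtures, parametrization, and mocking/stubbing. A minimal illustrative sketch follows; the gateway API and all names are hypothetical, not Interactive Brokers internals:

```python
import pytest
from unittest.mock import MagicMock

def stub_gateway():
    # Stub standing in for a front-office trading-system connection
    # (hypothetical API, for illustration only).
    gw = MagicMock()
    gw.send_order.return_value = {"status": "ACK", "order_id": 42}
    return gw

@pytest.fixture
def gateway():
    # Fixture so each test receives a fresh stubbed connection.
    return stub_gateway()

# Parametrization runs the same functional scenario for several symbols.
@pytest.mark.parametrize("symbol", ["AAPL", "SPY", "QQQ"])
def test_order_is_acknowledged(gateway, symbol):
    reply = gateway.send_order(symbol=symbol, qty=100, side="BUY")
    assert reply["status"] == "ACK"
```

Run with `pytest -q`; each parametrized case reports as a separate test.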
To be successful in this position, you will have the following:
Docker experience will be a plus.
Knowledge of Java and Perl is a plus.
Back-office and clearing experience is a plus.
Self-motivated and able to handle tasks with minimal supervision.
Superb analytical skills.
Excellent collaboration and communication (verbal and written) skills.
Outstanding organizational and time management skills.
Company Benefits & Perks
Competitive salary, annual performance-based bonus and stock grant
Retirement plan 401(k) with a competitive company match
Excellent health and wellness benefits, including medical, dental, and vision benefits, and a company-paid medical healthcare premium
Wellness screenings and assessments, health coaches and counseling services through an Employee Assistance Program (EAP)
Paid time off and a generous parental leave policy
Daily company lunch allowance provided, and a fully stocked kitchen with healthy options for breakfast and snack
Corporate events, including team outings, dinners, volunteer activities and company sports teams
Education reimbursement and learning opportunities
Modern offices with multi-monitor setups
Apply for this job
If you are interested in this role, please submit your application with your resume/CV. Details requested typically include contact information, education, and background relevant to the position.
Senior Software Engineer - Analytics Front Office
P2P
Data engineer job in Greenwich, CT
DRW is a diversified trading firm with over 3 decades of experience bringing sophisticated technology and exceptional people together to operate in markets around the world. We value autonomy and the ability to quickly pivot to capture opportunities, so we operate using our own capital and trading at our own risk.
Headquartered in Chicago with offices throughout the U.S., Canada, Europe, and Asia, we trade a variety of asset classes including Fixed Income, ETFs, Equities, FX, Commodities and Energy across all major global markets. We have also leveraged our expertise and technology to expand into three non-traditional strategies: real estate, venture capital and cryptoassets.
We operate with respect, curiosity and open minds. The people who thrive here share our belief that it's not just what we do that matters - it's how we do it. DRW is a place of high expectations, integrity, innovation and a willingness to challenge consensus.
Team & Role Overview
The UP - Analytics Front Office team is looking for a Senior Software Engineer who relishes working in challenging, time-critical environments, solving complex problems alongside highly capable peers. Our team operates services that provide real-time PnL and risk monitoring for a diverse group of trading desks, each with varying degrees of portfolio and model complexity.
While previous experience in the trading and finance industry is beneficial, we're looking for talented software engineers with or without industry‑specific expertise.
UP - Analytics Front Office primarily operates C# services heavily utilizing RX for LINQ‑style composition and asynchronous dispatch. We often reach for Python to build smaller services and frequently interact with analytics libraries in C++.
Responsibilities
Design, implement and operate low latency risk analytics systems as part of a highly capable team.
Decompose complex functional requirements into coherent service designs that are efficient, simple to operate, and can be changed reliably.
Provide on‑call support as part of our teamwide rotation; we split on‑call across the US and UK time zones to limit off‑hours disruptions.
Be a capable mentor who is eager to contribute unique knowledge and perspective to advance the team's capabilities.
Required Qualifications
Extensive experience designing & operating low latency distributed systems at scale for critical business functions.
Extensive experience in testing & test automation.
Fluency in functional and object‑oriented programming languages.
Competency in using Git, CI/CD platforms, Docker and Kubernetes.
Familiarity With
More than one of: C#, Java, Python, and C++.
Databases such as: MSSQL, Postgres, Redis.
Kafka/RabbitMQ or similar event‑based platforms.
Data structures and design/analysis of algorithms.
Bonus
Fixed Income products and Interest Rate derivatives (including Risk, PnL attribution, scenario analysis, etc.).
IR derivatives models (Yield Curves, Option Pricing, SABR, etc.).
Statistics, discrete mathematics, linear algebra.
Personal Traits
Possesses ability and desire to learn, adapt and grow.
Demonstrates personal humility, respect for others, and trust in teammates.
Capable of independently driving projects to completion but prefers collaborating with teammates.
Excellent problem‑solving and debugging skills, as well as strong listening and communication skills.
Strong attention to detail, with a track record of leading and driving projects to completion.
Compensation & Benefits
The annual base salary range for this position is $200k to $250k depending on the candidate's experience, qualifications, and relevant skill set. The position is also eligible for an annual discretionary bonus. DRW offers a comprehensive suite of employee benefits including group medical, pharmacy, dental and vision insurance, 401k with discretionary employer match, short and long‑term disability, life and AD&D insurance, health savings accounts, and flexible spending accounts.
Privacy Notice
For more information about DRW's processing activities and our use of job applicants' data, please view our Privacy Notice at *******************************
California residents, please review the California Privacy Notice for information about certain legal rights at ******************************************
Principal Software Engineer (Embedded Systems)
Highbrow LLC
Data engineer job in Norwalk, CT
Job Title: Principal Software Engineer (Embedded Systems)
Domain: Industrial Automation Robotics
Interview: two Teams meetings, then a potential onsite interview (client pays travel expenses)
VISA: USC, Green Card
Do not send: Konstantin Yakovlev, Kam Anjorin, Jason Lowe, Charles Lowe, Bo Lui, Gilbert Desmarias, Darrell Weaver, Vadym Kargin, Vijay Gude, Brett Porter
PLEASE FILL IN TEMPLATES #1 and #2 BELOW
Template#1 Candidate Details
Full Name (First and Last):
Phone Number:
E-Mail ID:
Salary:
US or Green card:
Current Location (City & State):
Template #2 How Many Years With
C:
C++:
RTOS:
Embedded Software:
Device Driver Software Development:
Job Details
As a software engineer, you'll tackle challenges that blend hardware and software, working on things like machine learning for organizing and categorizing algorithms, real-time system monitoring, and high-performance automation tools. The problems are complex, the scale is global, and your work directly impacts how businesses operate.
It's an environment where innovation is constant, your contributions are visible, and your growth is taken seriously.
If you're looking to write software that drives real machines, solves physical problems, and delivers impact you can see, not just in code but in motion, this is the kind of place that will keep you engaged and growing every day.
Ten years of experience with C++, embedded development, RTOS, and control systems is required for this role, along with a bachelor's degree at minimum.
Industries/Domains to target
Medical
Semiconductor
Aerospace
Defense
Industrial Control Systems
Robotics
Machines
Appliances
Embedded Devices
ATE Engineer
Russell Tobin
Data engineer job in Hauppauge, NY
Rate: $65/hr
Job description:
· 10+ years overall experience in test system software & test hardware development
· Strong knowledge of NI PXI and PCI hardware (DAQ cards, chassis); able to integrate third-party hardware.
· Experience developing or upgrading automated test equipment for avionics and aerospace programs is a plus.
· High level of expertise in developing automated testing equipment for production testing of mixed-signal products and sub-assemblies for MIL/aerospace applications, including analog circuits, embedded microprocessors and FPGAs, and supporting circuitry and power supplies.
· Experience with or knowledge of RS422/RS232, ARINC, AFDX interfaces, TCP/IP, Ethernet, UDP, and similar communication standards and protocols is desirable.
· Experience managing software code repositories and continuous integration/deployment is highly desirable.
· Strong verbal and written communication skills.
· Common I/O protocols (I2C, SPI, JTAG, RS-232, RS-422, RS-485, ARINC 429, MIL-STD-1553, etc.).
· Solid foundation of LabVIEW expertise.
· High level of experience with hands-on troubleshooting and turn-on of new test systems, including test bench equipment such as multimeters, DAQs, spectrum analyzers, JTAG and ICE probes, software, oscilloscopes, etc.
· Experience in qualifying embedded systems to MIL-STD-810, MIL-STD-461, or IEC equivalents.
· Skilled in the use of common test bench equipment such as multimeters, DAQs, oscilloscopes, power supplies, RF and optical spectrum analyzers
· Hands-on and extremely strong in system bring-up, troubleshooting, calibration, etc.
· High level of expertise in NI LabVIEW, TestStand, and Python.
· Experience in reading schematics
· Experience in test equipment failure analysis and troubleshooting skills in a production environment.
· Experience with prototyping solutions and bench testing methodology
· Experience in testing to MIL-STD-810, MIL-STD-461, MIL-STD-1275, RTCA/DO-160 and IPC-610 requirements.
· Able to handle ITAR data.
Data Scientist - Analytics
Boxncase
Data engineer job in Commack, NY
About the Role
We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture.
In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and, most importantly, tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks.
What You Will Do
Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss.
Predictive Modeling: Build and test statistical models (regression, time-series, clustering) to forecast business outcomes and customer behavior.
Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics.
Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns.
Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis.
Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations.
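The A/B-test analysis described above usually ends in a significance test on conversion rates. Here is a self-contained sketch of a pooled two-proportion z-test; all conversion counts are hypothetical:

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    # Pooled two-proportion z-test for a difference in conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF:
    # Phi(x) = (1 + erf(x / sqrt(2))) / 2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts at 6.5% vs. 5.0% for A, with 2,400 users per arm.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice the same test is one call to `statsmodels.stats.proportion.proportions_ztest`; the manual version just makes the formula explicit.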
Requirements
Experience: 2+ years of experience in Data Science or Advanced Analytics.
The Toolkit: Expert proficiency in Python or R for statistical analysis.
Data Querying: Advanced SQL skills are non-negotiable (joins, window functions, CTEs).
Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability).
Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker.
Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager.
Preferred Tech Stack (Keywords)
Languages: Python (Pandas, NumPy), R, SQL
Viz Tools: Tableau, PowerBI, Looker, Plotly
Machine Learning: Scikit-learn, XGBoost (applied to business problems)
Big Data: Spark, Hadoop, Snowflake
Benefits
Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience)
Remote Friendly: Work from where you are most productive.
Learning Budget: Stipend for data courses (Coursera, DataCamp) and books.
Data Scientist
The Connecticut Rise Network
Data engineer job in New Haven, CT
RISE Data Scientist
Reports to: Monitoring, Evaluation, and Learning Manager
Salary: Competitive and commensurate with experience
Please note: Due to the upcoming holidays, application review for this position will begin the first week of January. Applicants can expect outreach by the end of the week of January 5.
Overview:
The RISE Network's mission is to ensure all high school students graduate with a plan and the skills and confidence to achieve college and career success. Founded in 2016, RISE partners with public high schools to lead networks where communities work together to use data to learn and improve. Through its core and most comprehensive network, RISE partners with nine high schools and eight districts, serving over 13,000 students in historically marginalized communities.
RISE high schools work together to ensure all students experience success as they transition to, through, and beyond high school by using data to pinpoint needs, form hypotheses, and pursue ideas to advance student achievement. Partner schools have improved Grade 9 promotion rates by nearly 20 percentage points, while also decreasing subgroup gaps and increasing schoolwide graduation and college access rates. In 2021, the RISE Network was honored to receive the Carnegie Foundation's annual Spotlight on Quality in Continuous Improvement recognition. Increasingly, RISE is pursuing opportunities to scale its impact through research publications, consulting partnerships, professional development experiences, and other avenues to drive excellent student outcomes.
Position Summary and Essential Job Functions:
The RISE Data Scientist will play a critical role in leveraging data to support continuous improvement, program evaluation, and research, enhancing the organization's evidence-based learning and decision-making. RISE is seeking a talented and motivated individual to design and conduct rigorous quantitative analyses to assess the outcomes and impacts of programs.
The ideal candidate is an experienced analyst who is passionate about using data to drive social change, with strong skills in statistical modeling, data visualization, and research design. This individual will also lead efforts to monitor and analyze organization-wide data related to mission progress and key performance indicators (KPIs), and communicate these insights in ways that inspire improvement and action. This is an exciting opportunity for an individual who thrives in an entrepreneurial environment and is passionate about closing opportunity gaps and supporting the potential of all students, regardless of life circumstances. The role will report to the Monitoring, Evaluation, and Learning (MEL) Manager and sit on the MEL team.
Responsibilities include, but are not limited to:
1. Research and Evaluation (30%)
Collaborate with MEL and network teams to design and implement rigorous process, outcome, and impact evaluations.
Lead in the development of data collection tools and survey instruments.
Manage survey data collection, reporting, and learning processes.
Develop RISE learning and issue briefs supported by quantitative analysis.
Design and implement causal inference approaches where applicable, including quasi-experimental designs.
Provide technical input on statistical analysis plans, monitoring frameworks, and indicator selection for network programs.
Translate complex findings into actionable insights and policy-relevant recommendations for non-technical audiences.
Report data for RISE leadership and staff, generating new insights to inform program design.
Create written reports, presentations, publications, and communications pieces.
2. Quantitative Analysis and Statistical Modeling (30%)
Clean, transform, and analyze large and complex datasets from internal surveys, the RISE data warehouse, and external data sources such as the National Student Clearinghouse (NSC).
Conduct exploratory research that informs organizational learning.
Lead complex statistical analyses using advanced methods (regression modeling, propensity score matching, difference-in-differences analysis, time-series analysis, etc.).
Contribute to data cleaning and analysis for key performance indicator reporting.
Develop processes that support automation of cleaning and analysis for efficiency.
Develop and maintain analytical code and workflows to ensure reproducibility.
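Of the methods listed above, difference-in-differences reduces in its simplest 2x2 form to comparing pre/post changes across a treated and a comparison group. A minimal sketch with hypothetical promotion rates:

```python
def diff_in_diff(pre_t, post_t, pre_c, post_c):
    # Canonical 2x2 difference-in-differences: the change in the treated
    # group minus the change in the comparison group.
    return (post_t - pre_t) - (post_c - pre_c)

# Hypothetical Grade 9 promotion rates (%) before and after a program.
effect = diff_in_diff(pre_t=78.0, post_t=90.0, pre_c=80.0, post_c=85.0)
print(f"Estimated program effect: {effect:.1f} percentage points")  # 7.0
```

In practice this is estimated as a regression with a treatment-by-period interaction term, which also yields standard errors and lets you add covariates.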
3. Data Visualization and Tool-building (30%)
Work closely with non-technical stakeholders to understand the question(s) they are asking and the use cases they have for specific data visualizations or tools.
Develop well-documented overviews and specifications for new tools.
Create clear, compelling data visualizations and dashboards.
Collaborate with Data Engineering to appropriately and sustainably source data for new tools.
Manage complex projects to build novel and specific tools for internal or external stakeholders.
Maintain custom tools for the duration of their usefulness, including by responding to feedback and requests from project stakeholders.
4. Data Governance and Quality Assurance (10%)
Support data quality assurance protocols and standards across the MEL team.
Ensure compliance with data protection, security, and ethical standards.
Maintain organized, well-documented code and databases.
Collaborate with the Data Engineering team to maintain RISE MEL data infrastructure.
Qualifications
Master's degree (or PhD) in statistics, economics, quantitative social sciences, public policy, data science, or related field.
Minimum of 3 years of professional experience conducting statistical analysis and managing large datasets.
Advanced proficiency in R, Python, or Stata for data analysis and modeling.
Experience designing and implementing quantitative research and evaluation studies.
Strong understanding of inferential statistics, experimental and quasi-experimental methods, and sampling design.
Strong knowledge of survey data collection tools such as Key Surveys, Google Forms, etc.
Excellent data visualization and communication skills.
Experience with data visualization tools; strong preference for Tableau.
Ability to translate complex data into insights for diverse audiences, including non-technical stakeholders.
Ability to cultivate relationships and earn credibility with a diverse range of stakeholders.
Strong organizational and project management skills.
Strong sense of accountability and responsibility for results.
Ability to work in an independent and self-motivated manner.
Demonstrated proficiency with Google Workspace.
Commitment to equity, ethics, and learning in a nonprofit or mission-driven context.
Positive attitude and willingness to work in a collaborative environment.
Strong belief that all students can learn and achieve at high levels.
Preferred
Experience working on a monitoring, evaluation, and learning team.
Familiarity with school data systems and prior experience working in a school, district, or similar K-12 educational context preferred.
Experience working with survey data (e.g., DHS, LSMS), administrative datasets, or real-time digital data sources.
Working knowledge of data engineering or database management (SQL, cloud-based platforms).
Salary Range
$85k - $105k
Most new hires' salaries fall within the first half of the range, allowing team members to grow in their roles. For those who already have significant and aligned experiences at the same level as the role, placement may be at the higher end of the range.
The Connecticut RISE Network is an equal opportunity employer and welcomes candidates from diverse backgrounds.
Junior Data Scientist
Bexorg
Data engineer job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments, including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery.
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks, whether it's mastering a new analysis method or digging into scientific literature, to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
Data Scientist, Media
Digital United
Data engineer job in Farmington, CT
Accepting applicants in CT, FL, MN, NJ, NC, OH, TX.
Mediate.ly is seeking a hands-on Data Scientist to elevate media performance analysis, predictive modeling, and channel optimization. In this role, you'll leverage advanced machine learning techniques and generative AI tools to uncover actionable insights, automate reporting, and enhance campaign effectiveness across digital channels. You'll manage and evolve our existing performance dashboard (with a small external team), own the feature roadmap, and collaborate closely with Primacy on SEO/CRO data integration. A key part of the role involves supporting Account teams with clear, insight-rich reporting powered by enhanced data storytelling and visualization. This role is meant for you if you are passionate about and skilled in transforming complex datasets into clear, compelling insights.
Measures:
AI-Enhanced Reporting & Insight Automation
Business & Media Impact
Reporting Standardization and Quality
Dashboard & Data Product Ownership
Reports to: President
RESPONSIBILITIES:
Media & Channel Analytics
Analyze paid media across Google Ads, Meta, LinkedIn, Programmatic, YouTube; translate results into clear recommendations.
Build/maintain attribution approaches (last-click, MTA, assisted) and funnel diagnostics.
Integrate CRM/GA4/platform data to surface actionable trends by geo, audience, and creative.
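The attribution approaches named above (last-click vs. multi-touch) can be illustrated with a small sketch. The touchpoint paths and conversion counts below are hypothetical toy data, not output from any ad platform:

```python
# Toy comparison of two attribution approaches: last-click gives all
# credit to the final touchpoint; linear multi-touch splits credit
# evenly across every channel in the path. Data below is hypothetical.
from collections import defaultdict

def last_click(paths):
    credit = defaultdict(float)
    for path, conversions in paths:
        credit[path[-1]] += conversions  # final channel takes everything
    return dict(credit)

def linear_mta(paths):
    credit = defaultdict(float)
    for path, conversions in paths:
        share = conversions / len(path)  # equal share per touchpoint
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Each entry: (ordered channel path, number of conversions on that path)
paths = [
    (["Meta", "Google Ads"], 10),
    (["LinkedIn", "Meta", "Google Ads"], 6),
    (["Google Ads"], 4),
]

print(last_click(paths))  # all 20 conversions land on Google Ads
print(linear_mta(paths))  # Meta and LinkedIn retain assisted credit
```

The gap between the two outputs is exactly the "assisted" contribution that last-click hides, which is why funnel diagnostics usually report both views side by side.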
Predictive Modeling & Experimentation
Develop forecasting and propensity models to guide budget allocation and channel mix.
Run simulations (CPM/CPC/conv-rate scenarios) and design A/B and lift tests.
Partner with SEO/CRO to connect acquisition with on-site conversion improvements.
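The lift tests mentioned above are commonly evaluated with a two-proportion z-test. A minimal standard-library sketch, using hypothetical conversion counts:

```python
# Two-proportion z-test for an A/B lift test, standard library only.
# Conversion counts below are hypothetical illustration data.
from math import sqrt
from statistics import NormalDist

def lift_test(conv_a, n_a, conv_b, n_b):
    """Return (absolute lift, z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, z, p_value

lift, z, p = lift_test(conv_a=120, n_a=10_000, conv_b=165, n_b=10_000)
print(f"lift={lift:.4f}  z={z:.2f}  p={p:.4f}")
```

In practice a scipy or statsmodels implementation would replace this sketch, but the arithmetic is the same: a small absolute lift can still be significant at large sample sizes.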
Dashboard Ownership (Existing Platform)
Manage the dashboard development team (backlog, priorities, sprints) and collaborate on new features that improve usability and insight depth.
Gather stakeholder requirements (Accounts, Media, Leadership) and maintain a transparent roadmap.
Ensure data reliability (ETL QA, schema governance, tagging/UTM standards).
Reporting & Client Enablement
Support Account teams with data-backed, insight-driven reporting (monthly/quarterly reviews, executive summaries, narrative analyses).
Build repeatable report templates; automate where possible while preserving clear storytelling.
AI & Product Ideation
Explore LLM/ML use cases (persona signals, creative scoring, conversion prediction).
Prototype lightweight tools for planners/buyers (e.g., channel recommender, influence maps).
What it takes to succeed in this role:
QUALIFICATIONS:
5-7 years in data science/marketing analytics/digital media performance.
Proficient in Python or R; strong SQL; experience with GA4/BigQuery and media platform exports.
Comfort with BI tools (Looker Studio, Tableau, Power BI) and dashboard product management/data visualization.
Familiarity with generative AI tools (e.g., OpenAI, Hugging Face, or Google Vertex AI) for automating insights, reporting, or content analysis.
Comfortable in a fast-paced environment with competing priorities.
Experience applying machine learning models to media mix modeling, customer segmentation, or predictive performance forecasting.
Strong understanding of marketing attribution models and how to evaluate cross-channel performance using statistical techniques.
Excellent communicator who can turn data into decisions for non-technical stakeholders.
Experience with paid media a plus!
Key Competencies
Data Visualization & Storytelling - Skilled in transforming complex datasets into clear, compelling insights using tools like Tableau, Power BI, or Python libraries.
AI & Machine Learning Expertise - Proficient in applying supervised and unsupervised learning techniques to optimize media performance and audience targeting.
Media Analytics & Attribution - Deep understanding of digital media metrics, multi-touch attribution models, and cross-channel performance analysis.
Dashboard Development & Management - Experience managing analytics dashboards, defining feature roadmaps, and collaborating with developers for scalable solutions.
SEO/CRO Data Integration - Ability to synthesize SEO and conversion rate optimization data to inform predictive models and campaign strategies.
Stakeholder Communication - Strong ability to translate data into actionable insights for Account teams and clients, supporting strategic decision-making.
Automation & Efficiency - Familiarity with AI tools to streamline reporting, anomaly detection, and campaign optimization workflows.
Statistical Analysis & Experimentation - Proficient in A/B testing, regression analysis, and causal inference to validate media strategies.
The Perks:
The best co-workers you'll ever find
Unlimited PTO
Medical, Dental, Vision, 401k plus match
Annual performance bonus eligibility
Ongoing training opportunities
Planned outings and team events (remote workers included!)
PHYSICAL DEMANDS AND WORK ENVIRONMENT:
Prolonged periods of sitting at a desk and working on a computer.
Occasional standing, walking, or lifting of office supplies (up to 10-20 lbs.)
Frequent communication via phone, email, and video conferencing.
Work is performed in a temperature-controlled office environment with standard lighting and noise levels.
Position may require occasional travel to client site
Compensation Range: We offer a competitive salary based on experience and qualifications. The compensation range for this position is $90,000 to $100,000 annually, with potential for bonuses, stock and additional benefits. EEO & Accessibility Statement
Primacy is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. If you require reasonable accommodation during the application or interview process, please contact [email protected]
$90k-100k yearly Auto-Apply 48d ago
Data Engineer w/ AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Intermedia Group
Data engineer job in Ridgefield, CT
OPEN JOB: Data Engineer w/ AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
HYBRID: This candidate will work on site 2-3x per week at the Ridgefield, CT location
SALARY: $140,000 to $185,000
2 Openings
NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture.
Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation; proficiency in Python and SQL; and DevOps/CI/CD experience.
Duties & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Collaborate with data architects, modelers and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks
Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions.
Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility.
Implement data quality and validation processes to ensure data accuracy and reliability.
Develop and maintain documentation for data processes, architecture, and workflows.
Monitor and troubleshoot data pipeline performance and resolve issues promptly.
Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements.
Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Requirements
Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred
SQL Mastery: Advanced SQL writing and optimization skills.
Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas).
Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus.
Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions.
Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Proficiency in Python and SQL
Desired Skills, Experience and Abilities
4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms.
ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt).
Data Governance: Understanding of data governance, data quality, and metadata management principles.
AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus
Familiarity with Snowflake
Knowledge of dbt (data build tool)
Strong problem-solving skills, especially in data pipeline troubleshooting and optimization
If you are interested in pursuing this opportunity, please respond back and include the following:
Full CURRENT Resume
Required compensation
Contact information
Availability
Upon receipt, one of our managers will contact you to discuss in full
STEPHEN FLEISCHNER
Recruiting Manager
INTERMEDIA GROUP, INC.
EMAIL: *******************************
$140k-185k yearly Easy Apply 60d+ ago
Data Engineer (AI, ML, and Data Science)
Consumer Reports
Data engineer job in Yonkers, NY
WHO WE ARE
Consumer Reports is an independent, nonprofit organization dedicated to a fair and just marketplace for all. CR is known for our rigorous testing and trusted ratings on thousands of products and services. We report extensively on consumer trends and challenges, and survey millions of people in the U.S. each year. We leverage our evidence-based approach to advocate for consumer rights, working with policymakers and companies to find solutions for safer products and fair practices.
Our mission starts with you. We offer medical benefits that start on your first day as a CR employee that include behavioral health coverage, family planning and a generous 401K match. Learn more about how CR advocates on behalf of our employees.
OVERVIEW
Data powers everything we do at CR, and it's the foundation for our AI and machine learning efforts that are transforming how we serve consumers.
The Data Engineer (AI/ML & Data Science) will play a critical role in building the data infrastructure that powers advanced AI applications, machine learning models, and analytics systems across CR. Reporting to the Associate Director, AI/ML & Data Science, you will design and maintain robust data pipelines and services that support experimentation, model training, and AI application deployment.
If you're passionate about solving complex data challenges, working with cutting-edge AI technologies, and enabling impactful, data-driven products that support CR's mission, this is the role for you.
This is a hybrid position. This position is not eligible for sponsorship or relocation assistance.
How You'll Make An Impact
As a mission-based organization, CR and our Software team are pursuing an AI strategy that will drive value for our customers, give our employees superpowers, and address AI harms in the digital marketplace. We're looking for an AI/ML engineer to help us execute on our multi-year roadmap around generative AI.
As a Data Engineer (AI/ML & Data Science) you will:
Design, develop, and maintain ETL/ELT pipelines for structured and unstructured data to support AI/ML model and application development, evaluation, and monitoring.
Build and optimize data processing workflows in Databricks, AWS SageMaker, or similar cloud platforms.
Collaborate with AI/ML engineers to deliver clean, reliable datasets for model training and inference.
Implement data quality, observability, and lineage tracking within the ML lifecycle.
Develop Data APIs/microservices to power AI applications and reporting/analytics dashboards.
Support the deployment of AI/ML applications by building and maintaining feature stores and data pipelines optimized for production workloads.
Ensure adherence to CR's data governance, security, and compliance standards across all AI and data workflows.
Work with Product, Engineering and other stakeholders to define project requirements and deliverables.
Integrate data from multiple internal and external systems, including APIs, third-party datasets, and cloud storage.
ABOUT YOU
You'll Be Highly Rated If:
You have the experience. You have 3+ years of experience designing and developing data pipelines, data models/schemas, APIs, or services for analytics or ML workloads.
You have the education. You've earned a Bachelor's degree in Computer Science, Engineering, or a related field.
You have programming skills. You are skilled in Python, SQL, and have experience with PySpark on large-scale datasets.
You have experience with data orchestration tools such as Airflow, dbt and Prefect, plus CI/CD pipelines for data delivery.
You have experience with Data and AI/ML platforms such as Databricks, AWS SageMaker or similar.
You have experience working with Kubernetes on cloud platforms like AWS, GCP, or Azure.
You'll Be One of Our Top Picks If:
You are passionate about automation and continuous improvement.
You have excellent documentation and technical communication skills.
You are an analytical thinker with troubleshooting abilities.
You are self-driven and proactive in solving infrastructure bottlenecks.
FAIR PAY AND A JUST WORKPLACE
At Consumer Reports, we are committed to fair, transparent pay and we strive to provide competitive, market-informed compensation. The target salary range for this position is $100K-$120K. It is anticipated that most qualified candidates will fall near the middle of this range. Compensation for the successful candidate will be informed by the candidate's particular combination of knowledge, skills, competencies, and experience. We have three locations: Yonkers, NY, Washington, DC and Colchester, CT. We are registered to do business in and can only hire from the following states and federal district: Arizona, California, Connecticut, Illinois, Maryland, Massachusetts, Michigan, New Hampshire, New Jersey, New York, Texas, Vermont, Virginia and Washington, DC.
Consumer Reports is an equal opportunity employer and does not discriminate in employment on the basis of actual or perceived race, color, creed, religion, age, national origin, ancestry, citizenship status, sex or gender (including pregnancy, childbirth, related medical conditions or lactation), gender identity and expression (including transgender status), sexual orientation, marital status, military service or veteran status, protected medical condition as defined by applicable state or local law, disability, genetic information, or any other basis protected by applicable federal, state or local laws. Consumer Reports will provide you with any reasonable assistance or accommodation for any part of the application and hiring process.
$100k-120k yearly Auto-Apply 33d ago
Tech Lead, Data & Inference Engineer
Catalyst Labs
Data engineer job in Stamford, CT
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised twelve million dollars in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third party application programming interfaces into trusted and low latency systems.
Take full ownership of reliability, cost, and service-level objectives, including 99.9% uptime, minute-level latency, and cost per terabyte. Conduct root-cause analysis and provide long-lasting solutions.
Operate inference pipelines that enrich data, including enrichment, scoring, and quality assurance using large language models and retrieval-augmented generation. Manage version control, caching, and evaluation loops.
Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally.
Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
$84k-114k yearly est. 51d ago
Director, ERM - Actuary or Data Scientist
Berkley 4.3
Data engineer job in Greenwich, CT
Company Details
"Our Company provides a state of predictability which allows brokers and agents to act with confidence."
Founded in 1967, W. R. Berkley Corporation has grown from a small investment management firm into one of the largest commercial lines property and casualty insurers in the United States.
Along the way, we've been listed on the New York Stock Exchange, become a Fortune 500 Company, joined the S&P 500, and seen our gross written premiums exceed $10 billion.
Today the Berkley brand comprises more than 60 businesses worldwide and is divided into two segments: Insurance, and Reinsurance & Monoline Excess. Led by our Executive Chairman, founder and largest shareholder, William R. Berkley, and our President and Chief Executive Officer, W. Robert Berkley, Jr., W.R. Berkley Corporation is well-positioned to respond to opportunities for future growth.
The Company is an equal employment opportunity employer.
Responsibilities
*Please provide a one-page resume when applying.
Enterprise Risk Management (ERM) Team
Our key risk management aim is to maximize Berkley's return on capital over the long term for an acceptable level of risk. This requires regular interaction with senior management both in corporate and our business units. The ERM team comprises ERM actuaries and catastrophe modelers responsible for identification, quantification and reporting on insurance, investment, credit and operational risks. The ERM team is a corporate function at Berkley's headquarters in Greenwich, CT.
The Role
The successful candidate will collaborate with other ERM team members on a variety of projects with a focus on exposure management and catastrophe modeling for casualty (re)insurance. The candidate is expected to demonstrate expertise in data and analytics and be capable of presenting data-driven insights to senior executives.
Key responsibilities include:
Casualty Accumulation and Catastrophe Modeling
• Lead the continuous enhancement of the casualty data ETL process
• Analyze and visualize casualty accumulations by insureds, lines and industries to generate actionable insight for business leaders
• Collaborate with data engineers to resolve complex data challenges and implement scalable solutions
• Support the development of casualty catastrophe scenarios by researching historical events and emerging risks
• Model complex casualty reinsurance protections
Risk Process Automation and Group Reporting
• Lead AI-driven initiatives aimed at automating key risk processes and projects
• Contribute to Group-level ERM reports, including deliverables to senior executives, rating agencies and regulators
Qualifications
• Minimum of 5 years of experience in P&C (re)insurance, with a focus on casualty
• Proficiency in R/Python and Excel
• Strong verbal and written communication skills
• Proven ability to manage multiple projects and meet deadlines in a dynamic environment
Education Requirement
• Minimum of Bachelor's degree required (preferably in STEM)
• ACAS/FCAS is a plus
Sponsorship Details: Sponsorship offered for this role
$74k-101k yearly est. Auto-Apply 60d+ ago
C++ Market Data Engineer (USA)
Trexquant Investment 4.0
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems.
Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
Requirements
BS/MS/PhD in Computer Science, Electrical Engineering, or related field.
3+ years of professional C++ (C++14/17/20) development experience focused on low-latency, high-throughput systems.
Proven track record building or maintaining real-time market-data feeds (e.g., Refinitiv RTS/TREP, Bloomberg B-PIPE, OPRA, CME MDP, ITCH).
Strong grasp of concurrency, lock-free algorithms, memory-model semantics, and compiler optimizations.
Familiarity with serialization formats (FAST, SBE, Protocol Buffers) and time-series databases or in-memory caches.
Comfort with scripting in Python for prototyping, testing, and ops automation.
Excellent problem-solving skills, ownership mindset, and ability to thrive in a fast-paced trading environment.
Familiarity with containerization (Docker/K8s) and public-cloud networking (AWS, GCP).
Benefits
Competitive salary, plus bonus based on individual and company performance.
Collaborative, casual, and friendly work environment while solving the hardest problems in the financial markets.
PPO Health, dental and vision insurance premiums fully covered for you and your dependents.
Pre-Tax Commuter Benefits
Trexquant is an Equal Opportunity Employer
$95k-136k yearly est. Auto-Apply 60d+ ago
Network Planning Data Scientist (Manager)
Atlas Air 4.9
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
* Analyze and Monitor Network Performance
* Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
* Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
* Modeling & Optimization
* Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
* Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
* Reporting & Governance
* Develop periodic performance and reliability reports for customers, assisting in presentation creation
* Produce regular and ad hoc reports to monitor compliance with established operating parameters.
* Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
* Cross-Functional Collaboration
* Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
* Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
* Innovation & Continuous Improvement
* Identify opportunities to streamline workflows and automate recurring analyses.
* Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
Qualifications
* Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
* Bachelor's degree required in Industrial Engineering, Operations Research, Applied Mathematics, Data Science or related quantitative discipline or equivalent work experience.
* 5+ years of experience in strategy, operations planning, finance or continuous improvement, ideally with airline network planning
* Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
* Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
* Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
* A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
* Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
Salary Range: $131,500 - $177,500
Financial offer within the stated range will be based on multiple factors to include but not limited to location, relevant experience/level and skillset.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity or any other characteristic or status protected by applicable federal, state and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
"Know Your Rights: Workplace Discrimination is Illegal" Poster
The "EEO Is The Law" Poster
$131.5k-177.5k yearly Auto-Apply 23d ago
Salesforce Data 360 Architect
Slalom 4.6
Data engineer job in White Plains, NY
Who You'll Work With In our Salesforce business, we help our clients bring the most impactful customer experiences to life and we do that in a way that makes our clients the hero of their transformation story. We are passionate about and dedicated to building a diverse and inclusive team, recognizing that diverse team members who are celebrated for bringing their authentic selves to their work build solutions that reach more diverse populations in innovative and impactful ways. Our team is comprised of customer strategy experts, Salesforce-certified experts across all Salesforce capabilities, industry experts, organizational and cultural change consultants, and project delivery leaders. As the 3rd largest Salesforce partner globally and in North America, we are committed to growing and developing our Salesforce talent, offering continued growth opportunities, and exposing our people to meaningful work that aligns to their personal and professional goals.
We're looking for individuals who have experience implementing Salesforce Data Cloud or similar platforms and are passionate about customer data. The ideal candidate has a desire for continuous professional growth and can deliver complex, end-to-end Data Cloud implementations from strategy and design, through to data ingestion, segment creation, and activation; all while working alongside both our clients and other delivery disciplines. Our Global Salesforce team is looking to add a passionate Principal or Senior Principal to take on the role of Data Cloud Architect within our Salesforce practice.
What You'll Do:
Responsible for business requirements gathering, architecture design, data ingestion and modeling, identity resolution setup, calculated insight configuration, segment creation and activation, end-user training, and support procedures
Lead technical conversations with both business and technical client teams; translate those outcomes into well-architected solutions that best utilize Salesforce Data Cloud and the wider Salesforce ecosystem
Ability to direct technical teams, both internal and client-side
Provide subject matter expertise as warranted via customer needs and business demands
Build lasting relationships with key client stakeholders and sponsors
Collaborate with digital specialists across disciplines to innovate and build premier solutions
Participate in compiling industry research, thought leadership and proposal materials for business development activities
Experience with scoping client work
Experience with hyperscale data platforms (e.g., Snowflake), robust database modeling, and data governance is a plus.
What You'll Bring:
Have been part of at least one Salesforce Data Cloud implementation
Familiarity with Salesforce's technical architecture: APIs, Standard and Custom Objects, Apex. Proficient with ANSI SQL and supported functions in Salesforce Data Cloud.
Strong proficiency in presenting complex business and technical concepts using visualization aids
Ability to conceptualize and craft sophisticated wireframes, workflows, and diagrams
Strong understanding of data management concepts, including data quality, data distribution, data modeling and data governance
Detailed understanding of the fundamentals of digital marketing and complementary Salesforce products that organizations may use to run their business. Experience defining strategy, developing requirements, and implementing practical business solutions.
Experience in delivering projects using Agile-based methodologies
Salesforce Data Cloud certification preferred
Additional Salesforce certifications like Administrator are a plus
Strong interpersonal skills
Bachelor's degree in a related field preferred, but not required
Open to travel (up to 50%)
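For a flavor of the ANSI SQL skills this posting lists, the sketch below runs a calculated-insight-style aggregation (lifetime spend per individual). It uses Python's SQLite purely for demonstration; the object and field names echo Data Cloud naming conventions but are invented, not an actual Data Cloud schema.

```python
import sqlite3

# Hypothetical illustration of a calculated-insight-style ANSI SQL query,
# run against an in-memory SQLite database for demonstration. Table and
# column names (SalesOrder, IndividualId, GrandTotalAmount) are assumptions
# modeled loosely on Data Cloud conventions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE SalesOrder (IndividualId TEXT, GrandTotalAmount REAL);
    INSERT INTO SalesOrder VALUES ('ind-1', 120.0), ('ind-1', 80.0), ('ind-2', 40.0);
""")

# Aggregate lifetime spend per unified individual, highest spenders first.
rows = conn.execute("""
    SELECT IndividualId, SUM(GrandTotalAmount) AS LifetimeSpend
    FROM SalesOrder
    GROUP BY IndividualId
    ORDER BY LifetimeSpend DESC
""").fetchall()
print(rows)  # [('ind-1', 200.0), ('ind-2', 40.0)]
```

In Data Cloud itself, a query like this would be defined as a calculated insight and its output activated into segments rather than printed.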
About Us
Slalom is a fiercely human business and technology consulting company that leads with outcomes to bring more value, in all ways, always. From strategy through delivery, our agile teams across 52 offices in 12 countries collaborate with clients to bring powerful customer experiences, innovative ways of working, and new products and services to life. We are trusted by leaders across the Global 1000, many successful enterprise and mid-market companies, and 500+ public sector organizations to improve operations, drive growth, and create value. At Slalom, we believe that together, we can move faster, dream bigger, and build better tomorrows for all.
Compensation and Benefits
Slalom prides itself on helping team members thrive in their work and life. As a result, Slalom is proud to invest in benefits that include meaningful time off and paid holidays, parental leave, 401(k) with a match, a range of choices for highly subsidized health, dental, & vision coverage, adoption and fertility assistance, and short/long-term disability. We also offer a yearly $350 reimbursement account for well-being-related expenses, as well as discounted home, auto, and pet insurance.
Slalom is committed to fair and equitable compensation practices. For this role, we are hiring at the following levels and salary ranges:
East Bay, San Francisco, Silicon Valley:
Principal: $145,000-$225,000
San Diego, Los Angeles, Orange County, Seattle, Boston, Houston, New Jersey, New York City, Washington DC, Westchester:
Principal: $133,000-$206,000
All other locations:
Principal: $122,000-$189,000
In addition, individuals may be eligible for an annual discretionary bonus. Actual compensation will depend upon an individual's skills, experience, qualifications, location, and other relevant factors. The salary pay range is subject to change and may be modified at any time.
We are committed to pay transparency and compliance with applicable laws. If you have questions or concerns about the pay range or other compensation information in this posting, please contact us at: ********************.
We will accept applications until December 31, 2025 or until the position is filled.
EEO and Accommodations
Slalom is an equal opportunity employer and is committed to attracting, developing and retaining highly qualified talent who empower our innovative teams through unique perspectives and experiences. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veterans' status, or any other characteristic protected by federal, state, or local laws. Slalom will also consider qualified applications with criminal histories, consistent with legal requirements. Slalom welcomes and encourages applications from individuals with disabilities. Reasonable accommodations are available for candidates during all aspects of the selection process. Please advise the talent acquisition team or contact ****************************** if you require accommodations during the interview process.
$145k-225k yearly Easy Apply 17d ago
Applications Programmer Analyst
Provision People
Data engineer job in Poughkeepsie, NY
Our award-winning client is seeking a passionate Applications Programmer Analyst to join their team, leveraging software development expertise to solve key challenges and contribute to the company's mission.
Responsibilities:
Shape Tomorrow's Technology:
Partner with departments to identify and implement cutting-edge solutions (web, n-tier, advanced reporting, mobile, collaboration) that address their unique needs.
Design, develop, enhance, and test these innovative applications, ensuring quality and business continuity.
Collaborate with external vendors and resources to seamlessly integrate solutions.
Become a User Champion:
Analyze business processes and provide ongoing support for systems and applications, maximizing business impact.
Translate data into actionable insights, supporting informed decision-making across the organization.
Champion continuous improvement by researching emerging technologies and proposing innovative solutions.
Deliver engaging training and guidance on new and existing technology solutions.
Foster Collaboration and Success:
Proactively engage with stakeholders to understand their technology needs and develop impactful solutions.
Communicate project progress transparently and effectively throughout the development cycle.
Present various solution options, including cost and time estimations, to empower stakeholders' decision-making.
Required Qualifications:
A Bachelor's degree in Information Systems (IS) or a 2-year degree in IS with equivalent experience.
At least 4 years of programming and systems analysis experience across all solution development phases.
Expertise in: N-tier development, CSS, HTML, PHP, JavaScript/TypeScript, Node.js, C#, SQL, JSON, XML, REST APIs, MS SQL Server, Azure, MS Access, SharePoint, SSIS data integration, MS Visual Studio, SQL Server Management Studio, the Microsoft Power Platform (Power BI, Power Apps, Dataverse), and Git version control.
Exceptional communication, problem-solving, and analytical skills.
$76k-102k yearly est. 60d+ ago
Senior Software Engineer
Interactive Brokers Group, Inc. 4.8
Data engineer job in Greenwich, CT
Interactive Brokers Group, Inc. is a global financial services company with offices in over 15 countries. We provide electronic brokerage services around the clock to clients in over 200 countries and territories, serving individual investors and institutions. We are a technology-focused company recognized for competitive pricing and a robust trading platform.
This is a hybrid role (3 days in the office/2 days remote).
About the team: Interactive Brokers has been at the forefront of the Fintech space for over 40 years and continues to challenge the status quo to offer the best trading platform with the most sophisticated features at the lowest cost.
Responsibilities
Design and develop applications and services supporting IBKR's rapidly growing client base.
Build software that supports the expansion of IBKR brokerage business into new markets around the world.
Optimize and refactor existing code for improved reliability and performance.
Write and maintain design and engineering documentation.
Qualifications
Bachelor's or master's degree in Computer Science or related field.
Minimum 5 years of professional programming experience.
At least 3 years of Java programming experience.
Experience with Python, Perl or similar scripting languages.
Experience with relational databases (Oracle).
Strong analytical skills.
What we'd also love to see
Prior experience in finance.
Ambitious and diligent mindset with a drive to improve systems.
Ability to solve complex problems and innovate.
To be successful in this position, you will have the following
Self-motivated with the ability to handle tasks with minimal supervision.
Strong analytical and problem-solving skills.
Excellent collaboration and communication skills (verbal and written).
Outstanding organizational and time-management skills.
Company Benefits & Perks
Competitive salary, annual performance-based bonus and stock grant
401(k) retirement plan with company match
Comprehensive health benefits (medical, dental, vision) and a company-paid health premium
Wellness programs, health coaching and counseling services through EAP
Paid time off and parental leave
Daily lunch allowance and company kitchen with healthy options
Corporate events, team outings, volunteering, and company sports teams
Education reimbursement and learning opportunities
Modern offices with multi-monitor setups
Apply for this job
To apply, please submit your application through the IBKR Careers portal.
Equal Employment Opportunity
Interactive Brokers is an equal employment opportunity employer. We do not discriminate on the basis of protected status under applicable law.
$98k-124k yearly est. 2d ago
Data Engineer
Bexorg
Data engineer job in New Haven, CT
Bexorg is transforming drug discovery by restoring molecular activity in postmortem human brains. Our groundbreaking BrainEx platform enables direct experimentation on functionally preserved human brain tissue, generating massive, high-fidelity molecular datasets that power AI-driven drug discovery for CNS diseases. We are seeking a Data Engineer to help harness this unprecedented data. In this onsite, mid-level role, you will design and optimize the pipelines and cloud infrastructure that turn terabytes of raw experimental data into actionable insights, driving our mission to revolutionize treatments for central nervous system disorders.
The Job:
Data Ingestion & Pipeline Management: Manage and optimize massive data ingestion pipelines from cutting-edge experimental devices, ensuring reliable, real-time capture of complex molecular data.
Cloud Data Architecture: Organize and structure large datasets in Google Cloud Platform, using tools like BigQuery and cloud storage to build a scalable data warehouse for fast querying and analysis of brain data.
Large-Scale Data Processing: Design and implement robust ETL/ELT processes to handle petabyte-scale data, emphasizing speed, scalability, and data integrity at each step of the process.
Internal Data Services: Work closely with our software and analytics teams to expose processed data and insights to internal web applications. Build appropriate APIs or data access layers so that scientists and engineers can seamlessly visualize and interact with the data through our web platform.
Internal Experiment Services: Work with our life science teams to establish data entry protocols that allow seamless metadata integration and association with experimental data.
Infrastructure Innovation: Recommend and implement cloud infrastructure improvements (such as streaming technologies, distributed processing frameworks, and automation tools) that will future-proof our data pipeline. You will continually assess new technologies and best practices to increase throughput, reduce latency, and support our rapid growth in data volume.
Qualifications and Skills:
Experience with Google Cloud: Hands-on experience with Google Cloud services (especially BigQuery and related data tools) for managing and analyzing large datasets. You've designed or maintained data systems in a cloud environment and understand how to leverage GCP for big data workloads.
Data Engineering Background: 3+ years of experience in data engineering or a similar role. Proven ability to build and maintain data pipelines dealing with petabyte-scale data. Proficiency in programming (e.g., Python, Java, or Scala) and SQL for developing data processing jobs and queries.
Scalability & Performance Mindset: Familiarity with distributed systems or big data frameworks and a track record of optimizing data workflows for speed and scalability. You can architect solutions that handle exponential data growth without sacrificing performance.
Biology Domain Insight: Exposure to biology or experience working with scientific data (e.g. genomics, bioinformatics, neuroscience) is a strong plus. While deep domain expertise isn't required, you should be excited to learn about our experimental data and comfortable discussing requirements with biologists.
Problem-Solving & Collaboration: Excellent problem-solving skills, attention to detail, and a proactive attitude in tackling technical challenges. Ability to work closely with cross-functional teams (scientists, software engineers, data scientists) and communicate complex data systems in clear, approachable terms.
Passion for the Mission: A strong desire to apply your skills to transform drug discovery. You are inspired by Bexorg's mission and eager to build the data backbone of a platform that could unlock new therapies for CNS diseases.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
$84k-114k yearly est. Auto-Apply 60d+ ago
Network Planning Data Scientist (Manager)
Atlas Air Worldwide Holdings 4.9
Data engineer job in White Plains, NY
Atlas Air is seeking a detail-oriented and analytical Network Planning Analyst to help optimize our global cargo network. This role plays a critical part in the 2-year to 11-day planning window, driving insights that enable operational teams to execute the most efficient and reliable schedules. The successful candidate will provide actionable analysis on network delays, utilization trends, and operating performance, build models and reports to govern network operating parameters, and contribute to the development and implementation of software optimization tools that improve reliability and streamline planning processes.
This position requires strong analytical skills, a proactive approach to problem-solving, and the ability to translate data into operational strategies that protect service quality and maximize network efficiency.
Responsibilities
Analyze and Monitor Network Performance
Track and assess network delays, capacity utilization, and operating constraints to identify opportunities for efficiency gains and reliability improvements.
Develop and maintain key performance indicators (KPIs) for network operations and planning effectiveness.
Modeling & Optimization
Build and maintain predictive models to assess scheduling scenarios and network performance under varying conditions.
Support the design, testing, and implementation of software optimization tools to enhance operational decision-making.
Reporting & Governance
Develop periodic performance and reliability reports for customers and assist in creating presentations
Produce regular and ad hoc reports to monitor compliance with established operating parameters.
Establish data-driven processes to govern scheduling rules, protect operational integrity, and ensure alignment with reliability targets.
Cross-Functional Collaboration
Partner with Operations, Planning, and Technology teams to integrate analytics into network planning and execution.
Provide insights that inform schedule adjustments, fleet utilization, and contingency planning.
Innovation & Continuous Improvement
Identify opportunities to streamline workflows and automate recurring analyses.
Contribute to the development of new planning methodologies and tools that enhance decision-making and operational agility.
Qualifications
Proficiency in SQL (Python and R are a plus) for data extraction and analysis; experience building decision-support tools and reporting dashboards (e.g., Tableau, Power BI)
Bachelor's degree in Industrial Engineering, Operations Research, Applied Mathematics, Data Science, or a related quantitative discipline required, or equivalent work experience.
5+ years of experience in strategy, operations planning, finance, or continuous improvement, ideally in airline network planning
Strong analytical skills with experience in statistical analysis, modeling, and scenario evaluation.
Strong problem-solving skills with the ability to work in a fast-paced, dynamic environment.
Excellent communication skills with the ability to convey complex analytical findings to non-technical stakeholders.
A proactive, solution-focused mindset with a passion for operational excellence and continuous improvement.
Knowledge of operations, scheduling, and capacity planning, ideally in airlines, transportation or other complex network operations
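As a rough illustration of the delay and reliability analysis this role describes, the sketch below computes an on-time-performance KPI and average delay per route. The field names, sample records, and 15-minute on-time cutoff are all assumptions made for illustration, not Atlas Air data or methodology.

```python
from collections import defaultdict

# Hypothetical flight-leg records; field names are invented.
legs = [
    {"route": "JFK-LEJ", "delay_min": 5},
    {"route": "JFK-LEJ", "delay_min": 40},
    {"route": "ANC-ICN", "delay_min": 0},
]

ON_TIME_THRESHOLD_MIN = 15  # assumed cutoff for counting a leg as on time

# Aggregate leg counts, on-time counts, and total delay per route.
stats = defaultdict(lambda: {"legs": 0, "on_time": 0, "total_delay": 0})
for leg in legs:
    s = stats[leg["route"]]
    s["legs"] += 1
    s["total_delay"] += leg["delay_min"]
    if leg["delay_min"] <= ON_TIME_THRESHOLD_MIN:
        s["on_time"] += 1

# Report on-time performance (OTP) and average delay per route.
for route, s in sorted(stats.items()):
    otp = s["on_time"] / s["legs"]
    avg_delay = s["total_delay"] / s["legs"]
    print(f"{route}: OTP={otp:.0%}, avg delay={avg_delay:.1f} min")
```

In practice the extraction step would be SQL against an operations database, with results like these feeding a Tableau or Power BI dashboard.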
Salary Range: $131,500 - $177,500
The final offer within the stated range will be based on multiple factors, including but not limited to location, relevant experience/level, and skill set.
The Company is an Equal Opportunity Employer. It is our policy to afford equal employment opportunity to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, national origin, citizenship, place of birth, age, disability, protected veteran status, gender identity, or any other characteristic or status protected by applicable federal, state, and local laws.
If you'd like more information about your EEO rights as an applicant under the law, please download the available EEO is the Law document at ******************************************
To view our Pay Transparency Statement, please click here: Pay Transparency Statement
The average data engineer in Bethel, CT earns between $73,000 and $131,000 annually. This compares to the national average data engineer range of $80,000 to $149,000.
Average data engineer salary in Bethel, CT
$98,000
What are the biggest employers of Data Engineers in Bethel, CT?
The biggest employers of Data Engineers in Bethel, CT are: