Data Engineer - Vice President
Data engineer job in Greenwich, CT
About the Firm
We are a global investment firm focused on applying financial theory to practical investment decisions. Our goal is to deliver long-term results by analyzing market data and identifying what truly matters. Technology is central to our approach, enabling insights across both traditional and alternative strategies.
The Team
A new Data Engineering team is being established to work with large-scale datasets across the organization. This team partners directly with researchers and business teams to build and maintain infrastructure for ingesting, validating, and provisioning large volumes of structured and unstructured data.
Your Role
As a Data Engineer, you will help design and build an enterprise data platform used by research teams to manage and analyze large datasets. You will also create tools to validate data, support back-testing, and extract actionable insights. You will work closely with researchers, portfolio managers, and other stakeholders to implement business requirements for new and ongoing projects. The role involves working with big data technologies and cloud platforms to create scalable, extensible solutions for data-intensive applications.
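To make the validation work concrete, here is a minimal sketch of the kind of data-quality check such tooling might run. The column names, rules, and sample frame are hypothetical, not the firm's actual platform.

```python
# Illustrative only: a tiny data-validation pass of the sort this role
# might build. Columns and thresholds are invented for the example.
import pandas as pd

def validate_prices(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality issues."""
    issues = []
    n_missing = df["price"].isna().sum()
    if n_missing:
        issues.append(f"missing prices: {n_missing}")
    if (df["price"] <= 0).any():
        issues.append("non-positive prices found")
    n_dupes = df.duplicated(subset=["symbol", "date"]).sum()
    if n_dupes:
        issues.append(f"duplicate (symbol, date) rows: {n_dupes}")
    return issues

df = pd.DataFrame({
    "symbol": ["AAPL", "AAPL", "MSFT"],
    "date": ["2024-01-02", "2024-01-02", "2024-01-02"],
    "price": [185.6, 185.6, None],
})
print(validate_prices(df))
# ['missing prices: 1', 'duplicate (symbol, date) rows: 1']
```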
What You'll Bring
6+ years of relevant experience in data engineering or software development
Bachelor's, Master's, or PhD in Computer Science, Engineering, or related field
Strong coding, debugging, and analytical skills
Experience working directly with business stakeholders to design and implement solutions
Knowledge of distributed data systems and large-scale datasets
Familiarity with big data frameworks such as Spark or Hadoop
Interest in quantitative research (no prior finance or trading experience required)
Exposure to cloud platforms is a plus
Experience with Python, NumPy, pandas, or similar data analysis tools is a plus
Familiarity with AI/ML frameworks is a plus
Who You Are
Thoughtful, collaborative, and comfortable in a fast-paced environment
Hard-working, intellectually curious, and eager to learn
Committed to transparency, integrity, and innovation
Motivated by leveraging technology to solve complex problems and create impact
Compensation & Benefits
Salary range: $190,000 - $260,000 (subject to experience, skills, and location)
Eligible for annual discretionary bonus
Comprehensive benefits including paid time off, medical/dental/vision insurance, 401(k), and other applicable benefits
We are an Equal Opportunity Employer. EEO/VET/DISABILITY
The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
C++ Market Data Engineer
Data engineer job in Stamford, CT
We are seeking a C++ Market Data Engineer to design and optimize ultra-low-latency feed handlers that power global trading systems. This is a high-impact role where your code directly drives real-time decision making.
What You'll Do:
Build high-performance feed handlers in modern C++ (14/17/20) for equities, futures, and options
Optimize systems for micro/nanosecond latency with lock-free algorithms and cache-friendly design
Ensure reliable data delivery with failover, gap recovery, and replay mechanisms
Collaborate with researchers and engineers to align data formats for trading and simulation
Instrument and test systems for continuous performance improvements
What We're Looking For:
3+ years of C++ development experience (low-latency, high-throughput systems)
Experience with real-time market data feeds (e.g., Bloomberg B-PIPE, CME MDP, Refinitiv, OPRA, ITCH)
Strong knowledge of concurrency, memory models, and compiler optimizations
Python scripting skills for testing and automation
Familiarity with Docker/Kubernetes and cloud networking (AWS/GCP) is a plus
Senior Power BI & Systems Integration Developer - 5498
Data engineer job in Shelton, CT
Senior Power BI & Systems Integration Developer
Type: Contract-to-Hire or Full-time
Our client, a leading precision manufacturing company in Connecticut, is seeking a Senior Power BI & Systems Integration Developer to join their IT team. This strategic role is central to modernizing ERP and MES systems, leading critical integration initiatives, and enhancing data-driven decision-making across the organization. The position offers the opportunity to influence IT strategy, optimize operational workflows, and deliver insights that directly impact business outcomes in a fast-paced, high-visibility environment.
Key Responsibilities:
Lead the design, development, and optimization of Power BI dashboards and advanced data models to provide actionable insights for senior management and operational teams.
Drive ERP and MES integration projects, ensuring accurate real-time visibility into production, Work-In-Progress (WIP), and operational KPIs.
Collaborate closely with business and IT leadership to define requirements, architect solutions, and implement high-impact initiatives.
Required Skills and Experience:
Senior-level expertise: 10+ years of experience in Power BI, SQL, and data integration technologies (APIs, .NET, Python, etc.).
Proven experience with ERP systems (Infor LN preferred) and MES platforms (Aegis FactoryLogix preferred).
Strong ability to translate complex business needs into technical solutions.
Software engineering experience (e.g., .NET) is a strong plus.
Exceptional communication skills, with experience presenting insights to executive leadership.
On-site presence required; local candidates strongly preferred.
This is a full-time position that may start as contract-to-hire. It is a great opportunity to make an immediate impact and grow with a company investing in its next phase of digital transformation.
Must be a U.S. Citizen or Green Card holder (federal contract requirement)
By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Benchmark IT, LLC and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here: ************************************
Principal Software Engineer (Embedded Systems)
Data engineer job in Norwalk, CT
Position Type: Full-Time / Direct Hire (W2)
Salary: $200K+ base + 13% bonus
Experience Required: 10-20 years
Domain: Industrial Automation & Robotics
Work Authorization: US Citizen or Green Card
Interview Process: 2× Teams Interviews → Onsite Interview (expenses paid)
“How Many Years With” (Candidate Screening Section)
C:
C++:
RTOS:
Embedded Software Development:
Device Driver Software Development:
Job Description
We are seeking a Principal Software Engineer - Embedded Systems to join a high-performance engineering team building next-generation industrial automation and robotics platforms. This role blends hardware, firmware, real-time systems, machine learning components, and high-performance automation into one of the most technically challenging environments.
The ideal candidate is passionate about writing software that interacts directly with real machines, drives motion control, solves physical-world problems, and contributes to global-scale automation systems.
This role is hands-on, impact-driven, and perfect for someone who wants to see their code operating in motion - not just in a console.
Key Responsibilities
Design, implement, and optimize embedded software in C/C++ for real-time control systems.
Develop and maintain real-time operating system (RTOS)-based applications.
Implement low-latency firmware, control loops, and motion-control algorithms.
Work with hardware teams to integrate sensors, actuators, and automation components.
Architect scalable, high-performance embedded platforms for industrial robotics.
Develop device drivers, board support packages (BSPs), and hardware abstraction layers.
Own full lifecycle development: requirements → design → implementation → testing → deployment.
Develop machine-learning-based modules for system categorization and algorithm organization (experience helpful, not required).
Build real-time monitoring tools, diagnostics interfaces, and system health analytics.
Troubleshoot complex hardware/software interactions in a real-time environment.
Work closely with electrical, mechanical, and controls engineers.
Participate in code reviews, architectural discussions, and continuous improvement.
Required Qualifications
Bachelor's degree in Computer Engineering, Electrical Engineering, Computer Science, or related field (Master's a plus).
10-20 years professional experience in:
C and C++ programming
Embedded Software Development
RTOS-based design (e.g., FreeRTOS, QNX, VxWorks, ThreadX, etc.)
Control systems and real-time embedded environments
Strong experience with:
Device driver development
Board bring-up and hardware interfacing
Debugging tools (oscilloscopes, logic analyzers, JTAG, etc.)
Excellent understanding of:
Memory management
Multithreading
Interrupt-driven systems
Communication protocols (UART, SPI, I2C, CAN, Ethernet)
Preferred Qualifications
Experience with robotics, motion control, industrial automation, or safety-critical systems.
Exposure to machine learning integration in embedded platforms.
Experience in high-precision or high-speed automation workflows.
Target Industries / Domains
Ideal candidates may come from:
Medical Devices
Semiconductor Equipment
Aerospace & Defense
Industrial Control Systems
Robotics & Automation
Machinery & Mechatronics
Appliances & Devices
Embedded Consumer or Industrial Electronics
Senior Software Engineer (Full Stack), Data Analytics - Pharma
Data engineer job in Ridgefield, CT
$140-210K + Bonus
(*At this time our client cannot sponsor or transfer work visas, including H1B, OPT, F1)
Global Pharmaceutical and Healthcare conglomerate seeks a dynamic and collaborative Lead Full Stack Software Engineer with 7+ years of hands-on front-end and back-end development experience designing, developing, testing, and delivering fully functioning cloud-based data analytics applications and backend services. You will help lead application development from ideation and architecture through deployment and optimization, and integrate data science solutions such as analytics and machine learning. Must have full-stack data analytics experience, AWS Cloud, and hands-on experience in data pipeline creation and ETL/ELT (AWS Glue, Databricks, dbt). This is a visible role that will deliver on key data transformation initiatives and shape the future of data-driven decision making across the business.
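For context, here is a minimal sketch of the serverless backend pattern this posting names (an AWS Lambda handler behind API Gateway). The route, payload shape, and environment variable are assumptions for illustration, not the client's actual service.

```python
# Hypothetical sketch: a Lambda handler invoked by API Gateway with a
# proxy-integration event for GET /metrics/{name}. Route and payload
# shape are assumptions, not the client's real API.
import json
import os

def lambda_handler(event, context):
    """Return a small analytics payload for GET /metrics/{name}."""
    metric = (event.get("pathParameters") or {}).get("name", "unknown")
    # A real service would query Athena/Redshift/DynamoDB here; a canned
    # value keeps the sketch self-contained.
    body = {"metric": metric, "value": 42,
            "stage": os.environ.get("STAGE", "dev")}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```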
Requirements
Hands-on experience in Data pipeline creation, ETL/ELT (AWS Glue, Databricks, DBT).
Build and maintain robust backend systems using AWS Lambda, API Gateway, and other serverless technologies.
Experience with frontend visualization tools like Tableau or PowerBI.
Hands-on expertise in Agile Development, Test Automation, IT Security best practices, Continuous Development and deployment tools (Git, Jenkins, Docker, Kubernetes), and functional programming.
Familiarity with IT security, container platforms, and software environments across QA, DEV, STAGING, and PROD phases.
Demonstrated thought leadership in driving innovation and best practice adoption.
Leadership & Collaboration: Strong communication, mentorship, and cross-functional teamwork.
Responsibilities include:
Application Development: Design, develop, and maintain both the front-end and back-end components of full-fledged applications using state-of-the-art programming languages and frameworks.
Architecture Integration: Incorporate API-enabled backend technologies into application architecture, following established architecture frameworks and standards to deliver cohesive and maintainable solutions.
Agile Collaboration: Work collaboratively with the team and product owner, following agile methodologies, to deliver secure, scalable, and fully functional applications in iterative cycles.
Testing Strategy and Framework Design: Develop and implement comprehensive testing strategies and frameworks to ensure the delivery of high-quality, reliable software.
For immediate consideration please submit resume in Word or PDF format
** AN EQUAL OPPORTUNITY EMPLOYER **
Principal Software Engineer
Data engineer job in Danbury, CT
If you're an experienced software engineer who wants to build things that actually move - fast, accurately, and at scale - this is a role worth considering.
This is a role for engineers who want to build the core - not just bolt things on.
If you want to own complex systems, influence product direction, and work in an environment where your expertise is valued and your career can thrive - let's talk.
Apply now or message us directly for a confidential conversation.
We're a well-established global tech organization that builds the software and systems behind some of the most advanced, high-speed electromechanical equipment in the industry. Our technology helps run the operations behind critical communications and logistics around the world.
Principal Software Engineer
Location: Norwalk, CT
Salary: $170,000 - $190,000 + Bonus
Right now, we're looking for a Senior Principal Software Engineer to take a leading role in architecting and delivering the software that powers our next generation of machine control systems. This is a hands-on, senior-level position where you'll have real ownership, technical influence, and direct impact on the business.
Why This Role Stands Out:
High Visibility: You won't be buried in code no one sees - this role is front and center across engineering, product, and executive teams.
Complex, Real-World Problems: This isn't app development. You'll be building control software for high-speed, precision-driven systems that integrate mechanical, electrical, and software components.
Stability + Innovation: Join a company that's been around for decades - but continues to evolve. The tech is serious, the teams are strong, and the roadmaps are ambitious.
Long-Term Career Growth: This isn't a stepping-stone job. It's a long-term opportunity to lead, grow, and shape the future of how our machines perform.
What You'll Do:
Design and develop real-time control software in C++ for large-scale, high-performance electromechanical systems
Lead cross-functional efforts across software, hardware, systems, and manufacturing
Guide architecture and technical strategy for multiple products and platforms
Debug and optimize at the system level - from code to motion control to hardware integration
Play a key role in roadmap planning and technical decision-making
What You Bring:
10+ years of experience in object-oriented software design and full-lifecycle development
Deep hands-on experience in C++ and real-time operating systems (such as RTX)
Strong background in mechatronics, machine control, or similar system-level environments
Ability to lead multidisciplinary teams and drive projects through ambiguity to delivery
Excellent communication skills - both with engineers and stakeholders
BS or MS in Computer Science or a related field
Bonus Points:
Experience with motion or servo motor control
Exposure to .NET, Java, or ASP.NET
Background in SQL Server, Oracle, or web-based service architecture
Knowledge of industrial automation or paper-handling/mailing systems
Sr. WAN Developer
Data engineer job in Holtsville, NY
Duties & Responsibilities
Strong Expertise in Java and Android
Solid Experience with mobile LTE(4G)/5G technologies especially telephony, HAL and QCRIL
Solid understanding of RIL and Telephony framework
Proficiency with debugging in embedded software systems. Familiarity with JTAG.
Exposure to one or more telecom networks and technologies such as GSM, 3G, LTE, IMS, and 5G RAN architecture.
Working experience with eSIM.
Good working experience with private 5G.
Exposure to Qualcomm chipsets and tools such as QXDM.
Exposure to feature development and bug fixing related to North American carriers (AT&T, Verizon, and T-Mobile).
Mandatory Technical Skills
Bachelor's degree and 8+ years' experience, plus the technical skills listed above
Data Scientist - Analytics
Data engineer job in Commack, NY
About the Role
We believe that the best decisions are backed by data. We are seeking a curious and analytical Data Scientist to champion our data-driven culture.
In this role, you will act as a bridge between technical data and business strategy. You will mine massive datasets, build predictive models, and - most importantly - tell the story behind the numbers to help our leadership team make smarter choices. You are perfect for this role if you are as comfortable with SQL queries as you are with slide decks.
What You Will Do
Exploratory Analysis: Dive deep into raw data to discover trends, patterns, and anomalies that others miss.
Predictive Modeling: Build and test statistical models (Regression, Time-series, Clustering) to forecast business outcomes and customer behavior.
Data Visualization: Create clear, impactful dashboards using Tableau, PowerBI, or Python libraries (Matplotlib/Seaborn) to visualize success metrics.
Experimentation: Design and analyze A/B tests to optimize product features and marketing campaigns.
Data Cleaning: Work with Data Engineers to clean and structure messy data for analysis.
Strategy: Present findings to stakeholders, translating complex math into clear, actionable business recommendations.
Requirements
Experience: 2+ years of experience in Data Science or Advanced Analytics.
The Toolkit: Expert proficiency in Python or R for statistical analysis.
Data Querying: Advanced SQL skills are non-negotiable (Joins, Window Functions, CTEs).
Math Mindset: Strong grasp of statistics (Hypothesis testing, distributions, probability).
Visualization: Ability to communicate data visually using Tableau, PowerBI, or Looker.
Communication: Excellent verbal and written skills; you can explain a p-value to a non-technical manager (a toy example of such a test follows this list).
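To make that concrete, here is a toy two-sample test of the kind that produces a p-value; the numbers are invented for illustration.

```python
# A toy two-sample t-test: the p-value is the probability of seeing a
# difference at least this large if the two variants truly performed the
# same. Small p => the observed gap is unlikely to be chance alone.
from scipy import stats

control = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]    # variant A (made up)
treatment = [11.2, 11.5, 11.0, 11.4, 11.3, 11.6]  # variant B (made up)

t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```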
Preferred Tech Stack (Keywords)
Languages: Python (Pandas, NumPy), R, SQL
Viz Tools: Tableau, PowerBI, Looker, Plotly
Machine Learning: Scikit-learn, XGBoost (applied to business problems)
Big Data: Spark, Hadoop, Snowflake
Benefits
Salary Range: $50,000 - $180,000 USD / year (Commensurate with location and experience)
Remote Friendly: Work from where you are most productive.
Learning Budget: Stipend for data courses (Coursera, DataCamp) and books.
Principal Data Scientist
Data engineer job in Bridgeport, CT
Description & Requirements
We now have an exciting opportunity for a Principal Data Scientist to join the Maximus AI Accelerator supporting both the enterprise and our clients. We are looking for an accomplished hands-on individual contributor and team player to be a part of the AI Accelerator team.
You will be responsible for architecting and optimizing scalable, secure AI systems and integrating AI models in production using MLOps best practices, ensuring systems are resilient, compliant, and efficient. This role requires strong systems thinking, problem-solving abilities, and the capacity to manage risk and change in complex environments. Success depends on cross-functional collaboration, strategic communication, and adaptability in fast-paced, evolving technology landscapes.
This position will be focused on strategic company-wide initiatives but will also play a role in project delivery and capture solutioning (i.e., supporting existing or future projects and providing solutions to capture new work).
This position requires occasional travel to the DC area for client meetings.
Essential Duties and Responsibilities:
- Make deep dives into the data, pulling out objective insights for business leaders.
- Initiate, craft, and lead advanced analyses of operational data.
- Provide a strong voice for the importance of data-driven decision making.
- Provide expertise to others in data wrangling and analysis.
- Convert complex data into visually appealing presentations.
- Develop and deploy advanced methods to analyze operational data and derive meaningful, actionable insights for stakeholders and business development partners.
- Understand the importance of automation and look to implement and initiate automated solutions where appropriate.
- Initiate and take the lead on AI/ML initiatives as well as develop AI/ML code for projects.
- Utilize various languages for scripting and write SQL queries. Serve as the primary point of contact for data and analytical usage across multiple projects.
- Guide operational partners on product performance and solution improvement/maturity options.
- Participate in intra-company data-related initiatives as well as help foster and develop relationships throughout the organization.
- Learn new skills in advanced analytics/AI/ML tools, techniques, and languages.
- Mentor more junior data analysts/data scientists as needed.
- Apply a strategic approach to lead projects from start to finish.
Job-Specific Minimum Requirements:
- Develop, collaborate, and advance the applied and responsible use of AI, ML and data science solutions throughout the enterprise and for our clients by finding the right fit of tools, technologies, processes, and automation to enable effective and efficient solutions for each unique situation.
- Contribute and lead the creation, curation, and promotion of playbooks, best practices, lessons learned and firm intellectual capital.
- Contribute to efforts across the enterprise to support the creation of solutions and real mission outcomes leveraging AI capabilities from Computer Vision, Natural Language Processing, LLMs and classical machine learning.
- Contribute to the development of mathematically rigorous process improvement procedures.
- Maintain current knowledge and evaluation of the AI technology landscape and emerging developments, and their applicability for use in production/operational environments.
Minimum Requirements
- Bachelor's degree in related field required.
- 10-12 years of relevant professional experience required.
Job-Specific Minimum Requirements:
- 10+ years of relevant Software Development + AI / ML / DS experience.
- Professional Programming experience (e.g. Python, R, etc.).
- Experience in two of the following: Computer Vision, Natural Language Processing, Deep Learning, and/or Classical ML.
- Experience with API programming.
- Experience with Linux.
- Experience with Statistics.
- Experience with Classical Machine Learning.
- Experience working as a contributor on a team.
Preferred Skills and Qualifications:
- Master's or BS in a quantitative discipline (e.g. Math, Physics, Engineering, Economics, Computer Science, etc.).
- Experience developing machine learning or signal processing algorithms:
- Ability to leverage mathematical principles to model new and novel behaviors.
- Ability to leverage statistics to identify true signals from noise or clutter.
- Experience working as an individual contributor in AI.
- Use of state-of-the-art technology to solve operational problems in AI and Machine Learning.
- Strong knowledge of data structures, common computing infrastructures/paradigms (stand alone and cloud), and software engineering principles.
- Ability to design custom solutions in the AI and Advanced Analytics sphere for customers. This includes the ability to scope customer needs, identify currently existing technologies, and develop custom software solutions to fill any gaps in available off the shelf solutions.
- Ability to build reference implementations of operational AI & Advanced Analytics processing solutions.
Background Investigations:
- IRS MBI - Eligibility
EEO Statement
Maximus is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, age, national origin, disability, veteran status, genetic information and other legally protected characteristics.
Pay Transparency
Maximus compensation is based on various factors including but not limited to job location, a candidate's education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus's total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation ranges may differ based on contract value but will be commensurate with job duties and relevant work experience. An applicant's salary history will not be used in determining compensation. Maximus will comply with regulatory minimum wage rates and exempt salary thresholds in all instances.
Accommodations
Maximus provides reasonable accommodations to individuals requiring assistance during any phase of the employment process due to a disability, medical condition, or physical or mental impairment. If you require assistance at any stage of the employment process - including accessing job postings, completing assessments, or participating in interviews - please contact People Operations at **************************.
Minimum Salary
$156,740.00
Maximum Salary
$234,960.00
Data Scientist
Data engineer job in New Haven, CT
RISE Data Scientist
Reports to: Monitoring, Evaluation, and Learning Manager
Salary: Competitive and commensurate with experience
Please note: Due to the upcoming holidays, application review for this position will begin the first week of January. Applicants can expect outreach by the end of the week of January 5.
Overview:
The RISE Network's mission is to ensure all high school students graduate with a plan and the skills and confidence to achieve college and career success. Founded in 2016, RISE partners with public high schools to lead networks where communities work together to use data to learn and improve. Through its core and most comprehensive network, RISE partners with nine high schools and eight districts, serving over 13,000 students in historically marginalized communities.
RISE high schools work together to ensure all students experience success as they transition to, through, and beyond high school by using data to pinpoint needs, form hypotheses, and pursue ideas to advance student achievement. Partner schools have improved Grade 9 promotion rates by nearly 20 percentage points, while also decreasing subgroup gaps and increasing schoolwide graduation and college access rates. In 2021, the RISE Network was honored to receive the Carnegie Foundation's annual Spotlight on Quality in Continuous Improvement recognition. Increasingly, RISE is pursuing opportunities to scale its impact through research publications, consulting partnerships, professional development experiences, and other avenues to drive excellent student outcomes.
Position Summary and Essential Job Functions:
The RISE Data Scientist will play a critical role in leveraging data to support continuous improvement, program evaluation, and research, enhancing the organization's evidence-based learning and decision-making. RISE is seeking a talented and motivated individual to design and conduct rigorous quantitative analyses to assess the outcomes and impacts of programs.
The ideal candidate is an experienced analyst who is passionate about using data to drive social change, with strong skills in statistical modeling, data visualization, and research design. This individual will also lead efforts to monitor and analyze organization-wide data related to mission progress and key performance indicators (KPIs), and communicate these insights in ways that inspire improvement and action. This is an exciting opportunity for an individual who thrives in an entrepreneurial environment and is passionate about closing opportunity gaps and supporting the potential of all students, regardless of life circumstances. The role will report to the Monitoring, Evaluation, and Learning (MEL) Manager and sit on the MEL team.
Responsibilities include, but are not limited to:
1. Research and Evaluation (30%)
Collaborate with MEL and network teams to design and implement rigorous process, outcome, and impact evaluations.
Lead in the development of data collection tools and survey instruments.
Manage survey data collection, reporting, and learning processes.
Develop RISE learning and issue briefs supported by quantitative analysis.
Design and implement causal inference approaches where applicable, including quasi-experimental designs.
Provide technical input on statistical analysis plans, monitoring frameworks, and indicator selection for network programs.
Translate complex findings into actionable insights and policy-relevant recommendations for non-technical audiences.
Report data for RISE leadership and staff, generating new insights to inform program design.
Create written reports, presentations, publications, and communications pieces.
2. Quantitative Analysis and Statistical Modeling (30%)
Clean, transform, and analyze large and complex datasets from internal surveys, the RISE data warehouse, and external data sources such as the National Student Clearinghouse (NSC).
Conduct exploratory research that informs organizational learning.
Lead complex statistical analyses using advanced methods (regression modeling, propensity score matching, difference-in-differences analysis, time-series analysis, etc.); a brief illustrative sketch follows this section.
Contribute to data cleaning and analysis for key performance indicator reporting.
Develop processes that support automation of cleaning and analysis for efficiency.
Develop and maintain analytical code and workflows to ensure reproducibility.
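As referenced above, here is a hedged sketch of a difference-in-differences estimate, one of the methods this section names: an OLS fit with a treated x post interaction on simulated data. Nothing here reflects actual RISE data or variable names.

```python
# Difference-in-differences on simulated data: the coefficient on the
# treated:post interaction is the DiD estimate of the program effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # e.g., school adopted the program
    "post": rng.integers(0, 2, n),     # before/after adoption year
})
true_effect = 3.0
df["score"] = (
    50 + 2 * df["treated"] + 1.5 * df["post"]
    + true_effect * df["treated"] * df["post"]
    + rng.normal(0, 4, n)
)

model = smf.ols("score ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # DiD estimate; should land near 3.0
```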
3. Data Visualization and Tool-building (30%)
Work closely with non-technical stakeholders to understand the question(s) they are asking and the use cases they have for specific data visualizations or tools.
Develop well-documented overviews and specifications for new tools.
Create clear, compelling data visualizations and dashboards.
Collaborate with Data Engineering to appropriately and sustainably source data for new tools.
Manage complex projects to build novel and specific tools for internal or external stakeholders.
Maintain custom tools for the duration of their usefulness, including by responding to feedback and requests from project stakeholders.
4. Data Governance and Quality Assurance (10%)
Support data quality assurance protocols and standards across the MEL team.
Ensure compliance with data protection, security, and ethical standards.
Maintain organized, well-documented code and databases.
Collaborate with the Data Engineering team to maintain RISE MEL data infrastructure.
Qualifications
Master's degree (or PhD) in statistics, economics, quantitative social sciences, public policy, data science, or related field.
Minimum of 3 years of professional experience conducting statistical analysis and managing large datasets.
Advanced proficiency in R, Python, or Stata for data analysis and modeling.
Experience designing and implementing quantitative research and evaluation studies.
Strong understanding of inferential statistics, experimental and quasi-experimental methods, and sampling design.
Strong knowledge of survey data collection tools such as Key Surveys, Google Forms, etc.
Excellent data visualization and communication skills
Experience with data visualization tools; strong preference for Tableau.
Ability to translate complex data into insights for diverse audiences, including non-technical stakeholders.
Ability to cultivate relationships and earn credibility with a diverse range of stakeholders.
Strong organizational and project management skills.
Strong sense of accountability and responsibility for results.
Ability to work in an independent and self-motivated manner.
Demonstrated proficiency with Google Workspace.
Commitment to equity, ethics, and learning in a nonprofit or mission-driven context.
Positive attitude and willingness to work in a collaborative environment.
Strong belief that all students can learn and achieve at high levels.
Preferred
Experience working on a monitoring, evaluation, and learning team.
Familiarity with school data systems and prior experience working in a school, district, or similar K-12 educational context preferred.
Experience working with survey data (e.g., DHS, LSMS), administrative datasets, or real-time digital data sources.
Working knowledge of data engineering or database management (SQL, cloud-based platforms).
Salary Range
$85k - $105k
Most new hires' salaries fall within the first half of the range, allowing team members to grow in their roles. For those who already have significant and aligned experiences at the same level as the role, placement may be at the higher end of the range.
The Connecticut RISE Network is an equal opportunity employer and welcomes candidates from diverse backgrounds.
Junior Data Scientist
Data engineer job in New Haven, CT
About Us
Bexorg is revolutionizing drug discovery by restoring molecular activity in postmortem human brains. Through our BrainEx platform, we directly experiment on functionally preserved human brain tissue, creating enormous high-fidelity molecular datasets that fuel AI-driven breakthroughs in treating CNS diseases. We are looking for a Junior Data Scientist to join our team and dive into this one-of-a-kind data. In this onsite role, you will work at the intersection of computational biology and machine learning, helping analyze high-dimensional brain data and uncover patterns that could lead to the next generation of CNS therapeutics. This is an ideal opportunity for a recent graduate or early-career scientist to grow in a fast-paced, mission-driven environment.
The Job
Data Analysis & Exploration: Work with large-scale molecular datasets from our BrainEx experiments - including transcriptomic, proteomic, and metabolic data. Clean, transform, and explore these high-dimensional datasets to understand their structure and identify initial insights or anomalies.
Collaborative Research Support: Collaborate closely with our life sciences, computational biology and deep learning teams to support ongoing research. You will help biologists interpret data results and assist machine learning researchers in preparing data for modeling, ensuring that domain knowledge and data science intersect effectively.
Machine Learning Model Execution: Run and tune machine learning and deep learning models on real-world central nervous system (CNS) data. You'll help set up experiments, execute training routines (for example, using scikit-learn or PyTorch models), and evaluate model performance to extract meaningful patterns that could inform drug discovery. A minimal illustration of such a routine appears after this list.
Statistical Insight Generation: Apply statistical analysis and visualization techniques to derive actionable insights from complex data. Whether it's identifying gene expression patterns or correlating molecular changes with experimental conditions, you will contribute to turning data into scientific discoveries.
Reporting & Communication: Document your analysis workflows and results in clear reports or dashboards. Present findings to the team, highlighting key insights and recommendations. You will play a key role in translating data into stories that drive decision-making in our R&D efforts.
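Purely for illustration, the sketch below shows one such training routine: cross-validating a regularized scikit-learn model on synthetic high-dimensional data (many features, few samples), a shape typical of molecular datasets. The real inputs would be BrainEx data; everything here is simulated.

```python
# Cross-validated training routine on synthetic high-dimensional data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Many features, few samples -- regularization matters in this regime.
X, y = make_classification(n_samples=300, n_features=2000,
                           n_informative=40, random_state=0)
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=0.1, max_iter=1000),
)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```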
Qualifications and Skills:
Strong Python Proficiency: Expert coding skills in Python and deep familiarity with the standard data science stack. You have hands-on experience with NumPy, pandas, and Matplotlib for data manipulation and visualization; scikit-learn for machine learning; and preferably PyTorch (or similar frameworks like TensorFlow) for deep learning tasks.
Educational Background: A Bachelor's or Master's degree in Data Science, Computer Science, Computational Biology, Bioinformatics, Statistics, or a related field. Equivalent practical project experience or internships in data science will also be considered.
Machine Learning Knowledge: Solid understanding of machine learning fundamentals and algorithms. Experience developing or applying models to real or simulated datasets (through coursework or projects) is expected. Familiarity with high-dimensional data techniques or bioinformatics methods is a plus.
Analytical & Problem-Solving Skills: Comfortable with statistics and data analysis techniques for finding signals in noisy data. Able to break down complex problems, experiment with solutions, and clearly interpret the results.
Team Player: Excellent communication and collaboration skills. Willingness to learn from senior scientists and ability to contribute effectively in a multidisciplinary team that includes biologists, data engineers, and AI researchers.
Motivation and Curiosity: Highly motivated, with an evident passion for data-driven discovery. You are excited by Bexorg's mission and eager to take on challenging tasks - whether it's mastering a new analysis method or digging into scientific literature - to push our research forward.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
Data Scientist, Media
Data engineer job in Farmington, CT
Accepting applicants in CT, FL, MN, NJ, NC, OH, TX
Mediate.ly is seeking a hands-on Data Scientist to elevate media performance analysis, predictive modeling, and channel optimization. In this role, you'll leverage advanced machine learning techniques and generative AI tools to uncover actionable insights, automate reporting, and enhance campaign effectiveness across digital channels. You'll manage and evolve our existing performance dashboard (with a small external team), own the feature roadmap, and collaborate closely with Primacy on SEO/CRO data integration. A key part of the role involves supporting Account teams with clear, insight-rich reporting powered by enhanced data storytelling and visualization. This role is meant for you if you are passionate about and skilled in transforming complex datasets into clear, compelling insights.
Measures:
AI-Enhanced Reporting & Insight Automation
Business & Media Impact
Reporting Standardization and Quality
Dashboard & Data Product Ownership
Reports to: President
RESPONSIBILITIES:
Media & Channel Analytics
Analyze paid media across Google Ads, Meta, LinkedIn, Programmatic, YouTube; translate results into clear recommendations.
Build/maintain attribution approaches (last-click, MTA, assisted) and funnel diagnostics; a toy comparison sketch follows this section.
Integrate CRM/GA4/platform data to surface actionable trends by geo, audience, and creative.
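As flagged above, a toy comparison of two attribution views: last-click, which credits only the final touch, and a simple linear "assisted" split, which divides credit across every touch. The journeys are invented.

```python
# Last-click vs. linear ("assisted") attribution on made-up journeys.
import pandas as pd

journeys = pd.DataFrame({
    "user": [1, 1, 1, 2, 2, 3],
    "channel": ["Meta", "YouTube", "Google Ads", "Meta", "Google Ads", "Meta"],
    "order": [1, 2, 3, 1, 2, 1],  # position of the touch in the journey
})

# Last-click: only the final touch per user gets credit.
last_click = (journeys.sort_values("order").groupby("user").tail(1)
              .groupby("channel").size().rename("last_click"))

# Linear assisted: every touch gets an equal share of one conversion.
linear = (journeys
          .assign(w=1 / journeys.groupby("user")["channel"].transform("size"))
          .groupby("channel")["w"].sum().rename("assisted_linear"))

print(pd.concat([last_click, linear], axis=1).fillna(0))
```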
Predictive Modeling & Experimentation
Develop forecasting and propensity models to guide budget allocation and channel mix.
Run simulations (CPM/CPC/conv-rate scenarios) and design A/B and lift tests.
Partner with SEO/CRO to connect acquisition with on-site conversion improvements.
Dashboard Ownership (Existing Platform)
Manage the dashboard development team (backlog, priorities, sprints) and collaborate on new features that improve usability and insight depth.
Gather stakeholder requirements (Accounts, Media, Leadership) and maintain a transparent roadmap.
Ensure data reliability (ETL QA, schema governance, tagging/UTM standards).
Reporting & Client Enablement
Support Account teams with data-backed, insight-driven reporting (monthly/quarterly reviews, executive summaries, narrative analyses).
Build repeatable report templates; automate where possible while preserving clear storytelling.
AI & Product Ideation
Explore LLM/ML use cases (persona signals, creative scoring, conversion prediction).
Prototype lightweight tools for planners/buyers (e.g., channel recommender, influence maps).
What it takes to succeed in this role-
QUALIFICATIONS:
5-7 years in data science/marketing analytics/digital media performance.
Proficient in Python or R; strong SQL; experience with GA4/BigQuery and media platform exports.
Comfort with BI tools (Looker Studio, Tableau, Power BI), dashboard product management, and data visualization.
Familiarity with generative AI tools (e.g., OpenAI, Hugging Face, or Google Vertex AI) for automating insights, reporting, or content analysis.
Comfortable in a fast-paced environment with competing priorities.
Experience applying machine learning models to media mix modeling, customer segmentation, or predictive performance forecasting.
Strong understanding of marketing attribution models and how to evaluate cross-channel performance using statistical techniques.
Excellent communicator who can turn data into decisions for non-technical stakeholders.
Experience with paid media a plus!
Key Competencies
Data Visualization & Storytelling - Skilled in transforming complex datasets into clear, compelling insights using tools like Tableau, Power BI, or Python libraries.
AI & Machine Learning Expertise - Proficient in applying supervised and unsupervised learning techniques to optimize media performance and audience targeting.
Media Analytics & Attribution - Deep understanding of digital media metrics, multi-touch attribution models, and cross-channel performance analysis.
Dashboard Development & Management - Experience managing analytics dashboards, defining feature roadmaps, and collaborating with developers for scalable solutions.
SEO/CRO Data Integration - Ability to synthesize SEO and conversion rate optimization data to inform predictive models and campaign strategies.
Stakeholder Communication - Strong ability to translate data into actionable insights for Account teams and clients, supporting strategic decision-making.
Automation & Efficiency - Familiarity with AI tools to streamline reporting, anomaly detection, and campaign optimization workflows.
Statistical Analysis & Experimentation - Proficient in A/B testing, regression analysis, and causal inference to validate media strategies.
The Perks:
The best co-workers you'll ever find
Unlimited PTO
Medical, Dental, Vision, 401k plus match
Annual performance bonus eligibility
Ongoing training opportunities
Planned outings and team events (remote workers included!)
PHYSICAL DEMANDS AND WORK ENVIRONMENT:
Prolonged periods of sitting at a desk and working on a computer.
Occasional standing, walking, or lifting of office supplies (up to 10-20 lbs.)
Frequent communication via phone, email, and video conferencing.
Work is performed in a temperature-controlled office environment with standard lighting and noise levels.
Position may require occasional travel to client site
Compensation Range: We offer a competitive salary based on experience and qualifications. The compensation range for this position is $90,000 to $100,000 annually, with potential for bonuses, stock and additional benefits.
EEO & Accessibility Statement
Primacy is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. If you require reasonable accommodation during the application or interview process, please contact [email protected]
Data Engineer w/ AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Data engineer job in Ridgefield, CT
OPEN JOB: Data Engineer w/ AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
HYBRID: This candidate will work on site 2-3X per week in the Ridgefield, CT location
SALARY: $140,000 to $185,000
2 Openings
NOTE: CANDIDATE MUST BE US CITIZEN OR GREEN CARD HOLDER
We are seeking a highly skilled and experienced Data Engineer to design, build, and maintain our scalable and robust data infrastructure on a cloud platform. In this pivotal role, you will be instrumental in enhancing our data infrastructure, optimizing data flow, and ensuring data availability. You will be responsible for both the hands-on implementation of data pipelines and the strategic design of our overall data architecture.
Seeking a candidate with hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation; proficiency in Python and SQL; and DevOps/CI/CD experience.
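For orientation, a rough sketch of the shape such a Glue job often takes: read raw data from the lake, de-duplicate and type a few columns, and write partitioned Parquet back for Athena. The bucket paths, schema, and transforms are placeholders, not this employer's code.

```python
# Skeleton of an AWS Glue PySpark job; paths and columns are placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

raw = spark.read.json("s3://example-raw-bucket/events/")   # placeholder path
clean = (raw.dropDuplicates(["event_id"])
            .withColumn("event_date", F.to_date("event_ts")))
(clean.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-curated-bucket/events/"))     # Athena-friendly
```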
Duties & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL processes to support data integration and analytics.
Collaborate with data architects, modelers and IT team members to help define and evolve the overall cloud-based data architecture strategy, including data warehousing, data lakes, streaming analytics, and data governance frameworks
Collaborate with data scientists, analysts, and other business stakeholders to understand data requirements and deliver solutions.
Optimize and manage data storage solutions (e.g., S3, Snowflake, Redshift) ensuring data quality, integrity, security, and accessibility.
Implement data quality and validation processes to ensure data accuracy and reliability.
Develop and maintain documentation for data processes, architecture, and workflows.
Monitor and troubleshoot data pipeline performance and resolve issues promptly.
Consulting and Analysis: Meet regularly with defined clients and stakeholders to understand and analyze their processes and needs. Determine requirements to present possible solutions or improvements.
Technology Evaluation: Stay updated with the latest industry trends and technologies to continuously improve data engineering practices.
Requirements
Cloud Expertise: Expert-level proficiency in at least one major cloud platform (AWS, Azure, or GCP) with extensive experience in their respective data services (e.g., AWS S3, Glue, Lambda, Redshift, Kinesis; Azure Data Lake, Data Factory, Synapse, Event Hubs; GCP BigQuery, Dataflow, Pub/Sub, Cloud Storage); experience with AWS data cloud platform preferred
SQL Mastery: Advanced SQL writing and optimization skills.
Data Warehousing: Deep understanding of data warehousing concepts, Kimball methodology, and various data modeling techniques (dimensional, star/snowflake schemas).
Big Data Technologies: Experience with big data processing frameworks (e.g., Spark, Hadoop, Flink) is a plus.
Database Systems: Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
DevOps/CI/CD: Familiarity with DevOps principles and CI/CD pipelines for data solutions.
Hands-on experience with AWS services such as AWS Glue, Lambda, Athena, Step Functions, and Lake Formation
Proficiency in Python and SQL
Desired Skills, Experience and Abilities
4+ years of progressive experience in data engineering, with a significant portion dedicated to cloud-based data platforms.
ETL/ELT Tools: Hands-on experience with ETL/ELT tools and orchestrators (e.g., Apache Airflow, Azure Data Factory, AWS Glue, dbt).
Data Governance: Understanding of data governance, data quality, and metadata management principles.
AWS Experience: Ability to evaluate AWS cloud applications, make architecture recommendations; AWS solutions architect certification (Associate or Professional) is a plus
Familiarity with Snowflake
Knowledge of dbt (data build tool)
Strong problem-solving skills, especially in data pipeline troubleshooting and optimization
If you are interested in pursuing this opportunity, please respond back and include the following:
Full CURRENT Resume
Required compensation
Contact information
Availability
Upon receipt, one of our managers will contact you to discuss in full
STEPHEN FLEISCHNER
Recruiting Manager
INTERMEDIA GROUP, INC.
EMAIL: *******************************
Tech Lead, Data & Inference Engineer
Data engineer job in Greenwich, CT
Job Description
Our Client
A fast-moving, venture-backed advertising technology startup based in San Francisco. They have raised $12 million in funding and are transforming how business-to-business marketers reach their ideal customers. Their identity resolution technology blends business and consumer signals to convert static audience lists into high-match, cross-channel segments without the use of cookies. By transforming first-party and third-party data into precision-targetable audiences across platforms such as Meta, Google and YouTube, they enable marketing teams to reach higher match rates, reduce wasted advertising spend and accelerate pipeline growth. With a strong understanding of how business buyers behave in channels that have traditionally been focused on business-to-consumer activity, they are redefining how business brands scale demand generation and account-based efforts.
About Us
Catalyst Labs is a leading talent agency with a specialized vertical in Applied AI, Machine Learning, and Data Science. We stand out as an agency that's deeply embedded in our clients' recruitment operations.
We collaborate directly with Founders, CTOs, and Heads of AI who are driving the next wave of applied intelligence, from model optimization to productized AI workflows. We take pride in facilitating conversations that align with your technical expertise, creative problem-solving mindset, and long-term growth trajectory in the evolving world of intelligent systems.
Location: San Francisco
Work type: Full Time
Compensation: above market base + bonus + equity
Roles & Responsibilities
Lead the design, development and scaling of an end to end data platform from ingestion to insights, ensuring that data is fast, reliable and ready for business use.
Build and maintain scalable batch and streaming pipelines, transforming diverse data sources and third-party application programming interfaces into trusted, low-latency systems; a minimal orchestration sketch follows this list.
Take full ownership of reliability, cost and service level objectives. This includes achieving 99.9% uptime, maintaining minutes-level latency and optimizing cost per terabyte. Conduct root cause analysis and provide long-lasting solutions.
Operate inference pipelines that enhance and enrich data. This includes enrichment, scoring and quality assurance using large language models and retrieval augmented generation. Manage version control, caching and evaluation loops.
Work across teams to deliver data as a product through the creation of clear data contracts, ownership models, lifecycle processes and usage based decision making.
Guide architectural decisions across the data lake and the entire pipeline stack. Document lineage, trade offs and reversibility while making practical decisions on whether to build internally or buy externally.
Scale integration with application programming interfaces and internal services while ensuring data consistency, high data quality and support for both real time and batch oriented use cases.
Mentor engineers, review code and raise the overall technical standard across teams. Promote data driven best practices throughout the organization.
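As referenced in the pipeline item above, here is a minimal batch-orchestration sketch using Airflow, one of the orchestrators named under Core Experience (assuming Airflow 2.4+). The DAG name, tasks, and schedule are invented.

```python
# A two-step batch DAG: ingest from a third-party API, then run the
# LLM-based enrichment pass. Names and schedule are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from third-party API into the lake")

def enrich():
    print("run LLM enrichment / scoring on new records")

with DAG(dag_id="audience_enrichment",
         start_date=datetime(2024, 1, 1),
         schedule="@hourly",   # Airflow 2.4+ keyword
         catchup=False) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    enrich_task = PythonOperator(task_id="enrich", python_callable=enrich)
    ingest_task >> enrich_task
```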
Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or Mathematics.
Excellent written and verbal communication; proactive and collaborative mindset.
Comfortable in hybrid or distributed environments with strong ownership and accountability.
A founder-level bias for action: able to identify bottlenecks, automate workflows, and iterate rapidly based on measurable outcomes.
Demonstrated ability to teach, mentor, and document technical decisions and schemas clearly.
Core Experience
6 to 12 years of experience building and scaling production-grade data systems, with deep expertise in data architecture, modeling, and pipeline design.
Expert SQL (query optimization on large datasets) and Python skills.
Hands-on experience with distributed data technologies (Spark, Flink, Kafka) and modern orchestration tools (Airflow, Dagster, Prefect).
Familiarity with dbt, DuckDB, and the modern data stack; experience with IaC, CI/CD, and observability.
Exposure to Kubernetes and cloud infrastructure (AWS, GCP, or Azure).
Bonus: Strong Node.js skills for faster onboarding and system integration.
Previous experience at a high-growth startup (10 to 200 people) or early-stage environment with a strong product mindset.
Data Engineer (Hybrid - Ridgefield, CT) - 1760
Data engineer job in Ridgefield, CT
Employment Type: Full-time employment - no consulting or corp-to-corp
Salary Range: $140K - $185K + bonus
Visa restrictions: US Citizen or Green Card only. This role isn't eligible for H-1B, TN, F1 or OPT
Overview
We are looking for a hands-on Data Engineer to design, build, and maintain scalable data platforms and pipelines in a modern cloud environment. You will play a key role in shaping our data architecture, optimizing data flow, and ensuring data quality and availability across the organization.
This role offers the opportunity to contribute directly to meaningful work that supports the development and delivery of life-changing products. You will collaborate with global teams and be part of a culture that values impact, growth, balance, and well-being.
What You'll Do
Design, build, and optimize data pipelines and ETL/ELT workflows to support analytics and reporting.
Partner with architects and engineering teams to define and evolve our cloud-based data architecture, including data lakes, data warehouses, and streaming data platforms.
Work closely with data scientists, analysts, and business partners to understand requirements and deliver reliable, reusable data solutions.
Develop and maintain scalable data storage solutions (e.g., AWS S3, Redshift, Snowflake) with a focus on performance, reliability, and security.
Implement data quality checks, validation processes, and metadata documentation.
Monitor, troubleshoot, and improve pipeline performance and workflow efficiency.
Stay current on industry trends and recommend new technologies and approaches.
Qualifications
Data Engineer (Mid-Level)
Strong understanding of data integration, data modeling, and SDLC.
Experience working on project teams and delivering within Agile environments.
Hands-on experience with AWS data services (e.g., Glue, Lambda, Athena, Step Functions, Lake Formation).
Associate degree + 8 years experience, or Bachelor's + 4 years, or Master's + 2 years. Or Associate degree + 4 years experience, or Bachelor's + 2 years, or Master's + 1 year experience.
Expert-level proficiency in at least one major cloud platform (AWS preferred).
Advanced SQL and strong understanding of data warehousing and data modeling (Kimball/star schema).
Experience with big data processing (e.g., Spark, Hadoop, Flink) is a plus.
Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Familiarity with CI/CD pipelines and DevOps principles.
Proficiency in Python and SQL (required).
Desired Skills
Experience with ETL/ELT tools (e.g., Airflow, dbt, AWS Glue, ADF).
Understanding of data governance and metadata management.
Experience with Snowflake.
AWS certification is a plus.
Strong problem-solving skills and ability to troubleshoot pipeline performance issues.
Data Analytics Engineer
Data engineer job in Farmington, CT
As a Data Analytics Engineer at Corbin, you will play a pivotal role in bridging business needs and technical execution. You'll partner closely with client-facing teams, internal operations, and our systems architecture team to lead design, implement, and optimize data-driven solutions that reduce manual processes and enhance decision-making. This is a hands-on role focused on building and maintaining relational databases, engineering data pipelines, and enabling analytics through cloud data platforms. The ideal candidate will have a strong SQL foundation, exposure to cloud data platforms, and a working knowledge of capital markets. If you're eager to learn, solve complex problems, and contribute to scalable data systems, this role is for you!
Core Responsibilities:
Documentation: In partnership with the Business Analyst, translate business requirements into technical specifications, process maps, and data flow diagrams to guide solution design and implementation.
Relational Database Management: Support the design and maintenance of relational databases in Snowflake, DOMO, or other tools.
Data Flows: Collaborate with the systems architect and business analyst to design and maintain secure, reliable data flows between cloud systems, leveraging APIs and automated processes.
ETL and Data Pipelines: Build, deploy, and manage ETL/ELT pipelines that ensure clean, structured, and reliable data for reporting and analytics.
Data Visualization: Partner with functional teams to develop and maintain automated dashboards and reporting solutions to support business intelligence and client reporting.
Collaboration: Work closely with cross-functional teams to create and implement solutions that support the organization's evolving data needs.
Data Analytics: Interpret datasets to uncover trends and insights.
Data Governance: Advocate, implement, and enforce best practices around data quality, security, and governance to ensure compliance and reliability across platforms.
API Integration: Lead integration and development with additional tools to support business needs (a minimal ingestion sketch follows this list).
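As a rough illustration of the API-integration responsibility, the sketch below pulls records from a hypothetical JSON endpoint and lands them as CSV for a downstream warehouse load (e.g., a Snowflake stage). The URL, auth scheme, and response envelope are assumptions, not any particular vendor's API.

```python
# Hypothetical REST-to-CSV ingestion sketch; endpoint and fields are invented.
import csv
import requests

API_URL = "https://api.example.com/v1/positions"  # placeholder endpoint

def fetch_positions(token: str) -> list[dict]:
    """Fetch one page of records from the (assumed) JSON API."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},  # assumed auth scheme
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]  # assumed response envelope

def write_csv(rows: list[dict], path: str) -> None:
    """Land records as CSV for a downstream load into the warehouse."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
```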
Requirements
Strong analytical and critical thinking skills with the ability to translate business needs into technical solutions
Initiative-taking, detail-oriented, and passionate about innovation and process improvement
Hands-on experience with relational databases (preferably Snowflake)
Proficiency in SQL, with the ability to design and optimize queries for performance
Experience building ETL/ELT pipelines and automated reporting dashboards (DOMO, Tableau, or similar)
Familiarity with RESTful APIs and data integration techniques
Excellent communication and interpersonal skills, with the ability to engage both technical and non-technical stakeholders
Ability to work in a fast-paced, dynamic environment and manage multiple projects effectively; comfortable working in agile, cross-functional teams
Cloud Data Platforms: Proficiency with cloud-based data warehouses and relevant tools.
Business Acumen: Understanding how to align data solutions with business goals and provide actionable insights
Qualifications
Bachelor's degree in Computer Science or 3+ years of relevant work experience
Background in capital markets, financial services, or investor relations preferred but not required
Experience with cloud data platforms (Snowflake, AWS, Azure, or GCP) is a plus
Exposure to scripting languages (Python, JavaScript, etc.) for data transformation and automation is a plus
C++ Market Data Engineer (USA)
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a C++ Market Data Engineer to design and build ultra-low-latency feed handlers for premier vendor feeds and major exchange multicast feeds. This is a high-impact role that sits at the heart of Trexquant's trading platform; the quality, speed, and reliability of your code directly influence every strategy we run.
Responsibilities
* Design & implement high-performance feed handlers in modern C++ for equities, futures, and options across global venues (e.g., NYSE, CME, Refinitiv RTS, Bloomberg B-PIPE).
* Optimize for micro- and nanosecond latency using lock-free data structures, cache-friendly memory layouts, and kernel-bypass networking where appropriate.
* Build reusable libraries for message decoding, normalization, and publication to internal buses shared by research, simulation, and live trading systems (the decode-and-normalize pattern is sketched after this list).
* Collaborate with cross-functional teams to tune TCP/UDP multicast stacks, kernel parameters, and NIC settings for deterministic performance.
* Provide robust failover, gap-recovery, and replay mechanisms to guarantee data integrity under packet loss or venue outages.
* Instrument code paths with precision timestamping and performance metrics; drive continuous latency regression testing and capacity planning.
* Partner closely with quantitative researchers to understand downstream data requirements and to fine-tune delivery formats for both simulation and live trading.
* Produce clear architecture documents, operational run-books, and post-mortems; participate in a 24×7 follow-the-sun support rotation for mission-critical market-data services.
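The production handlers described here are written in modern C++, but the decode-and-normalize pattern from the reusable-libraries bullet can be shown compactly in Python. The wire format below (8-byte padded symbol plus fixed-point price and size) is invented for illustration and does not correspond to any real venue's protocol.

```python
# Illustrative decode-and-normalize sketch; the wire format is invented.
import struct
from dataclasses import dataclass

# Hypothetical layout: 8-byte symbol, uint32 price in 1/10000ths, uint32 size.
TICK_FORMAT = "<8sII"
TICK_SIZE = struct.calcsize(TICK_FORMAT)

@dataclass
class NormalizedTick:
    symbol: str
    price: float
    size: int

def decode_tick(buf: bytes) -> NormalizedTick:
    """Decode one raw message and normalize it to an internal representation."""
    raw_symbol, raw_price, size = struct.unpack(TICK_FORMAT, buf[:TICK_SIZE])
    return NormalizedTick(
        symbol=raw_symbol.rstrip(b"\x00").decode(),
        price=raw_price / 10_000.0,  # fixed-point to float
        size=size,
    )

if __name__ == "__main__":
    msg = struct.pack(TICK_FORMAT, b"AAPL", 1_923_500, 300)
    print(decode_tick(msg))  # NormalizedTick(symbol='AAPL', price=192.35, size=300)
```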
Application Support Engineer
Data engineer job in Fairfield, CT
About Us
We are a global investment firm focused on combining financial theory with practical application. Our goal is to deliver long-term results by cutting through market noise, identifying the most impactful factors, and developing ideas that stand up to rigorous testing. Over the years, we have built a reputation as innovators in portfolio management and alternative investment strategies.
Our team values intellectual curiosity, honesty, and a commitment to understanding what drives financial markets. Collaboration, transparency, and openness to new ideas are central to our culture, fostering innovation and continuous improvement.
Your Role
We are seeking an Application Support Engineer to operate at the intersection of technical systems and business processes that power our investment operations. This individual contributor role involves supporting a complex technical environment, resolving production issues, and contributing to projects that enhance systems and processes. You will gain hands-on experience with cloud-deployed portfolio management and research systems and work closely with both business and technical teams.
This role is ideal for someone passionate about technology and systems reliability, looking to grow into a systems reliability or engineering-focused position.
Responsibilities
Develop and maintain expertise in the organization's applications to support internal users.
Manage user expectations and ensure satisfaction with our systems and tools.
Advocate for users with project management and development teams.
Work closely with QA to report and track issues identified by users.
Ensure proper escalation for unresolved issues to maintain user satisfaction.
Participate in production support rotations, including off-hours coverage.
Identify gaps in support processes and create documentation or workflows in collaboration with development and business teams.
Diagnose and resolve system issues, including debugging code, analyzing logs, and investigating performance or resource problems (a small log-triage sketch follows this list).
Collaborate across teams to resolve complex technical problems quickly and efficiently.
Maintain documentation of system behavior, root causes, and process improvements.
Contribute to strategic initiatives that enhance system reliability and operational efficiency.
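As a flavor of the log-analysis work mentioned above, here is a small triage sketch. It assumes plain-text logs with an "ERROR <component>" layout, which is a hypothetical format rather than any specific system's.

```python
# Log-triage sketch; the log path and "ERROR <component>" format are assumed.
import re
from collections import Counter

ERROR_RE = re.compile(r"ERROR\s+(\S+)")

def top_error_sources(log_path: str, n: int = 5) -> list[tuple[str, int]]:
    """Count ERROR lines per component to see where failures cluster."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts.most_common(n)

# Hypothetical usage:
# print(top_error_sources("/var/log/app/portfolio.log"))
```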
Qualifications
Bachelor's degree in Engineering, Computer Science, or equivalent experience.
2+ years of experience supporting complex software systems, collaborating with business users and technical teams.
Hands-on technical skills including SQL and programming/debugging (Python preferred).
Strong written and verbal communication skills.
Ability to work independently and within small teams.
Eagerness to learn new technologies and automate manual tasks to improve system reliability.
Calm under pressure and demonstrates responsibility, maturity, and trustworthiness.
Compensation & Benefits
Salary range: $115,000-$135,000 (may vary based on experience, location, or organizational needs).
Eligible for annual discretionary bonus.
Comprehensive benefits package including paid time off, medical/dental/vision coverage, 401(k), and other benefits as applicable.
The Phoenix Group Advisors is an equal opportunity employer. We are committed to creating a diverse and inclusive workplace and prohibit discrimination and harassment of any kind based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. We strive to attract talented individuals from all backgrounds and provide equal employment opportunities to all employees and applicants for employment.
Data Engineer
Data engineer job in New Haven, CT
Bexorg is transforming drug discovery by restoring molecular activity in postmortem human brains. Our groundbreaking BrainEx platform enables direct experimentation on functionally preserved human brain tissue, generating massive, high-fidelity molecular datasets that power AI-driven drug discovery for CNS diseases. We are seeking a Data Engineer to help harness this unprecedented data. In this onsite, mid-level role, you will design and optimize the pipelines and cloud infrastructure that turn terabytes of raw experimental data into actionable insights, driving our mission to revolutionize treatments for central nervous system disorders.
The Job:
Data Ingestion & Pipeline Management: Manage and optimize massive data ingestion pipelines from cutting-edge experimental devices, ensuring reliable, real-time capture of complex molecular data.
Cloud Data Architecture: Organize and structure large datasets in Google Cloud Platform, using tools like BigQuery and cloud storage to build a scalable data warehouse for fast querying and analysis of brain data (a minimal load sketch follows this list).
Large-Scale Data Processing: Design and implement robust ETL/ELT processes to handle petabyte-scale data, emphasizing speed, scalability, and data integrity at each step of the process.
Internal Data Services: Work closely with our software and analytics teams to expose processed data and insights to internal web applications. Build appropriate APIs or data access layers so that scientists and engineers can seamlessly visualize and interact with the data through our web platform.
Internal Experiment Services: Work with our life science teams to establish data entry protocols that ensure seamless metadata integration and association with experimental data.
Infrastructure Innovation: Recommend and implement cloud infrastructure improvements (such as streaming technologies, distributed processing frameworks, and automation tools) that will future-proof our data pipeline. You will continually assess new technologies and best practices to increase throughput, reduce latency, and support our rapid growth in data volume.
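As a simplified illustration of the BigQuery-centric work above, the sketch below loads staged Parquet files from Cloud Storage into a table using the google-cloud-bigquery client library. The bucket, project, dataset, and table names are placeholders.

```python
# BigQuery load sketch using the google-cloud-bigquery client library;
# all resource names below are placeholders.
from google.cloud import bigquery

def load_parquet_to_bq(uri: str, table_id: str) -> None:
    """Load Parquet files from Cloud Storage into a BigQuery table."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.PARQUET,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()  # block until the load job completes
    table = client.get_table(table_id)
    print(f"{table.num_rows} rows now in {table_id}")

# Hypothetical usage:
# load_parquet_to_bq("gs://example-bucket/runs/*.parquet",
#                    "example-project.brain_data.molecular_readings")
```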
Qualifications and Skills:
Experience with Google Cloud: Hands-on experience with Google Cloud services (especially BigQuery and related data tools) for managing and analyzing large datasets. You've designed or maintained data systems in a cloud environment and understand how to leverage GCP for big data workloads.
Data Engineering Background: 3+ years of experience in data engineering or a similar role. Proven ability to build and maintain data pipelines dealing with petabyte-scale data. Proficiency in programming (e.g., Python, Java, or Scala) and SQL for developing data processing jobs and queries.
Scalability & Performance Mindset: Familiarity with distributed systems or big data frameworks and a track record of optimizing data workflows for speed and scalability. You can architect solutions that handle exponential data growth without sacrificing performance.
Biology Domain Insight: Exposure to biology or experience working with scientific data (e.g. genomics, bioinformatics, neuroscience) is a strong plus. While deep domain expertise isn't required, you should be excited to learn about our experimental data and comfortable discussing requirements with biologists.
Problem-Solving & Collaboration: Excellent problem-solving skills, attention to detail, and a proactive attitude in tackling technical challenges. Ability to work closely with cross-functional teams (scientists, software engineers, data scientists) and communicate complex data systems in clear, approachable terms.
Passion for the Mission: A strong desire to apply your skills to transform drug discovery. You are inspired by Bexorg's mission and eager to build the data backbone of a platform that could unlock new therapies for CNS diseases.
Local to New Haven, CT preferred. No relocation offered for this position.
Bexorg is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do their best work. We seek applicants of all backgrounds and identities, across race, color, ethnicity, national origin or ancestry, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Bexorg will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law.
Data Platform Engineer (USA)
Data engineer job in Stamford, CT
Trexquant is a growing systematic fund at the forefront of quantitative finance, with a core team of highly accomplished researchers and engineers. To keep pace with our expanding global trading operations, we are seeking a highly motivated and technically rigorous Data Platform Engineer to help modernize our foundational data infrastructure. As a Data Platform Engineer, you will be at the center of building the systems that ensure the quality, reliability, and discoverability of mission-critical data. Your work will directly impact the data operators and downstream consumers by creating robust tools, monitoring, and workflows that ensure accuracy, validity, and timeliness of data across the firm.
Responsibilities
* Architect and maintain core components of the Data Platform with a strong focus on reliability and scalability.
* Build and maintain tools to manage data feeds, monitor validity, and ensure data timeliness.
* Design and implement event-based data orchestration pipelines.
* Evaluate and integrate data quality and observability tools via POCs and MVPs.
* Stand up a data catalog system to improve data discoverability and lineage tracking.
* Collaborate closely with infrastructure teams to support operational excellence and platform uptime.
* Write and maintain data quality checks to validate real-time and batch data.
* Validate incoming real-time data using custom Python-based validators (a minimal validator sketch appears after this list).
* Ensure low-level data correctness and integrity, especially in high-precision environments.
* Build robust and extensible systems that will be used by data operators to ensure the health of our data ecosystem.
* Own the foundational systems used by analysts and engineers alike to trust and explore our datasets.
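Since the role explicitly calls out custom Python-based validators, here is a minimal sketch of one. The quote schema and the specific checks (positive prices, uncrossed bid/ask) are illustrative assumptions, not the firm's actual rules.

```python
# Minimal record-level validator sketch; schema and checks are illustrative.
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    ok: bool
    errors: list = field(default_factory=list)

def validate_quote(record: dict) -> ValidationResult:
    """Check one real-time quote record for basic structural correctness."""
    errors = []
    for name in ("symbol", "bid", "ask", "ts"):
        if name not in record:
            errors.append(f"missing field: {name}")
    if not errors:
        if record["bid"] <= 0 or record["ask"] <= 0:
            errors.append("non-positive price")
        if record["bid"] > record["ask"]:
            errors.append("crossed market: bid above ask")
    return ValidationResult(ok=not errors, errors=errors)

if __name__ == "__main__":
    good = {"symbol": "ESZ4", "bid": 5300.00, "ask": 5300.25, "ts": 1718000000.0}
    bad = {"symbol": "ESZ4", "bid": 5301.00, "ask": 5300.25, "ts": 1718000000.0}
    print(validate_quote(good))  # ok=True
    print(validate_quote(bad))   # flags a crossed market
```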