Senior Data Scientist (Senior Consultant)
Senior data scientist job in New York, NY
Job Family:
Data Science Consulting
Travel Required:
Up to 10%
Clearance Required:
Ability to Obtain Public Trust
About our AI and Data Capability Team
Our consultants on the AI and Data Analytics Capability team help clients maximize the value of their data and automate business processes. This high-performing team works with clients to implement the full spectrum of data analytics and data science services: data architecture and storage, data engineering and querying, data visualization and dashboarding, predictive analytics, machine learning and artificial intelligence, and intelligent automation. Our services enable our clients to define their information strategy, generate mission-critical insights, support data-driven decision making, reduce cost and complexity, increase trust, and improve operational effectiveness.
What You Will Do:
Data Collection & Management: Identify, gather, and manage data from primary and secondary sources, ensuring its accuracy and integrity.
Data Cleaning & Preprocessing: Clean raw data by identifying and addressing inconsistencies, missing values, and errors to prepare it for analysis.
Data Analysis & Interpretation: Apply statistical techniques and analytical methods to explore datasets, discover trends, find patterns, and derive insights.
Data Visualization & Reporting: Develop reports, dashboards, and visualizations using tools like Tableau or Power BI to present complex findings clearly to stakeholders.
Collaboration & Communication: Work with cross-functional teams, understand business requirements, and effectively communicate insights to support data-driven decision-making.
Problem Solving: Address specific business challenges by using data to identify underperforming processes, pinpoint areas for growth, and determine optimal strategies.
What You Will Need:
US Citizenship is required
Bachelor's degree is required
Minimum THREE (3) years of experience using Power BI, Tableau, and other visualization tools to develop intuitive, user-friendly dashboards and visualizations.
Skilled in SQL, R, and other languages to assist in database querying and statistical programming.
Strong foundational knowledge and experience in statistics, probability, and experimental design.
Familiarity with cloud platforms (e.g., Amazon Web Services, Azure, or Google Cloud) and containerization (e.g., Docker).
Experience applying data governance concepts and techniques to assure greater data quality and reliability.
The curiosity and creativity to uncover hidden patterns and opportunities.
Strong communication skills to bridge technical and business worlds.
What Would Be Nice To Have:
Hands-on experience with Python, SQL, and modern ML frameworks.
Experience in data and AI system development, with a proven ability to design scalable architectures and implement reliable models.
Expertise in Python or Java for data processing.
Demonstrated work experience within the public sector.
Ability to support business development including RFP/RFQ/RFI responses involving data science / analytics.
The annual salary range for this position is $113,000.00-$188,000.00. Compensation decisions depend on a wide range of factors, including but not limited to skill sets, experience and training, security clearances, licensure and certifications, and other business and organizational needs.
What We Offer:
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
Benefits include:
Medical, Rx, Dental & Vision Insurance
Personal and Family Sick Time & Company Paid Holidays
Position may be eligible for a discretionary variable incentive bonus
Parental Leave and Adoption Assistance
401(k) Retirement Plan
Basic Life & Supplemental Life
Health Savings Account, Dental/Vision & Dependent Care Flexible Spending Accounts
Short-Term & Long-Term Disability
Student Loan PayDown
Tuition Reimbursement, Personal Development & Learning Opportunities
Skills Development & Certifications
Employee Referral Program
Corporate Sponsored Events & Community Outreach
Emergency Back-Up Childcare Program
Mobility Stipend
About Guidehouse
Guidehouse is an Equal Opportunity Employer-Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation.
Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at ************** or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation.
All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or ************************. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process.
If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact *************************. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties.
Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
Senior Data Scientist
Senior data scientist job in Plainfield, NJ
Data Scientist - Pharmaceutical Analytics (PhD)
1 year Contract - Hybrid- Plainfield, NJ
We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.
What You'll Do
Apply machine learning & advanced analytics to pharma commercial data
Deliver insights on market dynamics, physician prescribing, and patient behavior
Partner with R&D, medical affairs, and commercial teams to guide strategy
Build predictive models for sales effectiveness, adherence, and market forecasting
What We're Looking For
PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
5+ years of pharma or healthcare analytics experience
Strong skills in enterprise-class software stacks and cloud computing
Deep knowledge of pharma market dynamics & healthcare systems
Excellent communication skills to translate data into strategy
Data Scientist
Senior data scientist job in Parsippany-Troy Hills, NJ
***This role is hybrid three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation***
Data Scientist
• Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.
• Responsibilities:
o Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
o Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows.
o Design experiments and validate AI-driven features during the pilot.
o Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.
• Required skills:
o 3+ years of applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
o Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
o Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.
• Preferred:
o Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
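The anomaly-detection responsibility above can be sketched with a simple z-score check over delivery KPIs. This is an illustrative, stdlib-only sketch; the run durations and function name are hypothetical, and a production version would use the pandas/scikit-learn stack the posting lists.

```python
from statistics import mean, stdev

def flag_anomalies(durations, threshold=2.0):
    """Flag CI pipeline runs whose duration deviates more than
    `threshold` standard deviations from the mean (z-score test)."""
    mu = mean(durations)
    sigma = stdev(durations)
    if sigma == 0:  # all runs identical: nothing to flag
        return []
    return [i for i, d in enumerate(durations)
            if abs(d - mu) / sigma > threshold]

# Hypothetical build durations in seconds; run 7 is a clear outlier.
runs = [310, 295, 305, 300, 298, 312, 290, 1400]
print(flag_anomalies(runs))  # → [7]
```

The same pattern extends to incident counts or deployment frequency; the KPI just becomes the input series.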
Biostatistician
Senior data scientist job in Rahway, NJ
Join a Global Leader in Workforce Solutions - Net2Source Inc.
Who We Are
Net2Source Inc. isn't just another staffing company - we're a powerhouse of innovation, connecting top talent with the right opportunities. Recognized for 300% growth in the past three years, we operate in 34 countries with a global team of 5,500+. Our mission? To bridge the talent gap with precision - Right Talent. Right Time. Right Place. Right Price.
Title: Statistical Scientist
Duration: 12 Months (Start Date: First Week of January)
Location: Rahway, NJ (Onsite/Hybrid Only)
Rate: $65/hr on W2
Position Summary
We are seeking an experienced Statistical Scientist to support analytical method qualification, validation, and experimental design for Client's scientific and regulatory programs. The successful candidate will work closely with scientists to develop statistically sound protocols, contribute to method robustness and validation studies, and prepare reporting documentation for regulatory readiness. This position requires deep expertise in statistical methodologies and hands-on programming skills in SAS, R, and JMP.
Key Responsibilities
• Collaborate with scientists to design experiments, develop study protocols, and establish acceptance criteria for analytical applications.
• Support analytical method qualification and validation through statistical protocol development, analysis, and reporting.
• Write memos and technical reports summarizing statistical analyses for internal and regulatory audiences.
• Assist scientists in assessing protocol deviations and resolving investigation-related issues.
• Coordinate with the Quality Audit group to finalize statistical reports for BLA (Biologics License Application) readiness.
• Apply statistical modeling approaches to evaluate assay robustness and method reliability.
• Support data integrity and ensure compliance with internal processes and regulatory expectations.
Qualifications
Education (Required):
• Ph.D. in Statistics, Biostatistics, Applied Statistics, or related discipline with 3+ years of relevant experience, or
• M.S. in Statistics/Applied Statistics with 6+ years of relevant experience.
(BS/BA candidates will not be considered.)
Required Skills:
• Proficiency in SAS, R, and JMP.
• Demonstrated experience in analytical method qualification and validation, including protocol writing and statistical execution.
• Strong background in experimental design for analytical applications.
• Ability to interpret and communicate statistical results clearly in both written and verbal formats.
Preferred / Nice-to-Have:
• Experience with mixed-effect modeling.
• Experience with Bayesian analysis.
• Proven ability to write statistical software/code to automate routine analyses.
• Experience presenting complex statistical concepts to leadership.
• Experience in predictive stability analysis.
Why Work With Us?
At Net2Source, we believe in more than just jobs - we build careers. We champion leadership at all levels, celebrate diverse perspectives, and empower our people to make an impact. Enjoy a collaborative environment where your ideas matter, and your professional growth is our priority.
Our Commitment to Inclusion & Equity
Net2Source is an equal opportunity employer dedicated to fostering a workplace where diverse talents and perspectives are valued. We make all employment decisions based on merit, ensuring a culture of respect, fairness, and opportunity for all.
Awards & Recognition
• America's Most Honored Businesses (Top 10%)
• Fastest-Growing Staffing Firm by Staffing Industry Analysts
• INC 5000 List for Eight Consecutive Years
• Top 100 by Dallas Business Journal
• Spirit of Alliance Award by Agile1
Ready to Level Up Your Career?
Click Apply Now and let's make it happen.
Sr Data Modeler with Capital Markets/ Custody
Senior data scientist job in Jersey City, NJ
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit *******************
Job Title: Principal Data Modeler / Data Architecture Lead - Capital Markets
Work Location: Jersey City, NJ (Onsite, 5 days/week)
Job Description:
We are seeking a highly experienced Principal Data Modeler / Data Architecture Lead to reverse engineer an existing logical data model supporting all major lines of business in the capital markets domain.
The ideal candidate will have deep capital markets domain expertise and will work closely with business and technology stakeholders to elicit and document requirements, map those requirements to the data model, and drive enhancements or rationalization of the logical model prior to its conversion to a physical data model.
A software development background is not required.
Key Responsibilities
Reverse engineer the current logical data model, analyzing entities, relationships, and subject areas across capital markets (including customer, account, portfolio, instruments, trades, settlement, funds, reporting, and analytics).
Engage with stakeholders (business, operations, risk, finance, compliance, technology) to capture and document business and functional requirements, and map these to the data model.
Enhance or streamline the logical data model, ensuring it is fit-for-purpose, scalable, and aligned with business needs before conversion to a physical model.
Lead the logical-to-physical data model transformation, including schema design, indexing, and optimization for performance and data quality.
Perform advanced data analysis using SQL or other data analysis tools to validate model assumptions, support business decisions, and ensure data integrity.
Document all aspects of the data model, including entity and attribute definitions, ERDs, source-to-target mappings, and data lineage.
Mentor and guide junior data modelers, providing coaching, peer reviews, and best practices for modeling and documentation.
Champion a detail-oriented and documentation-first culture within the data modeling team.
Qualifications
Minimum 15 years of experience in data modeling, data architecture, or related roles within capital markets or financial services.
Strong domain expertise in capital markets (e.g., trading, settlement, reference data, funds, private investments, reporting, analytics).
Proven expertise in reverse engineering complex logical data models and translating business requirements into robust data architectures.
Strong skills in data analysis using SQL and/or other data analysis tools.
Demonstrated ability to engage with stakeholders, elicit requirements, and produce high-quality documentation.
Experience in enhancing, rationalizing, and optimizing logical data models prior to physical implementation.
Ability to mentor and lead junior team members in data modeling best practices.
Passion for detail, documentation, and continuous improvement.
Software development background is not required.
Preferred Skills
Experience with data modeling tools (e.g., ER/Studio, ERwin, Power Designer).
Familiarity with capital markets business processes and data flows.
Knowledge of regulatory and compliance requirements in financial data management.
Exposure to modern data platforms (e.g., Snowflake, Databricks, cloud databases).
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, colour, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Data Engineer
Senior data scientist job in New York, NY
DL Software produces Godel, a financial information and trading terminal.
Role Description
This is a full-time, on-site role based in New York, NY, for a Data Engineer. The Data Engineer will design, build, and maintain scalable data systems and pipelines. Responsibilities include data modeling, developing and managing ETL workflows, optimizing data storage solutions, and supporting data warehousing initiatives. The role also involves collaborating with cross-functional teams to improve data accessibility and analytics capabilities.
Qualifications
Strong proficiency in Data Engineering and Data Modeling
Mandatory: strong experience in global financial instruments including equities, fixed income, options and exotic asset classes
Strong Python background
Expertise in Extract, Transform, Load (ETL) processes and tools
Experience in designing, managing, and optimizing Data Warehousing solutions
Senior Data Engineer
Senior data scientist job in New York, NY
Godel Terminal is a cutting edge financial platform that puts the world's financial data at your fingertips. From Equities and SEC filings, to global news delivered in milliseconds, thousands of customers rely on Godel every day to be their guide to the world of finance.
We are looking for a senior engineer in New York City to join our team and help build out live data services as well as historical data for US markets and international exchanges. This position will specifically work on new asset classes and exchanges, but will be expected to contribute to the core architecture as we expand to international markets.
Our team works quickly and efficiently; we are opinionated but flexible when it's time to ship. We know what needs to be done, and how to do it. We are laser-focused on not just giving our customers what they want, but exceeding their expectations. We are very proud that when someone opens the app for the first time they ask: “How on earth does this work so fast?” If that sounds like a team you want to be part of, here is what we need from you:
Minimum qualifications:
Able to work out of our Manhattan office minimum 4 days a week
5+ years of experience in a financial or startup environment
5+ years of experience working on live data as well as historical data
3+ years of experience in Java, Python, and SQL
Experience managing multiple production ETL pipelines that reliably store and validate financial data
Experience launching, scaling, and improving backend services in cloud environments
Experience migrating critical data across different databases
Experience owning and improving critical data infrastructure
Experience teaching best practices to junior developers
Preferred qualifications:
5+ years of experience in a fintech startup
5+ years of experience in Java, Kafka, Python, PostgreSQL
5+ years of experience working with WebSocket libraries like RxStomp or Socket.IO
5+ years of experience wrangling cloud providers like AWS, Azure, GCP, or Linode
2+ years of experience shipping and optimizing Rust applications
Demonstrated experience keeping critical systems online
Demonstrated creativity and resourcefulness under pressure
Experience with corporate debt / bonds and commodities data
Salary range begins at $150,000 and increases with experience
Benefits: Health Insurance, Vision, Dental
To try the product, go to *************************
Azure Data Engineer
Senior data scientist job in Weehawken, NJ
· Expert level skills writing and optimizing complex SQL
· Experience with complex data modelling, ETL design, and using large databases in a business environment
· Experience with building data pipelines and applications to stream and process datasets at low latencies
· Fluent with Big Data technologies like Spark, Kafka and Hive
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Designing and building of data pipelines using API ingestion and Streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Experience in developing NoSQL solutions using Azure Cosmos DB is essential
· Thorough understanding of Azure and AWS Cloud Infrastructure offerings
· Working knowledge of Python is desirable
· Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
· Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
· Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
· Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
· Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
· Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
· Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
· Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
Best Regards,
Dipendra Gupta
Technical Recruiter
*****************************
Data Engineer
Senior data scientist job in New York, NY
Data Engineer (3-4 Years Experience)
Remote / On-site (based on client needs)
Employment Type: Full-time (Contract or Contract-to-Hire)
Experience Level: Mid-level (3-4 years)
Company: Aaratech Inc
🛑 Eligibility: Open to U.S. Citizens and Green Card holders only. We do not offer visa sponsorship.
🔍 About Aaratech Inc
Aaratech Inc is a specialized IT consulting and staffing company that places elite engineering talent into high-impact roles at leading U.S. organizations. We focus on modern technologies across cloud, data, and software disciplines. Our client engagements offer long-term stability, competitive compensation, and the opportunity to work on cutting-edge data projects.
🎯 Position Overview
We are seeking a Data Engineer with 3-4 years of experience to join a client-facing role focused on building and maintaining scalable data pipelines, robust data models, and modern data warehousing solutions. You'll work with a variety of tools and frameworks, including Apache Spark, Snowflake, and Python, to deliver clean, reliable, and timely data for advanced analytics and reporting.
🛠️ Key Responsibilities
Design and develop scalable Data Pipelines to support batch and real-time processing
Implement efficient Extract, Transform, Load (ETL) processes using tools like Apache Spark and dbt
Develop and optimize queries using SQL for data analysis and warehousing
Build and maintain Data Warehousing solutions using platforms like Snowflake or BigQuery
Collaborate with business and technical teams to gather requirements and create accurate Data Models
Write reusable and maintainable code in Python for data ingestion, processing, and automation
Ensure end-to-end Data Processing integrity, scalability, and performance
Follow best practices for data governance, security, and compliance
✅ Required Skills & Experience
3-4 years of experience in Data Engineering or a similar role
Strong proficiency in SQL and Python
Experience with Extract, Transform, Load (ETL) frameworks and building data pipelines
Solid understanding of Data Warehousing concepts and architecture
Hands-on experience with Snowflake, Apache Spark, or similar big data technologies
Proven experience in Data Modeling and data schema design
Exposure to Data Processing frameworks and performance optimization techniques
Familiarity with cloud platforms like AWS, GCP, or Azure
⭐ Nice to Have
Experience with streaming data pipelines (e.g., Kafka, Kinesis)
Exposure to CI/CD practices in data development
Prior work in consulting or multi-client environments
Understanding of data quality frameworks and monitoring strategies
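The pipeline responsibilities above follow the classic extract-transform-load pattern. As a minimal stdlib-only sketch (field names and the raw CSV are hypothetical; a real pipeline would use the Spark, dbt, or Snowflake tooling listed above):

```python
import csv
import io

# Hypothetical raw extract: inconsistent casing, stray whitespace,
# and a row with a missing amount.
RAW = """order_id,customer,amount
1001,Acme, 250.00
1002,globex,125.50
1003,Initech,
"""

def extract(text):
    """Extract: parse CSV rows from the raw source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize names, coerce amounts, drop incomplete rows."""
    clean = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:  # data-quality rule: reject rows missing an amount
            continue
        clean.append({
            "order_id": int(row["order_id"]),
            "customer": row["customer"].strip().title(),
            "amount": float(amount),
        })
    return clean

def load(rows, warehouse):
    """Load: append validated rows to the (in-memory) warehouse table."""
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded, warehouse[0]["customer"])  # → 2 Acme
```

The same three stages map directly onto batch or streaming variants; only the extract and load adapters change.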
Data Engineer - VC Backed Healthcare Firm - NYC or San Francisco
Senior data scientist job in New York, NY
Are you a data engineer who loves building systems that power real impact in the world?
A fast-growing healthcare technology organization is expanding its innovation team and is looking for a Data Engineer II to help build the next generation of its data platform. This team sits at the center of a major transformation effort, partnering closely with engineering, analytics, and product to design the foundation that supports advanced automation, AI, intelligent workflows, and high-scale data operations that drive measurable outcomes for hospitals, health systems, and medical groups.
In this role, you will design, develop, and maintain software applications that process large volumes of data every day. You will collaborate with cross-functional teams to understand data requirements, build and optimize data models, and create systems that ensure accuracy, reliability, and performance. You will write code that extracts, transforms, and loads data from a variety of sources into modern data warehouses and data lakes, while implementing best-in-class data quality and governance practices. You will work hands-on with big data technologies such as Hadoop, Spark, and Kafka, and you will play a critical role in troubleshooting, performance tuning, and ensuring the scalability of complex data applications.
To thrive here, you should bring strong problem solving ability, analytical thinking, and excellent communication skills. This is an opportunity to join an expanding innovation group within a leading healthcare platform that is investing heavily in data, AI, and the future of intelligent revenue operations. If you want to build systems that make a real difference and work with teams that care deeply about improving patient experiences and provider performance, this is a chance to do highly meaningful engineering at scale.
Market Data Engineer
Senior data scientist job in New York, NY
🚀 Market Data Engineer - New York | Cutting-Edge Trading Environment
I'm partnered with a leading technology-driven trading team in New York looking to bring on a Market Data Engineer to support global research, trading, and infrastructure groups. This role is central to managing the capture, normalization, and distribution of massive volumes of historical market data from exchanges worldwide.
What You'll Do
Own large-scale, time-sensitive market data capture + normalization pipelines
Improve internal data formats and downstream datasets used by research and quantitative teams
Partner closely with infrastructure to ensure reliability of packet-capture systems
Build robust validation, QA, and monitoring frameworks for new market data sources
Provide production support, troubleshoot issues, and drive quick, effective resolutions
What You Bring
Experience building or maintaining large-scale ETL pipelines
Strong proficiency in Python + Bash, with familiarity in C++
Solid understanding of networking fundamentals
Experience with workflow/orchestration tools (Airflow, Luigi, Dagster)
Exposure to distributed computing frameworks (Slurm, Celery, HTCondor, etc.)
Bonus Skills
Experience working with binary market data protocols (ITCH, MDP3, etc.)
Understanding of high-performance filesystems and columnar storage formats
Data Engineer (Web Scraping technologies)
Senior data scientist job in New York, NY
Title: Data Engineer (Web Scraping technologies)
Duration: FTE/Perm
Salary: 125-190k plus bonus
Responsibilities:
Utilize AI models, code, libraries, or applications to enable a scalable web scraping capability
Web Scraping Request Management including intake, assessment, accessing sites to scrape, utilizing tools to scrape, storage of scrape, validation and entitlement to users
Fielding Questions from users about the scrapes and websites
Coordinating with Compliance on approvals and TOU reviews
Some experience building data pipelines on the AWS platform utilizing existing tools like cron, Glue, EventBridge, Python-based ETL, and AWS Redshift
Normalizing/standardizing vendor data, firm data for firm consumption
Implement data quality checks to ensure reliability and accuracy of scraped data
Coordinate with Internal teams on delivery, access, requests, support
Promote Data Engineering best practices
Required Skills and Qualifications:
Bachelor's degree in computer science, Engineering, Mathematics or related field
2-5 years of experience in a similar role
Prior buy side experience is strongly preferred (Multi-Strat/Hedge Funds)
Capital markets experience is necessary with good working knowledge of reference data across asset classes and experience with trading systems
AWS cloud experience with common services (S3, Lambda, cron, EventBridge, etc.)
Experience with web-scraping frameworks (Scrapy, BeautifulSoup, Selenium, Playwright etc.)
Strong hands-on skills with NoSQL and SQL databases, programming in Python, data pipeline orchestration tools and analytics tools
Familiarity with time series data and common market data sources (Bloomberg, Refinitiv etc.)
Familiarity with modern Dev Ops practices and infrastructure-as-code tools (e.g. Terraform, CloudFormation)
Strong communication skills to work with stakeholders across technology, investment, and operations teams.
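The scrape → validate → store flow described in this posting can be sketched with the standard library alone. This is illustrative only: the HTML snippet and class name are hypothetical, and in practice the listed frameworks (Scrapy, BeautifulSoup, Playwright) would handle fetching and parsing, after compliance and TOU review.

```python
from html.parser import HTMLParser

# Hypothetical vendor page snippet with one empty (bad) cell.
PAGE = ('<table><tr><td class="px">101.25</td>'
        '<td class="px"></td><td class="px">99.80</td></tr></table>')

class PriceScraper(HTMLParser):
    """Collect the text content of <td class="px"> cells."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.raw = []

    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "px") in attrs:
            self.in_cell = True
            self.raw.append("")

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.raw[-1] += data

def scrape_and_validate(html):
    """Parse price cells, then apply a data-quality check:
    keep only cells that parse as positive floats."""
    scraper = PriceScraper()
    scraper.feed(html)
    prices = []
    for cell in scraper.raw:
        try:
            value = float(cell)
        except ValueError:
            continue  # reject empty or garbled cells
        if value > 0:
            prices.append(value)
    return prices

print(scrape_and_validate(PAGE))  # → [101.25, 99.8]
```

Validated output would then flow into storage and user entitlement, per the intake process described above.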
Data Analytics Engineer
Senior data scientist job in Somerset, NJ
Client: manufacturing company
Type: direct hire
Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets.
This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams.
This role is on-site five days per week in Somerset, NJ.
Key Responsibilities
Power BI Reporting & Administration
Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
Develop and maintain data models to ensure accuracy, consistency, and reliability
Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
Optimize Power BI solutions for performance, scalability, and ease of use
ETL & Data Warehousing
Design and maintain data warehouse structures, including schema and database layouts
Develop and support ETL processes to ensure timely and accurate data ingestion
Integrate data from multiple systems while ensuring quality, consistency, and completeness
Work closely with database administrators to optimize data warehouse performance
Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed
Training & Documentation
Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
Manage data definitions, lineage documentation, and data cataloging for all enterprise data models
Project Management
Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
Collaborate with key business stakeholders to ensure departmental reporting needs are met
Record meeting notes in Confluence and document project updates in Jira
Data Governance
Implement and enforce data governance policies to ensure data quality, compliance, and security
Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness
Routine IT Functions
Resolve Help Desk tickets related to reporting, dashboards, and BI tools
Support general software and hardware installations when needed
Other Responsibilities
Manage email and phone communication professionally and promptly
Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
Perform additional assigned duties as needed
Qualifications
Required
Minimum of 3 years of relevant experience
Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
Experience with cloud-based BI environments (Azure, AWS, etc.)
Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
Proficiency in SQL for data extraction, manipulation, and transformation
Strong knowledge of DAX
Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
Strong analytical, problem-solving, and documentation skills
Excellent written and verbal communication abilities
High attention to detail and strong self-review practices
Effective time management and organizational skills; ability to prioritize workload
Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
Azure Data Engineer
Senior data scientist job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Azure Data Engineer
Senior data scientist job in Warren, NJ
Job Title: Data Engineer - SQL, Azure, ADF (Commercial Insurance)
Experience: 12-20 Years
Job Type: Contract
Required Skills: SQL, Azure, ADF, Commercial Insurance
We are seeking a highly skilled Data Engineer with strong experience in SQL, Azure Data Platform, and Azure Data Factory, preferably within the Insurance domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, integrating data from multiple insurance systems, and enabling analytical and reporting capabilities for underwriting, claims, policy, billing, and risk management teams.
Required Skills & Experience
Minimum of 12 years of experience in Data Engineering or related roles.
Strong expertise in:
SQL, T-SQL, PL/SQL
Azure Data Factory (ADF)
Azure SQL, Synapse, ADLS
Data modeling for relational and analytical systems.
Hands-on experience with ETL/ELT development and complex pipeline orchestration.
Experience with Azure DevOps, Git, CI/CD pipelines, and DataOps practices.
Understanding of insurance domain datasets: policy, premium, claims, exposures, brokers, reinsurers, underwriting workflows.
Strong analytical and problem-solving skills, with the ability to handle large datasets and complex transformations.
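The complex pipeline orchestration mentioned above usually reduces to running steps in dependency order. A minimal sketch using the standard library; the step names and dependencies are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical insurance ETL steps and their dependencies (names invented):
# policy and claims extracts must land before the premium mart builds.
steps = {
    "extract_policy": set(),
    "extract_claims": set(),
    "conform_policy": {"extract_policy"},
    "build_premium_mart": {"conform_policy", "extract_claims"},
}

# static_order() yields a valid execution order respecting dependencies.
run_order = list(TopologicalSorter(steps).static_order())
```

Tools like Azure Data Factory express the same idea declaratively as activity dependencies rather than in code.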
Preferred Qualifications
Experience with Databricks / PySpark for large-scale transformations.
Knowledge of Commercial Property & Casualty (P&C) insurance.
Experience integrating data from Guidewire ClaimCenter/PolicyCenter, DuckCreek, or similar platforms.
Exposure to ML/AI pipelines for underwriting or claims analytics.
Azure certifications such as:
DP-203 (Azure Data Engineer)
AZ-900, AZ-204, AI-900
Data Engineer
Senior data scientist job in Jersey City, NJ
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be on-site
You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models and design, develop, and maintain large-scale data processing pipelines and infrastructure.
• Lead code reviews and provide mentoring through the process.
• Drive data quality and ensure data accessibility for analysts and data scientists.
• Ensure compliance with data governance requirements and keep data engineering practices aligned with business goals.
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience in AWS and in the design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance tuning to ensure jobs run at optimal levels without performance bottlenecks.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
• Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
• Advanced proficiency in cloud data warehouses such as Snowflake or AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
• Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools.
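The batch vs. micro-batch processing methodologies named in the qualifications above can be sketched in plain Python. The record shape and batch size are hypothetical:

```python
from typing import Iterable, Iterator

def micro_batches(stream: Iterable, size: int) -> Iterator:
    """Group an unbounded record stream into fixed-size micro-batches.

    Classic batch processing would collect the full input before running;
    micro-batching trades per-record overhead for lower latency by
    emitting small groups as they fill.
    """
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

Frameworks like Spark Structured Streaming implement the same pattern with triggers and watermarks layered on top.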
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark skills
Senior Data Engineer (Snowflake)
Senior data scientist job in Parsippany-Troy Hills, NJ
Senior Data Engineer (Snowflake & Python)
1-Year Contract | $60/hour + Benefit Options
Hybrid: On-site a few days per month (local candidates only)
Work Authorization Requirement
You must be authorized to work for any employer as a W2 employee. This is required for this role.
This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.
Overview
We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake.
Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.
What You'll Do
Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
Partner closely with engineering and data teams to identify and implement optimal technical solutions
Build and maintain high-performance, scalable data pipelines and data warehouse architectures
Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
Manage deliverables and priorities effectively in a fast-moving environment
Contribute to data governance practices including metadata management and data lineage
Support analytics and reporting use cases leveraging advanced SQL and analytical functions
Required Skills & Experience
8+ years of experience designing and developing data solutions in an enterprise environment
5+ years of hands-on Snowflake experience
Strong hands-on development skills with SQL and Python
Proven experience designing and developing data warehouses in Snowflake
Ability to diagnose, optimize, and tune SQL queries
Experience with Azure data frameworks (e.g., Azure Data Factory)
Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
Solid understanding of metadata management and data lineage
Hands-on experience with SQL analytical functions
Working knowledge of shell scripting and JavaScript
Experience using Git, Confluence, and Jira
Strong problem-solving and troubleshooting skills
Collaborative mindset with excellent communication skills
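As an illustration of the SQL analytical functions this role calls for, a window-function query can be exercised even in SQLite. The table and column names are invented for the example; Snowflake supports the same `OVER (PARTITION BY ...)` syntax:

```python
import sqlite3

# Demonstrate a SQL analytical (window) function: a running total
# per account. Table and columns are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (account TEXT, amt REAL)")
conn.executemany(
    "INSERT INTO txn VALUES (?, ?)",
    [("a", 10.0), ("a", 5.0), ("b", 7.0)],
)

rows = conn.execute("""
    SELECT account, amt,
           SUM(amt) OVER (PARTITION BY account ORDER BY rowid) AS running_total
    FROM txn
    ORDER BY rowid
""").fetchall()
```

Unlike a `GROUP BY` aggregate, the window function keeps every input row and attaches the cumulative value alongside it.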
Nice to Have
Experience supporting Pharma industry data
Exposure to Omni-channel data environments
Why This Opportunity
$60/hour W2 on a long-term 1-year contract
Benefit options available
Hybrid structure with limited on-site requirement
High-impact role supporting enterprise data initiatives
Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
Data Engineer
Senior data scientist job in New York, NY
Haptiq is a leader in AI-powered enterprise operations, delivering digital solutions and consulting services that drive value and transform businesses. We specialize in using advanced technology to streamline operations, improve efficiency, and unlock new revenue opportunities, particularly within the private capital markets.
Our integrated ecosystem includes PaaS - Platform as a Service, the Core Platform, an AI-native enterprise operations foundation built to optimize workflows, surface insights, and accelerate value creation across portfolios; SaaS - Software as a Service, a cloud platform delivering unmatched performance, intelligence, and execution at scale; and S&C - Solutions and Consulting Suite, modular technology playbooks designed to manage, grow, and optimize company performance. With over a decade of experience supporting high-growth companies and private equity-backed platforms, Haptiq brings deep domain expertise and a proven ability to turn technology into a strategic advantage.
The Opportunity
As a Data Engineer within the Global Operations team, you will be responsible for managing the internal data infrastructure, building and maintaining data pipelines, and ensuring the integrity, cleanliness, and usability of data across our critical business systems. This role will play a foundational part in developing a scalable internal data capability to drive decision-making across Haptiq's operations.
Responsibilities and Duties
Design, build, and maintain scalable ETL/ELT pipelines to consolidate data from delivery, finance, and HR systems (e.g., Kantata, Salesforce, JIRA, HRIS platforms).
Ensure consistent data hygiene, normalization, and enrichment across source systems.
Develop and maintain data models and data warehouses optimized for analytics and operational reporting.
Partner with business stakeholders to understand reporting needs and ensure the data structure supports actionable insights.
Own the documentation of data schemas, definitions, lineage, and data quality controls.
Collaborate with the Analytics, Finance, and Ops teams to build centralized reporting datasets.
Monitor pipeline performance and proactively resolve data discrepancies or failures.
Contribute to architectural decisions related to internal data infrastructure and tools.
Requirements
3-5 years of experience as a data engineer, analytics engineer, or similar role.
Strong experience with SQL, data modeling, and pipeline orchestration (e.g., Airflow, dbt).
Hands-on experience with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift).
Experience working with REST APIs and integrating with SaaS platforms like Salesforce, JIRA, or Workday.
Proficiency in Python or another scripting language for data manipulation.
Familiarity with modern data stack tools (e.g., Fivetran, Stitch, Segment).
Strong understanding of data governance, documentation, and schema management.
Excellent communication skills and ability to work cross-functionally.
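The REST API integration work described in the requirements above often comes down to paginating through an endpoint. A hedged sketch of offset-based pagination; the endpoint and parameter names are hypothetical, and real SaaS APIs (Salesforce, JIRA, Workday) frequently use cursors or next-page links instead:

```python
from urllib.parse import urlencode

def page_urls(base_url: str, page_size: int, total: int) -> list:
    """Build offset-paginated request URLs for a hypothetical REST endpoint.

    This only constructs the URLs; an actual integration would issue the
    requests, handle rate limits, and follow the API's own paging scheme.
    """
    urls = []
    for offset in range(0, total, page_size):
        query = urlencode({"limit": page_size, "offset": offset})
        urls.append(f"{base_url}?{query}")
    return urls
```

Keeping URL construction separate from request execution makes the paging logic trivial to unit-test without network access.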
Benefits
Flexible work arrangements (including hybrid mode)
Great Paid Time Off (PTO) policy
Comprehensive benefits package (Medical / Dental / Vision / Disability / Life)
Healthcare and Dependent Care Flexible Spending Accounts (FSAs)
401(k) retirement plan
Access to HSA-compatible plans
Pre-tax commuter benefits
Employee Assistance Program (EAP)
Opportunities for professional growth and development.
A supportive, dynamic, and inclusive work environment.
Why Join Us?
We value creative problem solvers who learn fast, work well in an open and diverse environment, and enjoy pushing the bar for success ever higher. We do work hard, but we also choose to have fun while doing it.
The compensation range for this role is $75,000 to $80,000 USD.
Data Engineer
Senior data scientist job in Newark, NJ
NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
Role Description
This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
Key Responsibilities
Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
Data Integration: Integrate and transform data using industry-standard tools. Experience required with:
AWS Services: AWS Glue, Data Pipeline, Redshift, and S3.
Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage.
Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
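The ETL automation responsibilities above typically hinge on idempotent loads: re-running a pipeline must not duplicate rows. A minimal sketch using SQLite as a stand-in warehouse; the schema is invented for illustration, and real targets like Redshift or Snowflake would use `MERGE` or staging-table swaps for the same pattern:

```python
import sqlite3

def upsert_customers(conn, rows):
    """Idempotent load step: re-running with the same rows changes nothing.

    SQLite's ON CONFLICT clause stands in here for the warehouse-native
    MERGE statements used in production pipelines.
    """
    conn.executemany(
        "INSERT INTO customers (id, name) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
        rows,
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
upsert_customers(conn, [(1, "Acme"), (2, "Globex")])
upsert_customers(conn, [(1, "Acme Corp")])  # re-run updates, never duplicates
result = conn.execute("SELECT id, name FROM customers ORDER BY id").fetchall()
```

Designing every load step this way is what lets a failed pipeline simply be re-run from the top.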
Required Skills and Experience
Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
Integration: Experience integrating data via RESTful / GraphQL APIs.
Programming: Proficient in Python for ETL automation and SQL for database management.
Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
Authorization: Must have valid work authorization in the United States.
Salary Range: $65,000-$80,000 per year
Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
Senior Data Engineer
Senior data scientist job in New Providence, NJ
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
Job Description
Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
Work in tandem with our engineering team to identify and implement the most optimal solutions
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
Able to manage deliverables in fast-paced environments
Areas of Expertise
At least 10 years of experience designing and developing data solutions in an enterprise environment
At least 5 years of experience on the Snowflake platform
Strong hands-on SQL and Python development
Experience designing and developing data warehouses in Snowflake
A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Spark and Scala
Strong hands-on experience with orchestration tools (e.g., Airflow, Informatica, Automic)
Good understanding of metadata and data lineage
Hands-on knowledge of SQL analytical functions
Strong knowledge and hands-on experience in shell scripting and JavaScript
Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
Good understanding of and exposure to Git, Confluence, and Jira
Good problem-solving and troubleshooting skills
Team player with a collaborative approach and excellent communication skills
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK? Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)