Data Engineer
Data engineer job in Philadelphia, PA
Job Title: Data Engineer
Experience: 5+ years
We are seeking an experienced Data Engineer with strong expertise in PySpark and data pipeline operations. This role focuses heavily on performance tuning Spark applications, managing large-scale data pipelines, and ensuring high operational stability. The ideal candidate is a strong technical problem-solver, highly collaborative, and proactive in automation and process improvements.
Key Responsibilities:
Data Pipeline Management & Support
Operate and support Business-as-Usual (BAU) data pipelines, ensuring stability, SLA adherence, and timely incident resolution.
Identify and implement opportunities for optimization and automation across pipelines and operational workflows.
Spark Development & Performance Tuning
Design, develop, and optimize PySpark jobs for efficient large-scale data processing.
Diagnose and resolve complex Spark performance issues such as data skew, shuffle spill, executor OOM errors, slow-running stages, and partition imbalance.
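The skew symptoms listed above are often mitigated by key salting; a toy sketch of the idea in plain Python, using a simplified stand-in for Spark's hash partitioner (all names and counts are illustrative):

```python
from collections import Counter

def partition_of(key: str, num_partitions: int) -> int:
    # Toy stand-in for Spark's hash partitioner.
    return sum(key.encode()) % num_partitions

def salted(key: str, i: int, num_salts: int) -> str:
    # Append a rotating salt so one hot key spreads across many partitions.
    return f"{key}_{i % num_salts}"

rows = ["hot_customer"] * 10_000          # a heavily skewed join/groupBy key
plain  = Counter(partition_of(k, 8) for k in rows)
spread = Counter(partition_of(salted(k, i, 8), 8) for i, k in enumerate(rows))

print(max(plain.values()))   # 10000 -> every row lands on one partition
print(max(spread.values()))  # 1250  -> load split evenly across 8 partitions
```

In PySpark the same effect comes from adding a salt column before the skewed join or aggregation, or from enabling Adaptive Query Execution's skew-join handling.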
Platform & Tool Management
Use Databricks for Spark job orchestration, workflow automation, and cluster configuration.
Debug and manage Spark on Kubernetes, addressing pod crashes, OOM kills, resource tuning, and scheduling problems.
Work with MinIO/S3 storage for bucket management, permissions, and large-volume file ingestion and retrieval.
Collaboration & Communication
Partner with onshore business stakeholders to clarify requirements and convert them into well-defined technical tasks.
Provide daily coordination and technical oversight to offshore engineering teams.
Participate actively in design discussions and technical reviews.
Documentation & Operational Excellence
Maintain accurate and detailed documentation, runbooks, and troubleshooting guides.
Contribute to process improvements that enhance operational stability and engineering efficiency.
Required Skills & Qualifications:
Primary Skills (Must-Have)
PySpark: Advanced proficiency in transformations, performance tuning, and Spark internals.
SQL: Strong analytical query design, performance tuning, and foundational data modeling (relational & dimensional).
Python: Ability to write maintainable, production-grade code with a focus on modularity, automation, and reusability.
Secondary Skills (Highly Desirable)
Kubernetes: Experience with Spark-on-K8s, including pod diagnostics, resource configuration, and log/monitoring tools.
Databricks: Hands-on experience with cluster management, workflow creation, Delta Lake optimization, and job monitoring.
MinIO / S3: Familiarity with bucket configuration, policies, and efficient ingestion patterns.
DevOps: Experience with Git, CI/CD, and cloud environments (Azure preferred).
Data Modeler
Data engineer job in Philadelphia, PA
Role: Data Modeler
Full-time
The Senior Data Modeler is responsible for developing advanced data models associated with the enterprise data warehouse (DART), IHG operational systems and IHG data exchange with limited management oversight. Primary job function entails gathering and assessing business and technical requirements to create/update database objects. Delivery items include using ERwin Data Modeler to create and maintain entity relationship diagrams, DDL, and supporting documentation as needed. Additional job functions include identifying solution design options, advanced data profiling and subject matter expertise.
This role works closely with all development resources including Business Systems Analysts, Developers and Project Managers. The Senior Data Modeler works independently with minimal guidance and acts as a resource for colleagues with less experience.
A solid understanding of the following is required:
• Software Development Life Cycle (SDLC)
• Agile Methodology
• Logical and Physical data modeling
• Levels of Normalization
• Abstraction and Generalization
• Subtyping and Classification
• Relational model design
• Dimensional model design (Star Schemas, Snowflake designs)
• Inmon & Kimball methodologies
• Master Data Management (MDM)
• Data Profiling
• Metadata Management
• Data security and protecting information
• ETL processes, BI processes
• Cloud Platforms, especially Google Cloud Platform
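The star-schema item above can be made concrete with a minimal sketch (SQLite via Python's standard library; the table names are invented, loosely themed on the hospitality data this posting mentions):

```python
import sqlite3

# Minimal star schema: one fact table surrounded by dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date  (date_key INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_hotel (hotel_key INTEGER PRIMARY KEY, hotel_name TEXT, brand TEXT);
CREATE TABLE fact_stay (
    stay_id   INTEGER PRIMARY KEY,
    date_key  INTEGER REFERENCES dim_date(date_key),
    hotel_key INTEGER REFERENCES dim_hotel(hotel_key),
    revenue   REAL
);
""")
con.execute("INSERT INTO dim_date  VALUES (20240101, '2024-01-01')")
con.execute("INSERT INTO dim_hotel VALUES (1, 'Downtown Inn', 'IHG')")
con.execute("INSERT INTO fact_stay VALUES (1, 20240101, 1, 199.0)")
con.execute("INSERT INTO fact_stay VALUES (2, 20240101, 1, 149.0)")

# Typical star-schema query: join the fact to a dimension and aggregate.
row = con.execute("""
    SELECT h.brand, SUM(f.revenue)
    FROM fact_stay f JOIN dim_hotel h ON f.hotel_key = h.hotel_key
    GROUP BY h.brand
""").fetchone()
print(row)  # ('IHG', 348.0)
```

A snowflake design would further normalize the dimensions, e.g. splitting brand out into its own table keyed from dim_hotel.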
Requirements:
• 7+ years of proven experience in Data Modeling
• 5+ years of experience with advanced SQL query techniques
• Highly skilled in ERwin, including but not limited to: Diagramming, Complete Compare, Reverse Engineering, Forward Engineering
• Strong multi-tasking capability, with adaptability to work simultaneously in multiple environments with differing procedures, SME support and level of accountability
• Ability to estimate scope of effort, to prioritize and/or fast track requirements, and to provide multiple options for data-driven solution design
• Demonstrated ability to recognize, elicit and/or decompose complex business requirements
• Demonstrated ability to perceive patterns and relationships in data
• Demonstrated ability to design models which integrate data from disparate sources
• Demonstrated ability to design data models for complex and ragged hierarchies
• Demonstrated ability to design data models for complicated historical perspectives
• Understanding of data warehousing and decision support tools and techniques
• Strong ability to multi-task with several concurrent issues and projects
• Demonstrated ability to interact effectively with all levels of the organization
• Strong interpersonal, written and verbal communication skills
• Experience advising or mentoring staff regarding data administration, data modeling, and data mapping
Desired
• Experience creating data models for BigQuery, SQL Server, Oracle and MySQL databases
• Experience in Healthcare Insurance or a related field strongly preferred
Data Engineer
Data engineer job in Hamilton, NJ
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
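The file-transfer automation duty above usually reduces to wrapping the transfer call in retry logic; a minimal sketch in plain Python (flaky_push is a hypothetical stand-in for a real SFTP upload, which would need a client library such as paramiko):

```python
import time

def with_retries(transfer, attempts=3, delay_s=0.01):
    """Call a transfer function, retrying on transient failures."""
    for attempt in range(1, attempts + 1):
        try:
            return transfer()
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay_s)  # production jobs would use exponential backoff

# Simulate a flaky SFTP push that fails twice, then succeeds.
calls = {"n": 0}
def flaky_push():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("connection reset")
    return "uploaded"

result = with_retries(flaky_push)
print(result, "after", calls["n"], "attempts")  # uploaded after 3 attempts
```

Managed file transfer tools provide the same retry/alerting behavior off the shelf; a scripted wrapper like this is the lightweight alternative.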
Senior Data Engineer
Data engineer job in Philadelphia, PA
Full-time Perm
Remote - EAST COAST ONLY
Role open to US Citizens and Green Card Holders only
We're looking for a Senior Data Engineer to lead the design, build, and optimization of modern data pipelines and cloud-native data infrastructure. This role is ideal for someone who thrives on solving complex data challenges, improving systems at scale, and collaborating across technical and business teams to deliver high-impact solutions.
What You'll Do
Architect, develop, and maintain scalable, secure data infrastructure supporting analytics, reporting, and operational workflows.
Design and optimize ETL/ELT pipelines to integrate data from diverse internal and external sources.
Prepare and transform structured and unstructured data to support modeling, reporting, and advanced analysis.
Improve data quality, reliability, and performance across platforms and workflows.
Monitor pipelines, troubleshoot discrepancies, and ensure accuracy and timely data delivery.
Identify architectural bottlenecks and drive long-term scalability improvements.
Collaborate with Product, BI, Finance, and engineering teams to build end-to-end data solutions.
Prototype algorithms, transformations, and automation tools to accelerate insights.
Lead cloud-native workflow design, including logging, monitoring, and storage best practices.
Create and maintain high-quality technical documentation.
Contribute to Agile ceremonies, engineering best practices, and continuous improvement initiatives.
Mentor teammates and guide adoption of data platform tools and patterns.
Participate in on-call rotation to maintain platform stability and availability.
What You Bring
Bachelor's degree in Computer Science or related technical field.
4+ years of advanced SQL experience (Oracle, PostgreSQL, etc.).
4+ years working with Java or Groovy.
3+ years integrating with SOAP or REST APIs.
2+ years with DBT and data modeling.
Strong understanding of modern data architectures, distributed systems, and performance optimization.
Experience with Snowflake or similar cloud data platforms (preferred).
Hands-on experience with Git, Jenkins, CI/CD, and automation/testing practices.
Solid grasp of cloud concepts and cloud-native engineering.
Excellent problem-solving, communication, and cross-team collaboration skills.
Ability to lead projects, own solutions end-to-end, and influence technical direction.
Proactive mindset with strong analytical and consultative abilities.
AWS Data engineer with Databricks || USC Only || W2 Only
Data engineer job in Princeton, NJ
AWS Data Engineer with Databricks
Princeton, NJ - Hybrid - Need Locals or Nearby
Duration: Long Term
This position is available only to U.S. citizens.
Key Responsibilities
Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data.
Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
Document processes and mentor junior team members on best practices.
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or related field.
5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
Data Analytics Engineer
Data engineer job in Somerset, NJ
Client: manufacturing company
Type: direct hire
Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets.
This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams.
This role is on-site five days per week in Somerset, NJ.
Key Responsibilities
Power BI Reporting & Administration
Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
Develop and maintain data models to ensure accuracy, consistency, and reliability
Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
Optimize Power BI solutions for performance, scalability, and ease of use
ETL & Data Warehousing
Design and maintain data warehouse structures, including schema and database layouts
Develop and support ETL processes to ensure timely and accurate data ingestion
Integrate data from multiple systems while ensuring quality, consistency, and completeness
Work closely with database administrators to optimize data warehouse performance
Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed
Training & Documentation
Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
Manage data definitions, lineage documentation, and data cataloging for all enterprise data models
Project Management
Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
Collaborate with key business stakeholders to ensure departmental reporting needs are met
Record meeting notes in Confluence and document project updates in Jira
Data Governance
Implement and enforce data governance policies to ensure data quality, compliance, and security
Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness
Routine IT Functions
Resolve Help Desk tickets related to reporting, dashboards, and BI tools
Support general software and hardware installations when needed
Other Responsibilities
Manage email and phone communication professionally and promptly
Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
Perform additional assigned duties as needed
Qualifications
Required
Minimum of 3 years of relevant experience
Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
Experience with cloud-based BI environments (Azure, AWS, etc.)
Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
Proficiency in SQL for data extraction, manipulation, and transformation
Strong knowledge of DAX
Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
Strong analytical, problem-solving, and documentation skills
Excellent written and verbal communication abilities
High attention to detail and strong self-review practices
Effective time management and organizational skills; ability to prioritize workload
Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
Python Data Controls Developer
Data engineer job in Mount Laurel, NJ
Mount Laurel, NJ - (3 days onsite role)
Mode of Hiring: Full Time
Salary: Negotiable for right candidates
Minimum of 7 - 10 years of experience working in a financial institution, preferably in Global Banks
Minimum of 7 - 10 years of experience in SQL development, including query optimization, stored procedures, and indexing.
Strong working experience in Python for data manipulation, scripting, and automation
Understanding of the Compliance domain and its concepts, i.e., Anti-Money Laundering (AML), Know Your Customer (KYC), Customer Risk Rating, etc., is a must
Minimum of 5 - 7 years of experience in data (data lifecycle, data governance, data quality, Metadata, Data issue resolution and other data concepts)
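The query-optimization and indexing requirement above can be illustrated with SQLite (standard library) as a stand-in for a production RDBMS; EXPLAIN QUERY PLAN shows the planner switching from a full table scan to an index search once an index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txn (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
con.executemany("INSERT INTO txn (customer_id, amount) VALUES (?, ?)",
                [(i % 100, float(i)) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM txn WHERE customer_id = 42"
before = plan(query)                                   # full table SCAN
con.execute("CREATE INDEX idx_txn_customer ON txn(customer_id)")
after = plan(query)                                    # SEARCH ... USING INDEX
print(before)
print(after)
```

The same discipline applies in SQL Server or Oracle via their execution-plan tools; the table and index names here are invented for the example.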
Java Software Engineer
Data engineer job in East Windsor, NJ
Job Responsibilities:
Develop applications using Java 8/JEE (and higher), Angular 2+, React.js, SQL, Spring, HTML5, CSS, JavaScript, and TypeScript, among other tools.
Write scalable, secure, and maintainable code that powers our clients' platforms.
Create, deploy, and maintain automated system tests.
Work with Testers to understand defects and resolve them in a timely manner.
Support continuous improvement by investigating alternatives and technologies, and presenting these for architectural review.
Collaborate effectively with other team members to accomplish shared user story and sprint goals.
Requirement:
Experience in programming languages: Java and JavaScript
Decent understanding of the software development life cycle
Basic programming skills using object-oriented programming (OOP) languages, with in-depth knowledge of common APIs and data structures like Collections, Maps, Lists, Sets, etc.
Knowledge of relational databases (e.g., SQL Server, Oracle) and basic SQL query language skills
Preferred Qualifications:
Master's Degree in Computer Science (CS)
0-1 year of practical experience in Java coding
Experience using Spring, Maven, Angular frameworks, HTML, and CSS
Knowledge of other contemporary Java technologies (e.g., WebLogic, RabbitMQ, Tomcat)
Familiarity with JSP, J2EE, and JDBC
Senior Dotnet Developer
Data engineer job in Ewing, NJ
The Software Engineer III is a full-stack engineer with strong Node.js and functional programming (Scala & Ruby) skills. Our applications leverage Node.js, Bootstrap, jQuery, MongoDB, Elasticsearch, Redis, and React.js to deliver a delightful interactive experience on the web. Our applications run in the AWS cloud environment. We use Agile methodologies to enable our engineering team to work closely with partners and with our design & product teams. This role is full time and preferably located long-term in the New York City or southern New Jersey areas.
Essential Job Duties and Responsibilities include:
Design, develop, and maintain modern web applications and UIs using .NET technologies such as C#, ASP.NET MVC, ASP.NET Core, Razor Pages, and Blazor.
Create clean, maintainable, and well-documented code following industry best practices and coding standards.
Develop and consume RESTful APIs and web services.
Build responsive and accessible user interfaces using HTML, CSS, JavaScript, and UI libraries/frameworks such as React, Angular, Vue.js, or Bootstrap.
Work with relational and NoSQL databases (e.g., SQL Server, MongoDB) and object-relational mappers (ORMs) such as Entity Framework Core.
Conduct unit and integration testing to validate functionality and ensure high-quality deliverables.
Participate in peer code reviews and provide constructive feedback to ensure continuous improvement and knowledge sharing.
Identify, troubleshoot, and resolve complex technical issues in development and production environments.
Collaborate with cross-functional teams throughout the software development lifecycle.
Stay current with emerging .NET technologies and trends.
May mentor and support junior developers in their technical growth and day-to-day work.
Maintain regular and punctual attendance.
Preferred Qualifications:
Experience with CI/CD pipelines and DevOps practices.
Familiarity with cloud platforms (e.g., Azure, AWS) and deploying .NET applications in cloud environments.
Knowledge of Blazor for interactive web UIs using C# instead of JavaScript.
Education and/or Experience:
7+ years of professional software development experience with a strong focus on web and UI development in the .NET ecosystem.
Advanced proficiency in C#, ASP.NET, ASP.NET Core, and MVC frameworks; experience with VBScript is a plus.
Deep understanding of object-oriented programming (OOP) and design patterns.
Strong front-end development skills, including HTML, CSS, JavaScript, and at least one modern UI framework (React, Angular, Vue.js, etc.).
Proven experience developing and integrating RESTful APIs.
Hands-on experience with SQL Server and/or NoSQL databases; proficient in using Entity Framework Core or similar ORMs.
Familiarity with version control systems such as Git.
Solid grasp of Agile/Scrum development methodologies.
Excellent problem-solving abilities and strong attention to detail.
Effective communication and interpersonal skills with the ability to work independently and within a team.
SRE/DevOps w/ HashiCorp & Clojure Exp
Data engineer job in Philadelphia, PA
Locals Only! SRE/DevOps w/ HashiCorp & Clojure Exp
Philadelphia, PA: 100% Onsite!
12+ Months
Must have: HashiCorp, Clojure
Role: Lead SRE initiatives, automating and monitoring cloud infrastructure to ensure reliable, scalable, and secure systems for eCommerce.
Required: Must Have:
AWS, Terraform, HashiCorp Stack (Nomad, Vault, Consul)
Programming in Python/Clojure
Automation, monitoring, and log centralization (Splunk)
Experience leading large-scale cloud infrastructure
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Sr. C++FIX or Market Data Developer
Data engineer job in Princeton, NJ
Looking for a highly motivated C++ Trading Systems Developer with demonstrated experience in designing, developing, and delivering core production software solutions in a mission-critical trading systems environment.
Major responsibilities include:
• Assessing business and systems requirements and developing functional specifications
• Designing and developing high-quality, high-performance trading systems software written in C++ to meet deliverable timelines and requirements
• Adhering to the software development life cycle process/methodology
• Building business-level subject matter expertise in trading systems functionality and processing
• Providing second-level support for production on an ad hoc basis when necessary
Location: Princeton, NJ
Organizational Structure: The developer will be an integral part of a core development team and report to the Trading System Development management team.
Qualifications:
• Full software development life cycle experience in a mission-critical trading systems environment (Options, Equities, Futures, etc.) is a must
• Must possess excellent software design skills and knowledge of advanced data structures
• Must have exceptionally strong C++ knowledge and debugging skills in a Linux environment
• Solid knowledge of object-oriented programming concepts is a must
• Strong knowledge of TCP/IP multicast and socket programming required
• Knowledge of the Boost libraries and STL required
• Must have experience developing real-time applications in a distributed processing architecture
• Must have excellent organizational and communication skills
• Must be able to work effectively in a team environment
• Strong knowledge of the logical business domain in Options or Equities trading systems a big plus
• Experience coding interface solutions for FIX, OPRA, CTA, or UTP a big plus
• Knowledge of scripting languages such as Python, Shell, and Perl a plus
Education and Experience:
• Minimum of a Bachelor's degree or equivalent in IT/Computer Science
• 7+ years of experience in C++ development
• 5+ years of demonstrated experience delivering software solutions in a trading systems environment for an Exchange or a Wall Street firm
Identity and Access Management Software Engineering Lead
Data engineer job in Philadelphia, PA
Identity and Access Management - Software Engineering Lead- Must have either (KeyCloak, Auth0, Okta, or similar)
Are you a Software Engineering lead with a strong security background ready to broaden your impact and take on a hands-on software engineering leadership role?
Are you a collaborative Software Engineering Lead looking to work for a mission driven global organization?
About the role - As an Engineering Lead for NeoID, Elsevier's next-generation Identity and Access Management (IAM) platform, you'll leverage your deep security expertise to architect, build, and evolve the authentication and authorization backbone for Elsevier's global products. You'll also lead and manage a team of 5 engineers, fostering their growth and ensuring delivery excellence. You'll have the opportunity to work with industry-standard protocols such as OAuth2, OIDC, and SAML, as well as healthcare's SMART on FHIR and EHR integrations.
About the team - This team is entrusted with building Elsevier's next-generation Identity and Access Management (IAM) platform: a brand-new cybersecurity product that will provide authentication and authorization for all Elsevier products. This diverse team of engineers builds and evolves the authentication and authorization backbone for Elsevier's global products.
Qualifications
Current and extensive experience with at least one major IAM platform (KeyCloak, Auth0, Okta, or similar) - KeyCloak and Auth0 experience are strong pluses. Only candidates with this experience will be considered for this critical role.
Possess an in-depth security mindset, with proven experience designing and implementing secure authentication and authorization systems
Have an extensive understanding of OAuth2, OIDC and SAML protocols, including relevant RFCs and enterprise/server-side implementations
Familiarity with healthcare identity protocols, including SMART on FHIR and EHR integrations
Have current hands-on experience with AWS cloud services and infrastructure management. Proficiency in Infrastructure as Code (IaC) tools, especially Terraform
Strong networking skills, including network security, protocols, and troubleshooting
Familiarity with software development methodologies (Agile, Waterfall, etc.)
Experience with Java/J2EE, JavaScript, and related technologies, or willingness to learn and deepen expertise
Knowledge of data modeling, optimization, and secure data handling best practices
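The OAuth2/OIDC plumbing referenced above is standardized; a small sketch of two of its fixed shapes, assuming a made-up issuer and placeholder credentials (the token parameters follow RFC 6749's authorization-code grant, and the discovery path follows OpenID Connect Discovery):

```python
from urllib.parse import urlencode

# OIDC providers publish metadata at a well-known path under the issuer URL.
# The issuer below is a made-up example, not a real Elsevier endpoint.
issuer = "https://id.example.com/realms/neo"
discovery_url = issuer.rstrip("/") + "/.well-known/openid-configuration"

# RFC 6749 authorization-code exchange: a form-encoded POST body with these
# standard parameters (code, redirect_uri, and client_id are placeholders).
token_request = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",
    "redirect_uri": "https://app.example.com/callback",
    "client_id": "neo-web",
})
print(discovery_url)
print(token_request)
```

Platforms like Keycloak, Auth0, and Okta all expose exactly these shapes, which is why experience with one tends to transfer to the others.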
Accountabilities
Leading the design and implementation of secure, scalable IAM solutions, with a focus on OAuth2/OIDC and healthcare protocols such as SMART on FHIR and EHR integrations
Managing, mentoring and supporting a team of 5 engineers, fostering a culture of security, innovation, and technical excellence
Collaborating with product managers and stakeholders to define requirements and strategic direction for the platform, including healthcare and life sciences use cases
Writing and reviewing code, performing code reviews, and ensuring adherence to security and engineering best practices
Troubleshooting and resolving complex technical issues, providing expert guidance on IAM, security, and healthcare protocol topics
Contributing to architectural decisions and long-term platform strategy
Staying current with industry trends, emerging technologies, and evolving security threats in the IAM and healthcare space
Why Elsevier?
Join a global leader in information and analytics, and help shape the future of secure, seamless access to knowledge for millions of users worldwide, including healthcare professionals and researchers. If you are an Engineering Lead ready to expand your skills, take on a hands-on software engineering leadership role, and grow as a people manager, we want to hear from you.
MLOps Engineer
Data engineer job in Philadelphia, PA
Role : ML Ops Lead
Duration : Long Term
Skills :
4 - 7 years of experience in DevOps, MLOps, platform engineering, or cloud infrastructure.
Strong skills in containerization (Docker, Kubernetes), API hosting, and cloud-native services.
Experience with vector DBs (e.g., FAISS, Pinecone, Weaviate) and model hosting stacks.
Familiarity with logging frameworks, APM tools, tracing layers, and prompt/versioning logs.
Bonus: exposure to LangChain, LangGraph, LLM APIs, and retrieval-based architectures.
Responsibilities :
Set up and manage runtime environments for LLMs, vector DBs, and orchestration flows (e.g., LangGraph).
Support deployments in cloud, hybrid, and client-hosted environments.
Containerize systems for deployment (Docker, Kubernetes, etc.) and manage inference scaling.
Integrate observability tooling: prompt tracing, version logs, eval hooks, error pipelines.
Collaborate on RAG stack deployments (retriever, ranker, vector DB, toolchains).
Support CI/CD, secrets management, error triage, and environment configuration.
Contribute to platform-level IP, including reusable scaffolding and infrastructure accelerators.
Ensure systems are compliant with governance expectations and auditable (esp. in insurance contexts).
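The retriever layer in the RAG stack above boils down to nearest-neighbor search over embeddings; a toy sketch in plain Python with hand-made 3-dimensional vectors (real deployments use an embedding model plus a store like FAISS or Pinecone, and the document names here are invented):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, store, k=2):
    """Rank stored (doc_id, vector) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "embeddings" for three insurance documents.
store = [
    ("claims_faq",    [0.9, 0.1, 0.0]),
    ("underwriting",  [0.1, 0.9, 0.0]),
    ("billing_guide", [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], store))  # ['claims_faq', 'billing_guide']
```

A ranker stage would then re-score these top-k candidates with a heavier model before they reach the LLM prompt.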
Preferred Attributes :
Systems thinker with strong debugging skills.
Able to work across cloud, on-prem, and hybrid client environments.
Comfortable partnering with architects and engineers to ensure smooth delivery.
Proactive about observability, compliance, and runtime reliability.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Our culture - Our fuel
At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other.
People first - We make employees win.
Nurture leaders - We nurture from within.
Enjoy wins - Celebrating wins and creating leaders.
Collaboration - A culture of collaboration and people-centricity.
Diversity - Committed to diversity, equity, and inclusion.
Fun - Help people have fun at work.
Data Scientist (Technical Leadership)
Data engineer job in Trenton, NJ
We are seeking experienced Data Scientists to join our team and drive impact across various product areas. As a Data Scientist, you will collaborate with cross-functional partners to identify and solve complex problems using data and analysis. Your role will involve shaping product development, quantifying new opportunities, and ensuring products bring value to users and the company. You will guide teams using data-driven insights, develop hypotheses, and employ rigorous analytical approaches to test them. You will tell data-driven stories, present clear insights, and build credibility with stakeholders. By joining our team, you will become part of a world-class analytics community dedicated to skill development and career growth in analytics and beyond.
**Required Skills:**
Data Scientist (Technical Leadership) Responsibilities:
1. Work with complex data sets to solve challenging problems using analytical and statistical approaches
2. Apply technical expertise in quantitative analysis, experimentation, and data mining to develop product strategies
3. Identify and measure success through goal setting, forecasting, and monitoring key metrics
4. Partner with cross-functional teams to inform and execute product strategy and investment decisions
5. Build long-term vision and strategy for programs and products
6. Collaborate with executives to define and develop data platforms and instrumentation
7. Effectively communicate insights and recommendations to stakeholders
8. Define success metrics, forecast changes, and set team goals
9. Support developing roadmaps and coordinate analytics efforts across teams
**Minimum Qualifications:**
1. Bachelor's degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research) and 5+ years of experience with data querying languages (e.g. SQL), scripting languages (e.g. Python), or statistical/mathematical software (e.g. R, SAS, Matlab)
2. 8+ years of work experience leading analytics work in an IC capacity, working collaboratively with Engineering and cross-functional partners, and guiding data-influenced product planning, prioritization, and strategy development
3. Experience with predictive modeling, machine learning, and experimentation/causal inference methods
4. Experience communicating complex technical topics in a clear, precise, and actionable manner
**Preferred Qualifications:**
1. 10+ years of experience communicating the results of analyses to leadership teams to influence strategy
2. Master's or Ph.D. degree in a quantitative field
3. Bachelor's degree in an analytical or scientific field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research)
4. 10+ years of experience doing complex quantitative analysis in product analytics
**Public Compensation:**
$206,000/year to $281,000/year + bonus + equity + benefits
**Industry:** Internet
**Equal Opportunity:**
Meta is proud to be an Equal Employment Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Meta participates in the E-Verify program in certain locations, as required by law. Please note that Meta may leverage artificial intelligence and machine learning technologies in connection with applications for employment.
Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.
Data Scientist, NLP
Data engineer job in Trenton, NJ
Datavant is a data platform company and the world's leader in health data exchange. Our vision is that every healthcare decision is powered by the right data, at the right time, in the right format. Our platform is powered by the largest, most diverse health data network in the U.S., enabling data to be secure, accessible and usable to inform better health decisions. Datavant is trusted by the world's leading life sciences companies, government agencies, and those who deliver and pay for care.
By joining Datavant today, you're stepping onto a high-performing, values-driven team. Together, we're rising to the challenge of tackling some of healthcare's most complex problems with technology-forward solutions. Datavanters bring a diversity of professional, educational and life experiences to realize our bold vision for healthcare.
We are looking for a motivated Data Scientist to help Datavant revolutionize the healthcare industry with AI. This is a critical role where the right candidate will have the ability to work on a wide range of problems in the healthcare industry with an unparalleled amount of data.
You'll join a team focused on deep medical document understanding, extracting meaning, intent, and structure from unstructured medical and administrative records. Our mission is to build intelligent systems that can reliably interpret complex, messy, and high-stakes healthcare documentation at scale.
This role is a unique blend of applied machine learning, NLP, and product thinking. You'll collaborate closely with cross-functional teams to:
+ Design and develop models to extract entities, detect intents, and understand document structure
+ Tackle challenges like long-context reasoning, layout-aware NLP, and ambiguous inputs
+ Evaluate model performance where ground truth is partial, uncertain, or evolving
+ Shape the roadmap and success metrics for replacing legacy document processing systems with smarter, scalable solutions
We operate in a high-trust, high-ownership environment where experimentation and shipping value quickly are key. If you're excited by building systems that make healthcare data more usable, accurate, and safe, please reach out.
**Qualifications**
+ 3+ years of experience with data science and machine learning in an industry setting, particularly in designing and building NLP models.
+ Proficiency with Python
+ Experience with the latest in language models (transformers, LLMs, etc.)
+ Proficiency with standard data analysis toolkits such as SQL, Numpy, Pandas, etc.
+ Proficiency with deep learning frameworks like PyTorch (preferred) or TensorFlow
+ Industry experience shepherding ML/AI projects from ideation to delivery
+ Demonstrated ability to influence company KPIs with AI
+ Demonstrated ability to navigate ambiguity
**Bonus Experience**
+ Experience with document layout analysis (using vision, NLP, or both).
+ Experience with Spark/PySpark
+ Experience with Databricks
+ Experience in the healthcare industry
**Responsibilities**
+ Play a key role in the success of our products by developing models for document understanding tasks.
+ Perform error analysis, data cleaning, and other related tasks to improve models.
+ Collaborate with your team by making recommendations for the development roadmap of a capability.
+ Work with other data scientists and engineers to optimize machine learning models and insert them into end-to-end pipelines.
+ Understand product use-cases and define key performance metrics for models according to business requirements.
+ Set up systems for long-term improvement of models and data quality (e.g. active learning, continuous learning systems, etc.).
**After 3 Months, You Will...**
+ Have a strong grasp of technologies upon which our platform is built.
+ Be fully integrated into ongoing model development efforts with your team.
**After 1 Year, You Will...**
+ Be independent in reading literature and doing research to develop models for new and existing products.
+ Have ownership over models internally, communicating with product managers, customer success managers, and engineers to make the model and the encompassing product succeed.
+ Be a subject matter expert on Datavant's models and a source from which other teams can seek information and recommendations.
We are committed to building a diverse team of Datavanters who are all responsible for stewarding a high-performance culture in which all Datavanters belong and thrive. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status.
At Datavant our total rewards strategy powers a high-growth, high-performance, health technology company that rewards our employees for transforming health care through creating industry-defining data logistics products and services.
The range posted is for a given job title, which can include multiple levels. Individual rates for the same job title may differ based on their level, responsibilities, skills, and experience for a specific job.
The estimated total cash compensation range for this role is:
$136,000-$170,000 USD
To ensure the safety of patients and staff, many of our clients require post-offer health screenings and proof and/or completion of various vaccinations such as the flu shot, Tdap, COVID-19, etc. Any requests to be exempted from these requirements will be reviewed by Datavant Human Resources and determined on a case-by-case basis. Depending on the state in which you will be working, exemptions may be available on the basis of disability, medical contraindications to the vaccine or any of its components, pregnancy or pregnancy-related medical conditions, and/or religion.
This job is not eligible for employment sponsorship.
Datavant is committed to a work environment free from job discrimination. We are proud to be an Equal Employment Opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, or other legally protected status. To learn more about our commitment, please review our EEO Commitment Statement here (************************************************** . Know Your Rights (*********************************************************************** , explore the resources available through the EEOC for more information regarding your legal rights and protections. In addition, Datavant does not and will not discharge or in any other manner discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay.
At the end of this application, you will find a set of voluntary demographic questions. If you choose to respond, your answers will be anonymous and will help us identify areas for improvement in our recruitment process. (We can only see aggregate responses, not individual ones. In fact, we aren't even able to see whether you've responded.) Responding is entirely optional and will not affect your application or hiring process in any way.
Datavant is committed to working with and providing reasonable accommodations to individuals with physical and mental disabilities. If you need an accommodation while seeking employment, please request it here, (************************************************************** Id=**********48790029&layout Id=**********48795462) by selecting the 'Interview Accommodation Request' category. You will need your requisition ID when submitting your request, you can find instructions for locating it here (******************************************************************************************************* . Requests for reasonable accommodations will be reviewed on a case-by-case basis.
For more information about how we collect and use your data, please review our Privacy Policy (**************************************** .
ETL/BI/Data reports consultants
Data engineer job in Philadelphia, PA
AYR Global IT Solutions is a national staffing firm focused on cloud, cyber security, web application services, ERP, and BI implementations by providing proven and experienced consultants to our clients.
Our competitive, transparent pricing model and industry experience make us a top choice of Global System Integrators and enterprise customers with federal and commercial projects supported nationwide.
Job Description
Subject: ETL/BI/Data reports consultants
Location: Philadelphia, Pa.
Duration: One year plus
Qualifications
Experience - Must haves!
5+ years coding advanced SQL queries, ETL automation, and stored procedures to support business inquiries
5+ years of demonstrated reporting, analytical, and database experience in a dynamic business environment
Advanced BI/Reporting Tool experience (SSRS) required.
Experience with transforming complex datasets into relevant visualizations
Familiar with relational database technology and terminology
Functional Competencies
SQL coding proficiency (5+ years)
Microsoft Reporting Services (SSRS) - Ability to develop advanced reports, manage subscriptions, etc.
Microsoft Integration Services (SSIS) - Ability to manage SQL Server ETL processes / job failures, etc.
Relational database development (SQL Server, Teradata)
SQL Server - Table, View & Procedure development
Teradata environment experience
Teradata SQL Assistant experience
Excel - Charts, Pivot Tables, Equations, VBA
Tableau experience a plus
Oracle experience a plus
Qualifications
Bachelor's degree required (preferably in Information Systems, Business Intelligence, or Computer Science) with related experience, or an equivalent combination of education and experience from which comparable knowledge and abilities can be acquired.
A sound technical background with the ability to analyze complex data sets, business processes, and quickly adapt to the use of new technologies.
Must have excellent communication skills with the ability to share technical capabilities and work cross-functionally between business functions.
Additional Information
If you might be interested, please send resumes to kmarsh@ayrglobal (dot) com, or you can reach me directly at **************
Data & AI Consultant - Philadelphia - Graduates
Data engineer job in Philadelphia, PA
Thorogood is an independent, specialized, data and analytics consulting firm that works with some of the biggest and best-known companies around the world. We have offices in London, Philadelphia, Boston, Singapore, São Paulo, and Bangalore, and we work as a global unit to service blue chip customers drawing from the consumer goods, pharmaceutical, insurance, and banking sectors, among others. Together, we compete on the basis of our specialism, our people, our independence, high quality, and our experience delivering assuredly for customers over several decades.
We help our customers leverage the latest technologies to make timely, data-driven business decisions. Our multi-skilled teams understand the critical interplay of data engineering, data science, and data visualization. We work across these areas to create business-focused, end-to-end technology solutions robust enough to meet enterprise needs. We recognize the importance of environmental sustainability, both in how we work and in the solutions we build for our clients. Our services span strategy, implementation, advanced analytics services, user empowerment, and support.
Additional information about us, our people, what we do, and some of our perspectives can be found on our website here.
About the Position Overview
Our consultants are expected to have a strong interest in business. Understanding our clients' organizations and industries is key to identifying areas of opportunity for us to help. Similarly, consultants must have excellent listening, writing, and presenting skills, alongside the aptitude and motivation to master multiple data-focused technologies in a fast-moving field. This combination of business, people, and technical skills is essential.
Our Brazil and US entities are managed together as one Thorogood Americas in support of development of our business not only in the region, but around the world. We work with our colleagues in the US, UK, India, Singapore, and Brazil to provide effective solutions for our clients situated across the globe. The global nature of our work exposes our consultants to a variety of experiences across industries and cultures. The ability to work collaboratively and effectively as part of a distributed team is therefore imperative for the position.
Our most successful consultants develop a well-rounded set of skills in each of these areas of business, people, and technology. You'll do so via a number of opportunities:
Comprehensive training: During your first months at Thorogood, experienced consultants will introduce you to the technologies we work with, the industries we serve, and the ways in which we develop our business. Crucially, we will share how we live our values. When you've finished, we'll plan to have you working on a client project immediately.
Building your core technology skills: We're the kind of people who learn best by doing. You'll be working on a team locked onto business value for the customer to develop fast, effective, and high-quality technical solutions.
Working with top clients: Thorogood works with some of the biggest and best-known companies in the world. You'll be a part of client interactions, sometimes leading them, to make sure we're keeping our projects on track and delivering value to our customers.
Developing relationships: We consider ourselves trusted advisors for our clients. We earn this trust through listening carefully, sharing the expertise we have acquired, and helping to progress the objectives within their organizations.
Shaping future projects: We are always listening to customers to understand their objectives and needs and working with our consultants to share proposals on how we can help.
Working as part of a supportive project team: You'll be working from our offices in Philadelphia, with occasional travel externally to client sites and internally to other Thorogood offices. Your project teams will be composed of consultants across many levels of experience, working together daily to support one another along the way.
Becoming an expert, yourself: Our subject matter experts are not necessarily the most tenured people at Thorogood. In the rapidly changing world of data and analytics, we look to all levels of experience to gain expertise in new technologies and analytical techniques.
In addition to project work, Thorogood consultants run our company. We expect you to contribute here, as well:
Helping to grow our business: Interact with current and prospective clients, listen to their needs, and drive business development activities and discussions. Together, we'll consider the business objectives in conjunction with the technical possibilities in order to understand the opportunities for their organizations.
Delivering educationally focused marketing events: Prepare and present informational events for contacts at prospective and current clients.
Fostering vendor partnerships: Help maintain and further our relationships with key vendor partners in the space including Microsoft, Databricks, Amazon Web Services, Google Cloud and others, whose offerings we implement for customers.
Developing our consultancy: Contribute to recruiting, training, and managing our people.
Additional Benefits
We individually select each of our consultants very carefully through our comprehensive interview and assessment process. We place a great deal of value on consultants' growth and wellbeing. In addition to a challenging and rewarding career path, Thorogood offers the following benefits:
Very competitive salary and compensation program
Company-sponsored health, dental, and vision coverage
401(k) program with employer match
Generous vacation allowance with incremental increases over time based on consultant tenure
Incentive programs
Opportunities for national and international travel
Focus on work-life balance
Location and Timing
This position is based at our Center City office in Philadelphia, PA. Our consultants are currently working in a flexible, hybrid work model. However, a level of in-person attendance at our office is required. The expected start date for this position is approximately January 2026 or July 2026, depending on candidate start date availability.
About You
Qualified candidates must have
Strong interest in business, people, and technology
Strong academic record in a related discipline, including engineering, computer science, mathematics, statistics, business, management, or another business and quantitative-focused field
Bachelor's degree with graduation between December 2025 and June 2026
Candidates must demonstrate effectiveness in
Working individually and as part of project teams
Maintaining responsibility and accountability for quality and work delivery
Client focus and relationship management
Time management and priority setting
Data awareness and business interest
Learning new technologies
This is an entry-level position; no prior full-time employment experience is required. Please do highlight if you have studied or worked with
Cloud BI & Analytics technology platforms, e.g. Databricks, Snowflake, Amazon Web Services, Microsoft Azure, or Google Cloud Platform
SQL, Python, R, Scala, Spark, or any other programming or statistical programming languages and frameworks
Applied mathematics, statistics, probability, machine learning, artificial intelligence, large language models, and/or generative artificial intelligence
Data science, data engineering, data analysis, or data visualization technologies
Managing structured, semi-structured, and unstructured data storage, transformation, or analysis
Integrating data solutions into web applications
Please note: Thorogood will ensure that every employee is authorized to work in the United States. If you are offered employment with the Company, we shall require proof of your authorization to work within the United States. Thorogood does not sponsor applicants for work visas. Applicants must be currently authorized to work within the United States on a full-time basis.
Data Scientist
Data engineer job in Camden, NJ
The Cooper Health System is seeking a Data Scientist to join its analytics team. This role focuses on leveraging advanced analytics techniques to drive clinical and business decision-making. You will work with healthcare data to build predictive models, apply machine learning and NLP methods, and optimize data pipelines. The ideal candidate combines strong technical skills with the ability to communicate insights effectively to stakeholders.
Key Responsibilities
Develop and implement machine learning models for predictive analytics and clinical decision support.
Apply NLP and LLM techniques to extract insights from structured and unstructured data.
Build and optimize data pipelines using Python and SQL for ETL processes.
Preprocess and clean datasets to support analytics initiatives.
Collaborate with stakeholders to understand data needs and deliver actionable insights.
Interpret complex datasets and provide clear, data-driven recommendations.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to ********************. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy: ****************************************************
Skills and Requirements
Minimum of 1 year of professional/post-graduate Data Scientist experience, with knowledge across areas such as Machine Learning, NLP, and LLMs
Proficiency in Python and SQL for data manipulation and pipeline development
Strong communication skills for stakeholder engagement
Bachelor's Degree or Master's Degree
Azure experience (and/or other MS tools)
Experience working with healthcare data, preferably from Epic
Strong skills in data visualization, dashboard design, and interpreting complex datasets
Microsoft Dynamics 365 CE Data Migration Consultant
Data engineer job in Middletown, PA
Job Description
Data-Core Systems, Inc. is a provider of information technology, consulting and business process services. We offer breakthrough tech solutions and have worked with companies, hospitals, universities and government organizations. A proven partner with a passion for client satisfaction, we combine technology innovation, business process expertise and a global, collaborative workforce that exemplifies the future of work. For more information about Data-Core Systems, Inc., please visit *****************************
Our client operates a roadway system, and as part of their digital transformation they are implementing a solution based on SAP BRIM & Microsoft Dynamics CE.
Data-Core Systems Inc. is seeking a Microsoft Dynamics 365 CE Data Migration Consultant to be a part of our Consulting team. You will be responsible for planning, designing, and executing the migration of customer, account, vehicle, financial, and transaction data from a variety of source systems, including legacy CRMs, ERPs, SQL databases, flat files, Excel, cloud platforms, and tolling systems, into Microsoft Dynamics 365 Customer Engagement (CE). This role involves understanding complex data models, extracting structured and unstructured data, transforming and mapping it to Dynamics CE entities, and ensuring data quality, integrity, and reconciliation throughout the migration lifecycle.
Roles & Responsibilities:
Analyze source system data structures, including customer profiles, accounts, vehicles, transponders, payment methods, transactions, violations, invoices, and billing records
Identify critical data relationships, parent/child hierarchies, and foreign key dependencies
Develop detailed data mapping and transformation documentation from source systems to Dynamics 365 CE entities (standard and custom)
Build, test, and execute ETL pipelines using tools such as SSIS/KingswaySoft, Azure Data Factory, Power Platform Dataflows, or custom .NET utilities
Perform data cleansing, normalization, deduplication, and standardization to meet Dynamics CE data model requirements
Execute multiple migration cycles, including test loads, validation, and final production migration
Ensure referential integrity, high data quality, and accuracy of historical data
Generate reconciliation reports, resolve data inconsistencies, and troubleshoot migration errors
Document migration strategies, execution runbooks, and transformation rules for future reference
Required Skills & Experience:
8-12 years of proven experience migrating data from tolling systems, transportation platforms, legacy CRMs, or other high-volume transactional systems
Strong SQL skills for complex queries, stored procedures, data transformation, and data validation
Hands-on experience with Microsoft Dynamics 365 CE / CRM data model, entities, and relationships
Proficiency with ETL/migration tools: SSIS with KingswaySoft, Azure Data Factory, Power Platform Dataflows, Custom C#/.NET migration scripts
Experience with large-scale migrations involving millions of records
Strong understanding of relational data structures such as Customer, Account, Vehicle, Transponder, and Transaction
Ability to analyze large datasets, identify anomalies, and resolve inconsistencies
Bachelor's degree in engineering or technology from a recognized university
Preferred Skills & Experience:
Experience with financial transactions, billing data, or violation/enforcement records.
Experience in enterprise-scale Dynamics 365 CE migrations.
Familiarity with data governance, security, and compliance requirements for financial or transportation data.
Knowledge of historical data migration and archival strategies.
We are an equal opportunity employer.
Business Intelligence Data Engineer
Data engineer job in Philadelphia, PA
Cozen O'Connor is hiring a Business Intelligence Architect to join the IS team. This position is a staff-level role on the Enterprise Data Services team. The individual in this role is responsible for the design, development, and maintenance of data presentation tools within Cozen O'Connor. The role also participates in data engineering responsibilities in collaboration with other team members.
Responsibilities
Learn and maintain an understanding of the transactional system data used throughout the firm, and the end-user delivery methods for reporting and dashboards.
Development, administration, and support of the firm's PowerBI-based reporting and dashboards.
Administration, support, and preparation for decommissioning of the firm's Qlik-based reporting/dashboard tools.
Act as the (internal) user-facing subject matter expert on firm-wide reporting tools and design practices.
Apply best practices for Data Governance and Metadata Management.
Create and maintain well-organized and easy-to-use documentation for end-user applications.
In collaboration with IS Compliance team, provide evidence as required to satisfy audit requirements and ISO certification renewal.
Qualifications
Minimum of 4 years of development experience with PowerBI.
Minimum of 4 years of development experience with Microsoft Fabric ETL methods.
Some development experience with Qlik Suite (Qlik Sense, NPrinting) is preferred, but not required.
Some experience with source control/release processes using Azure DevOps as a repository, as well as Git for version comparison.
BS or BA Degree in Information Systems, Information Technology or other related field and/or commensurate work experience.
Knowledge of enterprise-scale data management methodology, best practices, and associated frameworks (DAMA DMBOK, etc.).
Expert-level proficiency in working with technical and non-technical users to facilitate data presentation needs.
Some experience with T-SQL query building preferred.
Excellent analytical and communication skills.