Data Scientist
Data scientist job in Camden, NJ
Title: Data Scientist
Duration: Direct Hire
Schedule: Hybrid (Mon/Fri WFH, onsite Tues-Thurs)
Interview Process: 2 rounds, virtual (2nd/final round is a case study)
Salary Range: $95-120k/yr (with benefits)
Must haves:
1 yr minimum of professional/post-grad Data Scientist experience, with knowledge across areas such as Machine Learning, NLP, and LLMs
Proficiency in Python and SQL for data manipulation and pipeline development
Strong communication skills for stakeholder engagement
Bachelor's Degree
Pluses:
Master's Degree
Azure experience (and/or other MS tools)
Experience working with healthcare data, preferably from Epic
Strong skills in data visualization, dashboard design, and interpreting complex datasets
Day to Day:
We are seeking a Data Scientist to join our client's analytics team. This role focuses on leveraging advanced analytics techniques to drive clinical and business decision-making. You will work with healthcare data to build predictive models, apply machine learning and NLP methods, and optimize data pipelines. The ideal candidate combines strong technical skills with the ability to communicate insights effectively to stakeholders.
Key Responsibilities
Develop and implement machine learning models for predictive analytics and clinical decision support.
Apply NLP and LLM techniques to extract insights from structured and unstructured data.
Build and optimize data pipelines using Python and SQL for ETL processes.
Preprocess and clean datasets to support analytics initiatives.
Collaborate with stakeholders to understand data needs and deliver actionable insights.
Interpret complex datasets and provide clear, data-driven recommendations.
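For context, here is a minimal sketch of the kind of predictive-modeling task this posting describes, assuming a hypothetical readmissions dataset; the file path, feature columns, and label are illustrative placeholders, not details from the posting.

```python
# Hypothetical sketch: predicting 30-day readmission risk from tabular
# clinical features. Dataset, columns, and metric choices are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("encounters.csv")  # placeholder path
X = df[["age", "num_prior_visits", "length_of_stay"]]  # placeholder features
y = df["readmitted_30d"]  # placeholder label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```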
Machine Learning Data Scientist
Data scientist job in Pittsburgh, PA
Machine Learning Data Scientist
Length: 6 Month Contract to Start
* Please no agencies. Direct employees currently authorized to work in the United States - no sponsorship available.*
Job Description:
We are looking for a Data Scientist/Engineer with machine learning expertise and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.
Essential Duties and Responsibilities:
Develop ML models for time-series forecasting and anomaly detection
Build data pipelines for SCADA/IIoT data ingestion and processing
Perform feature engineering and signal analysis on time-series data
Deploy models in production using APIs, microservices, and MLOps best practices
Collaborate with data engineers and domain experts to improve data quality and model performance
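As one deliberately simple illustration of time-series anomaly detection on sensor data, here is a rolling z-score sketch in pandas; the window size and threshold are assumptions, not requirements from the posting.

```python
# Illustrative sketch: flagging anomalies in a high-frequency sensor series
# with a rolling z-score. Window and threshold are placeholder choices.
import pandas as pd

def flag_anomalies(series: pd.Series, window: int = 288, z_thresh: float = 4.0) -> pd.Series:
    """Return a boolean mask marking points far from the rolling mean."""
    rolling = series.rolling(window, min_periods=window // 2)
    z = (series - rolling.mean()) / rolling.std()
    return z.abs() > z_thresh

# Usage with hypothetical SCADA data indexed by timestamp:
# readings = pd.read_parquet("pump_pressure.parquet")["pressure_psi"]
# anomalies = readings[flag_anomalies(readings)]
```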
Qualifications:
Strong Python skills
Experience working with SCADA systems or industrial data historians
Solid understanding of time-series analytics and signal processing
Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)
POST-OFFER BACKGROUND CHECK IS REQUIRED. Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.
Come see why DPC has achieved:
4.9/5 Star Glassdoor rating and the only staffing company (< 1000 employees) to be voted in the national Top 10 'Employee's Choice - Best Places to Work' by Glassdoor.
Voted 'Best Staffing Firm to Temp/Contract For' seven times by Staffing Industry Analysts as well as a 'Best Company to Work For' by Forbes, Fortune and Inc. magazine.
As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today!
*******************
Job #18135
Data Scientist
Data scientist job in Parsippany-Troy Hills, NJ
***This role is hybrid three days per week onsite in Parsippany, NJ. LOCAL CANDIDATES ONLY. No relocation***
Data Scientist
• Summary: Provide analytics, telemetry, and ML/GenAI-driven insights to measure SDLC health, prioritize improvements, validate pilot outcomes, and implement AI-driven development lifecycle capabilities.
• Responsibilities:
o Define metrics and instrumentation for SDLC/CI pipelines, incidents, and delivery KPIs.
o Build dashboards, anomaly detection, and data models; implement GenAI solutions (e.g., code suggestion, PR summarization, automated test generation) to improve developer workflows (see the sketch after this list).
o Design experiments and validate AI-driven features during the pilot.
o Collaborate with engineering and SRE to operationalize models and ensure observability and data governance.
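By way of illustration, a minimal sketch of one such GenAI workflow feature: summarizing a pull-request diff for reviewers. The OpenAI Python SDK is used here purely as an example client; the model name and prompt are assumptions, not details from the posting.

```python
# Illustrative sketch of an LLM-backed developer-workflow feature:
# summarizing a PR diff. Model id and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_pr(diff_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": ("Summarize this pull request diff "
                         "in three bullet points for reviewers.")},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content
```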
• Required skills:
o 3+ years applied data science/ML in production; hands-on experience with GenAI/LLMs applied to developer workflows or DevOps automation.
o Strong Python (pandas, scikit-learn), ML frameworks, SQL, and data visualization (Tableau/Power BI).
o Experience with observability/telemetry data (logs/metrics/traces) and A/B experiment design.
• Preferred:
o Experience with model deployment, MLOps, prompt engineering, and cloud data platforms (AWS/GCP/Azure).
Data Scientist
Data scientist job in Wilmington, DE
Role: Data Scientist
Contract: W2 Only (No chance of C2C)
6+ years of experience is expected for this role.
This person will help support an expanding scope of work involving GenAI and contract intelligence; the role falls into the PBM underwriting group.
Focus: Core data science (not data engineering or MLOps)
The team is not looking for a heavy ML-engineering profile; the emphasis is on core data science.
Key Responsibilities
· Apply traditional machine learning (e.g., decision trees, forecasting)
· Work with GenAI to extract insights from contract documents
· Use Python extensively for data processing and modeling
· Collaborate and communicate effectively on nuanced, domain-specific language
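For flavor, here is a minimal sketch of the "traditional machine learning" side of the role, using a decision tree on tabular features; every column name and the data source are hypothetical, not client systems.

```python
# Illustrative sketch: a small decision-tree model on underwriting-style
# tabular features. All columns and the CSV path are placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("claims_features.csv")  # placeholder
X = df[["member_count", "avg_script_cost", "generic_fill_rate"]]  # placeholder
y = df["renewed"]  # placeholder label

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
print(cross_val_score(tree, X, y, cv=5, scoring="roc_auc").mean())
```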
Required Skills
-Strong Python/ Pandas
-SQL (doesn't need to be as strong here)
-Gen AI (prompt engineering experience is a plus)
-ML
-Statistical experience
-Core data science skills (random forests, forecasting, etc.)
Senior Data Scientist
Data scientist job in Plainfield, NJ
Data Scientist - Pharmaceutical Analytics (PhD)
1-Year Contract - Hybrid - Plainfield, NJ
We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.
What You'll Do
Apply machine learning & advanced analytics to pharma commercial data
Deliver insights on market dynamics, physician prescribing, and patient behavior
Partner with R&D, medical affairs, and commercial teams to guide strategy
Build predictive models for sales effectiveness, adherence, and market forecasting
What We're Looking For
PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
5+ years of pharma or healthcare analytics experience
Strong skills in enterprise-class software stacks and cloud computing
Deep knowledge of pharma market dynamics & healthcare systems
Excellent communication skills to translate data into strategy
Data Scientist
Data scientist job in Lewistown, PA
Founded over 35 years ago, First Quality is a family-owned company that has grown from a small business in McElhattan, Pennsylvania into a group of companies, employing over 5,000 team members, while maintaining our family values and entrepreneurial spirit. With corporate offices in New York and Pennsylvania and 8 manufacturing campuses across the U.S. and Canada, the companies within the First Quality group produce high-quality personal care and household products for large retailers and healthcare organizations. Our personal care and household product portfolio includes baby diapers, wipes, feminine pads, paper towels, bath tissue, adult incontinence products, laundry detergents, fabric finishers, and dishwash solutions. In addition, we manufacture certain raw materials and components used in the manufacturing of these products, including flexible print and packaging solutions.
Guided by our values of humility, unity, and integrity, we leverage advanced technology and innovation to drive growth and create new opportunities. At First Quality, you'll find a collaborative environment focused on continuous learning, professional development, and our mission to Make Things Better.
We are seeking a Data Scientist for our First Quality facilities located in McElhattan, PA; Lewistown, PA; and Macon, GA.
**Must have manufacturing experience with consumer goods.**
The role will provide meaningful insight on how to improve our current business operations. This position will work closely with domain experts and SMEs to understand the business problem or opportunity and assess the potential of machine learning to enable accelerated performance improvements.
Principal Accountabilities/Responsibilities
Design, build, tune, and deploy divisional AI/ML tools that meet the agreed upon functional and non-functional requirements within the framework established by the Enterprise IT and IS departments.
Perform large scale experimentation to identify hidden relationships between different data sets and engineer new features
Communicate model performance, results, and tradeoffs to stakeholders
Determine requirements that will be used to train and evolve deep learning models and algorithms
Visualize information and develop engaging dashboards on the results of data analysis.
Build reports and advanced dashboards to tell stories with the data.
Lead, develop, and deliver divisional strategies that demonstrate the what, why, and how of delivering AI/ML business outcomes
Build and deploy divisional AI strategy and roadmaps that enable long-term success for the organization and align with the Enterprise AI strategy.
Proactively mine data to identify trends and patterns and generate insights for business units and management.
Mentor other stakeholders to grow in their expertise, particularly in AI/ML, and take an active leadership role in divisional executive forums
Work collaboratively with the business to maximize the probability of success of AI projects and initiatives.
Identify technical areas for improvement and present detailed business cases for improvements or new areas of opportunities.
Qualifications/Education/Experience Requirements
PhD or master's degree in Statistics, Mathematics, Computer Science or other relevant discipline.
5+ years of experience using large scale data to solve problems and answer questions.
Prior experience in the Manufacturing Industry.
Skills/Competencies Requirements
Experience in building and deploying predictive models and scalable data pipelines
Demonstrable experience with common data science toolkits, such as Python, PySpark, R, Weka, NumPy, Pandas, scikit-learn, SpaCy/Gensim/NLTK etc.
Knowledge of data warehousing concepts like ETL, dimensional modeling, and semantic/reporting layer design.
Knowledge of emerging technologies such as columnar and NoSQL databases, predictive analytics, and unstructured data.
Fluency in data science, analytics tools, and a selection of machine learning methods - Clustering, Regression, Decision Trees, Time Series Analysis, Natural Language Processing.
Strong problem solving and decision-making skills
Ability to explain deep technical information to non-technical parties
Demonstrated growth mindset, enthusiastic about learning new technologies quickly and applying the gained knowledge to address business problems.
Strong understanding of data governance/management concepts and practices.
Strong background in systems development, including an understanding of project management methodologies and the development lifecycle.
Proven history managing stakeholder relationships.
Business case development.
What We Offer You
We believe that by continuously improving the quality of our benefits, we can help to raise the quality of life for our team members and their families. At First Quality you will receive:
Competitive base salary and bonus opportunities
Paid time off (three-week minimum)
Medical, dental and vision starting day one
401(k) with employer match
Paid parental leave
Child and family care assistance (dependent care FSA with employer match up to $2500)
Bundle of joy benefit (year's worth of free diapers to all team members with a new baby)
Tuition assistance
Wellness program with savings of up to $4,000 per year on insurance premiums
...and more!
First Quality is committed to protecting information under the care of First Quality Enterprises commensurate with leading industry standards and applicable regulations. As such, First Quality provides at least annual training regarding data privacy and security to employees who, as a result of their role specifications, may come into contact with sensitive data.
First Quality is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, sexual orientation, gender identification, or protected Veteran status.
For immediate consideration, please go to the Careers section at ********************
to complete our online application.
Data Architect
Data scientist job in Ridgefield, NJ
Immediate need for a talented Data Architect. This is a 12 month contract opportunity with long-term potential and is located in Basking Ridge, NJ (Hybrid). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-93859
Pay Range: $110 - $120/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key Skills: ETL, LTMC, SaaS.
5 years as a Data Architect
5 years in ETL
3 years in LTMC
Our client is a leader in the Telecom Industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, colour, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Azure Data Engineer
Data scientist job in Weehawken, NJ
· Expert level skills writing and optimizing complex SQL
· Experience with complex data modelling, ETL design, and using large databases in a business environment
· Experience with building data pipelines and applications to stream and process datasets at low latencies
· Fluent with Big Data technologies like Spark, Kafka and Hive
· Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
· Designing and building of data pipelines using API ingestion and Streaming ingestion methods
· Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
· Experience in developing NoSQL solutions using Azure Cosmos DB is essential
· Thorough understanding of Azure and AWS Cloud Infrastructure offerings
· Working knowledge of Python is desirable
· Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
· Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
· Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
· Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
· Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
· Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making.
· Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
· Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging
Best Regards,
Dipendra Gupta
Technical Recruiter
*****************************
Data Engineer
Data scientist job in Hamilton, NJ
Key Responsibilities:
Manage and support batch processes and data pipelines in Azure Databricks and Azure Data Factory.
Integrate and process Bloomberg market data feeds and files into trading or analytics platforms.
Monitor, troubleshoot, and resolve data and system issues related to trading applications and market data ingestion.
Develop, automate, and optimize ETL pipelines using Python, Spark, and SQL.
Manage FTP/SFTP file transfers between internal systems and external vendors.
Ensure data quality, completeness, and timeliness for downstream trading and reporting systems.
Collaborate with operations, application support, and infrastructure teams to resolve incidents and enhance data workflows.
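A minimal sketch of the vendor SFTP automation mentioned above, using the paramiko library; the host, credentials, and file paths are placeholders (and key-based auth would be preferred in practice).

```python
# Illustrative sketch: pull an end-of-day file from a vendor SFTP server.
# Hostname, credentials, and paths are placeholders.
import paramiko

def fetch_vendor_file(remote_path: str, local_path: str) -> None:
    transport = paramiko.Transport(("sftp.vendor.example.com", 22))  # placeholder host
    try:
        transport.connect(username="svc_user", password="***")  # use key auth in practice
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(remote_path, local_path)
        sftp.close()
    finally:
        transport.close()

# fetch_vendor_file("/outbound/eod_prices.csv", "/data/inbound/eod_prices.csv")
```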
Required Skills & Experience:
10+ years of experience in data engineering or production support within financial services or trading environments.
Hands-on experience with Azure Databricks, Azure Data Factory, Azure Storage, Logic Apps, and Fabric.
Strong Python and SQL programming skills.
Experience with Bloomberg data feeds (BPIPE, TSIP, SFTP).
Experience with Git, CI/CD pipelines, and Azure DevOps.
Proven ability to support batch jobs, troubleshoot failures, and manage job scheduling.
Experience handling FTP/SFTP file transfers and automation (e.g., using scripts or managed file transfer tools).
Solid understanding of equities trading, fixed income trading, trading workflows, and financial instruments.
Excellent communication, problem-solving, and stakeholder management skills.
Data Engineer (IoT)
Data scientist job in Pittsburgh, PA
As an IoT Data Engineer at CurvePoint, you will design, build, and optimize the data pipelines that power our Wi-AI sensing platform. Your work will focus on reliable, low-latency data acquisition from constrained on-prem IoT devices, efficient buffering and streaming, and scalable cloud-based storage and training workflows.
You will own how raw sensor data (e.g., wireless CSI, video, metadata) moves from edge devices with limited disk and compute into durable, well-structured datasets used for model training, evaluation, and auditability. You will work closely with hardware, ML, and infrastructure teams to ensure our data systems are fast, resilient, and cost-efficient at scale.
Duties and Responsibilities
Edge & On-Prem Data Acquisition
Design and improve data capture pipelines on constrained IoT devices and host servers (limited disk, intermittent connectivity, real-time constraints).
Implement buffering, compression, batching, and backpressure strategies to prevent data loss.
Optimize data transfer from edge → on-prem host → cloud.
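A minimal sketch of one buffering-with-backpressure pattern for a constrained device: a bounded queue makes the capture side block rather than exhaust disk or memory. The queue depth and batch size here are assumptions, not platform specifics.

```python
# Illustrative sketch: bounded buffering with backpressure on an edge device.
import queue
import threading
import time

buffer: queue.Queue = queue.Queue(maxsize=1000)  # bounded queue applies backpressure

def capture_loop():
    """Producer: put() blocks when the buffer is full, so capture slows
    down instead of exhausting disk/memory or silently dropping samples."""
    seq = 0
    while True:
        buffer.put({"seq": seq, "ts": time.time()})  # stand-in for a sensor read
        seq += 1

def upload_loop(batch_size: int = 100):
    """Consumer: drain the buffer in batches for efficient edge-to-host transfer."""
    batch = []
    while True:
        batch.append(buffer.get())
        if len(batch) >= batch_size:
            print(f"shipping {len(batch)} samples")  # stand-in for compress + send
            batch = []

threading.Thread(target=capture_loop, daemon=True).start()
threading.Thread(target=upload_loop, daemon=True).start()
time.sleep(1)
```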
Streaming & Ingestion Pipelines
Build and maintain streaming or near-real-time ingestion pipelines for sensor data (e.g., CSI, video, logs, metadata).
Ensure data integrity, ordering, and recoverability across failures.
Design mechanisms for replay, partial re-ingestion, and audit trails.
Cloud Data Pipelines & Storage
Own cloud-side ingestion, storage layout, and lifecycle policies for large time-series datasets.
Balance cost, durability, and performance across hot, warm, and cold storage tiers.
Implement data versioning and dataset lineage to support model training and reproducibility.
Training Data Enablement
Structure datasets to support efficient downstream ML training, evaluation, and experimentation.
Work closely with ML engineers to align data formats, schemas, and sampling strategies with training needs.
Build tooling for dataset slicing, filtering, and validation.
Reliability & Observability
Add monitoring, metrics, and alerts around data freshness, drop rates, and pipeline health.
Debug pipeline failures across edge, on-prem, and cloud environments.
Continuously improve system robustness under real-world operating conditions.
Cross-Functional Collaboration
Partner with hardware engineers to understand sensor behavior and constraints.
Collaborate with ML engineers to adapt pipelines as model and data requirements evolve.
Contribute to architectural decisions as the platform scales from pilots to production deployments.
Must Haves
Bachelor's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience).
3+ years of experience as a Data Engineer or Backend Engineer working with production data pipelines.
Strong Python skills; experience building reliable data processing systems.
Hands-on experience with streaming or near-real-time data ingestion (e.g., Kafka, Kinesis, MQTT, custom TCP/UDP pipelines).
Experience working with on-prem systems or edge/IoT devices, including disk, bandwidth, or compute constraints.
Familiarity with cloud storage and data lifecycle management (e.g., S3-like object stores).
Strong debugging skills across distributed systems.
Nice to Have
Experience with IoT or sensor data (RF/CSI, video, audio, industrial telemetry).
Familiarity with data compression, time-series formats, or binary data handling.
Experience supporting ML training pipelines or large-scale dataset management.
Exposure to containerized or GPU-enabled data processing environments.
Knowledge of data governance, retention, or compliance requirements.
Location
Pittsburgh, PA (hybrid preferred; some on-site work with hardware teams)
Salary
$110,000 - $135,000 / year (depending on experience and depth in streaming + IoT systems)
Data Engineer
Data scientist job in Fort Lee, NJ
The Senior Data Analyst will be responsible for developing MS SQL queries and procedures, building custom reports, and modifying ERP user forms to support and enhance organizational productivity. This role will also design and maintain databases, ensuring high levels of stability, reliability, and performance.
Responsibilities
Analyze, structure, and interpret raw data.
Build and maintain datasets for business use.
Design and optimize database tables, schemas, and data structures.
Enhance data accuracy, consistency, and overall efficiency.
Develop views, functions, and stored procedures.
Write efficient SQL queries to support application integration.
Create database triggers to support automation processes.
Oversee data quality, integrity, and database security.
Translate complex data into clear, actionable insights.
Collaborate with cross-functional teams on multiple projects.
Present data through graphs, infographics, dashboards, and other visualization methods.
Define and track KPIs to measure the impact of business decisions.
Prepare reports and presentations for management based on analytical findings.
Conduct daily system maintenance and troubleshoot issues across all platforms.
Perform additional ad hoc analysis and tasks as needed.
Qualifications
Bachelor's Degree in Information Technology or a relevant field
4+ years of experience as a Data Analyst or Data Engineer, including database design experience.
Strong ability to extract, manipulate, analyze, and report on data, as well as develop clear and effective presentations.
Proficiency in writing complex SQL queries, including table joins, data aggregation (SUM, AVG, COUNT), and creating, retrieving, and updating views.
Excellent written, verbal, and interpersonal communication skills.
Ability to manage multiple tasks in a fast-paced and evolving environment.
Strong work ethic, professionalism, and integrity.
Advanced proficiency in Microsoft Office applications.
AWS Data engineer with Databricks || USC Only || W2 Only
Data scientist job in Princeton, NJ
AWS Data Engineer with Databricks
Princeton, NJ - Hybrid - Need Locals or Nearby
Duration: Long Term
This role is available only to U.S. citizens.
Key Responsibilities
Design and implement ETL/ELT pipelines with Databricks, Apache Spark, AWS Glue, S3, Redshift, and EMR for processing large-scale structured and unstructured data.
Optimize data flows, monitor performance, and troubleshoot issues to maintain reliability and scalability.
Collaborate on data modeling, governance, security, and integration with tools like Airflow or Step Functions.
Document processes and mentor junior team members on best practices.
Required Qualifications
Bachelor's degree in Computer Science, Engineering, or related field.
5+ years of data engineering experience, with strong proficiency in Databricks, Spark, Python, SQL, and AWS services (S3, Glue, Redshift, Lambda).
Familiarity with big data tools like Kafka, Hadoop, and data warehousing concepts.
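By way of illustration for this posting, a minimal, generic PySpark ETL step (not Glue-specific): read raw JSON from S3, clean it, and write partitioned Parquet. Bucket paths and column names are placeholders.

```python
# Illustrative PySpark sketch of a simple ETL step: S3 JSON -> cleaned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.json("s3://raw-bucket/events/")  # placeholder path
cleaned = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-bucket/events/"  # placeholder path
)
```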
Azure Data Engineer
Data scientist job in Jersey City, NJ
Title: Senior Azure Data Engineer
Client: Major Japanese Bank
Experience Level: Senior (10+ Years)
The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.
Key Responsibilities:
Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
Ensure data security, compliance, lineage, and governance controls.
Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
Troubleshoot performance issues and improve system efficiency.
Required Skills:
10+ years of data engineering experience.
Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
Azure certifications strongly preferred.
Strong SQL, Python, and cloud data architecture skills.
Experience in financial services or large enterprise environments preferred.
Hadoop Data Engineer
Data scientist job in Pittsburgh, PA
About the job:
We are seeking an accomplished Tech Lead - Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment.
As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.
Key Roles & Responsibilities
Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
Design and implement robust data frameworks to support next-generation customer and business product launches.
Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.
Must-Have Skills
8-10 years of experience in data engineering, with at least 2-3 years in a technical leadership role.
Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
Experience working with Avro, Parquet, or other serialization formats.
Proven ability to design and maintain ETL / ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
Proficiency in Python and Scala for large-scale data processing.
Strong understanding of data modeling, data warehousing, and data lake architectures.
Hands-on experience with SQL and both relational and NoSQL data stores.
Cloud data platform experience with AWS.
Deep understanding of data security, compliance, and governance frameworks.
Excellent problem-solving, communication, and leadership skills.
Senior Data Engineer (Snowflake)
Data scientist job in Parsippany-Troy Hills, NJ
Senior Data Engineer (Snowflake & Python)
1-Year Contract | $60/hour + Benefit Options
Hybrid: On-site a few days per month (local candidates only)
Work Authorization Requirement
You must be authorized to work for any employer as a W2 employee. This is required for this role.
This position is W-2 only - no C2C, no third-party submissions, and no sponsorship will be considered.
Overview
We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake.
Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.
What You'll Do
Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
Participate across the full software development lifecycle - planning, requirements, development, testing, and QA
Partner closely with engineering and data teams to identify and implement optimal technical solutions
Build and maintain high-performance, scalable data pipelines and data warehouse architectures
Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
Manage deliverables and priorities effectively in a fast-moving environment
Contribute to data governance practices including metadata management and data lineage
Support analytics and reporting use cases leveraging advanced SQL and analytical functions
Required Skills & Experience
8+ years of experience designing and developing data solutions in an enterprise environment
5+ years of hands-on Snowflake experience
Strong hands-on development skills with SQL and Python
Proven experience designing and developing data warehouses in Snowflake
Ability to diagnose, optimize, and tune SQL queries
Experience with Azure data frameworks (e.g., Azure Data Factory)
Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
Solid understanding of metadata management and data lineage
Hands-on experience with SQL analytical functions (see the sketch after this list)
Working knowledge of shell scripting and JavaScript
Experience using Git, Confluence, and Jira
Strong problem-solving and troubleshooting skills
Collaborative mindset with excellent communication skills
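As a small illustration of the SQL analytical functions called out above, here is a sketch that runs a windowed query from Python via the Snowflake connector; the account, credentials, and table are placeholders, not client details.

```python
# Illustrative sketch: a windowed (analytical) query via snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # placeholder account
    user="ETL_USER",          # placeholder credentials
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("""
    SELECT order_date,
           amount,
           SUM(amount) OVER (ORDER BY order_date
                             ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS amt_7d
    FROM orders  -- placeholder table
""")
for row in cur.fetchmany(5):
    print(row)
conn.close()
```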
Nice to Have
Experience supporting Pharma industry data
Exposure to Omni-channel data environments
Why This Opportunity
$60/hour W2 on a long-term 1-year contract
Benefit options available
Hybrid structure with limited on-site requirement
High-impact role supporting enterprise data initiatives
Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp
This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
Senior Data Engineer
Data scientist job in Philadelphia, PA
Full-time Perm
Remote - EAST COAST ONLY
Role open to US Citizens and Green Card Holders only
We're looking for a Senior Data Engineer to lead the design, build, and optimization of modern data pipelines and cloud-native data infrastructure. This role is ideal for someone who thrives on solving complex data challenges, improving systems at scale, and collaborating across technical and business teams to deliver high-impact solutions.
What You'll Do
Architect, develop, and maintain scalable, secure data infrastructure supporting analytics, reporting, and operational workflows.
Design and optimize ETL/ELT pipelines to integrate data from diverse internal and external sources.
Prepare and transform structured and unstructured data to support modeling, reporting, and advanced analysis.
Improve data quality, reliability, and performance across platforms and workflows.
Monitor pipelines, troubleshoot discrepancies, and ensure accuracy and timely data delivery.
Identify architectural bottlenecks and drive long-term scalability improvements.
Collaborate with Product, BI, Finance, and engineering teams to build end-to-end data solutions.
Prototype algorithms, transformations, and automation tools to accelerate insights.
Lead cloud-native workflow design, including logging, monitoring, and storage best practices.
Create and maintain high-quality technical documentation.
Contribute to Agile ceremonies, engineering best practices, and continuous improvement initiatives.
Mentor teammates and guide adoption of data platform tools and patterns.
Participate in on-call rotation to maintain platform stability and availability.
What You Bring
Bachelor's degree in Computer Science or related technical field.
4+ years of advanced SQL experience (Oracle, PostgreSQL, etc.).
4+ years working with Java or Groovy.
3+ years integrating with SOAP or REST APIs.
2+ years with DBT and data modeling.
Strong understanding of modern data architectures, distributed systems, and performance optimization.
Experience with Snowflake or similar cloud data platforms (preferred).
Hands-on experience with Git, Jenkins, CI/CD, and automation/testing practices.
Solid grasp of cloud concepts and cloud-native engineering.
Excellent problem-solving, communication, and cross-team collaboration skills.
Ability to lead projects, own solutions end-to-end, and influence technical direction.
Proactive mindset with strong analytical and consultative abilities.
Data Engineer
Data scientist job in Jersey City, NJ
ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES
Skillset: Data Engineer
Must Haves: Python, PySpark, AWS - ECS, Glue, Lambda, S3
Nice to Haves: Java, Spark, React Js
Interview Process: 2 rounds; the 2nd will be onsite
You're ready to gain the skills and experience needed to grow within your role and advance your career - and we have the perfect software engineering opportunity for you.
As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.
Job responsibilities:
• Supports review of controls to ensure sufficient protection of enterprise data.
• Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
• Updates logical or physical data models based on new use cases.
• Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
• Adds to team culture of diversity, opportunity, inclusion, and respect.
• Develop enterprise data models; design, develop, and maintain large-scale data processing pipelines (and infrastructure); lead code reviews and provide mentoring through the process; drive data quality; ensure data accessibility (to analysts and data scientists); ensure compliance with data governance requirements; and ensure business alignment (data engineering practices align with business goals).
Required qualifications, capabilities, and skills
• Formal training or certification on data engineering concepts and 2+ years applied experience
• Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
• Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
• Extensive experience in AWS, design, implementation, and maintenance of data pipelines using Python and PySpark.
• Proficient in Python and PySpark, able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional).
• Proven experience in performance and tuning to ensure jobs are running at optimal levels and no performance bottleneck.
• Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs (see the sketch after this list)
• Advanced proficiency in cloud data lakehouse platforms such as AWS data lake services, Databricks or Hadoop, relational data stores such as Postgres, Oracle or similar, and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB or similar
• Advanced proficiency in Cloud Data Warehouse Snowflake, AWS Redshift
• Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions or similar
• Proficiency in Unix scripting, data structures, data serialization formats such as JSON, AVRO, Protobuf, or similar, big-data storage formats such as Parquet, Iceberg, or similar, data processing methodologies such as batch, micro-batching, or stream, one or more data modelling techniques such as Dimensional, Data Vault, Kimball, Inmon, etc., Agile methodology, TDD or BDD and CI/CD tools.
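For illustration of the Gen AI bullet above, a minimal sketch using the Anthropic Python SDK; the model id and prompt are placeholders, not details from the posting.

```python
# Illustrative sketch: calling an Anthropic model via the official SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model id
    max_tokens=256,
    messages=[{
        "role": "user",
        "content": "Classify this data-quality alert as noise or actionable: ...",
    }],
)
print(message.content[0].text)
```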
Preferred qualifications, capabilities, and skills
• Knowledge of data governance and security best practices.
• Experience in carrying out data analysis to support business insights.
• Strong Python and Spark
Senior Data Engineer
Data scientist job in New Providence, NJ
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies - in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences - to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.
Job Description
Experienced Data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
Work in tandem with our engineering team to identify and implement the most optimal solutions
Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
Able to manage deliverables in fast-paced environments
Areas of Expertise
At least 10 years of experience designing and developing data solutions in an enterprise environment
At least 5+ years' experience on Snowflake Platform
Strong hands-on SQL and Python development
Experience with designing and developing data warehouses in Snowflake
A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark and Scala
Strong hands-on experience with Orchestration Tools e.g. Airflow, Informatica, Automic
Good understanding of metadata and data lineage
Hands-on knowledge on SQL Analytical functions
Strong knowledge and hands-on experience in shell scripting and JavaScript
Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering.
Good understanding and exposure to Git, Confluence and Jira
Good problem solving and troubleshooting skills.
Team player, collaborative approach and excellent communication skills
Our Commitment to Diversity & Inclusion:
Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read our Job Applicant Privacy Policy at apexon.com.
Azure data engineer
Data scientist job in Pittsburgh, PA
Job Title - Databricks Data Engineer
**Must have 8+ years of real hands on experience**
We are specifically seeking a Data Engineer-Lead with strong expertise in Databricks development.
The role involves:
Building and testing data pipelines using Python/Scala on Databricks
Hands-on development experience, including leading an offshore team performing development/testing work in Azure Databricks
Architect data platforms using Azure services such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and PySpark.
Collaborate with stakeholders to understand business needs and translate them into technical solutions.
Provide technical leadership and guidance to the data engineering team while performing hands-on development.
Familiarity with SAFe (Scaled Agile) concepts; working experience in an agile model is a plus.
Develop and maintain data pipelines for efficient data movement and transformation.
Onsite and offshore team communication and co-ordination.
Create and update the documentation to facilitate cross-training and troubleshooting
Hands-on experience with scheduling tools like BMC Control-M, including setting up jobs and testing schedules.
Understand data models and schemas to support development work and help create tables in Databricks
Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI and other Azure data tools.
Implementing automated data validation frameworks such as Great Expectations or Deequ
Reconciling large-scale datasets
Ensuring data reliability across both batch and streaming processes
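As a lightweight stand-in for the checks that frameworks like Great Expectations or Deequ automate, here is a plain-PySpark validation and reconciliation sketch; the table names and rules are hypothetical.

```python
# Illustrative sketch: simple data validation/reconciliation checks in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("curated.orders")  # placeholder table

checks = {
    "no_null_ids": df.filter(F.col("order_id").isNull()).count() == 0,
    "positive_amounts": df.filter(F.col("amount") <= 0).count() == 0,
    "row_count_matches_source": df.count() == spark.table("raw.orders").count(),
}
failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data validation failed: {failed}")
```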
The ideal candidate will have hands-on experience with:
PySpark, Scala, Delta Lake, and Unity Catalog
DevOps CI/CD automation
Cloud-native data services
Azure Databricks/Oracle
BMC Control-M
Location: Pittsburgh, PA
Data Engineer
Data scientist job in Newark, NJ
NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.
Role Description
This is a full-time, hybrid, Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.
Key Responsibilities
Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
Data Integration: Integrate and transform data using industry-standard tools. Experience required with:
AWS Services: AWS Glue, Data Pipeline, Redshift, and S3.
Azure Services: Azure Data Factory, Synapse Analytics, and Blob Storage.
Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.
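To make the pipeline work concrete, here is a minimal sketch of an API-to-warehouse ETL step of the kind described above; the endpoint, payload shape, and connection string are placeholders, not NeenOpal systems.

```python
# Illustrative sketch: pull JSON from a REST API, normalize it with pandas,
# and load a staging table. URL, schema, and DSN are placeholders.
import pandas as pd
import requests
from sqlalchemy import create_engine

resp = requests.get("https://api.example.com/v1/orders", timeout=30)  # placeholder
resp.raise_for_status()

df = pd.json_normalize(resp.json()["results"])  # placeholder payload shape
df["loaded_at"] = pd.Timestamp.now(tz="UTC")

engine = create_engine("postgresql://etl:***@warehouse-host/analytics")  # placeholder
df.to_sql("orders_stage", engine, if_exists="append", index=False)
```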
Required Skills and Experience
Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
Integration: Experience integrating data via RESTful / GraphQL APIs.
Programming: Proficient in Python for ETL automation and SQL for database management.
Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
Authorization: Must have valid work authorization in the United States.
Salary Range: $65,000-$80,000 per year
Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.
Equal Opportunity Employer NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.