
Data scientist jobs in Scranton, PA

- 2,585 jobs
  • Principal Biostatistician

    CSL Behring (4.6 company rating)

    Data scientist job in King of Prussia, PA

    CSL's R&D organization is accelerating innovation to deliver greater impact for patients. With a project-led structure and a focus on collaboration, we're building a future-ready team that thrives in dynamic biotech ecosystems. Joining CSL now means being part of an agile team committed to developing therapies that make a meaningful difference worldwide. Could you be our next Principal Biostatistician?

    The job is based in our King of Prussia, PA; Waltham, MA; and Maidenhead, UK offices. This is a hybrid position, onsite three days a week. You will report to the Director of Biostatistics. You will lead components of the statistical contribution to a clinical development program. The Principal Biostatistician implements the statistical strategies for the clinical trials and regulatory submissions within the program and is accountable for the statistical deliverables.

    Main Responsibilities:
    - Provide input to statistical strategy and ensure appropriate statistical methodologies are applied to study design and data analysis for clinical trials and regulatory submissions.
    - Lead components of, and fully support, Biostatistics conduct in study design, protocol development, data collection, data analysis, reporting, and submission preparation.
    - Author the initial statistical analysis plan for clinical trials and regulatory submissions; be accountable for its timely completion and quality.
    - Support Biostatistics interactions with regulatory authorities (e.g., FDA, EMA, PMDA).
    - Interpret analysis results and ensure reporting accuracy.
    - Manage outsourcing operations or work with internal statistical programmers within the responsible projects; ensure the timeliness and quality of CRO/FSP deliverables and conduct reviews of deliverables to ensure quality.
    - Be accountable for the TFL/CDISC package for study reports and regulatory submissions.
    - Provide statistical thought partnership for innovative study designs and clinical development plans, including Go/No-Go criteria and probability of technical success calculations.

    Qualifications and Experience Requirements:
    - PhD or MS in Biostatistics or Statistics
    - 7+ years of relevant work experience
    - Experience with CROs (either managing a CRO or having worked in one)
    - Experience providing statistical leadership at the study level
    - Demonstrated statistical contribution to facilitating and optimizing clinical development

    #LI-HYBRID

    Our Benefits: CSL employees who work at least 30 hours per week are eligible for benefits effective day 1. We are committed to the wellbeing of our employees and their loved ones. CSL offers resources and benefits, from health care to financial protection, so you can focus on doing work that matters. Our benefits are designed to support the needs of our employees at every stage of their life. Whether you are considering starting a family, need help paying for emergency backup care or summer camp, are looking for mental health resources, are planning for your financial future, or are supporting your favorite charity with a matching contribution, CSL has many benefits to help you achieve your goals. Please take the time to review our benefits site to see what's available to you as a CSL employee.

    About CSL Behring: CSL Behring is a global biotherapeutics leader driven by our promise to save lives. Focused on serving patients' needs by using the latest technologies, we discover, develop, and deliver innovative therapies for people living with conditions in the immunology, hematology, cardiovascular and metabolic, respiratory, and transplant therapeutic areas. We use three strategic scientific platforms (plasma fractionation, recombinant protein technology, and cell and gene therapy) to support continued innovation and continually refine ways in which products can address unmet medical needs and help patients lead full lives. CSL Behring operates one of the world's largest plasma collection networks, CSL Plasma. Our parent company, CSL, headquartered in Melbourne, Australia, employs 32,000 people and delivers its lifesaving therapies to people in more than 100 countries.

    We want CSL to reflect the world around us. At CSL, Inclusion and Belonging is at the core of our mission and who we are. It fuels our innovation day in and day out. By celebrating our differences and creating a culture of curiosity and empathy, we are able to better understand and connect with our patients and donors, foster strong relationships with our stakeholders, and sustain a diverse workforce that will move our company and industry into the future. Learn more: Inclusion and Belonging | CSL. Do work that matters at CSL Behring!
    $83k-118k yearly est. 3d ago
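For context on listings like the CSL role above, a "probability of technical success" calculation is often done by averaging a trial's power over a prior on the true treatment effect (sometimes called assurance). The sketch below is a minimal illustration of that idea; every parameter is hypothetical and not taken from the posting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a normal prior on the true treatment effect,
# per-arm sample size, known outcome SD, two-sided alpha = 0.05.
prior_mean, prior_sd = 0.4, 0.2
n_per_arm, sigma = 100, 1.0
z_crit = 1.96

n_sims = 100_000
true_effects = rng.normal(prior_mean, prior_sd, n_sims)  # draw effects from the prior
se = sigma * np.sqrt(2 / n_per_arm)                      # SE of the mean difference
estimates = rng.normal(true_effects, se)                 # simulated trial estimates
assurance = np.mean(estimates / se > z_crit)             # share of significant trials

print(f"Probability of technical success (assurance): {assurance:.2%}")
```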
  • Data Scientist

    Insight Global

    Data scientist job in Camden, NJ

    Title: Data Scientist
    Duration: Direct Hire
    Schedule: Hybrid (Mon/Fri WFH, onsite Tues-Thurs)
    Interview Process: 2 rounds, virtual (2nd/final round is a case study)
    Salary Range: $95-120k/yr (with benefits)

    Must haves:
    - 1 year minimum of professional/post-grad Data Scientist experience, with knowledge across areas such as Machine Learning, NLP, and LLMs
    - Proficiency in Python and SQL for data manipulation and pipeline development
    - Strong communication skills for stakeholder engagement
    - Bachelor's Degree

    Plusses:
    - Master's Degree
    - Azure experience (and/or other MS tools)
    - Experience working with healthcare data, preferably from Epic
    - Strong skills in data visualization, dashboard design, and interpreting complex datasets

    Day to Day: We are seeking a Data Scientist to join our client's analytics team. This role focuses on leveraging advanced analytics techniques to drive clinical and business decision-making. You will work with healthcare data to build predictive models, apply machine learning and NLP methods, and optimize data pipelines. The ideal candidate combines strong technical skills with the ability to communicate insights effectively to stakeholders.

    Key Responsibilities:
    - Develop and implement machine learning models for predictive analytics and clinical decision support.
    - Apply NLP and LLM techniques to extract insights from structured and unstructured data.
    - Build and optimize data pipelines using Python and SQL for ETL processes.
    - Preprocess and clean datasets to support analytics initiatives.
    - Collaborate with stakeholders to understand data needs and deliver actionable insights.
    - Interpret complex datasets and provide clear, data-driven recommendations.
    $95k-120k yearly 2d ago
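Postings like the one above center on predictive models over clinical data. As a generic illustration only (synthetic data and hypothetical features, not the client's actual stack beyond Python):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for clinical features (e.g., age, a lab value, prior visits).
X = rng.normal(size=(1_000, 3))
signal = X @ np.array([0.8, -0.5, 0.3])
y = (signal + rng.normal(scale=1.0, size=1_000) > 0).astype(int)  # binary outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```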
  • Data Scientist

    First Quality (4.7 company rating)

    Data scientist job in Lewistown, PA

    Founded over 35 years ago, First Quality is a family-owned company that has grown from a small business in McElhattan, Pennsylvania into a group of companies, employing over 5,000 team members, while maintaining our family values and entrepreneurial spirit. With corporate offices in New York and Pennsylvania and 8 manufacturing campuses across the U.S. and Canada, the companies within the First Quality group produce high-quality personal care and household products for large retailers and healthcare organizations. Our personal care and household product portfolio includes baby diapers, wipes, feminine pads, paper towels, bath tissue, adult incontinence products, laundry detergents, fabric finishers, and dishwash solutions. In addition, we manufacture certain raw materials and components used in the manufacturing of these products, including flexible print and packaging solutions. Guided by our values of humility, unity, and integrity, we leverage advanced technology and innovation to drive growth and create new opportunities. At First Quality, you'll find a collaborative environment focused on continuous learning, professional development, and our mission to Make Things Better.

    We are seeking a Data Scientist for our First Quality facilities located in McElhattan, PA; Lewistown, PA; and Macon, GA. **Must have manufacturing experience with consumer goods.** The role will provide meaningful insight into how to improve our current business operations. This position will work closely with domain experts and SMEs to understand the business problem or opportunity and assess the potential of machine learning to enable accelerated performance improvements.

    Principal Accountabilities/Responsibilities:
    - Design, build, tune, and deploy divisional AI/ML tools that meet the agreed-upon functional and non-functional requirements within the framework established by the Enterprise IT and IS departments.
    - Perform large-scale experimentation to identify hidden relationships between different data sets and engineer new features.
    - Communicate model performance, results, and tradeoffs to stakeholders.
    - Determine requirements that will be used to train and evolve deep learning models and algorithms.
    - Visualize information and develop engaging dashboards on the results of data analysis; build reports and advanced dashboards to tell stories with the data.
    - Lead, develop, and deliver divisional strategies to demonstrate the what, why, and how of delivering AI/ML business outcomes.
    - Build and deploy a divisional AI strategy and roadmaps that enable long-term success for the organization and align with the Enterprise AI strategy.
    - Proactively mine data to identify trends and patterns and generate insights for business units and management.
    - Mentor other stakeholders to grow in their expertise, particularly in AI/ML, and take an active leadership role in divisional executive forums.
    - Work collaboratively with the business to maximize the probability of success of AI projects and initiatives.
    - Identify technical areas for improvement and present detailed business cases for improvements or new areas of opportunity.

    Qualifications/Education/Experience Requirements:
    - PhD or master's degree in Statistics, Mathematics, Computer Science, or other relevant discipline.
    - 5+ years of experience using large-scale data to solve problems and answer questions.
    - Prior experience in the manufacturing industry.

    Skills/Competencies Requirements:
    - Experience building and deploying predictive models and scalable data pipelines.
    - Demonstrable experience with common data science toolkits, such as Python, PySpark, R, Weka, NumPy, Pandas, scikit-learn, and SpaCy/Gensim/NLTK.
    - Knowledge of data warehousing concepts like ETL, dimensional modeling, and semantic/reporting layer design.
    - Knowledge of emerging technologies such as columnar and NoSQL databases, predictive analytics, and unstructured data.
    - Fluency in data science, analytics tools, and a selection of machine learning methods: clustering, regression, decision trees, time series analysis, natural language processing.
    - Strong problem-solving and decision-making skills.
    - Ability to explain deep technical information to non-technical parties.
    - Demonstrated growth mindset; enthusiastic about learning new technologies quickly and applying the gained knowledge to address business problems.
    - Strong understanding of data governance/management concepts and practices.
    - Strong background in systems development, including an understanding of project management methodologies and the development lifecycle.
    - Proven history managing stakeholder relationships.
    - Business case development.

    What We Offer You: We believe that by continuously improving the quality of our benefits, we can help to raise the quality of life for our team members and their families. At First Quality you will receive:
    - Competitive base salary and bonus opportunities
    - Paid time off (three-week minimum)
    - Medical, dental, and vision starting day one
    - 401(k) with employer match
    - Paid parental leave
    - Child and family care assistance (dependent care FSA with employer match up to $2,500)
    - Bundle of joy benefit (a year's worth of free diapers for all team members with a new baby)
    - Tuition assistance
    - Wellness program with savings of up to $4,000 per year on insurance premiums
    - ...and more!

    First Quality is committed to protecting information under the care of First Quality Enterprises commensurate with leading industry standards and applicable regulations. As such, First Quality provides at least annual training regarding data privacy and security to employees who, as a result of their role specifications, may come into contact with sensitive data. First Quality is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, sexual orientation, gender identification, or protected Veteran status. For immediate consideration, please go to the Careers section at ******************** to complete our online application.
    $57k-73k yearly est. 2d ago
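The First Quality posting names clustering among its expected ML methods for manufacturing data. A minimal sketch of that technique on synthetic sensor readings (all numbers invented):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Synthetic line-sensor readings: temperature (F), speed (rpm), vibration (g).
readings = np.vstack([
    rng.normal([70, 1200, 0.2], [2, 50, 0.05], size=(500, 3)),  # normal operation
    rng.normal([85, 1100, 0.6], [3, 80, 0.10], size=(50, 3)),   # off-nominal regime
])

scaled = StandardScaler().fit_transform(readings)  # scale before distance-based clustering
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print("Cluster sizes:", np.bincount(labels))
```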
  • Senior Data Scientist

    Entech (4.0 company rating)

    Data scientist job in Plainfield, NJ

    Data Scientist - Pharmaceutical Analytics (PhD)
    1-Year Contract | Hybrid | Plainfield, NJ

    We're looking for a PhD-level Data Scientist with experience in the pharmaceutical industry and expertise working with commercial data sets (IQVIA, claims, prescription data). This role will drive insights that shape drug launches, market access, and patient outcomes.

    What You'll Do:
    - Apply machine learning and advanced analytics to pharma commercial data
    - Deliver insights on market dynamics, physician prescribing, and patient behavior
    - Partner with R&D, medical affairs, and commercial teams to guide strategy
    - Build predictive models for sales effectiveness, adherence, and market forecasting

    What We're Looking For:
    - PhD in Data Science, Statistics, Computer Science, Bioinformatics, or related field
    - 5+ years of pharma or healthcare analytics experience
    - Strong skills in enterprise-class software stacks and cloud computing
    - Deep knowledge of pharma market dynamics and healthcare systems
    - Excellent communication skills to translate data into strategy
    $84k-120k yearly est. 1d ago
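One concrete task behind the "adherence" analytics in pharma commercial data (as in the Entech role above) is computing a medication possession ratio from fill records. A minimal pandas sketch with invented records; real IQVIA/claims layouts differ:

```python
import pandas as pd

# Hypothetical prescription-fill records.
fills = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "fill_date": pd.to_datetime(
        ["2024-01-01", "2024-02-05", "2024-03-10", "2024-01-15", "2024-04-01"]),
    "days_supply": [30, 30, 30, 30, 30],
})

# Medication possession ratio: total days supplied / days observed per patient.
span = fills.groupby("patient_id")["fill_date"].agg(["min", "max"])
observed_days = (span["max"] - span["min"]).dt.days + 30  # count the last fill's supply
supplied = fills.groupby("patient_id")["days_supply"].sum()
mpr = (supplied / observed_days).clip(upper=1.0)
print(mpr.round(2))  # patient 1: ~0.91, patient 2: ~0.56
```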
  • Reinsurer Actuary

    BJRC Recruiting

    Data scientist job in New York, NY

    Pricing Reinsurance Actuary - New York, USA

    Our Client: Our client is a modern reinsurance company specializing in data-driven portfolios of Property & Casualty (P&C) risk. The firm's approach combines deep industry relationships, a sustainable capital structure, and a partner-first philosophy to deliver efficient and innovative reinsurance solutions. As the company continues to grow, Northern is seeking a Pricing Actuary to strengthen its technical and analytical capabilities within the underwriting team. This is a unique opportunity to join a collaborative and forward-thinking environment that values expertise, precision, and long-term partnerships.

    Responsibilities:
    - Lead the pricing of reinsurance treaties, with a primary focus on U.S. Casualty Programs, and expand to other lines as the portfolio grows.
    - Provide core input into transactional and portfolio-level underwriting decisions.
    - Conduct peer reviews of pricing models and assumptions prepared by other team members.
    - Support quarterly loss ratio analyses to monitor portfolio performance, identify trends, and inform capital management strategies.
    - Contribute to the annual reserve study process and present insights to the executive leadership team.
    - Execute benchmark parameter studies and communicate findings to management to support strategic planning.
    - Participate in broker meetings and client visits, helping to structure and negotiate optimal treaty terms.
    - Enhance portfolio monitoring and efficiency through process improvement, automation, and the development of analytical tools and dashboards.
    - Some travel may be required.

    Qualifications:
    - Degree in Actuarial Science, Mathematics, Statistics, or a related field.
    - FCAS or ACAS designation (or international equivalent) strongly preferred.
    - Minimum of 8-10 years of actuarial experience in Property & Casualty reinsurance or insurance pricing.
    - Proven expertise in casualty and specialty lines, including professional liability, D&O, and E&O.
    - Strong analytical and problem-solving skills, with demonstrated experience in data modeling, pricing tools, and portfolio analytics.
    - Excellent communication and collaboration abilities, capable of working closely with underwriting, finance, and executive teams.

    The Ideal Profile: A technically strong, business-minded actuary who thrives in a dynamic, collaborative, and data-driven environment. Recognized for analytical rigor, attention to detail, and the ability to translate complex models into actionable insights. Values teamwork, intellectual curiosity, and innovation in solving industry challenges. Seeks to contribute meaningfully to a growing, entrepreneurial organization shaping the future of reinsurance.

    Why Join the Team: This is a rare opportunity to join a modern, high-growth reinsurer where your analytical insight and strategic thinking will directly influence underwriting decisions and company performance. Northern offers a collaborative culture, flexible hybrid work model, and the chance to shape cutting-edge reinsurance strategies in partnership with leading underwriters and actuaries.

    REF# WEB1429
    $91k-142k yearly est. 23h ago
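Treaty pricing of the kind described above often rests on frequency-severity simulation. A toy excess-of-loss example (all parameters hypothetical, not from the posting):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layer: $4M excess of $1M; Poisson claim counts,
# lognormal ground-up severities.
n_years, lam = 100_000, 3.5
mu, sigma = 12.5, 1.6
attach, limit = 1e6, 4e6

layer_losses = np.zeros(n_years)
for i, n in enumerate(rng.poisson(lam, n_years)):
    gross = rng.lognormal(mu, sigma, n)
    layer_losses[i] = np.clip(gross - attach, 0, limit).sum()  # loss to the layer

print(f"Expected annual layer loss: {layer_losses.mean():,.0f}")
print(f"99th percentile:            {np.percentile(layer_losses, 99):,.0f}")
```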
  • Data & Performance Analytics (Hedge Fund)

    Coda Search│Staffing

    Data scientist job in New York, NY

    Our client is a $28B NY-based multi-strategy hedge fund currently seeking to add a talented Associate to their Data & Performance Analytics Team. This individual will work closely with senior managers across finance, investment management, operations, technology, investor services, compliance/legal, and marketing.

    Responsibilities:
    - Compile periodic fund performance analyses.
    - Review and analyze portfolio performance data, benchmark performance, and risk statistics.
    - Review and make necessary adjustments to client quarterly reports to ensure reports are sent out in a timely manner.
    - Work with team members at all levels across the organization to help coordinate data feeds for various internal and external databases, ensuring the integrity and consistency of portfolio data reported across client reporting systems.
    - Apply queries, pivot tables, filters, and other tools to analyze data.
    - Maintain the client relationship management database and provide reports to Directors on a regular basis.
    - Coordinate submissions of RFPs by working with the RFP/Marketing Team and other groups internally to gather information for accurate data and performance analysis.
    - Identify opportunities to enhance the strategic reporting platform by gathering and analyzing field feedback and collaborating with partners across the organization.
    - Provide various ad hoc data research and analysis as needed.

    Desired Skills and Experience:
    - Bachelor's Degree with at least 2+ years of Financial Services/Private Equity data/client reporting experience
    - Proficiency in Microsoft Office, particularly Excel modeling
    - Technical knowledge, data analytics using CRMs (Salesforce), Excel, PowerPoint
    - Outstanding communication skills; proven ability to work effectively with all levels of management
    - Comfortable working in a fast-paced, deadline-driven, dynamic environment
    - Innovative and creative thinker
    - Must be detail-oriented
    $68k-96k yearly est. 23h ago
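The fund performance analyses this posting describes boil down to return and risk statistics. A minimal pandas sketch with simulated monthly returns (all figures invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical monthly returns for a fund and its benchmark.
idx = pd.date_range("2023-01-01", periods=24, freq="MS")
fund = pd.Series(rng.normal(0.008, 0.020, 24), index=idx)
bench = pd.Series(rng.normal(0.006, 0.018, 24), index=idx)

cum_return = (1 + fund).prod() - 1
active = fund - bench
tracking_error = active.std() * np.sqrt(12)  # annualized

print(f"Cumulative fund return:    {cum_return:.2%}")
print(f"Annualized tracking error: {tracking_error:.2%}")
```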
  • P&C Reinsurance Actuary - PR12960

    Pryor Associates Executive Search

    Data scientist job in Philadelphia, PA

    P&C Reinsurance Actuary opening in NY, PA, IL, or MN. Perform reinsurance pricing, stochastic modeling, and portfolio optimization analysis involving large-account Property, Casualty, Professional Liability, and/or Cyber lines; collaborate and communicate strategic insight and pricing results with key stakeholders. The ideal candidate is FCAS, ACAS, or near-ACAS with 5-12 years of any P&C pricing experience; must be dynamic, personable, and articulate; exposure to R, Python, or SQL a plus. (PR12960)
    $70k-108k yearly est. 23h ago
  • Data Analytics Engineer

    Dale Workforce Solutions

    Data scientist job in Somerset, NJ

    Client: manufacturing company
    Type: direct hire

    Our client is a publicly traded, globally recognized technology and manufacturing organization that relies on data-driven insights to support operational excellence, strategic decision-making, and digital transformation. They are seeking a Power BI Developer to design, develop, and maintain enterprise reporting solutions, data pipelines, and data warehousing assets. This role works closely with internal stakeholders across departments to ensure reporting accuracy, data availability, and the long-term success of the company's business intelligence initiatives. The position also plays a key role in shaping BI strategy and fostering collaboration across cross-functional teams. This role is on-site five days per week in Somerset, NJ.

    Key Responsibilities:

    Power BI Reporting & Administration
    - Lead the design, development, and deployment of Power BI and SSRS reports, dashboards, and analytics assets
    - Collaborate with business stakeholders to gather requirements and translate needs into scalable technical solutions
    - Develop and maintain data models to ensure accuracy, consistency, and reliability
    - Serve as the Power BI tenant administrator, partnering with security teams to maintain data protection and regulatory compliance
    - Optimize Power BI solutions for performance, scalability, and ease of use

    ETL & Data Warehousing
    - Design and maintain data warehouse structures, including schema and database layouts
    - Develop and support ETL processes to ensure timely and accurate data ingestion
    - Integrate data from multiple systems while ensuring quality, consistency, and completeness
    - Work closely with database administrators to optimize data warehouse performance
    - Troubleshoot data pipelines, ETL jobs, and warehouse-related issues as needed

    Training & Documentation
    - Create and maintain technical documentation, including specifications, mappings, models, and architectural designs
    - Document data warehouse processes for reference, troubleshooting, and ongoing maintenance
    - Manage data definitions, lineage documentation, and data cataloging for all enterprise data models

    Project Management
    - Oversee Power BI and reporting projects, offering technical guidance to the Business Intelligence team
    - Collaborate with key business stakeholders to ensure departmental reporting needs are met
    - Record meeting notes in Confluence and document project updates in Jira

    Data Governance
    - Implement and enforce data governance policies to ensure data quality, compliance, and security
    - Monitor report usage metrics and follow up with end users as needed to optimize adoption and effectiveness

    Routine IT Functions
    - Resolve Help Desk tickets related to reporting, dashboards, and BI tools
    - Support general software and hardware installations when needed

    Other Responsibilities
    - Manage email and phone communication professionally and promptly
    - Respond to inquiries to resolve issues, provide information, or direct to appropriate personnel
    - Perform additional assigned duties as needed

    Qualifications (Required)
    - Minimum of 3 years of relevant experience
    - Bachelor's degree in Computer Science, Data Analytics, Machine Learning, or equivalent experience
    - Experience with cloud-based BI environments (Azure, AWS, etc.)
    - Strong understanding of data modeling, data visualization, and ETL tools (e.g., SSIS, Azure Synapse, Snowflake, Informatica)
    - Proficiency in SQL for data extraction, manipulation, and transformation
    - Strong knowledge of DAX
    - Familiarity with data warehouse technologies (e.g., Azure Blob Storage, Redshift, Snowflake)
    - Experience with Power Pivot, SSRS, Azure Synapse, or similar reporting tools
    - Strong analytical, problem-solving, and documentation skills
    - Excellent written and verbal communication abilities
    - High attention to detail and strong self-review practices
    - Effective time management and organizational skills; ability to prioritize workload
    - Professional, adaptable, team-oriented, and able to thrive in a dynamic environment
    $82k-112k yearly est. 1d ago
  • Azure Data Engineer

    Programmers.Io (3.8 company rating)

    Data scientist job in Weehawken, NJ

    - Expert-level skills writing and optimizing complex SQL
    - Experience with complex data modelling, ETL design, and using large databases in a business environment
    - Experience building data pipelines and applications to stream and process datasets at low latencies
    - Fluent with Big Data technologies like Spark, Kafka, and Hive
    - Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required
    - Designing and building data pipelines using API ingestion and streaming ingestion methods
    - Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential
    - Experience developing NoSQL solutions using Azure Cosmos DB is essential
    - Thorough understanding of Azure and AWS cloud infrastructure offerings
    - Working knowledge of Python is desirable
    - Designing and implementing scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
    - Managing and optimizing data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
    - Monitoring and troubleshooting data-related issues within the Azure environment to maintain high availability and performance
    - Implementing data security measures, including encryption, access controls, and auditing, to protect sensitive information
    - Automating data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
    - Utilizing Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
    - Documenting data procedures, systems, and architectures to maintain clarity and ensure compliance with regulatory standards
    - Providing guidance and support for data governance, including metadata management, data lineage, and data cataloging

    Best regards,
    Dipendra Gupta, Technical Recruiter
    *****************************
    $92k-132k yearly est. 1d ago
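For the streaming-ingestion skills this posting lists, here is a minimal Spark Structured Streaming sketch that reads from Kafka and lands Parquet files; the broker address, topic name, and paths are placeholders, and the spark-sql-kafka package must be on the classpath:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Read a Kafka topic as a streaming DataFrame and decode the message value.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load()
          .select(F.col("value").cast("string").alias("payload")))

# Continuously append the decoded records to Parquet.
query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/events_parquet")
         .option("checkpointLocation", "/tmp/events_ckpt")
         .start())
query.awaitTermination()
```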
  • Data Engineer

    Beaconfire Inc.

    Data scientist job in East Windsor, NJ

    🚀 Junior Data Engineer
    📝 E-Verified | Visa Sponsorship Available

    🔍 About Us: BeaconFire, based in Central NJ, is a fast-growing company specializing in Software Development, Web Development, and Business Intelligence. We're looking for self-motivated and strong communicators to join our team as a Junior Data Engineer! If you're passionate about data and eager to learn, this is your opportunity to grow in a collaborative and innovative environment. 🌟

    🎓 Qualifications We're Looking For:
    - Passion for data and a strong desire to learn and grow.
    - Master's Degree in Computer Science, Information Technology, Data Analytics, Data Science, or a related field.
    - Intermediate Python skills (experience with NumPy, Pandas, etc. is a plus!)
    - Experience with relational databases like SQL Server, Oracle, or MySQL.
    - Strong written and verbal communication skills.
    - Ability to work independently and collaboratively within a team.

    🛠️ Your Responsibilities:
    - Collaborate with analytics teams to deliver reliable, scalable data solutions.
    - Design and implement ETL/ELT processes to meet business data demands.
    - Perform data extraction, manipulation, and production from database tables.
    - Build utilities, user-defined functions, and frameworks to optimize data flows.
    - Create automated unit tests and participate in integration testing.
    - Troubleshoot and resolve operational and performance-related issues.
    - Work with architecture and engineering teams to implement high-quality solutions and follow best practices.

    🌟 Why Join BeaconFire?
    ✅ E-Verified employer
    🌍 Work Visa Sponsorship Available
    📈 Career growth in data engineering and BI
    🤝 Supportive and collaborative work culture
    💻 Exposure to real-world, enterprise-level projects

    📩 Ready to launch your career in Data Engineering? Apply now and let's build something amazing together! 🚀
    $82k-112k yearly est. 2d ago
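For a junior data engineering role like this, the ETL/ELT idea can be illustrated end-to-end with nothing but pandas and the standard library's SQLite; the file and column names below are hypothetical:

```python
import sqlite3
import pandas as pd

# Extract: read raw order records from a CSV export.
raw = pd.read_csv("orders.csv")
raw["order_date"] = pd.to_datetime(raw["order_date"])

# Transform: aggregate order amounts per calendar day.
daily = (raw.groupby(raw["order_date"].dt.date)["amount"]
            .sum()
            .reset_index(name="total_amount"))

# Load: write the result into a warehouse table.
with sqlite3.connect("warehouse.db") as conn:
    daily.to_sql("daily_sales", conn, if_exists="replace", index=False)
```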
  • Azure Data Engineer

    Wall Street Consulting Services LLC

    Data scientist job in Warren, NJ

    Job Title: Data Engineer - SQL, Azure, ADF (Commercial Insurance)
    Experience: 12-20 Years
    Job Type: Contract
    Required Skills: SQL, Azure, ADF, Commercial Insurance

    We are seeking a highly skilled Data Engineer with strong experience in SQL, the Azure Data Platform, and Azure Data Factory, preferably within the insurance domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines, integrating data from multiple insurance systems, and enabling analytical and reporting capabilities for underwriting, claims, policy, billing, and risk management teams.

    Required Skills & Experience:
    - Minimum 12+ years of experience in Data Engineering or related roles.
    - Strong expertise in SQL, T-SQL, and PL/SQL; Azure Data Factory (ADF); and Azure SQL, Synapse, and ADLS.
    - Data modeling for relational and analytical systems.
    - Hands-on experience with ETL/ELT development and complex pipeline orchestration.
    - Experience with Azure DevOps, Git, CI/CD pipelines, and DataOps practices.
    - Understanding of insurance domain datasets: policy, premium, claims, exposures, brokers, reinsurers, underwriting workflows.
    - Strong analytical and problem-solving skills, with the ability to handle large datasets and complex transformations.

    Preferred Qualifications:
    - Experience with Databricks/PySpark for large-scale transformations.
    - Knowledge of Commercial Property & Casualty (P&C) insurance.
    - Experience integrating data from Guidewire ClaimCenter/PolicyCenter, DuckCreek, or similar platforms.
    - Exposure to ML/AI pipelines for underwriting or claims analytics.
    - Azure certifications such as DP-203 (Azure Data Engineer), AZ-900, AZ-204, or AI-900.
    $82k-112k yearly est. 3d ago
  • Time-Series Data Engineer

    Kane Partners LLC (4.1 company rating)

    Data scientist job in Doylestown, PA

    **Local Candidates Only - No Sponsorship**

    A growing technology company in the Warrington, PA area is seeking a Data Engineer to join its analytics and machine learning team. This is a hands-on, engineering-focused role working with real operational time-series data, not a dashboard- or BI-heavy position. We're looking for someone who's naturally curious, self-driven, and enjoys taking ownership. If you like solving real-world problems, building clean and reliable data systems, and contributing ideas that actually get implemented, you'll enjoy this environment.

    About the Role: You will work directly with internal engineering teams to build and support production data pipelines, deploy Python-based analytics and ML components, and work with high-volume time-series data from complex systems. This is a hybrid position requiring regular on-site collaboration.

    What You'll Do:
    ● Build and maintain data pipelines for time-series and operational datasets
    ● Deploy Python- and SQL-based data processing components using cloud resources
    ● Troubleshoot issues, optimize performance, and support new customer implementations
    ● Document deployment workflows and data behaviors
    ● Work with engineering/domain specialists to identify opportunities for improvement
    ● Proactively correct inefficiencies; if something can work better, you take the initiative

    Required Qualifications:
    ● 2+ years of professional experience in data engineering, data science, ML engineering, or a related field
    ● Strong Python and SQL skills
    ● Experience with time-series data or operational/industrial datasets (preferred)
    ● Exposure to cloud environments; Azure experience is a plus but not required
    ● Ability to think independently, problem-solve, and build solutions with minimal oversight
    ● Strong communication skills and attention to detail

    Local + Work Authorization Requirements (Strict):
    ● Must currently live within daily commuting distance of Warrington, PA (Philadelphia suburbs / Montgomery County / Bucks County / surrounding PA/NJ areas)
    ● No relocation, no remote-only applicants
    ● No sponsorship; must be authorized to work in the U.S. now and in the future

    These requirements are firm and help ensure strong team collaboration.

    What's Offered:
    ● Competitive salary + bonus potential
    ● Health insurance and paid time off
    ● Hybrid work flexibility
    ● Opportunity to grow, innovate, and have a direct impact on meaningful technical work
    ● Supportive, engineering-first culture

    If This Sounds Like You: We'd love to hear from local candidates who are excited about Python, data engineering, and solving real-world problems with time-series data.

    Work Authorization: Applicants must have valid, independent authorization to work in the United States. This position does not offer, support, or accept any form of sponsorship, whether employer, third-party, future, contingent, transfer, or otherwise. Candidates must be able to work for any employer in the U.S. without current or future sponsorship of any kind. Work authorization will be verified, and misrepresentation will result in immediate removal from consideration.
    $86k-116k yearly est. 1d ago
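A typical first step on operational time-series data like this role describes is resampling raw readings and deriving rolling features. A minimal pandas sketch with simulated sensor values (everything below is invented):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# One day of hypothetical 1-minute sensor readings (a random walk).
idx = pd.date_range("2024-06-01", periods=1_440, freq="min")
raw = pd.DataFrame({"value": np.cumsum(rng.normal(0, 0.5, 1_440))}, index=idx)

hourly = raw.resample("60min").mean()                          # downsample to hourly
hourly["rolling_mean_6h"] = hourly["value"].rolling(6).mean()  # smooth over 6 hours
hourly["delta"] = hourly["value"].diff()                       # hour-over-hour change
print(hourly.head(8))
```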
  • Azure Data Engineer

    Sharp Decisions (4.6 company rating)

    Data scientist job in Jersey City, NJ

    Title: Senior Azure Data Engineer
    Client: Major Japanese Bank
    Experience Level: Senior (10+ Years)

    The Senior Azure Data Engineer will design, build, and optimize enterprise data solutions within Microsoft Azure for a major Japanese bank. This role focuses on architecting scalable data pipelines, enhancing data lake environments, and ensuring security, compliance, and data governance best practices.

    Key Responsibilities:
    - Develop, maintain, and optimize Azure-based data pipelines and ETL/ELT workflows.
    - Design and implement Azure Data Lake, Synapse, Databricks, and ADF solutions.
    - Ensure data security, compliance, lineage, and governance controls.
    - Partner with architecture, data governance, and business teams to deliver high-quality data solutions.
    - Troubleshoot performance issues and improve system efficiency.

    Required Skills:
    - 10+ years of data engineering experience.
    - Strong hands-on expertise with Azure Synapse, Azure Data Factory, Azure Databricks, Azure Data Lake, and Azure SQL.
    - Azure certifications strongly preferred.
    - Strong SQL, Python, and cloud data architecture skills.
    - Experience in financial services or large enterprise environments preferred.
    $77k-101k yearly est. 1d ago
  • Senior Data Engineer (Snowflake)

    Epic Placements

    Data scientist job in Parsippany-Troy Hills, NJ

    Senior Data Engineer (Snowflake & Python)
    1-Year Contract | $60/hour + Benefit Options
    Hybrid: On-site a few days per month (local candidates only)

    Work Authorization Requirement: You must be authorized to work for any employer as a W2 employee. This is required for this role. This position is W-2 only: no C2C, no third-party submissions, and no sponsorship will be considered.

    Overview: We are seeking a Senior Data Engineer to support enterprise-scale data initiatives for a highly collaborative engineering organization. This is a new, long-term contract opportunity for a hands-on data professional who thrives in fast-paced environments and enjoys building high-quality, scalable data solutions on Snowflake. Candidates must be based in or around New Jersey, able to work on-site at least 3 days per month, and meet the W2 employment requirement.

    What You'll Do:
    - Design, develop, and support enterprise-level data solutions with a strong focus on Snowflake
    - Participate across the full software development lifecycle: planning, requirements, development, testing, and QA
    - Partner closely with engineering and data teams to identify and implement optimal technical solutions
    - Build and maintain high-performance, scalable data pipelines and data warehouse architectures
    - Ensure platform performance, reliability, and uptime, maintaining strong coding and design standards
    - Troubleshoot production issues, identify root causes, implement fixes, and document preventive solutions
    - Manage deliverables and priorities effectively in a fast-moving environment
    - Contribute to data governance practices, including metadata management and data lineage
    - Support analytics and reporting use cases leveraging advanced SQL and analytical functions

    Required Skills & Experience:
    - 8+ years of experience designing and developing data solutions in an enterprise environment
    - 5+ years of hands-on Snowflake experience
    - Strong hands-on development skills with SQL and Python
    - Proven experience designing and developing data warehouses in Snowflake
    - Ability to diagnose, optimize, and tune SQL queries
    - Experience with Azure data frameworks (e.g., Azure Data Factory)
    - Strong experience with orchestration tools such as Airflow, Informatica, Automic, or similar
    - Solid understanding of metadata management and data lineage
    - Hands-on experience with SQL analytical functions
    - Working knowledge of shell scripting and JavaScript
    - Experience using Git, Confluence, and Jira
    - Strong problem-solving and troubleshooting skills
    - Collaborative mindset with excellent communication skills

    Nice to Have:
    - Experience supporting pharma industry data
    - Exposure to omni-channel data environments

    Why This Opportunity:
    - $60/hour W2 on a long-term 1-year contract
    - Benefit options available
    - Hybrid structure with limited on-site requirement
    - High-impact role supporting enterprise data initiatives
    - Clear expectations: W-2 only, no third-party submissions, no Corp-to-Corp

    This employer participates in E-Verify and will provide the federal government with your Form I-9 information to confirm that you are authorized to work in the U.S.
    $60 hourly 23h ago
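The "SQL analytical functions" this posting emphasizes are window functions. A minimal sketch run through the official Snowflake Python connector; the account, credentials, and the monthly_sales table are placeholders:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH", database="SALES", schema="PUBLIC",
)

# Running revenue per region via a window (analytical) function.
query = """
    SELECT region,
           order_month,
           revenue,
           SUM(revenue) OVER (
               PARTITION BY region ORDER BY order_month
           ) AS running_revenue
    FROM monthly_sales
    ORDER BY region, order_month
"""
for row in conn.cursor().execute(query):
    print(row)
```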
  • Senior Data Engineer

    Vysystems

    Data scientist job in Jersey City, NJ

    Hi #Connections, we have a job opening for:

    Role: Senior Data Engineer with Databricks and Python
    Locations: Jersey City, NJ & Wilmington, DE (Day 1 Onsite) (Need Local to Texas & Delaware Only)

    Please share suitable resumes to ******************* & *****************************************

    Job Description:

    Key Responsibilities:
    • Design and build ETL/ELT pipelines using Databricks and PySpark
    • Develop and maintain data models and data warehouse structures (dimensional modeling, star/snowflake schemas)
    • Optimize data workflows for performance, scalability, and cost
    • Work with cloud platforms (Azure/AWS/GCP) for storage, compute, and orchestration
    • Ensure data quality, reliability, and security across pipelines
    • Collaborate with cross-functional teams (Data Science, BI, Product)
    • Write clean, reusable code and follow engineering best practices
    • Troubleshoot issues in production data pipelines

    Required Skills:
    • Strong hands-on skills in Databricks, PySpark, and SQL
    • Experience with data warehouse concepts, ETL frameworks, and batch/streaming pipelines
    • Solid understanding of Delta Lake and Lakehouse architecture
    • Experience with at least one cloud platform (Azure preferred)
    • Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)

    Nice to Have:
    • Experience with CI/CD for data pipelines
    • Knowledge of data governance tools (Unity Catalog or similar)
    • Exposure to ML data preparation pipelines

    Soft Skills:
    • Strong communication and documentation skills
    • Ability to work independently and mentor others
    • Problem-solver with a focus on delivering business value

    Please attach your updated resume and kindly fill in these details:
    1. Years of experience
    2. Visa status
    3. Current location
    4. LinkedIn ID
    5. Updated resume attached

    Thanks & Regards,
    Ramkumar R., Sr. Technical Recruiter
    Email: *******************
    LinkedIn: *****************************************
    4701 Patrick Henry Drive, Building 16, Santa Clara, CA 95054, USA
    $82k-112k yearly est. 23h ago
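The Delta Lake / Lakehouse skills listed above center on operations like idempotent upserts. A minimal delta-spark MERGE sketch; it assumes a Spark session already configured with the Delta extensions (as on Databricks), and the table path and columns are hypothetical:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Incoming change records to upsert into the target table.
updates = spark.createDataFrame(
    [(1, "active"), (2, "closed")], ["account_id", "status"])

target = DeltaTable.forPath(spark, "/mnt/lake/silver/accounts")
(target.alias("t")
 .merge(updates.alias("s"), "t.account_id = s.account_id")
 .whenMatchedUpdateAll()      # update existing accounts
 .whenNotMatchedInsertAll()   # insert new ones
 .execute())
```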
  • Azure Data Engineer

    Kaizen Technologies (3.6 company rating)

    Data scientist job in Princeton, NJ

    We are seeking an experienced Azure Data Engineer with strong expertise in modern data platform technologies including Azure Synapse, Microsoft Fabric, SQL Server, Azure Storage, Azure Data Factory (ADF), Python, Power BI, and Azure OpenAI. The ideal candidate will design, build, and optimize scalable data pipelines and analytics solutions to support enterprise-wide reporting, AI, and data integration initiatives.

    Key Responsibilities:
    - Design, develop, and maintain Azure-based data pipelines using ADF, Synapse Pipelines, and Fabric Dataflows.
    - Build and optimize data warehouses, data lakes, and lakehouse architectures on Azure.
    - Develop complex SQL queries, stored procedures, and data transformations in SQL Server and Synapse SQL Pools.
    - Implement data ingestion, transformation, and orchestration solutions using Python and Azure services.
    - Manage and optimize Azure Storage solutions (ADLS Gen2, Blob Storage).
    - Leverage Power BI and Fabric for data modeling, dataset creation, and dashboard/report development.
    - Integrate and utilize Azure OpenAI for data enrichment, intelligent automation, and advanced analytics where applicable.
    - Ensure data quality, data governance, and security best practices across the data lifecycle.
    - Troubleshoot data pipeline issues, optimize performance, and support production workloads.
    - Collaborate with data architects, analysts, BI developers, and business stakeholders to deliver end-to-end data solutions.
    $76k-106k yearly est. 23h ago
  • Data Engineer

    The Judge Group (4.7 company rating)

    Data scientist job in Jersey City, NJ

    ONLY LOCALS TO NJ/NY - NO RELOCATION CANDIDATES

    Skillset: Data Engineer
    Must Haves: Python, PySpark, AWS (ECS, Glue, Lambda, S3)
    Nice to Haves: Java, Spark, React JS
    Interview Process: 2 rounds; the 2nd will be on site

    You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you. As a Data Engineer III - Python / Spark / Data Lake at JPMorgan Chase within the Consumer and Community Bank, you will be a seasoned member of an agile team, tasked with designing and delivering reliable data collection, storage, access, and analytics solutions that are secure, stable, and scalable. Your responsibilities will include developing, testing, and maintaining essential data pipelines and architectures across diverse technical areas, supporting various business functions to achieve the firm's business objectives.

    Job responsibilities:
    • Supports review of controls to ensure sufficient protection of enterprise data.
    • Advises and makes custom configuration changes in one to two tools to generate a product at the business or customer request.
    • Updates logical or physical data models based on new use cases.
    • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace.
    • Adds to team culture of diversity, opportunity, inclusion, and respect.
    • Develops enterprise data models; designs, develops, and maintains large-scale data processing pipelines (and infrastructure); leads code reviews and provides mentoring through the process; drives data quality; ensures data accessibility (to analysts and data scientists); ensures compliance with data governance requirements; and ensures data engineering practices align with business goals.

    Required qualifications, capabilities, and skills:
    • Formal training or certification on data engineering concepts and 2+ years applied experience
    • Experience across the data lifecycle, advanced experience with SQL (e.g., joins and aggregations), and working understanding of NoSQL databases
    • Experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
    • Extensive experience in AWS; design, implementation, and maintenance of data pipelines using Python and PySpark
    • Proficient in Python and PySpark; able to write and execute complex queries to perform curation and build views required by end users (single and multi-dimensional)
    • Proven experience in performance tuning to ensure jobs run at optimal levels with no performance bottlenecks
    • Advanced proficiency in leveraging Gen AI models from Anthropic (or OpenAI, or Google) using APIs/SDKs
    • Advanced proficiency in a cloud data lakehouse platform such as AWS data lake services, Databricks, or Hadoop; a relational data store such as Postgres, Oracle, or similar; and at least one NoSQL data store such as Cassandra, Dynamo, MongoDB, or similar
    • Advanced proficiency in a cloud data warehouse such as Snowflake or AWS Redshift
    • Advanced proficiency in at least one scheduling/orchestration tool such as Airflow, AWS Step Functions, or similar
    • Proficiency in Unix scripting; data structures; data serialization formats such as JSON, AVRO, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools

    Preferred qualifications, capabilities, and skills:
    • Knowledge of data governance and security best practices
    • Experience in carrying out data analysis to support business insights
    • Strong Python and Spark
    $79k-111k yearly est. 2d ago
  • Senior Data Engineer

    Apexon

    Data scientist job in New Providence, NJ

    Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering, and UX, along with deep expertise in BFSI, healthcare, and life sciences, to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents.

    Job Description:
    - Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
    - Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
    - Work in tandem with our engineering team to identify and implement the most optimal solutions
    - Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
    - Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
    - Able to manage deliverables in fast-paced environments

    Areas of Expertise:
    - At least 10 years of experience designing and developing data solutions in an enterprise environment
    - At least 5+ years' experience on the Snowflake platform
    - Strong hands-on SQL and Python development
    - Experience designing and developing data warehouses in Snowflake
    - A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Spark and Scala
    - Strong hands-on experience with orchestration tools, e.g., Airflow, Informatica, Automic
    - Good understanding of metadata and data lineage
    - Hands-on knowledge of SQL analytical functions
    - Strong knowledge of and hands-on experience with shell scripting and JavaScript
    - Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
    - Good understanding of and exposure to Git, Confluence, and Jira
    - Good problem-solving and troubleshooting skills
    - Team player with a collaborative approach and excellent communication skills

    Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We are taking affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law.

    You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)
    $82k-112k yearly est. 4d ago
  • Data Engineer

    Neenopal Inc.

    Data scientist job in Newark, NJ

    NeenOpal is a global consulting firm specializing in Data Science and Business Intelligence, with offices in Bengaluru, Newark, and Fredericton. We provide end-to-end solutions tailored to the unique needs of businesses, from startups to large organizations, across domains like digital strategy, sales and marketing, supply chain, and finance. Our mission is to help organizations achieve operational excellence and transform into data-driven enterprises.

    Role Description: This is a full-time, hybrid Data Engineer role located in Newark, NJ. The Data Engineer will be responsible for designing, implementing, and managing data engineering solutions to support business needs. Day-to-day tasks include building and optimizing data pipelines, developing and maintaining data models and ETL processes, managing data warehousing solutions, and contributing to the organization's data analytics initiatives. Collaboration with cross-functional teams to ensure robust data infrastructure will be a key aspect of this role.

    Key Responsibilities:
    - Data Pipeline Development: Design, implement, and manage robust data pipelines to ensure efficient data flow into data warehouses. Automate ETL processes using Python and advanced data engineering tools.
    - Data Integration: Integrate and transform data using industry-standard tools. Experience required with AWS services (AWS Glue, Data Pipeline, Redshift, and S3) and Azure services (Azure Data Factory, Synapse Analytics, and Blob Storage).
    - Data Warehousing: Implement and optimize solutions using Snowflake and Amazon Redshift.
    - Database Management: Develop and manage relational databases (SQL Server, MySQL, PostgreSQL) to ensure data integrity.
    - Performance Optimization: Continuously monitor and improve data processing workflows and apply best practices for query optimization.
    - Global Collaboration: Work closely with cross-functional teams in the US, India, and Canada to deliver high-quality solutions.
    - Governance & Support: Document ETL processes and data mappings in line with governance standards. Diagnose and resolve data-related issues promptly.

    Required Skills and Experience:
    - Experience: Minimum 2+ years of experience designing and developing ETL processes (AWS Glue, Azure Data Factory, or similar).
    - Integration: Experience integrating data via RESTful/GraphQL APIs.
    - Programming: Proficient in Python for ETL automation and SQL for database management.
    - Cloud Platforms: Strong experience with AWS or Azure data services (GCP familiarity is a plus).
    - Data Warehousing: Expertise with Snowflake, Amazon Redshift, or Azure Synapse Analytics.
    - Communication: Excellent articulation skills to explain technical work directly to clients and stakeholders.
    - Authorization: Must have valid work authorization in the United States.

    Salary Range: $65,000-$80,000 per year

    Benefits: This role includes health insurance, paid time off, and opportunities for professional growth and continuous learning within a fast-growing global analytics company.

    Equal Opportunity Employer: NeenOpal Inc. is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
    $65k-80k yearly 3d ago
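For the RESTful-API integration requirement above, here is a minimal ingestion sketch; the endpoint URL and payload shape are hypothetical stand-ins for a real source system:

```python
import pandas as pd
import requests

resp = requests.get(
    "https://api.example.com/v1/orders",   # placeholder endpoint
    params={"page_size": 100},
    timeout=30,
)
resp.raise_for_status()

records = resp.json()["results"]           # assumes a {"results": [...]} payload
df = pd.json_normalize(records)            # flatten nested JSON into columns
df.to_parquet("orders_raw.parquet", index=False)  # stage for the warehouse load
```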
  • Python Data Engineer

    Tekvana Inc.

    Data scientist job in Iselin, NJ

    Job Title: Data Engineer (Python, Spark, Cloud)
    Pay: $90,000 per year DOE
    Term: Contract
    Work Authorization: US citizens only (may need security clearance in the future)

    Job Summary: We are seeking a mid-level Data Engineer with strong Python and Big Data skills to design, develop, and maintain scalable data pipelines and cloud-based solutions. This role involves hands-on coding, data integration, and collaboration with cross-functional teams to support enterprise analytics and reporting.

    Key Responsibilities:
    - Build and maintain ETL pipelines using Python and PySpark for batch and streaming data.
    - Develop data ingestion frameworks for structured/unstructured sources.
    - Implement data workflows using Airflow and integrate with Kafka for real-time processing.
    - Deploy solutions on Azure or GCP using container platforms (Kubernetes/OpenShift).
    - Optimize SQL queries and ensure data quality and governance.
    - Collaborate with data architects and analysts to deliver reliable data solutions.

    Required Skills:
    - Python (3.x): scripting, API development, automation.
    - Big Data: Spark/PySpark, Hadoop ecosystem.
    - Streaming: Kafka.
    - SQL: Oracle, Teradata, or SQL Server.
    - Cloud: Azure or GCP (BigQuery, Dataflow).
    - Containers: Kubernetes/OpenShift.
    - CI/CD: GitHub, Jenkins.

    Preferred Skills:
    - Airflow for orchestration.
    - ETL tools (Informatica, Talend).
    - Financial services experience.

    Education & Experience:
    - Bachelor's in Computer Science or related field.
    - 3-5 years of experience in data engineering and Python development.

    Keywords: Python, PySpark, Spark, Hadoop, Kafka, Airflow, Azure, GCP, Kubernetes, CI/CD, ETL, Data Lake, Big Data, Cloud Data Engineering.

    Reply with your profile to this posting and send it to ******************
    $90k yearly 3d ago
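Airflow orchestration, listed above, wires tasks into a scheduled DAG. A minimal sketch assuming Airflow 2.x; the task bodies are placeholders for real extract/transform steps:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")

def transform():
    print("clean and reshape")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # run transform after extract
```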

Learn more about data scientist jobs

How much does a data scientist earn in Scranton, PA?

Data scientists in Scranton, PA typically earn between $62,000 and $118,000 annually. This compares to the national data scientist salary range of $75,000 to $148,000.

Average data scientist salary in Scranton, PA

$86,000
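The site doesn't state how the $86,000 figure is derived from the $62,000-$118,000 range; for what it's worth, it sits close to the geometric mean of the two endpoints rather than the arithmetic midpoint:

```python
low, high = 62_000, 118_000
print((low + high) / 2)     # arithmetic midpoint: 90,000.0
print((low * high) ** 0.5)  # geometric mean: ~85,534
```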