
Data engineer jobs in Pennsylvania

- 1,561 jobs
  • Senior Data Engineer

    Brooksource · 4.1 company rating

    Data engineer job in Bethlehem, PA

    Hybrid (Bethlehem, PA) · Contract

    We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations and to build and evolve a modern data platform using Snowflake and cloud-based EDW solutions.

    How You'll Impact Results:
    ● Drive the evolution and architecture of scalable, secure, cloud-native data platforms
    ● Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
    ● Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
    ● Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
    ● Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
    ● Implement systems to monitor data quality and ensure reliability and availability of production data for downstream users, leadership teams, and business processes
    ● Recommend and implement best practices for query performance, storage, and resource efficiency
    ● Test and clearly document data assets, pipelines, and architecture to support usability and scale
    ● Engage across project phases and serve as a key contributor in strategic data architecture initiatives

    Your Qualifications That Will Ensure Success:
    Required:
    ● 10+ years of experience in Information Technology data engineering: professional database and data warehouse development
    ● Advanced proficiency in SQL, data modeling, and performance tuning
    ● Experience in system configuration, security administration, and performance optimization
    ● Deep experience with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
    ● Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools such as Azure DevOps or GitHub
    ● Strong analytical, problem-solving, and documentation skills
    ● Proficiency with Microsoft Excel and common data analysis tools
    ● Ability to troubleshoot technical issues and provide system support to non-technical users
    Preferred:
    ● Experience integrating SAP ECC data into cloud-native platforms
    ● Exposure to AI/ML, API development, or Boomi AtomSphere
    ● Prior experience in consumer packaged goods (CPG), food/beverage, or manufacturing
    $91k-126k yearly est. 3d ago
  • Senior Data Engineer

    Eigen X · 3.9 company rating

    Data engineer job in Philadelphia, PA

    We are seeking a passionate and skilled Senior Data Engineer to join our dynamic team in Philadelphia, PA. In this role, you will lead the design and implementation of advanced data pipelines for Business Intelligence (BI) and reporting. Your expertise will transform complex data into actionable insights, driving significant business value for our clients.

    Key Responsibilities:
    ● Design and implement scalable and efficient data pipelines for BI and reporting.
    ● Define and manage key business metrics, build automated dashboards, and develop analytic self-service capabilities.
    ● Write comprehensive technical documentation to outline data solutions and architectures.
    ● Lead requirements gathering, solution design, and implementation for data projects.
    ● Develop and maintain ETL frameworks for large real-world data (RWD) assets.
    ● Mentor and guide technical teams, fostering a culture of innovation.
    ● Stay updated on new technologies and solve complex data problems.
    ● Facilitate the deployment and integration of AI models, ensuring data quality and compatibility with existing analytics infrastructure.
    ● Collaborate with cross-functional stakeholders to understand data needs and deliver impactful analytics and reports.

    Required Qualifications:
    ● Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
    ● 4+ years of SQL experience.
    ● Experience with data modeling, warehousing, and building ETL pipelines.
    ● Proficiency in at least one modern scripting or programming language (e.g., Python, Java, Scala, NodeJS).
    ● Experience working directly with business stakeholders to align data solutions with business needs.
    ● Working knowledge of Snowflake as a data warehousing solution.
    ● Experience with workflow orchestration tools like Apache Airflow.
    ● Knowledge of data transformation tools and frameworks such as dbt (Data Build Tool), PySpark, or Snowpark.
    ● Experience with open-source table formats (e.g., Apache Iceberg, Delta, Hudi).
    ● Familiarity with container technologies like Docker and Kubernetes.
    ● Experience with on-premises and cloud MDM deployments.

    Preferred Qualifications:
    ● Proficiency with data visualization tools (e.g., Tableau, Power BI, QuickSight).
    ● Certifications in Snowflake or Azure Data Engineering.
    ● Experience with Agile methodologies and project management tools (e.g., Jira).
    ● Experience deploying and managing data solutions within Azure AI, Azure ML, or similar environments.
    ● Familiarity with DevOps practices, particularly CI/CD for data solutions.
    ● Knowledge of emerging data architectures, including Data Mesh, Data Fabric, Multimodal Data Management, and AI/ML integration.
    ● Familiarity with ETL tools like Informatica and Matillion.
    ● Previous experience in professional services or consultancy environments.
    ● Experience in technical pre-sales, solution demos, and proposal development.
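    The posting above asks for workflow orchestration experience with Apache Airflow. At its core, an orchestrator runs tasks in dependency (DAG) order; Airflow adds scheduling, retries, and backfills on top. A minimal pure-Python sketch of that core idea, using hypothetical task names (this is an illustration of the concept, not Airflow's actual API):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

def run_pipeline(tasks, deps):
    """Run callables in dependency order -- the DAG-execution idea
    behind orchestrators like Airflow, stripped to its essentials."""
    order = TopologicalSorter(deps).static_order()  # predecessors first
    results = {}
    for name in order:
        results[name] = tasks[name](results)  # each task sees prior results
    return results

# Hypothetical extract -> transform -> load chain
tasks = {
    "extract":   lambda r: [3, 1, 2],
    "transform": lambda r: sorted(r["extract"]),
    "load":      lambda r: f"loaded {len(r['transform'])} rows",
}
deps = {"transform": {"extract"}, "load": {"transform"}}  # node -> predecessors
print(run_pipeline(tasks, deps)["load"])  # loaded 3 rows
```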
    $91k-130k yearly est. 5d ago
  • Data Engineer

    EXL · 4.5 company rating

    Data engineer job in Philadelphia, PA

    Job Title: Data Engineer
    Experience: 5+ years

    We are seeking an experienced Data Engineer with strong expertise in PySpark and data pipeline operations. This role focuses heavily on performance-tuning Spark applications, managing large-scale data pipelines, and ensuring high operational stability. The ideal candidate is a strong technical problem-solver, highly collaborative, and proactive about automation and process improvements.

    Key Responsibilities:
    Data Pipeline Management & Support
    ● Operate and support Business-as-Usual (BAU) data pipelines, ensuring stability, SLA adherence, and timely incident resolution.
    ● Identify and implement opportunities for optimization and automation across pipelines and operational workflows.
    Spark Development & Performance Tuning
    ● Design, develop, and optimize PySpark jobs for efficient large-scale data processing.
    ● Diagnose and resolve complex Spark performance issues such as data skew, shuffle spill, executor OOM errors, slow-running stages, and partition imbalance.
    Platform & Tool Management
    ● Use Databricks for Spark job orchestration, workflow automation, and cluster configuration.
    ● Debug and manage Spark on Kubernetes, addressing pod crashes, OOM kills, resource tuning, and scheduling problems.
    ● Work with MinIO/S3 storage for bucket management, permissions, and large-volume file ingestion and retrieval.
    Collaboration & Communication
    ● Partner with onshore business stakeholders to clarify requirements and convert them into well-defined technical tasks.
    ● Provide daily coordination and technical oversight to offshore engineering teams.
    ● Participate actively in design discussions and technical reviews.
    Documentation & Operational Excellence
    ● Maintain accurate and detailed documentation, runbooks, and troubleshooting guides.
    ● Contribute to process improvements that enhance operational stability and engineering efficiency.

    Required Skills & Qualifications:
    Primary Skills (Must-Have)
    ● PySpark: Advanced proficiency in transformations, performance tuning, and Spark internals.
    ● SQL: Strong analytical query design, performance tuning, and foundational data modeling (relational & dimensional).
    ● Python: Ability to write maintainable, production-grade code with a focus on modularity, automation, and reusability.
    Secondary Skills (Highly Desirable)
    ● Kubernetes: Experience with Spark-on-K8s, including pod diagnostics, resource configuration, and log/monitoring tools.
    ● Databricks: Hands-on experience with cluster management, workflow creation, Delta Lake optimization, and job monitoring.
    ● MinIO / S3: Familiarity with bucket configuration, policies, and efficient ingestion patterns.
    ● DevOps: Experience with Git, CI/CD, and cloud environments (Azure preferred).
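    The data-skew problem this role calls out is often mitigated by key salting: splitting a hot key across N sub-keys so one partition no longer holds all of its rows, then aggregating in two stages. A pure-Python sketch of the idea with made-up data (in real PySpark you would add a salt column before the shuffle; this is a conceptual illustration, not Spark code):

```python
import random
from collections import defaultdict

def salted_partition(records, num_salts=4):
    """Spread each key's records across num_salts sub-keys, mimicking
    how salting rebalances a skewed shuffle across executors."""
    buckets = defaultdict(list)
    for key, value in records:
        salt = random.randrange(num_salts)  # salt assigned before the "shuffle"
        buckets[(key, salt)].append(value)
    return buckets

def aggregate(buckets):
    """Two-stage aggregation: partial sums per salted key,
    then a cheap merge back to the original key."""
    partial = {k: sum(v) for k, v in buckets.items()}
    final = defaultdict(int)
    for (key, _salt), subtotal in partial.items():
        final[key] += subtotal
    return dict(final)

# One hot key ("A") dominates -- the classic skew scenario.
records = [("A", 1)] * 1000 + [("B", 1)] * 10
totals = aggregate(salted_partition(records))
# totals sums to the same answer, but no single bucket held all 1000 "A" rows
```

Newer Spark versions can handle some skewed joins automatically via Adaptive Query Execution, but manual salting remains a standard tool for skewed aggregations.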
    $74k-100k yearly est. 3d ago
  • Time-Series Data Engineer

    Kane Partners LLC · 4.1 company rating

    Data engineer job in Doylestown, PA

    **Local Candidates Only - No Sponsorship**

    A growing technology company in the Warrington, PA area is seeking a Data Engineer to join its analytics and machine learning team. This is a hands-on, engineering-focused role working with real operational time-series data, not a dashboard- or BI-heavy position. We're looking for someone who's naturally curious, self-driven, and enjoys taking ownership. If you like solving real-world problems, building clean and reliable data systems, and contributing ideas that actually get implemented, you'll enjoy this environment.

    About the Role
    You will work directly with internal engineering teams to build and support production data pipelines, deploy Python-based analytics and ML components, and work with high-volume time-series data from complex systems. This is a hybrid position requiring regular on-site collaboration.

    What You'll Do
    ● Build and maintain data pipelines for time-series and operational datasets
    ● Deploy Python- and SQL-based data processing components using cloud resources
    ● Troubleshoot issues, optimize performance, and support new customer implementations
    ● Document deployment workflows and data behaviors
    ● Work with engineering/domain specialists to identify opportunities for improvement
    ● Proactively correct inefficiencies: if something can work better, you take the initiative

    Required Qualifications
    ● 2+ years of professional experience in data engineering, data science, ML engineering, or a related field
    ● Strong Python and SQL skills
    ● Experience with time-series data or operational/industrial datasets (preferred)
    ● Exposure to cloud environments; Azure experience is a plus but not required
    ● Ability to think independently, problem-solve, and build solutions with minimal oversight
    ● Strong communication skills and attention to detail

    Local + Work Authorization Requirements (Strict)
    ● Must currently live within daily commuting distance of Warrington, PA (Philadelphia suburbs / Montgomery County / Bucks County / surrounding PA/NJ areas)
    ● No relocation, no remote-only applicants
    ● No sponsorship: must be authorized to work in the U.S. now and in the future
    These requirements are firm and help ensure strong team collaboration.

    What's Offered
    ● Competitive salary + bonus potential
    ● Health insurance and paid time off
    ● Hybrid work flexibility
    ● Opportunity to grow, innovate, and have a direct impact on meaningful technical work
    ● Supportive, engineering-first culture

    If This Sounds Like You
    We'd love to hear from local candidates who are excited about Python, data engineering, and solving real-world problems with time-series data.

    Work Authorization: Applicants must have valid, independent authorization to work in the United States. This position does not offer, support, or accept any form of sponsorship, whether employer, third-party, future, contingent, transfer, or otherwise. Candidates must be able to work for any employer in the U.S. without current or future sponsorship of any kind. Work authorization will be verified, and misrepresentation will result in immediate removal from consideration.
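    A role centered on operational time-series data usually starts with smoothing noisy readings before feeding them to analytics or ML. A small stdlib-only sketch of a simple moving average over hypothetical sensor readings (production code would more likely use pandas or NumPy):

```python
from collections import deque

def rolling_mean(values, window=3):
    """Simple moving average over a fixed window -- a typical first
    smoothing pass on noisy operational/sensor time-series data."""
    buf = deque(maxlen=window)  # drops the oldest reading automatically
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))  # partial windows at the start
    return out

readings = [10.0, 12.0, 11.0, 30.0, 12.0, 11.0]  # one spike at index 3
print(rolling_mean(readings))
```

The spike at index 3 gets spread across three output points instead of dominating one, which is exactly the trade-off (noise suppression vs. lag) a window size controls.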
    $86k-116k yearly est. 3d ago
  • Azure Data Architect

    Capgemini · 4.5 company rating

    Data engineer job in Malvern, PA

    Key Responsibilities
    1. Design and Build Data Solutions
    ● Architect and implement modern data platforms using Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Data Lake Storage, and Cosmos DB.
    ● Develop and maintain data pipelines for ingestion, transformation, and storage.
    ● Design data models and schemas to support business intelligence and advanced analytics.
    2. Data Governance and Compliance
    ● Implement data security, privacy, and compliance measures.
    ● Establish governance frameworks for data quality and lifecycle management.
    3. Collaboration and Leadership
    ● Partner with business stakeholders, data scientists, and engineering teams to align architecture with business objectives.
    ● Provide technical leadership and mentoring, and enforce best practices for data engineering teams.
    4. Performance and Optimization
    ● Monitor system performance and troubleshoot issues.
    ● Optimize architecture for cost efficiency, scalability, and high availability.
    5. Innovation and Enablement
    ● Stay updated on emerging Azure technologies and industry trends.
    ● Conduct workshops and enablement sessions to drive adoption of Azure data solutions.

    Required Skills and Experience
    Technical Expertise
    ● Strong proficiency in Azure services: Data Factory, Synapse, Databricks, Data Lake, Power BI.
    ● Hands-on experience with data modeling, ETL design, and data warehousing.
    ● Knowledge of SQL, NoSQL, PySpark, and BI tools.
    Architecture and Strategy
    ● 7+ years in data architecture roles; 3+ years with Azure data solutions.
    ● Familiarity with Lakehouse architecture, Delta/Parquet formats, and data governance tools.

    Preferred Qualifications
    ● Experience in regulated industries (e.g., Financial Services).
    ● Knowledge of Microsoft Fabric, Generative AI, and RAG-based architectures.

    Education & Certifications
    ● Bachelor's or Master's degree in Computer Science, Information Systems, or related fields.
    ● Certifications such as Microsoft Certified: Azure Solutions Architect Expert or Azure Data Engineer Associate are highly desirable.
    $92k-129k yearly est. 4d ago
  • Azure Data Engineer

    Cognizant · 4.6 company rating

    Data engineer job in Pittsburgh, PA

    Job Title: Databricks Data Engineer
    **Must have 8+ years of real hands-on experience**

    We are specifically seeking a Data Engineer - Lead with strong expertise in Databricks development. The role involves:
    ● Building and testing data pipelines using Python/Scala on Databricks
    ● Hands-on development, and leading the offshore team's development and testing work in Azure Databricks
    ● Architecting data platforms using Azure services such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and PySpark
    ● Collaborating with stakeholders to understand business needs and translate them into technical solutions
    ● Providing technical leadership and guidance to the data engineering team while remaining hands-on in development
    ● Familiarity with SAFe Agile concepts; working experience in an agile model is good to have
    ● Developing and maintaining data pipelines for efficient data movement and transformation
    ● Onsite and offshore team communication and coordination
    ● Creating and updating documentation to facilitate cross-training and troubleshooting
    ● Hands-on experience with scheduling tools like BMC Control-M: setting up jobs and testing schedules
    ● Understanding data models and schemas to support development work and help create tables in Databricks
    ● Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI, and other Azure data tools
    ● Implementing automated data validation frameworks such as Great Expectations or Deequ
    ● Reconciling large-scale datasets
    ● Ensuring data reliability across both batch and streaming processes

    The ideal candidate will have hands-on experience with:
    ● PySpark, Scala, Delta Lake, and Unity Catalog
    ● DevOps CI/CD automation
    ● Cloud-native data services
    ● Azure Databricks/Oracle
    ● BMC Control-M

    Location: Pittsburgh, PA
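    The automated-data-validation requirement above refers to frameworks like Great Expectations, which run declarative column-level checks over a dataset and report violations. A heavily simplified pure-Python sketch of that pattern with hypothetical rows and check names (not the Great Expectations API):

```python
def check_expectations(rows, expectations):
    """Run simple column-level checks over rows of dicts and report
    failures -- the core pattern behind data-validation frameworks,
    reduced to its essentials."""
    failures = []
    for column, predicate, label in expectations:
        bad = [r for r in rows if not predicate(r.get(column))]
        if bad:
            failures.append((column, label, len(bad)))  # column, check, violation count
    return failures

rows = [
    {"id": 1, "amount": 10.5},
    {"id": 2, "amount": -3.0},    # violates the non-negative check
    {"id": None, "amount": 7.0},  # violates the not-null check
]
expectations = [
    ("id", lambda v: v is not None, "not_null"),
    ("amount", lambda v: v is not None and v >= 0, "non_negative"),
]
print(check_expectations(rows, expectations))
# [('id', 'not_null', 1), ('amount', 'non_negative', 1)]
```

Real frameworks add profiling, result stores, and pipeline integration on top, but the check-and-report loop is the same shape.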
    $77k-101k yearly est. 4d ago
  • Senior Data Engineer

    Realtime Recruitment

    Data engineer job in Philadelphia, PA

    Full-time · Permanent · Remote - EAST COAST ONLY
    Role open to US Citizens and Green Card Holders only

    We're looking for a Senior Data Engineer to lead the design, build, and optimization of modern data pipelines and cloud-native data infrastructure. This role is ideal for someone who thrives on solving complex data challenges, improving systems at scale, and collaborating across technical and business teams to deliver high-impact solutions.

    What You'll Do
    ● Architect, develop, and maintain scalable, secure data infrastructure supporting analytics, reporting, and operational workflows.
    ● Design and optimize ETL/ELT pipelines to integrate data from diverse internal and external sources.
    ● Prepare and transform structured and unstructured data to support modeling, reporting, and advanced analysis.
    ● Improve data quality, reliability, and performance across platforms and workflows.
    ● Monitor pipelines, troubleshoot discrepancies, and ensure accurate and timely data delivery.
    ● Identify architectural bottlenecks and drive long-term scalability improvements.
    ● Collaborate with Product, BI, Finance, and engineering teams to build end-to-end data solutions.
    ● Prototype algorithms, transformations, and automation tools to accelerate insights.
    ● Lead cloud-native workflow design, including logging, monitoring, and storage best practices.
    ● Create and maintain high-quality technical documentation.
    ● Contribute to Agile ceremonies, engineering best practices, and continuous improvement initiatives.
    ● Mentor teammates and guide adoption of data platform tools and patterns.
    ● Participate in an on-call rotation to maintain platform stability and availability.

    What You Bring
    ● Bachelor's degree in Computer Science or a related technical field.
    ● 4+ years of advanced SQL experience (Oracle, PostgreSQL, etc.).
    ● 4+ years working with Java or Groovy.
    ● 3+ years integrating with SOAP or REST APIs.
    ● 2+ years with dbt and data modeling.
    ● Strong understanding of modern data architectures, distributed systems, and performance optimization.
    ● Experience with Snowflake or similar cloud data platforms (preferred).
    ● Hands-on experience with Git, Jenkins, CI/CD, and automation/testing practices.
    ● Solid grasp of cloud concepts and cloud-native engineering.
    ● Excellent problem-solving, communication, and cross-team collaboration skills.
    ● Ability to lead projects, own solutions end-to-end, and influence technical direction.
    ● Proactive mindset with strong analytical and consultative abilities.
    $81k-111k yearly est. 3d ago
  • Hadoop Data Engineer

    Smart It Frame LLC

    Data engineer job in Pittsburgh, PA

    About the job:
    We are seeking an accomplished Tech Lead - Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, the Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment. As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.

    Key Roles & Responsibilities
    ● Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
    ● Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
    ● Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
    ● Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
    ● Design and implement robust data frameworks to support next-generation customer and business product launches.
    ● Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
    ● Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
    ● Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
    ● Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.

    Must-Have Skills
    ● 8-10 years of experience in data engineering, with at least 2-3 years in a technical leadership role.
    ● Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
    ● Experience working with Avro, Parquet, or other serialization formats.
    ● Proven ability to design and maintain ETL/ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
    ● Proficiency in Python and Scala for large-scale data processing.
    ● Strong understanding of data modeling, data warehousing, and data lake architectures.
    ● Hands-on experience with SQL and both relational and NoSQL data stores.
    ● Cloud data platform experience with AWS.
    ● Deep understanding of data security, compliance, and governance frameworks.
    ● Excellent problem-solving, communication, and leadership skills.
    $79k-107k yearly est. 4d ago
  • Data Scientist

    First Quality · 4.7 company rating

    Data engineer job in Lewistown, PA

    Founded over 35 years ago, First Quality is a family-owned company that has grown from a small business in McElhattan, Pennsylvania into a group of companies employing over 5,000 team members, while maintaining our family values and entrepreneurial spirit. With corporate offices in New York and Pennsylvania and 8 manufacturing campuses across the U.S. and Canada, the companies within the First Quality group produce high-quality personal care and household products for large retailers and healthcare organizations. Our personal care and household product portfolio includes baby diapers, wipes, feminine pads, paper towels, bath tissue, adult incontinence products, laundry detergents, fabric finishers, and dishwash solutions. In addition, we manufacture certain raw materials and components used in the manufacturing of these products, including flexible print and packaging solutions. Guided by our values of humility, unity, and integrity, we leverage advanced technology and innovation to drive growth and create new opportunities. At First Quality, you'll find a collaborative environment focused on continuous learning, professional development, and our mission to Make Things Better.

    We are seeking a Data Scientist for our First Quality facilities located in McElhattan, PA; Lewistown, PA; and Macon, GA. **Must have manufacturing experience with consumer goods.** The role will provide meaningful insight on how to improve our current business operations. This position will work closely with domain experts and SMEs to understand the business problem or opportunity and assess the potential of machine learning to enable accelerated performance improvements.

    Principal Accountabilities/Responsibilities
    ● Design, build, tune, and deploy divisional AI/ML tools that meet the agreed-upon functional and non-functional requirements within the framework established by the Enterprise IT and IS departments.
    ● Perform large-scale experimentation to identify hidden relationships between different data sets and engineer new features.
    ● Communicate model performance, results, and tradeoffs to stakeholders.
    ● Determine requirements that will be used to train and evolve deep learning models and algorithms.
    ● Visualize information and develop engaging dashboards on the results of data analysis. Build reports and advanced dashboards to tell stories with the data.
    ● Lead, develop, and deliver divisional strategies that demonstrate the what, why, and how of delivering AI/ML business outcomes.
    ● Build and deploy a divisional AI strategy and roadmaps that enable long-term success for the organization, aligned with the Enterprise AI strategy.
    ● Proactively mine data to identify trends and patterns and generate insights for business units and management.
    ● Mentor other stakeholders to grow in their expertise, particularly in AI/ML, and take an active leadership role in divisional executive forums.
    ● Work collaboratively with the business to maximize the probability of success of AI projects and initiatives.
    ● Identify technical areas for improvement and present detailed business cases for improvements or new areas of opportunity.

    Qualifications/Education/Experience Requirements
    ● PhD or master's degree in Statistics, Mathematics, Computer Science, or another relevant discipline.
    ● 5+ years of experience using large-scale data to solve problems and answer questions.
    ● Prior experience in the manufacturing industry.

    Skills/Competencies Requirements
    ● Experience in building and deploying predictive models and scalable data pipelines.
    ● Demonstrable experience with common data science toolkits, such as Python, PySpark, R, Weka, NumPy, Pandas, scikit-learn, SpaCy/Gensim/NLTK, etc.
    ● Knowledge of data warehousing concepts like ETL, dimensional modeling, and semantic/reporting layer design.
    ● Knowledge of emerging technologies such as columnar and NoSQL databases, predictive analytics, and unstructured data.
    ● Fluency in data science, analytics tools, and a selection of machine learning methods: clustering, regression, decision trees, time series analysis, natural language processing.
    ● Strong problem-solving and decision-making skills.
    ● Ability to explain deep technical information to non-technical parties.
    ● Demonstrated growth mindset; enthusiasm for learning new technologies quickly and applying the gained knowledge to address business problems.
    ● Strong understanding of data governance/management concepts and practices.
    ● Strong background in systems development, including an understanding of project management methodologies and the development lifecycle.
    ● Proven history managing stakeholder relationships. Business case development.

    What We Offer You
    We believe that by continuously improving the quality of our benefits, we can help to raise the quality of life for our team members and their families. At First Quality you will receive:
    ● Competitive base salary and bonus opportunities
    ● Paid time off (three-week minimum)
    ● Medical, dental, and vision coverage starting day one
    ● 401(k) with employer match
    ● Paid parental leave
    ● Child and family care assistance (dependent care FSA with employer match up to $2,500)
    ● Bundle of joy benefit (a year's worth of free diapers for all team members with a new baby)
    ● Tuition assistance
    ● Wellness program with savings of up to $4,000 per year on insurance premiums
    ● ...and more!

    First Quality is committed to protecting information under the care of First Quality Enterprises commensurate with leading industry standards and applicable regulations. As such, First Quality provides at least annual training regarding data privacy and security to employees who, as a result of their role specifications, may come into contact with sensitive data.

    First Quality is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, sexual orientation, gender identification, or protected Veteran status. For immediate consideration, please go to the Careers section at ******************** to complete our online application.
    $57k-73k yearly est. 4d ago
  • Angular Engineer

    Firstpro, Inc. · 4.5 company rating

    Data engineer job in Reading, PA

    A technology-driven organization is seeking an Angular Engineer to contribute to the design, development, and support of business-critical applications. In this role, you will work across multiple projects, troubleshoot production issues, and participate in the full software development lifecycle. You will also have opportunities to lead the design of select components, introduce new ideas based on industry trends, collaborate closely with cross-functional teams, and mentor junior engineers as part of a diverse technical group.

    Responsibilities
    ● Serve as a primary support contact for multiple applications.
    ● Participate in the full application lifecycle, including designing, developing, testing, releasing, and supporting software.
    ● Collaborate with product owners and technical/business leaders to understand requirements and acceptance criteria.
    ● Develop, maintain, test, and troubleshoot applications, ensuring performance, reliability, and maintainability.
    ● Support mission-critical applications and assist in resolving customer issues.
    ● Design backend database schemas and contribute to overall system architecture.
    ● Produce clean, well-documented, and maintainable code that follows defined standards.
    ● Write unit and UI tests; leverage CI/CD pipelines for building and deploying code.
    ● Triage production issues and work with multiple teams to perform root-cause analysis.
    ● Assign and review tasks for junior and offshore engineers.
    ● Participate in interviewing new engineering hires.
    ● Provide input into standards, tools, conventions, and design patterns during discovery and decision-making processes.
    ● Support users by addressing technical questions, concerns, and feasibility inquiries.
    ● Perform other software development-related duties as assigned.

    Requirements
    ● Bachelor's Degree in Computer Science, Engineering, or equivalent work experience.
    ● 5-7 years of experience with applicable programming languages (e.g., Java, RPG).
    ● Full-stack development experience using technologies such as React, Angular, jQuery, HTML, JavaScript, CSS, Spring Framework, Spring MVC, MyBatis, and RESTful APIs.
    ● Understanding of technical project management principles.
    ● Experience implementing design frameworks, patterns, and software development best practices.
    ● Knowledge of industry technology strategies and modern engineering standards.
    ● Experience with relational database design.
    ● Knowledge of Agile methodologies.
    ● Strong troubleshooting and problem-solving skills.
    ● Ability to research emerging tools and frameworks.
    ● Experience estimating medium to large development efforts.
    ● Excellent communication and interpersonal skills.
    ● Understanding of the full software development lifecycle.
    ● Some exposure to DevOps tools and automation practices.
    ● Ability to meet attendance expectations and work required hours.
    ● Willingness to travel when needed and complete standard pre-employment processes (background checks, screenings, etc.).
    $85k-130k yearly est. 2d ago
  • Cloud Engineer

    Pride Health · 4.3 company rating

    Data engineer job in Philadelphia, PA

    Pride Health is hiring a Cloud Security Principal Engineer to support our client's medical facility based in Pennsylvania. This is a 6-month contract with the possibility of an extension, competitive pay and benefits, and a great way to start working with a top-tier healthcare organization. Job Title: Cloud Security Principal Engineer Location: Philadelphia, PA 19104 (Hybrid) Pay Range: $75/hr. - $80.00/hr. Shift: Day Shift Duration: 6 months + Possible extension Job Duties: Proven experience in securing a multi-cloud environment. Proven experience with Identity and access management in the cloud. Proven experience with all security service lines in a cloud environment and the supporting security tools and processes to be successful. Demonstrate collaboration with internal stakeholders, vendors, and supporting teams to design, implement, and maintain security technologies across the network, endpoint, identity, and cloud infrastructure. Drive continuous improvement and coverage of cloud security controls by validating alerts, triaging escalations, and working with the MSP to fine-tune detection and prevention capabilities. Lead or support the development of incident response plans, engineering runbooks, tabletop exercises, and system hardening guides. Ensure alignment of security architectures with policies, standards, and external frameworks such as NIST SP 800-53, HIPAA, PCI-DSS, CISA ZTMM, CIS Benchmarks, and Microsoft CAF Secure Methodology, AWS CAF, AWS Well-Architected framework, Google CAF. Participate in design and governance forums to provide security input into infrastructure, DevSecOps, and cloud-native application strategies. Assist with audits, compliance assessments, risk remediation plans, and evidence collection with internal compliance and external third-party stakeholders. 
Required: Bachelor's degree. At least twelve (12) years of industry-related experience, including experience in one to two IT disciplines (such as technical architecture, network management, application development, middleware, information analysis, database management, or operations) in a multitier environment. At least six (6) years' experience with information security, regulatory compliance, and risk management concepts. At least three (3) years' experience with Identity and Access Management, user provisioning, Role-Based Access Control, or control self-assessment methodologies and security awareness training. Experience with cloud and/or virtualization technologies. As a certified minority-owned business, Pride Global and its affiliates - including Russell Tobin, Pride Health, and Pride Now - are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Pride Global offers eligible employees comprehensive healthcare coverage (medical, dental, and vision plans), supplemental coverage (accident insurance, critical illness insurance, and hospital indemnity), a 401(k) retirement savings plan, life & disability insurance, an employee assistance program, identity theft protection, legal support, auto and home insurance, pet insurance, and employee discounts with some preferred vendors.
    $75 hourly 4d ago
  • Software Engineer - Test Systems Developer

    Catapult Federal Services

    Data engineer job in Canonsburg, PA

Job Title: Software Engineer - Test Systems Developer Education & Experience: Requires a Bachelor's degree in Software Engineering or a related Science, Engineering, or Mathematics field. Also requires 2+ years of job-related experience or a Master's degree. Agile experience preferred. CLEARANCE REQUIREMENTS: Secret Qualifications: As a Software Engineer - Test Systems Developer (Sr Software Engineer) for the Torpedo Systems Group, you will be a member of a cross-functional team responsible for sustaining and creating software for embedded applications. You will participate in all phases of the Software Development Life Cycle (SDLC), including requirements analysis, design, implementation, and testing. We encourage you to apply if you have any of these preferred skills or experiences: C/C++, LabWindows/CVI, and object-oriented development. Windows/Visual Studio. SQL/SQL Server or similar relational database experience. Comfortable implementing ideas from scratch, owning major application features, and taking responsibility for their maintenance and improvement over time. Experience participating in technical architecture decisions for complex products. A significant level of Windows application development architecture expertise (e.g., Win32 apps, WPF apps, WinUI 3 apps). Deep understanding of software design patterns such as MVVM, MVP, etc. Experience with Windows kernel-level debugging and diagnostics using tools such as the Windows DDK, WinDbg, or equivalent. Demonstrated in-depth experience developing, testing, and debugging software for Windows OS using the Visual Studio IDE and Windows SDK. Demonstrated in-depth understanding of Windows low-level systems development and APIs. Experience with DevOps concepts such as: implementing version control and standing up branching strategies; automating processes for build, test, and deploy. Applied experience with agile/lean principles in software development. 
What sets you apart: Welcoming contributions that build a strong, collaborative team culture. Strong understanding of the software development process, as well as software engineering concepts, principles, and theories. Creative thinker capable of applying new information quickly to solve challenging problems. Comfortable providing technical leadership. Team player who thrives in collaborative environments and revels in team success. Commitment to ongoing professional development for yourself and others.
    $81k-114k yearly est. 3d ago
  • Azure DevOps Engineer with P&C exp.

    Valuemomentum 3.6company rating

    Data engineer job in Pittsburgh, PA

Responsibilities Following are the day-to-day work activities: CI/CD Pipeline Management: Design, implement, and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines for Guidewire applications using tools like TeamCity, GitLab CI, and others. Infrastructure Automation: Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or CloudFormation. Monitoring and Logging: Implement and manage monitoring and logging solutions to ensure system reliability, performance, and security. Collaboration: Work closely with development, QA, and operations teams to streamline processes and improve efficiency. Security: Enhance the security of the IT infrastructure and ensure compliance with industry standards and best practices. Troubleshooting: Identify and resolve infrastructure and application issues, ensuring minimal downtime and optimal performance. Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and procedures. Requirements Candidates must have the following mandatory skills for their profiles to be assessed. The must-have requirements are: Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: 6-10 years of experience in a DevOps or systems engineering role. Hands-on experience with cloud platforms (AWS, Azure, GCP). Technical Skills: Proficiency in scripting languages (e.g., Python, PowerShell) (2-3 years). Experience with CI/CD tools (e.g., Jenkins, GitLab CI) (3-5 years). Knowledge of containerization technologies (e.g., Docker, Kubernetes) - good to have. Strong understanding of networking, security, and system administration (3-5 years). Familiarity with monitoring tools such as Dynatrace, Datadog, or Splunk. Familiarity with Agile development methodologies. Soft Skills: Excellent problem-solving and analytical skills. Strong communication and teamwork abilities. 
Ability to work independently About ValueMomentum ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
    $78k-100k yearly est. 5d ago
  • Software Engineer

    ESB Technologies

    Data engineer job in Malvern, PA

    Day-to-Day Responsibilities: Develop and deploy full-stack applications using AWS services (Lambda, S3, DynamoDB, ECS, Glue, Step Functions, and more). Design, build, and maintain REST and GraphQL APIs and microservices using Python, Java, JavaScript, and Go. Apply DevOps principles with CI/CD pipelines using Bamboo, Bitbucket, Git, and JIRA. Monitor product health and troubleshoot production issues with tools like Honeycomb, Splunk, and CloudWatch. Collaborate with stakeholders to gather requirements, present demos, and coordinate tasks across teams. Resolve complex technical challenges and recommend enterprise-wide improvements. Must-Haves: Minimum 5 years of related experience in software development. Proficient in AWS services, full-stack development, and microservices. Experience with Python, Java, JavaScript, and Go. Strong DevOps experience and familiarity with CI/CD pipelines. Ability to learn new business domains and applications quickly. Nice-to-Haves: Experience with monitoring/observability tools like Honeycomb, Splunk, CloudWatch. Familiarity with serverless and large-scale cloud architectures. Agile or Scrum experience. Strong communication and stakeholder collaboration skills.
    $69k-93k yearly est. 2d ago
  • SRE/DevOps w/ HashiCorp & Clojure Exp

    Dexian

    Data engineer job in Philadelphia, PA

Locals Only! SRE/DevOps w/ HashiCorp & Clojure Exp Philadelphia, PA: 100% Onsite! 12+ Months MUST: HashiCorp, Clojure Role: Lead SRE initiatives, automating and monitoring cloud infrastructure to ensure reliable, scalable, and secure systems for eCommerce. Required: Must Have: AWS, Terraform, HashiCorp Stack (Nomad, Vault, Consul). Programming in Python/Clojure. Automation, monitoring, and log centralization (Splunk). Experience leading large-scale cloud infrastructure. Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support. Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ******************** Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
    $85k-112k yearly est. 5d ago
  • Java Software Engineer

    Ltimindtree

    Data engineer job in Pittsburgh, PA

About Us: LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of the erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ******************************** Job Title: Java Developer Location: Pittsburgh, PA (4 days onsite/week) Duration: FTE Job description: 8 to 10 years of experience. Strong knowledge of Java and front-end UI technologies. Experience working with UI toolsets and programming languages: core JavaScript, Angular 11 or higher, JavaScript frameworks, CSS, and HTML. Experience with the Spring Framework and Hibernate, and proficiency with Spring Boot. Solid coding and troubleshooting experience with web services and RESTful APIs. Experience and understanding of design patterns culminating in microservices development. Strong SQL skills to work on relational databases. Strong experience with SDLC and DevOps processes, CI/CD tools, Git, etc. Strong problem solver with the ability to manage and lead the team toward a solution. Strong communication skills. Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”): Benefits and Perks: Comprehensive Medical Plan covering Medical, Dental, and Vision; Short-Term and Long-Term Disability Coverage; 401(k) Plan with Company match; Life Insurance; Vacation Time, Sick Leave, and Paid Holidays; Paid Paternity and 
Maternity Leave. The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors, including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation, such as an annual performance-based bonus, sales incentive pay, and other forms of bonus or variable compensation. Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting. LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $68k-90k yearly est. 5d ago
  • Identity and Access Management Software Engineering Lead

    Elsevier 4.2company rating

    Data engineer job in Philadelphia, PA

Identity and Access Management - Software Engineering Lead - Must have either (KeyCloak, Auth0, Okta, or similar). Are you a Software Engineering lead with a strong security background ready to broaden your impact and take on a hands-on software engineering leadership role? Are you a collaborative Software Engineering Lead looking to work for a mission-driven global organization? About the role - As an Engineering Lead for NeoID, Elsevier's next-generation Identity and Access Management (IAM) platform, you'll leverage your deep security expertise to architect, build, and evolve the authentication and authorization backbone for Elsevier's global products. You'll also lead and manage a team of 5 engineers, fostering their growth and ensuring delivery excellence. You'll have the opportunity to work with industry-standard protocols such as OAuth2, OIDC, and SAML, as well as healthcare's SMART on FHIR and EHR integrations. About the team - This team is entrusted with building Elsevier's next-generation Identity and Access Management (IAM) platform. This diverse team of engineers is also building and evolving the authentication and authorization backbone for Elsevier's global products. This team is building a brand-new product in Cyber Security that will provide Authorization and Authentication for ALL Elsevier products. Qualifications Current and extensive experience with at least one major IAM platform (KeyCloak, Auth0, Okta, or similar) - KeyCloak and Auth0 experience are strong pluses. Only candidates with this experience will be considered for this critical role. 
Possess an in-depth security mindset, with proven experience designing and implementing secure authentication and authorization systems Have an extensive understanding of OAuth2, OIDC and SAML protocols, including relevant RFCs and enterprise/server-side implementations Familiarity with healthcare identity protocols, including SMART on FHIR and EHR integrations Have current hands-on experience with AWS cloud services and infrastructure management. Proficiency in Infrastructure as Code (IaC) tools, especially Terraform Strong networking skills, including network security, protocols, and troubleshooting Familiarity with software development methodologies (Agile, Waterfall, etc.) Experience with Java/J2EE, JavaScript, and related technologies, or willingness to learn and deepen expertise Knowledge of data modeling, optimization, and secure data handling best practices Accountabilities Leading the design and implementation of secure, scalable IAM solutions, with a focus on OAuth2/OIDC and healthcare protocols such as SMART on FHIR and EHR integrations Managing, mentoring and supporting a team of 5 engineers, fostering a culture of security, innovation, and technical excellence Collaborating with product managers and stakeholders to define requirements and strategic direction for the platform, including healthcare and life sciences use cases Writing and reviewing code, performing code reviews, and ensuring adherence to security and engineering best practices Troubleshooting and resolving complex technical issues, providing expert guidance on IAM, security, and healthcare protocol topics Contributing to architectural decisions and long-term platform strategy Staying current with industry trends, emerging technologies, and evolving security threats in the IAM and healthcare space Why Elsevier? 
Join a global leader in information and analytics, and help shape the future of secure, seamless access to knowledge for millions of users worldwide, including healthcare professionals and researchers. If you are an Engineering Lead ready to expand your skills, take on a hands-on software engineering leadership role, and grow as a people manager, we want to hear from you.
    $95k-121k yearly est. 2d ago
  • I&C Engineer (pharma)

    Insight Global

    Data engineer job in Spring House, PA

Must Haves: Bachelor's Degree in Engineering or a related field. 3+ years of I&C experience in a manufacturing or pharmaceutical environment. Experience with programming and troubleshooting instrumentation and controls. Experience with Computerized Maintenance Management Software (CMMS), such as ProCalV5, Blue Mountain, Maximo, Infor EAM, etc. Experience with Microsoft Office or other computer software. Job Description: Insight Global is looking for an Instrumentation and Controls Engineer to join the Engineering and Property Services organization of a large pharmaceutical company in Pennsylvania. The I&C Engineer will be responsible for the design, implementation, and maintenance of control systems and instrumentation in commercial environments. This role will be primarily dedicated to the design, development, specification, and implementation of instrumentation for manufacturing processes in a highly regulated environment. The ideal candidate needs to partner closely with key stakeholders and external partners on design and be comfortable working in an environment with multi-discipline work streams.
    $68k-91k yearly est. 5d ago
  • Data Modeler

    Brooksource 4.1company rating

    Data engineer job in Philadelphia, PA

    Philadelphia, PA Hybrid / Remote Brooksource is seeking an experienced Data Modeler to support an enterprise data warehousing team responsible for designing and implementing information solutions across large operational and analytical systems. You'll work closely with data stewards, architects, and DBAs to understand business needs and translate them into high-quality logical and physical data models that align with enterprise standards. Key Responsibilities Build and maintain logical and physical data models for the Active Enterprise Data Warehouse (AEDW), operational systems, and data exchange processes. Collaborate with data stewards and architects to capture and refine business requirements and translate them into scalable data structures. Ensure physical models accurately implement approved logical models. Partner with DBAs on schema design, change management, and database optimization. Assess and improve existing data structures for performance, consistency, and scalability. Document data definitions, lineage, relationships, and standards using ERwin or similar tools. Participate in design reviews, data governance work, and data quality initiatives. Support impact analysis for enhancements, new development, and production changes. Adhere to enterprise modeling standards, naming conventions, and best practices. Deliver high-quality modeling artifacts with minimal supervision. Required Skills & Experience 5+ years as a Data Modeler, Data Architect, or similar role. Strong expertise with ERwin or other modeling tools. Experience supporting EDW, ODS, or large analytics environments. Proficiency developing conceptual, logical, and physical data models. Strong understanding of relational design, dimensional modeling, and normalization. Hands-on experience with Oracle, SQL Server, PostgreSQL, or comparable databases. Ability to translate complex business requirements into clear technical solutions. 
Familiarity with data governance, metadata management, and data quality concepts. Strong communication skills and ability to collaborate across technical and business teams. Preferred Skills Experience in healthcare or insurance data environments. Understanding of ETL/ELT concepts and how data models impact integration workflows. Exposure to cloud data platforms (AWS, Azure, GCP) or modern modeling approaches. Knowledge of enterprise architecture concepts. About the Team You'll join a collaborative, fast-moving data warehousing team focused on building reliable, scalable information systems that support enterprise decision-making. This role is key in aligning business needs with the data structures that power core operations and analytics.
    $87k-125k yearly est. 1d ago
  • Data Engineer

    Realtime Recruitment

    Data engineer job in Philadelphia, PA

Data Engineer - Job Opportunity Full-time, Permanent Remote - East Coast only Please note this role is open to US citizens or Green Card holders only. We're looking for a Data Engineer to help build and enhance scalable data systems that power analytics, reporting, and business decision-making. This role is ideal for someone who enjoys solving complex technical challenges, optimizing data workflows, and collaborating across teams to deliver reliable, high-quality data solutions. What You'll Do Develop and maintain scalable data infrastructure, cloud-native workflows, and ETL/ELT pipelines supporting analytics and operational workloads. Transform, model, and organize data from multiple sources to enable accurate reporting and data-driven insights. Improve data quality and system performance by identifying issues, optimizing architecture, and enhancing reliability and scalability. Monitor pipelines, troubleshoot discrepancies, and resolve data or platform issues, including participating in on-call support when needed. Prototype analytical tools, automation solutions, and algorithms to support complex analysis and drive operational efficiency. Collaborate closely with BI, Finance, and cross-functional teams to deliver robust and scalable data products. Create and maintain clear, detailed documentation (configurations, specifications, test scripts, and project tracking). Contribute to Agile development processes, engineering excellence, and continuous improvement initiatives. What You Bring Bachelor's degree in Computer Science or a related technical field. 2-4 years of hands-on SQL experience (Oracle, PostgreSQL, etc.). 2-4 years of experience with Java or Groovy. 2+ years working with orchestration and ingestion tools (e.g., Airflow, Airbyte). 2+ years integrating with APIs (SOAP, REST). Experience with cloud data warehouses and modern ELT/ETL frameworks (e.g., Snowflake, Redshift, DBT) is a plus. Comfortable working in an Agile environment. 
Practical knowledge of version control and CI/CD workflows. Experience with automation, including unit and integration testing. Understanding of cloud storage solutions (e.g., S3, Blob Storage, Object Store). Proactive mindset with strong analytical, logical-thinking, and consultative skills. Ability to reason about design decisions and understand their broader technical impact. Strong collaboration, adaptability, and prioritization abilities. Excellent problem-solving and troubleshooting skills.
    $81k-111k yearly est. 3d ago

Learn more about data engineer jobs

What are the top employers for data engineer in PA?

Top 10 Data Engineer companies in PA

  1. CVS Health

  2. Ernst & Young

  3. Meta

  4. The PNC Financial Services Group

  5. Insight Global

  6. Oracle

  7. Govini

  8. Amyx

  9. CapTech

  10. The Hertz Corporation


Browse data engineer jobs in Pennsylvania by city

All data engineer jobs

Jobs in Pennsylvania