
Requirements engineer jobs in Pennsylvania

- 936 jobs
  • MLOps Engineer

    ValueMomentum · 3.6 company rating

    Requirements engineer job in Philadelphia, PA

    Role: ML Ops Lead
    Duration: Long Term

    Skills:
    - 4-7 years of experience in DevOps, MLOps, platform engineering, or cloud infrastructure.
    - Strong skills in containerization (Docker, Kubernetes), API hosting, and cloud-native services.
    - Experience with vector DBs (e.g., FAISS, Pinecone, Weaviate) and model hosting stacks.
    - Familiarity with logging frameworks, APM tools, tracing layers, and prompt/versioning logs.
    - Bonus: exposure to LangChain, LangGraph, LLM APIs, and retrieval-based architectures.

    Responsibilities:
    - Set up and manage runtime environments for LLMs, vector DBs, and orchestration flows (e.g., LangGraph).
    - Support deployments in cloud, hybrid, and client-hosted environments.
    - Containerize systems for deployment (Docker, Kubernetes, etc.) and manage inference scaling.
    - Integrate observability tooling: prompt tracing, version logs, eval hooks, error pipelines.
    - Collaborate on RAG stack deployments (retriever, ranker, vector DB, toolchains).
    - Support CI/CD, secrets management, error triage, and environment configuration.
    - Contribute to platform-level IP, including reusable scaffolding and infrastructure accelerators.
    - Ensure systems are compliant with governance expectations and auditable (especially in insurance contexts).

    Preferred Attributes:
    - Systems thinker with strong debugging skills.
    - Able to work across cloud, on-prem, and hybrid client environments.
    - Comfortable partnering with architects and engineers to ensure smooth delivery.
    - Proactive about observability, compliance, and runtime reliability.

    About ValueMomentum: ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain, including Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.

    Our culture - Our fuel: At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating, and looking out for each other.
    - People first - We make employees win.
    - Nurture leaders - We nurture from within.
    - Enjoy wins - Celebrating wins and creating leaders.
    - Collaboration - A culture of collaboration and people-centricity.
    - Diversity - Committed to diversity, equity, and inclusion.
    - Fun - Help people have fun at work.
    $69k-85k yearly est. 4d ago
  • Senior Data Engineer

    Brooksource · 4.1 company rating

    Requirements engineer job in Bethlehem, PA

    Hybrid (Bethlehem, PA) | Contract | Applicants must be authorized to work in the U.S. without sponsorship

    We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations and to build and evolve a modern data platform using Snowflake and cloud-based EDW solutions.

    How You'll Impact Results:
    - Drive the evolution and architecture of scalable, secure, cloud-native data platforms
    - Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
    - Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
    - Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
    - Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
    - Implement systems to monitor data quality and ensure reliability and availability of production data for downstream users, leadership teams, and business processes
    - Recommend and implement best practices for query performance, storage, and resource efficiency
    - Test and clearly document data assets, pipelines, and architecture to support usability and scale
    - Engage across project phases and serve as a key contributor in strategic data architecture initiatives

    Your Qualifications That Will Ensure Success:

    Required:
    - 10+ years of experience in Information Technology data engineering: professional database and data warehouse development
    - Advanced proficiency in SQL, data modeling, and performance tuning
    - Experience in system configuration, security administration, and performance optimization
    - Deep experience with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
    - Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools such as Azure DevOps or GitHub
    - Strong analytical, problem-solving, and documentation skills
    - Proficiency with Microsoft Excel and common data analysis tools
    - Ability to troubleshoot technical issues and provide system support to non-technical users

    Preferred:
    - Experience integrating SAP ECC data into cloud-native platforms
    - Exposure to AI/ML, API development, or Boomi AtomSphere
    - Prior experience in consumer packaged goods (CPG), food/beverage, or manufacturing
    $91k-126k yearly est. 2d ago
  • Data Engineer (IoT)

    CurvePoint

    Requirements engineer job in Pittsburgh, PA

    As an IoT Data Engineer at CurvePoint, you will design, build, and optimize the data pipelines that power our Wi-AI sensing platform. Your work will focus on reliable, low-latency data acquisition from constrained on-prem IoT devices, efficient buffering and streaming, and scalable cloud-based storage and training workflows. You will own how raw sensor data (e.g., wireless CSI, video, metadata) moves from edge devices with limited disk and compute into durable, well-structured datasets used for model training, evaluation, and auditability. You will work closely with hardware, ML, and infrastructure teams to ensure our data systems are fast, resilient, and cost-efficient at scale.

    Duties and Responsibilities

    Edge & On-Prem Data Acquisition
    - Design and improve data capture pipelines on constrained IoT devices and host servers (limited disk, intermittent connectivity, real-time constraints).
    - Implement buffering, compression, batching, and backpressure strategies to prevent data loss.
    - Optimize data transfer from edge → on-prem host → cloud.

    Streaming & Ingestion Pipelines
    - Build and maintain streaming or near-real-time ingestion pipelines for sensor data (e.g., CSI, video, logs, metadata).
    - Ensure data integrity, ordering, and recoverability across failures.
    - Design mechanisms for replay, partial re-ingestion, and audit trails.

    Cloud Data Pipelines & Storage
    - Own cloud-side ingestion, storage layout, and lifecycle policies for large time-series datasets.
    - Balance cost, durability, and performance across hot, warm, and cold storage tiers.
    - Implement data versioning and dataset lineage to support model training and reproducibility.

    Training Data Enablement
    - Structure datasets to support efficient downstream ML training, evaluation, and experimentation.
    - Work closely with ML engineers to align data formats, schemas, and sampling strategies with training needs.
    - Build tooling for dataset slicing, filtering, and validation.

    Reliability & Observability
    - Add monitoring, metrics, and alerts around data freshness, drop rates, and pipeline health.
    - Debug pipeline failures across edge, on-prem, and cloud environments.
    - Continuously improve system robustness under real-world operating conditions.

    Cross-Functional Collaboration
    - Partner with hardware engineers to understand sensor behavior and constraints.
    - Collaborate with ML engineers to adapt pipelines as model and data requirements evolve.
    - Contribute to architectural decisions as the platform scales from pilots to production deployments.

    Must Haves
    - Bachelor's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience).
    - 3+ years of experience as a Data Engineer or Backend Engineer working with production data pipelines.
    - Strong Python skills; experience building reliable data processing systems.
    - Hands-on experience with streaming or near-real-time data ingestion (e.g., Kafka, Kinesis, MQTT, custom TCP/UDP pipelines).
    - Experience working with on-prem systems or edge/IoT devices, including disk, bandwidth, or compute constraints.
    - Familiarity with cloud storage and data lifecycle management (e.g., S3-like object stores).
    - Strong debugging skills across distributed systems.

    Nice to Have
    - Experience with IoT or sensor data (RF/CSI, video, audio, industrial telemetry).
    - Familiarity with data compression, time-series formats, or binary data handling.
    - Experience supporting ML training pipelines or large-scale dataset management.
    - Exposure to containerized or GPU-enabled data processing environments.
    - Knowledge of data governance, retention, or compliance requirements.

    Location: Pittsburgh, PA (hybrid preferred; some on-site work with hardware teams)
    Salary: $110,000 - $135,000 / year (depending on experience and depth in streaming + IoT systems)
    $110k-135k yearly 4d ago
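The edge buffering and backpressure duties in the listing above can be illustrated with a minimal sketch. `EdgeBuffer` and its drop-oldest overflow policy are assumptions for illustration, not CurvePoint's actual pipeline: a bounded buffer counts what it sheds (so drop rates can be monitored) and drains in fixed-size batches for uplink.

```python
from collections import deque

class EdgeBuffer:
    """Bounded sample buffer for a constrained edge device (illustrative).

    When the buffer is full, the oldest sample is dropped and counted,
    so the device never blocks real-time capture or exhausts its disk.
    """

    def __init__(self, capacity: int, batch_size: int):
        self.buf = deque(maxlen=capacity)
        self.batch_size = batch_size
        self.dropped = 0

    def push(self, sample: bytes) -> None:
        if len(self.buf) == self.buf.maxlen:
            self.dropped += 1  # bounded deque evicts the oldest entry on append
        self.buf.append(sample)

    def pop_batch(self) -> list[bytes]:
        # Uplink sends fixed-size batches to amortize per-request overhead.
        batch = []
        while self.buf and len(batch) < self.batch_size:
            batch.append(self.buf.popleft())
        return batch

buf = EdgeBuffer(capacity=4, batch_size=3)
for i in range(6):          # 6 samples into a 4-slot buffer
    buf.push(f"s{i}".encode())
print(buf.dropped)          # -> 2 (the oldest two samples were shed)
print(buf.pop_batch())      # -> [b's2', b's3', b's4']
```

Drop-oldest is only one policy; a real pipeline might instead drop-newest, spill to disk, or signal the sensor to downsample. The key idea is that overflow is explicit and metered rather than a silent failure.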
  • Data Engineer

    EXL · 4.5 company rating

    Requirements engineer job in Philadelphia, PA

    Job Title: Data Engineer
    Experience: 5+ years

    We are seeking an experienced Data Engineer with strong expertise in PySpark and data pipeline operations. This role focuses heavily on performance tuning Spark applications, managing large-scale data pipelines, and ensuring high operational stability. The ideal candidate is a strong technical problem-solver, highly collaborative, and proactive in automation and process improvements.

    Key Responsibilities:

    Data Pipeline Management & Support
    - Operate and support Business-as-Usual (BAU) data pipelines, ensuring stability, SLA adherence, and timely incident resolution.
    - Identify and implement opportunities for optimization and automation across pipelines and operational workflows.

    Spark Development & Performance Tuning
    - Design, develop, and optimize PySpark jobs for efficient large-scale data processing.
    - Diagnose and resolve complex Spark performance issues such as data skew, shuffle spill, executor OOM errors, slow-running stages, and partition imbalance.

    Platform & Tool Management
    - Use Databricks for Spark job orchestration, workflow automation, and cluster configuration.
    - Debug and manage Spark on Kubernetes, addressing pod crashes, OOM kills, resource tuning, and scheduling problems.
    - Work with MinIO/S3 storage for bucket management, permissions, and large-volume file ingestion and retrieval.

    Collaboration & Communication
    - Partner with onshore business stakeholders to clarify requirements and convert them into well-defined technical tasks.
    - Provide daily coordination and technical oversight to offshore engineering teams.
    - Participate actively in design discussions and technical reviews.

    Documentation & Operational Excellence
    - Maintain accurate and detailed documentation, runbooks, and troubleshooting guides.
    - Contribute to process improvements that enhance operational stability and engineering efficiency.

    Required Skills & Qualifications:

    Primary Skills (Must-Have)
    - PySpark: Advanced proficiency in transformations, performance tuning, and Spark internals.
    - SQL: Strong analytical query design, performance tuning, and foundational data modeling (relational & dimensional).
    - Python: Ability to write maintainable, production-grade code with a focus on modularity, automation, and reusability.

    Secondary Skills (Highly Desirable)
    - Kubernetes: Experience with Spark-on-K8s, including pod diagnostics, resource configuration, and log/monitoring tools.
    - Databricks: Hands-on experience with cluster management, workflow creation, Delta Lake optimization, and job monitoring.
    - MinIO / S3: Familiarity with bucket configuration, policies, and efficient ingestion patterns.
    - DevOps: Experience with Git, CI/CD, and cloud environments (Azure preferred).
    $74k-100k yearly est. 2d ago
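The data-skew problems this listing names (one hot key overloading a single task, causing slow stages and executor OOMs) are commonly mitigated by key salting. The pure-Python sketch below only illustrates the idea, not Spark's API; `salt_key` and the record counts are made up for the example.

```python
import random
from collections import Counter

def salt_key(key: str, n_salts: int, rng: random.Random) -> str:
    # Appending a random salt spreads one hot key across n_salts sub-keys,
    # so the grouped work lands on several partitions instead of one.
    return f"{key}#{rng.randrange(n_salts)}"

rng = random.Random(0)
records = ["hot"] * 97 + ["a", "b", "c"]   # heavily skewed key distribution

unsalted = Counter(records)
salted = Counter(salt_key(k, 8, rng) for k in records)

print(unsalted["hot"])        # one reducer would receive 97 of 100 records
print(max(salted.values()))   # the largest salted group is much smaller
```

In a real PySpark salted join, the small side must also be exploded across all salt values so every sub-key still finds its match; note too that Spark 3's adaptive query execution can split skewed join partitions automatically, which may make manual salting unnecessary.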
  • Azure Data Engineer

    Cognizant · 4.6 company rating

    Requirements engineer job in Pittsburgh, PA

    Job Title: Databricks Data Engineer
    **Must have 8+ years of real hands-on experience**

    We are specifically seeking a Data Engineer - Lead with strong expertise in Databricks development. The role involves:
    - Building and testing data pipelines using Python/Scala on Databricks.
    - Hands-on development, and leading the offshore team's development and testing work in Azure Databricks.
    - Architecting data platforms using Azure services such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure SQL Database, and PySpark.
    - Collaborating with stakeholders to understand business needs and translating them into technical solutions.
    - Providing technical leadership and guidance to the data engineering team while also performing hands-on development.
    - Familiarity with SAFe Agile concepts; working experience in an agile model is a plus.
    - Developing and maintaining data pipelines for efficient data movement and transformation.
    - Onsite and offshore team communication and coordination.
    - Creating and updating documentation to facilitate cross-training and troubleshooting.
    - Hands-on experience with scheduling tools such as BMC Control-M: setting up jobs and testing schedules.
    - Understanding data models and schemas to support development work and help create tables in Databricks.
    - Proficiency in Azure Data Factory (ADF), Azure Databricks (ADB), SQL, NoSQL, PySpark, Power BI, and other Azure data tools.
    - Implementing automated data validation frameworks such as Great Expectations or Deequ.
    - Reconciling large-scale datasets and ensuring data reliability across both batch and streaming processes.

    The ideal candidate will have hands-on experience with:
    - PySpark, Scala, Delta Lake, and Unity Catalog
    - DevOps CI/CD automation
    - Cloud-native data services
    - Azure Databricks/Oracle
    - BMC Control-M

    Location: Pittsburgh, PA
    $77k-101k yearly est. 3d ago
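The listing above asks for automated data validation frameworks such as Great Expectations or Deequ. The sketch below mimics the expectation-style check in plain Python; the function name and the shape of its result dictionary are illustrative, not either library's actual API.

```python
def expect_column_values_between(rows, column, min_val, max_val):
    """Expectation-style range check (illustrative; not the Great Expectations API)."""
    bad = [r for r in rows if not (min_val <= r[column] <= max_val)]
    return {
        "success": not bad,                          # True only if no row violates the range
        "unexpected_count": len(bad),
        "unexpected_values": [r[column] for r in bad],
    }

rows = [{"amount": 10.0}, {"amount": 250.0}, {"amount": -5.0}]
result = expect_column_values_between(rows, "amount", 0, 1000)
print(result["success"])            # -> False
print(result["unexpected_values"])  # -> [-5.0]
```

The real frameworks add what this sketch omits: declarative suites stored alongside the data, partial-failure thresholds (`mostly`), and Spark/Pandas execution engines, so checks like this run inside the pipeline rather than as ad-hoc scripts.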
  • Hadoop Data Engineer

    Smart IT Frame LLC

    Requirements engineer job in Pittsburgh, PA

    About the job: We are seeking an accomplished Tech Lead - Data Engineer to architect and drive the development of large-scale, high-performance data platforms supporting critical customer and transaction-based systems. The ideal candidate will have a strong background in data pipeline design, the Hadoop ecosystem, and real-time data processing, with proven experience building data solutions that power digital products and decisioning platforms in a complex, regulated environment. As a technical leader, you will guide a team of engineers to deliver scalable, secure, and reliable data solutions enabling advanced analytics, operational efficiency, and intelligent customer experiences.

    Key Roles & Responsibilities
    - Lead and oversee the end-to-end design, implementation, and optimization of data pipelines supporting key customer onboarding, transaction, and decisioning workflows.
    - Architect and implement data ingestion, transformation, and storage frameworks leveraging Hadoop, Avro, and distributed data processing technologies.
    - Partner with product, analytics, and technology teams to translate business requirements into scalable data engineering solutions that enhance real-time data accessibility and reliability.
    - Provide technical leadership and mentorship to a team of data engineers, ensuring adherence to coding, performance, and data quality standards.
    - Design and implement robust data frameworks to support next-generation customer and business product launches.
    - Develop best practices for data governance, security, and compliance aligned with enterprise and regulatory requirements.
    - Drive optimization of existing data pipelines and workflows for improved efficiency, scalability, and maintainability.
    - Collaborate closely with analytics and risk modeling teams to ensure data readiness for predictive insights and strategic decision-making.
    - Evaluate and integrate emerging data technologies to future-proof the data platform and enhance performance.

    Must-Have Skills
    - 8-10 years of experience in data engineering, with at least 2-3 years in a technical leadership role.
    - Strong expertise in the Hadoop ecosystem (HDFS, Hive, MapReduce, HBase, Pig, etc.).
    - Experience working with Avro, Parquet, or other serialization formats.
    - Proven ability to design and maintain ETL/ELT pipelines using tools such as Spark, Flink, Airflow, or NiFi.
    - Proficiency in Python and Scala for large-scale data processing.
    - Strong understanding of data modeling, data warehousing, and data lake architectures.
    - Hands-on experience with SQL and both relational and NoSQL data stores.
    - Cloud data platform experience with AWS.
    - Deep understanding of data security, compliance, and governance frameworks.
    - Excellent problem-solving, communication, and leadership skills.
    $79k-107k yearly est. 3d ago
  • SRE/DevOps w/ HashiCorp & Clojure Exp

    Dexian

    Requirements engineer job in Philadelphia, PA

    Locals Only! SRE/DevOps w/ HashiCorp & Clojure Exp
    Philadelphia, PA: 100% Onsite! 12+ Months

    Must: HashiCorp, Clojure

    Role: Lead SRE initiatives, automating and monitoring cloud infrastructure to ensure reliable, scalable, and secure systems for eCommerce.

    Required:
    - AWS, Terraform, HashiCorp stack (Nomad, Vault, Consul)
    - Programming in Python/Clojure
    - Automation, monitoring, and log centralization (Splunk)
    - Experience leading large-scale cloud infrastructure

    Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support. Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************

    Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
    $85k-112k yearly est. 4d ago
  • Senior Data Engineer

    Realtime Recruitment

    Requirements engineer job in Philadelphia, PA

    Full-time Perm | Remote - EAST COAST ONLY | Role open to US Citizens and Green Card Holders only

    We're looking for a Senior Data Engineer to lead the design, build, and optimization of modern data pipelines and cloud-native data infrastructure. This role is ideal for someone who thrives on solving complex data challenges, improving systems at scale, and collaborating across technical and business teams to deliver high-impact solutions.

    What You'll Do
    - Architect, develop, and maintain scalable, secure data infrastructure supporting analytics, reporting, and operational workflows.
    - Design and optimize ETL/ELT pipelines to integrate data from diverse internal and external sources.
    - Prepare and transform structured and unstructured data to support modeling, reporting, and advanced analysis.
    - Improve data quality, reliability, and performance across platforms and workflows.
    - Monitor pipelines, troubleshoot discrepancies, and ensure accuracy and timely data delivery.
    - Identify architectural bottlenecks and drive long-term scalability improvements.
    - Collaborate with Product, BI, Finance, and engineering teams to build end-to-end data solutions.
    - Prototype algorithms, transformations, and automation tools to accelerate insights.
    - Lead cloud-native workflow design, including logging, monitoring, and storage best practices.
    - Create and maintain high-quality technical documentation.
    - Contribute to Agile ceremonies, engineering best practices, and continuous improvement initiatives.
    - Mentor teammates and guide adoption of data platform tools and patterns.
    - Participate in on-call rotation to maintain platform stability and availability.

    What You Bring
    - Bachelor's degree in Computer Science or related technical field.
    - 4+ years of advanced SQL experience (Oracle, PostgreSQL, etc.).
    - 4+ years working with Java or Groovy.
    - 3+ years integrating with SOAP or REST APIs.
    - 2+ years with dbt and data modeling.
    - Strong understanding of modern data architectures, distributed systems, and performance optimization.
    - Experience with Snowflake or similar cloud data platforms (preferred).
    - Hands-on experience with Git, Jenkins, CI/CD, and automation/testing practices.
    - Solid grasp of cloud concepts and cloud-native engineering.
    - Excellent problem-solving, communication, and cross-team collaboration skills.
    - Ability to lead projects, own solutions end-to-end, and influence technical direction.
    - Proactive mindset with strong analytical and consultative abilities.
    $81k-111k yearly est. 2d ago
  • Software Engineer

    ESB Technologies

    Requirements engineer job in Malvern, PA

    Day-to-Day Responsibilities:
    - Develop and deploy full-stack applications using AWS services (Lambda, S3, DynamoDB, ECS, Glue, Step Functions, and more).
    - Design, build, and maintain REST and GraphQL APIs and microservices using Python, Java, JavaScript, and Go.
    - Apply DevOps principles with CI/CD pipelines using Bamboo, Bitbucket, Git, and JIRA.
    - Monitor product health and troubleshoot production issues with tools like Honeycomb, Splunk, and CloudWatch.
    - Collaborate with stakeholders to gather requirements, present demos, and coordinate tasks across teams.
    - Resolve complex technical challenges and recommend enterprise-wide improvements.

    Must-Haves:
    - Minimum 5 years of related experience in software development.
    - Proficient in AWS services, full-stack development, and microservices.
    - Experience with Python, Java, JavaScript, and Go.
    - Strong DevOps experience and familiarity with CI/CD pipelines.
    - Ability to learn new business domains and applications quickly.

    Nice-to-Haves:
    - Experience with monitoring/observability tools like Honeycomb, Splunk, CloudWatch.
    - Familiarity with serverless and large-scale cloud architectures.
    - Agile or Scrum experience.
    - Strong communication and stakeholder collaboration skills.
    $69k-93k yearly est. 1d ago
  • Java Software Engineer

    LTIMindtree

    Requirements engineer job in Pittsburgh, PA

    About Us: LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of erstwhile Larsen & Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************************

    Job Title: Java Developer
    Location: Pittsburgh, PA (4 days onsite/week)
    Duration: FTE

    Job description:
    - 8 to 10 years of experience
    - Strong knowledge of Java and front-end UI technologies
    - Experience working with UI toolsets and programming languages: core JavaScript, Angular 11 or higher, JavaScript frameworks, CSS, HTML
    - Experience with the Spring Framework and Hibernate, and proficiency with Spring Boot
    - Solid coding and troubleshooting experience with Web Services and RESTful APIs
    - Experience and understanding of design patterns culminating in microservices development
    - Strong SQL skills to work on relational databases
    - Strong experience with SDLC and DevOps processes, CI/CD tools, Git, etc.
    - Strong problem solver with the ability to manage and lead the team to push the solution
    - Strong communication skills

    Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree ("LTIM").

    Benefits and Perks:
    - Comprehensive Medical Plan covering Medical, Dental, Vision
    - Short-Term and Long-Term Disability coverage
    - 401(k) Plan with company match
    - Life Insurance
    - Vacation Time, Sick Leave, Paid Holidays
    - Paid Paternity and Maternity Leave

    The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation, such as an annual performance-based bonus, sales incentive pay, and other forms of bonus or variable compensation.

    Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting. LTIMindtree is an equal opportunity employer committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
    $68k-90k yearly est. 4d ago
  • Microsoft 365 Engineer

    PEI Genesis · 4.3 company rating

    Requirements engineer job in Philadelphia, PA

    Job Details: Experienced | PEI-Genesis Philadelphia HQ - Philadelphia, PA | Hybrid | Full Time | 4-Year Degree

    *Equal Opportunity Employer Veterans/Disabled*

    Description: The Microsoft 365 Engineer reports to the Director of IT Infrastructure & Operations and is responsible for the administration, optimization, and governance of the enterprise Microsoft 365 platform. This role ensures secure, reliable, and efficient collaboration across the organization by managing core services such as Exchange Online, Intune, Entra ID, Microsoft Teams, SharePoint Online, and Windows Autopilot. The ideal candidate will not only provide day-to-day operational support but will also develop scalable processes, policies, and standards aligned with industry best practices to ensure compliance, security, and long-term sustainability of the Microsoft 365 environment.

    Qualifications: Education & Experience

    Required Qualifications
    - 1-3 years of hands-on experience managing Microsoft 365 and Azure enterprise environments.
    - Basic expertise in Exchange Online, SharePoint Online, Intune, Entra ID, and Windows Autopilot.
    - Experience with PowerShell scripting for automation and administration.
    - Solid understanding of IT security, compliance, and governance best practices.
    - Ability to troubleshoot and communicate effectively.
    - Associate's or Bachelor's degree in Computer Science, Information Technology, or related field (or equivalent experience).
    - Interest in pursuing relevant certifications (e.g., Microsoft 365 Certified: Modern Desktop Administrator Associate).

    Preferred Qualifications
    - Microsoft certifications such as: Microsoft 365 Certified: Enterprise Administrator Expert; Microsoft 365 Certified: Security Administrator Associate; Microsoft Certified: Azure Administrator Associate.
    - Background in developing IT policies, standards, and governance documentation.
    - Experience supporting enterprise device deployments and endpoint management throughout the lifecycle.
    - Demonstrated interest in process optimization and automation initiatives.
    - Experience with IT service management frameworks (ITIL or equivalent).

    Essential Tasks & Responsibilities:

    Microsoft 365 Platform Administration
    - Manage Exchange Online, Teams, SharePoint Online, and OneDrive to ensure availability, security, and performance.
    - Oversee mailbox provisioning, migrations, hybrid integrations, and advanced troubleshooting.
    - Implement and enforce email and collaboration security policies (DLP, anti-spam, anti-phishing, retention).

    Device Management & Endpoint Security
    - Configure, manage, and support Microsoft Intune for enterprise device management.
    - Enforce compliance, security baselines, and conditional access policies.
    - Troubleshoot enrollment, deployment, and policy-related issues.
    - Assist with Windows Autopilot deployment profiles; help integrate Autopilot with Active Directory/Entra ID; troubleshoot Autopilot enrollment and deployment issues.
    - Document processes and support training for IT staff.

    Identity Services
    - Administer Entra ID and hybrid identity integrations with on-premises systems.
    - Implement and manage automation scripts using PowerShell and Azure CLI.

    Security & Compliance
    - Collaborate with Security teams to implement Microsoft Defender threat protection tools.
    - Participate in security reviews, audits, and remediation efforts.
    - Develop and maintain governance policies for access, identity, and compliance aligned with regulatory standards (e.g., ISO 27001, NIST, GDPR).

    Process Development & Documentation
    - Create and maintain operational runbooks, technical documentation, and governance playbooks.
    - Develop and enforce policies, standards, and best practices for Microsoft 365 adoption and usage.
    - Provide tier-3 support for escalated Microsoft 365 and Azure-related issues.
    - Partner with IT and business stakeholders to design and deliver solutions that improve productivity.
    - Conduct training sessions and knowledge transfers to IT staff.
    $82k-105k yearly est. 60d+ ago
  • Cybersecurity Engineer III (ISSE)

    Omni Technologies LLC · 3.9 company rating

    Requirements engineer job in Philadelphia, PA

    Job Title: Cybersecurity Engineer III (ISSE) Primary Location: USA - Philadelphia, PA Security Clearance: Secret Schedule: Full-time, On-site . Basic Qualifications: An individual must meet the following criteria to be considered: U.S. Citizen Pass a background investigation. Possess an active SECRET security clearance. Bachelor's degree in computer science, information technology, or an equivalent STEM degree from an accredited college or university. Three (3) years professional experience capturing and refining information security operational and security requirements and ensuring those requirements are properly addressed through purposeful development, and configuration; and implementing security controls, configuration changes, software/hardware updates/patches, vulnerability scanning, and securing configurations. Must possess DoD 8570-compliant security certifications to meet IAT II (Security+ CE, CCNA-Security, CySA+, etc.) Job Highlights: In this role, you will be responsible for the development, monitoring, and execution of the Cybersecurity Program in support of the Navy, including DoD Information A&A and RMF services. The effort includes Cybersecurity policy, reviewing A&A artifacts, performing A&A validation, implementing security postures, serving as SME in cybersecurity lifecycle management, coordinating, implementing, and sustaining labs under RMF. General Required Skills: Related experience with the Navy / Department of Defense Key Job Functions: Assist with the developing, maintaining, and tracking Risk Management Framework (RMF) system security plans, which include System Categorization Forms, Platform Information Technology (PIT) Determination Checklists, Assess Only (AO) Determination Checklists, Implementation Plans, System Level Continuous Monitoring (SLCM) Strategies, System Level Policies, Hardware Lists, Software List, System Diagrams, Privacy Impact Assessments (PIA), and Plans of Action and Milestones (POA&M). 
* Execute the RMF process in support of obtaining and maintaining Interim Authority to Test (IATT), AO approval, Authorization to Operate (ATO), and Denial of Authorization to Operate (DATO).
* Identify and tailor IT and Cyber Security (CS) control baselines based on RMF guidelines and categorization of the RMF boundary.
* Perform Ports, Protocols, and Services Management (PPSM).
* Perform IT and CS vulnerability-level risk assessments.
* Execute security control testing as required by a risk assessment or annual security review (ASR).
* Mitigate and remediate IT and CS system-level vulnerabilities for all assets within the boundary per STIG requirements.
* Develop and maintain Plans of Action and Milestones (POA&M) in Enterprise Mission Assurance Support Service (eMASS).
* Develop and maintain system-level IT and CS policies and procedures for respective RMF boundaries in accordance with guidance provided by the command ISSMs.
* Implement and assess STIGs and SRGs.
* Perform and develop vulnerability assessments with automated tools such as Assured Compliance Assessment Solution (ACAS), Security Content Automation Protocol (SCAP) Compliance Checker (SCC), and Evaluate STIG.
* Deploy security updates to Information System components.
* Perform routine audits of IT system hardware and software components.
* Maintain inventory of Information System components.
* Participate in IT change control and configuration management processes.
* Upload vulnerability data in Vulnerability Remediation Asset Manager (VRAM).
* Image or re-image assets that are part of the assigned RMF boundary.
* Install software and troubleshoot software issues as necessary to support compliance of the RMF boundaries' assets.
* Assist with removal of Solid-State Drives (SSD), Hard Disk Drives (HDD), or other critical components of assets before destruction and removal from the RMF boundary.
* Provide cybersecurity patching of assets in response to DoD and DoN TASKORDs, FRAGORDs, or as required by Command ISSM, ACIO, and/or Code 104 management.
* Support configuration change documentation and control processes, and maintain DoD STIG compliance.
* Support cyber compliance of assets that are part of an enterprise IT network, including Windows servers and Cisco networking hardware: assessing vulnerabilities, patching, and meeting STIG requirements for the hardware.
* Report compliance issues of network hardware to management to avoid operational loss of the network.

Benefits:
* Competitive salary
* Comprehensive medical coverage
* Dental, Vision, STD/LTD, and Life Insurance coverage - 100% premium paid by OMNI
* 401(k) Retirement Plan - 3% match plus 50% match of 4% and 5% deferrals, immediately vested
* Paid Time Off (PTO) - 4 weeks (20 days) of front-loaded PTO per year, with a maximum rollover of 40 hours each year
* Holidays - all employees are given six (6) paid days off and five (5) floating holidays in observance of U.S. federal holidays
* Health Reimbursement Arrangement (HRA) - 100% funded by OMNI ($7,400 individual / $14,800 family)
* Employee Referral Program - referral bonus paid for eligible candidates after 90 days of employment
* Education Assistance & Continuing Education Program - employees can use up to $5,000 annually toward continuing education, certifications, training, and conference attendance
* Community Outreach - employees who volunteer 40 or more hours a year to community service or OMNI Community Outreach events receive a cash bonus

About OMNI: OMNI is a global solutions provider. We deliver innovative technology-driven solutions and services in the public, private, national defense, and intelligence sectors that help organizations stay ready in an ever-changing technological environment.
We help our clients strategize for their most important goals and use advanced business intelligence to understand the drivers behind their performance. We innovate to help our clients deliver advanced systems, products, and services. OMNI is looking for world-class talent ready to tackle challenging projects that will enable our customers to achieve their most demanding technical and operational goals. At OMNI Technologies, you'll use advanced methods and technologies to solve our nation's emerging challenges. We offer more than a job - we offer a team. We are an equal opportunity employer offering competitive salaries, comprehensive health benefits, and equity packages. Learn more about us at *************************
    $72k-93k yearly est. Auto-Apply 60d+ ago
  • Branch Engineer

    Barnhart Crane & Rigging 4.7company rating

    Requirements engineer job in Philadelphia, PA

    * Develop crane layouts and rigging drawings for lift plans using Barnhart's vast array of custom lifting tools.
    * Create layout drawings for heavy machinery moving projects utilizing gantries, slide systems, modular trailers, lift & hoist towers, and other custom Barnhart tools.
    * Create drawings for heavy transportation projects using modular trailers for over-the-road hauling, on-site hauling, barge roll-on and barge roll-off projects.
    * Use engineering software such as AutoCAD, Inventor, Mathcad, RISA, and Barnhart calculation spreadsheets to develop solutions to heavy lift and heavy transport projects.
    * Assist sales personnel by performing field walk-down assessments of jobs to develop equipment lists, identify potential issues, and create technical sketches for project bids.
    * Provide technical support to Project Managers and Superintendents on job sites as issues arise requiring changes to project plans, working from the office or at the job site when required.
    * Design custom lifting or support tools for job-specific needs (dependent on project) using engineering standards such as the AISC Steel Construction Manual, ASME Design of Below-the-Hook Lifting Devices, and other industry standards.
    * Function as field technical liaison for complex projects as a risk manager or by performing safety & quality evaluations.

    Preferred Qualifications:
    * Civil or Mechanical Engineering degree with a 3.0 minimum GPA
    * AutoCAD
    * Communication skills
    * MS Excel, MS PowerPoint, MS Word
    * Mathematical skills
    * Reasoning ability

    PURPOSE - Barnhart is built on a strong foundation of serving others. The fruit of our labor is used to grow the company, care for our employees, and serve those in our communities and around the world.

    MINDS OVER MATTER - Barnhart has built a nationwide reputation for solving problems. We specialize in the lifting, heavy-rigging, and heavy transport of major components used in American industry.
NETWORK - Barnhart has built teams that form one of our industry's strongest networks of talent and resources with over 60 branch locations across the U.S. working together to serve our customers. This growing network offers our team members constant opportunity for career growth and professional development. CULTURE - Barnhart has a strong team culture -- the "One TEAM." We are looking for smart, hard-working people who strive for excellence in their work and appreciate collaboration. Join a team that values Safety, Servant Leadership, Quality Service, Innovation, Continuous Improvement, Fairness, and Profit with a Purpose. EOE/AA Minority/Female/Disability/Veteran
    $73k-108k yearly est. 42d ago
  • Engineer (Radiological Work)

    Aptim 4.6company rating

    Requirements engineer job in West Mifflin, PA

    At APTIM, we come to work each day knowing that we are making an impact on the world. Our work spans from safeguarding and maintaining critical infrastructure to helping communities recover from natural disasters, from empowering our armed forces and first responders to reducing carbon and energy use, and from making cities more resilient against the threats of climate change to restoring contaminated ecological systems.

    **Job Overview:**
    We are searching for Engineers to support APTIM's nuclear decommissioning (D&D) project at the Bettis Atomic Laboratory in West Mifflin, PA. Engineers will be part of a multi-disciplined team supporting our projects and will be focused on engineering support activities. APTIM is a leading provider of integrated, comprehensive services in a variety of markets and industries. We offer consulting, engineering, construction, scientific program, and project management services to meet customer needs in environmental and infrastructure markets. Our highly skilled multi-discipline teams of professionals are strategically located throughout the United States and abroad to efficiently provide safe and sustainable solutions to complex challenges faced by government, public institutions, and commercial clients. These roles involve planning and executing engineering solutions for the characterization, dismantling, and demolition of radiologically and environmentally impacted structures. APTIM expects our engineers to be creative problem solvers who actively seek innovative, practical, and cost-effective solutions to complex challenges. They are encouraged to think beyond conventional approaches, leverage multidisciplinary expertise, and apply critical thinking to deliver safe, sustainable, and efficient outcomes for every project.
Responsibilities include developing technical work documents, sampling plans, and waste characterization reports, as well as performing structural evaluations and mechanical design for safe and compliant project execution. Positions offer a mix of office-based engineering and field work, requiring strong problem-solving skills and collaboration with multidisciplinary teams.

**Key Responsibilities:**
+ Develop and maintain technical work documents, engineering specifications, and detailed work packages for decommissioning and remediation projects.
+ Design and implement sampling plans for buildings, soil, and waste to characterize radiological and environmental contaminants, ensuring accurate data collection for safe demolition and waste disposition.
+ Provide regulatory analysis and ensure adherence to environmental, radiological, and industrial safety standards.
+ Perform structural evaluations and analyses for building modifications and demolition activities.
+ Perform waste characterization evaluations and prepare documentation for proper handling, packaging, labeling, and disposal of radioactive, hazardous, and nonhazardous materials.
+ Work independently and as part of a team, including leadership roles, to coordinate with engineering, operations, and safety teams for project success.
+ Assist in developing time and material estimates for project proposals and execution plans.

**Basic Qualifications:**
+ Bachelor's or master's degree in Engineering, Environmental Science, or a related field.
+ In lieu of a degree, extensive D&D experience at Bettis Laboratory will be considered.
+ Minimum 5 years of experience in Nuclear, Construction, Demolition, or Remediation fields.
+ Demonstrated experience in radiological work, environmental sampling, and waste management.
+ Familiarity with DOE regulations, OSHA standards, and hazardous waste handling.
+ Strong computer skills (Word, Excel, PowerPoint) and technical engineering and design software.
+ Excellent verbal and writing skills.
+ Must be a US Citizen (no dual citizenship) and be able to pass a background check to gain access to government facilities. For more information, please see DOE Order 472.2.
+ Ability to work independently and in a diverse team setting.
+ Willingness to work across engineering disciplines.
+ Strong multitasking skills.

**ABOUT APTIM**
APTIM is committed to accelerating the transition toward a clean and efficient energy economy, building a sustainable future for our communities and natural world, and creating a more inclusive and equitable environment that celebrates the diversity of our communities. We specialize in environmental, resilience, sustainability, and energy solutions, as well as technical and data solutions, program management, and critical infrastructure. For every challenge our clients face, there is an opportunity for APTIM to innovate a fit-for-purpose solution that will raise your organization or community to a new standard of excellence.

What you can expect from APTIM:
+ Work that is worthy of your time and talent
+ Respect and flexibility to live a full life at work and at home
+ Dogged determination to deliver for our clients and communities
+ A voice in making our company better
+ Investment into your personal and professional development

As of the date of this posting, a good faith estimate of the current pay range for this position is $80,000 - $110,000 per year. Compensation depends on several factors, including experience, education, key skills, geographic location of the position, client requirements, external market competitiveness, and internal equity among other employees within APTIM.

**Employee Benefits**
APTIM Federal Services is committed to providing an extensive range of benefits that protect and promote the health and financial well-being of our employees and their families through the APTIM Benefits Marketplace ***********************************.
+ Medical, vision, and dental insurance: Through the marketplace, our employees can choose benefits from five metallic levels and 10+ carriers to find the right benefits that work for them in their location.
+ Life insurance
+ Short-term and long-term disability insurance
+ Paid holidays, vacation, and sick leave (eligibility based on company policy and applicable law)
+ 401(k): APTIM offers three 401(k) plans through the Aon Pooled Employer Plan (PEP). The specific plan you are eligible for depends on the business unit you are in. The details of the largest plan are found here:
  + APTIM 2025 401(k) Plan Features (makeityoursource.com) (***********************************getattachment/eaa3a0a0-e46b-447b-b8b7-18f2fbf26eae/APTIM-401k-Plan-Features.pdf)
  + APTIM - Helpful Documents

**Watch our video:** **About APTIM - In Pursuit of Better**

Equal Opportunity Employer Minorities/Women/Protected Veterans/Disabled. Applicants with a physical or mental disability who require a reasonable accommodation for any part of the application or hiring process may make their request known by e-mailing ********************************** or calling ************ for assistance. EOE/Vets/Disability
    $80k-110k yearly 9d ago
  • Ruby on Rails Engineer

    Us Tech Solutions 4.4company rating

    Requirements engineer job in Plymouth Meeting, PA

    US Tech Solutions is a global staff augmentation firm providing a wide range of talent on-demand and total workforce solutions. To know more about US Tech Solutions, please visit our website ************************ We are constantly on the lookout for professionals who can fulfill the staffing needs of our clients, set the correct expectations, and thus accelerate the mutual growth of both the individual and the organization. Keeping the same intent in mind, we would like you to consider this job opening with US Tech Solutions that fits your expertise and skill set.

    Job Description
    · 6 or more years of experience as a technical lead, senior engineer, or solutions architect on enterprise programs.
    · Experience developing Ruby applications as a Principal/Senior Software Engineer.
    · Software development experience including OOP, concurrency programming, design patterns, RESTful service implementation, Service-Oriented Architecture, Test-Driven Development, Acceptance Testing, Transact-SQL, and SQL Server.
    · Experience creating tools to automate the deployment of an enterprise software solution to the cloud.
    · Strong object-oriented design and development experience.
    · Knowledge of design patterns and their implementation.
    · Multi-tier application design and development.
    · Multi-threaded design and development.
    · Excellent problem-solving skills.
    · Agile or Lean software development experience such as Kanban, Scrum, Test-Driven Development, and/or Extreme Programming methodologies.
    · Experience using automated testing tools like RSpec, Capybara, Jasmine, Selenium, and/or other test automation tools.
    · Experience developing your own testing tools to facilitate testing is a plus.
    · Experience helping others to design, write, conduct, and direct the development of tests.
    · Positive team-player attitude with excellent verbal and written communication skills.
    · Self-motivated and willing to “do what it takes” to get the job done.
· High degree of organizational skills; strong written and verbal communication skills.
· High degree of self-motivation to learn new methodologies that will enhance job performance.

Qualifications

Primary Responsibilities:
· Ability to understand and influence the vision of program strategy.
· Technical ownership of a specific solution area.
· Help build standards and best practices based on industry trends.
· Design and develop solution strategy which supports productivity, maintainability, interoperability, and product growth.
· Prevent the decision process from stalling by ensuring solution-level issues are addressed promptly.
· Conduct, manage, and enforce code reviews.
· Conduct technical and feature risk assessments and communicate them to the architecture and product management groups.
· Educate and enforce clean code that follows the main programming principles.
· Enforce proper unit, integration, system, and performance-level tests, code coverage, and static/dynamic code quality metrics.
· Work with architects to ensure proper solutions based on the established architectural principles and patterns.
· Educate and enforce proper and efficient API/framework documentation.
· Mentor and guide technical resources within the team.
· Guide and participate in recruiting the best technical talent for the team.
· Write web services, business objects, and other middle-tier framework components.
· Use tools and technologies to extend and improve the functionality of our product.
· Communicate with team members to clarify requirements and overcome obstacles to meet the team goals.
· Leverage open source and other technologies and languages outside of the framework should the need arise, and autonomously make those decisions.
· Develop cutting-edge solutions to maximize the performance, scalability, and distributed processing capabilities of the system.
· Provide troubleshooting and root cause analysis for issues that are escalated to the team.
· Work with development teams in an agile context as it relates to software development, including test-driven development, automated unit testing and test fixtures, and pair programming.
    $76k-115k yearly est. 12h ago
  • SAN Engineer

    Tech Tammina 4.2company rating

    Requirements engineer job in Mechanicsburg, PA

    This engineer will be a system administrator for our Storage Area Network (SAN) platforms and will routinely work with vendors, project managers, system administrators, database administrators, application developers, and end users in order to maintain and improve the performance of our servers, databases, and applications. We are looking for a flexible self-starter with a positive attitude who enjoys supporting and helping others in a fast-paced, team-oriented environment.

    Qualifications:
    * 10 years managing enterprise-level SAN storage, including Fibre Channel switches
    * Extensive build experience with EMC (VMAX, Symmetrix, VNX, etc.), NetApp, and Hitachi (AMS/HUS/HCP) storage technology
    * Thorough knowledge of SAN zoning using Brocade FOS, including virtual fabrics
    * Extensive working knowledge of storage vendor management software such as EMC Unisphere, SYMCLI, SMC/SPA (Symmetrix Management Console / Symmetrix Performance Analyzer), ECC, and ProSphere
    * Expertise with day-to-day systems management and administration, including proactive monitoring and capacity planning
    * Extensive experience in backup and recovery design/implementation
    * Experience developing a storage strategy that determines when and where to use different types of storage
    * Hands-on storage management and administration of UNIX, Linux, and Wintel systems
    * Exceptional troubleshooting and problem-resolution skills
    * Experience working within complex technical environments
    * Contingency planning
    * Operational experience in large-scale data centers

    Desired:
    * EMC VPLEX storage virtualization technology
    * Working knowledge of Microsoft SQL Server and Oracle databases
    * Veritas file system experience and/or certification
    * Hitachi storage experience
    * Familiarity with clustering technologies

    Education: Bachelor's degree (EE, CS, etc.)
Candidates without a Bachelor's degree will NOT be considered.

Desired Certifications/Key Experience: Applicable EMC, NetApp, and HDS SAN training and certifications desired.

Additional Information: All your information will be kept confidential according to EEO guidelines.
    $68k-97k yearly est. 12h ago
  • AI/LLM Engineer

    Gap International 4.4company rating

    Requirements engineer job in Springfield, PA

    GAP INTERNATIONAL - A unique, purpose-driven consulting company

    Gap International is a global consulting firm that partners with executives to achieve breakthrough business results. We're a team of passionate problem-solvers committed to innovation and transformation. As part of our growth strategy, we're expanding our AI capabilities to deliver cutting-edge solutions across the enterprise. This role offers the opportunity to work on impactful projects, including dialog-driven learning and business process acceleration.

    ABOUT THE ROLE
    You will be part of a tight-knit, highly collaborative research and development team focused on building real-world AI applications. As an AI/LLM Engineer, you will lead the design and implementation of advanced systems centered on large language models and natural language understanding. Your work will involve techniques such as fine-tuning, retrieval-augmented generation (RAG), and prompt engineering to solve meaningful business challenges. You will work closely with teams to explore data, shape requirements, and integrate AI capabilities into internal tools and digital products. Your contributions will drive innovation and shape transformative projects across the organization.

    RESPONSIBILITIES
    * Design and implement AI systems using Large Language Models (LLMs) to solve enterprise challenges and improve operational efficiency.
    * Fine-tune foundation models and apply techniques such as retrieval-augmented generation (RAG) and prompt engineering.
    * Collaborate with cross-functional teams to integrate AI capabilities into internal tools, workflows, and digital products.
    * Evaluate and experiment with frameworks and tools like Hugging Face, LangChain, OpenAI, Claude, and LLaMA.
    * Build and deploy scalable LLM solutions across cloud-based and local environments, leveraging container orchestration tools where applicable.
* Lead architectural decisions and guide implementation of reusable frameworks for model evaluation, optimization, and deployment.
* Design, develop, and optimize autonomous AI agents.
* Apply emerging research to practical use cases, staying current with advancements in NLP and generative AI.
* Mentor team members and contribute to a collaborative, high-performance development environment.

REQUIREMENTS
* Minimum Master's degree in Computer Science, Engineering, or a related technical field (PhD preferred); candidates with a Master's must have at least 2 years of relevant work experience since graduating.
* Strong experience developing and deploying LLM-based applications, including fine-tuning and prompt engineering.
* Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow, along with version control tools like GitHub.
* Familiarity with cloud-based AI services and APIs (e.g., OpenAI, AWS, Azure, GCP) and experience deploying models in scalable environments.
* Solid understanding of NLP concepts, transformer architectures, and evaluation metrics for LLM systems.
* Experience designing and implementing end-to-end training pipelines, including data shaping, model tuning, and performance optimization.
* Ability to communicate complex technical concepts clearly across teams and to non-technical stakeholders.
* Contributions to the AI/ML community through publications, open-source projects, or thought leadership.

GAP INTERNATIONAL ASSOCIATES
* Purposeful people at work impacting companies around the world.
* People who thrive in a learning environment and enjoy learning, growing, and performing at their best; energized to continually push beyond their comfort zone.
* Comfortable with ambiguity; eager to take on things they don't know how to do.
* Curious and flexible when it comes to their own growth and development, as well as receptive to coaching and feedback to maximize their potential.
* Willing to communicate and contribute thoughts, insights, and new ideas to senior leaders both internally and externally.

WHAT WE OFFER
* A role with significant impact and visibility within the company.
* Opportunities for professional growth and development in a supportive and collaborative environment.
* Competitive salary commensurate with experience.
* A dynamic and inclusive company culture.

TO APPLY
Please submit your resume (including a list of publications), a cover letter explaining your interest and how your skills and experience meet the qualifications of the position, and any relevant portfolio or GitHub links showcasing your previous work or projects. Gap International associates are based out of our corporate office in the Philadelphia metropolitan area. In order to be considered for this role, applicants should be legally authorized to work in the US. Gap International is an equal opportunity employer and values diversity. All employment is decided on the basis of qualifications, merit, and business need, and all qualified candidates will receive consideration.
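At its core, the retrieval-augmented generation (RAG) technique this role centers on means: retrieve the documents most relevant to a query, then pass them to the language model as context. A toy sketch of the retrieval step, with invented documents and a bag-of-words similarity standing in for a real embedding model and vector database:

```python
# Toy RAG retrieval sketch. The documents, query, and bag-of-words
# "embedding" are illustrative stand-ins only; production systems use
# learned embeddings and a vector database.
import math
from collections import Counter

def embed(text):
    """Bag-of-words token counts standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Our PTO policy grants four weeks of paid time off per year.",
    "The claims workflow routes new submissions to an adjuster.",
    "Model fine-tuning runs nightly on the training cluster.",
]
context = retrieve("how much paid time off do employees get", docs)
# The retrieved context would then be prepended to the LLM prompt.
print(context[0])
```

The same interface generalizes directly: swap `embed` for an embedding-model call and `retrieve` for a vector-database query, and the surrounding prompt-assembly code is unchanged.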
    $71k-103k yearly est. Auto-Apply 60d+ ago
  • Data Engineer

    Brooksource 4.1company rating

    Requirements engineer job in Lancaster, PA

    Contract-to-Hire (6 months) | Lancaster, PA

    We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and data infrastructure that support analytics, AI, and data-driven decision-making. This role is hands-on and focused on building reliable, well-modeled datasets across modern cloud and lakehouse platforms. You will partner closely with analytics, data science, and business teams to deliver high-quality data solutions.

    Key Responsibilities
    * Design, build, and optimize batch and real-time data pipelines for ingestion, transformation, and delivery
    * Develop and maintain data models that support analytics, reporting, and machine learning use cases
    * Design scalable data architecture integrating structured, semi-structured, and unstructured data sources
    * Build and support ETL/ELT workflows using modern tools (e.g., dbt, Airflow, Databricks, Glue)
    * Ingest and integrate data from multiple internal and external sources, including APIs, databases, and cloud services
    * Manage and optimize cloud-based data platforms (AWS, Azure, or GCP), including lakehouse technologies such as Snowflake or Databricks
    * Implement data quality, validation, governance, lineage, and monitoring processes
    * Support advanced analytics and machine learning data pipelines
    * Partner with analysts, data scientists, and stakeholders to deliver trusted, well-structured datasets
    * Continuously improve data workflows for performance, scalability, and cost efficiency
    * Contribute to documentation, standards, and best practices across the data engineering function

    Required Qualifications
    * 3-7 years of experience in data engineering or a related role
    * Strong proficiency in SQL and at least one programming language (Python, Scala, or Java)
    * Hands-on experience with modern data platforms (Snowflake, Databricks, or similar)
    * Experience building and orchestrating data pipelines in cloud environments
    * Working knowledge of cloud services (AWS, Azure, or GCP)
    * Experience with version control, CI/CD, and modern development practices
    * Strong analytical, problem-solving, and communication skills
    * Ability to work effectively in a fast-paced, collaborative environment

    Preferred / Nice-to-Have
    * Experience with dbt, Airflow, or similar orchestration tools
    * Exposure to machine learning or advanced analytics pipelines
    * Experience implementing data governance or quality frameworks
    * Familiarity with SAP data platforms (e.g., BW, Datasphere, Business Data Cloud)
    * Experience using LLMs or AI-assisted tooling for automation, documentation, or data workflows
    * Relevant certifications in cloud, data platforms, or AI technologies
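The extract-transform-load pattern at the heart of the responsibilities above can be sketched in a few lines. This is an illustrative toy, not a production pipeline: the table, columns, and sample rows are invented, and a real deployment would use an orchestrator such as Airflow and a warehouse such as Snowflake rather than in-memory SQLite.

```python
# Minimal ETL sketch: extract raw records, transform (type-cast and
# filter), load into a destination table. All names are hypothetical.
import sqlite3

def extract():
    # In practice: pull from APIs, files, or source databases.
    return [("2024-01-01", "widgets", "19.90"),
            ("2024-01-02", "gadgets", "5.00")]

def transform(rows):
    # Normalize types and drop invalid records (negative amounts).
    out = []
    for day, product, amount in rows:
        amt = float(amount)
        if amt >= 0:
            out.append((day, product, amt))
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales "
                 "(day TEXT, product TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(f"total sales: {total:.2f}")
```

In an orchestrated pipeline each of these three functions would typically become its own task, so failures can be retried per stage and data quality checks inserted between them.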
    $91k-125k yearly est. 1d ago
  • Azure DevOps Engineer with P&C exp.

    Valuemomentum 3.6company rating

    Requirements engineer job in Pittsburgh, PA

    Responsibilities
    The day-to-day work activities are:
    * CI/CD Pipeline Management: Design, implement, and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines for Guidewire applications using tools like TeamCity, GitLab CI, and others.
    * Infrastructure Automation: Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or CloudFormation.
    * Monitoring and Logging: Implement and manage monitoring and logging solutions to ensure system reliability, performance, and security.
    * Collaboration: Work closely with development, QA, and operations teams to streamline processes and improve efficiency.
    * Security: Enhance the security of the IT infrastructure and ensure compliance with industry standards and best practices.
    * Troubleshooting: Identify and resolve infrastructure and application issues, ensuring minimal downtime and optimal performance.
    * Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and procedures.

    Requirements
    Candidates must have the following mandatory skills to have their profile assessed:
    * Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
    * Experience: 6-10 years of experience in a DevOps or systems engineering role; hands-on experience with cloud platforms (AWS, Azure, GCP).
    * Technical Skills:
      * Proficiency in scripting languages (e.g., Python, PowerShell) (2-3 years)
      * Experience with CI/CD tools (e.g., Jenkins, GitLab CI) (3-5 years)
      * Knowledge of containerization technologies (e.g., Docker, Kubernetes) - good to have
      * Strong understanding of networking, security, and system administration (3-5 years)
      * Familiarity with monitoring tools such as Dynatrace, Datadog, or Splunk
      * Familiarity with Agile development methodologies
    * Soft Skills: Excellent problem-solving and analytical skills; strong communication and teamwork abilities; ability to work independently.

    About ValueMomentum
    ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
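The CI/CD pipeline work described above typically lives in a pipeline definition file checked into the repository. A hypothetical `.gitlab-ci.yml` sketch, since the posting names GitLab CI; the stage names, image, and script commands are all invented for illustration:

```yaml
# Hypothetical GitLab CI pipeline: build, test, then deploy on main.
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  image: gradle:8-jdk17          # assumed build image
  script:
    - gradle assemble

run-tests:
  stage: test
  image: gradle:8-jdk17
  script:
    - gradle test

deploy-dev:
  stage: deploy
  script:
    - ./scripts/deploy.sh dev    # hypothetical deploy script
  environment: development
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
```

The same three-stage shape carries over to TeamCity or Jenkins; only the configuration syntax changes.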
    $78k-100k yearly est. 4d ago
  • Data Engineer

    Realtime Recruitment

    Requirements engineer job in Philadelphia, PA

    Data Engineer - Job Opportunity
    Full time | Permanent | Remote - East Coast only
    Please note this role is open to US citizens or Green Card holders only.

    We're looking for a Data Engineer to help build and enhance scalable data systems that power analytics, reporting, and business decision-making. This role is ideal for someone who enjoys solving complex technical challenges, optimizing data workflows, and collaborating across teams to deliver reliable, high-quality data solutions.

    What You'll Do
    * Develop and maintain scalable data infrastructure, cloud-native workflows, and ETL/ELT pipelines supporting analytics and operational workloads.
    * Transform, model, and organize data from multiple sources to enable accurate reporting and data-driven insights.
    * Improve data quality and system performance by identifying issues, optimizing architecture, and enhancing reliability and scalability.
    * Monitor pipelines, troubleshoot discrepancies, and resolve data or platform issues, including participating in on-call support when needed.
    * Prototype analytical tools, automation solutions, and algorithms to support complex analysis and drive operational efficiency.
    * Collaborate closely with BI, Finance, and cross-functional teams to deliver robust and scalable data products.
    * Create and maintain clear, detailed documentation (configurations, specifications, test scripts, and project tracking).
    * Contribute to Agile development processes, engineering excellence, and continuous improvement initiatives.

    What You Bring
    * Bachelor's degree in Computer Science or a related technical field.
    * 2-4 years of hands-on SQL experience (Oracle, PostgreSQL, etc.).
    * 2-4 years of experience with Java or Groovy.
    * 2+ years working with orchestration and ingestion tools (e.g., Airflow, Airbyte).
    * 2+ years integrating with APIs (SOAP, REST).
    * Experience with cloud data warehouses and modern ELT/ETL frameworks (e.g., Snowflake, Redshift, dbt) is a plus.
    * Comfortable working in an Agile environment.
    * Practical knowledge of version control and CI/CD workflows.
    * Experience with automation, including unit and integration testing.
    * Understanding of cloud storage solutions (e.g., S3, Blob Storage, Object Store).
    * Proactive mindset with strong analytical, logical-thinking, and consultative skills.
    * Ability to reason about design decisions and understand their broader technical impact.
    * Strong collaboration, adaptability, and prioritization abilities.
    * Excellent problem-solving and troubleshooting skills.
    $81k-111k yearly est. 2d ago

Learn more about requirements engineer jobs

What are the top employers for requirements engineer in PA?

Top 10 Requirements Engineer companies in PA

  1. Tata Group

  2. Jacobs Enterprises

  3. HNTB

  4. Marriott International

  5. CBRE Group

  6. Exelon

  7. Whiting-Turner

  8. Lockheed Martin

  9. Quaker Houghton

  10. Carnegie Mellon University
