Senior Data Engineer
Data engineer job in Bethlehem, PA
Hybrid (Bethlehem, PA)
Contract
Applicants must be authorized to work in the U.S. without sponsorship
We're looking for a Senior Data Engineer to join our growing technology team and help shape the future of our enterprise data landscape. This is a hands-on, high-impact opportunity to make recommendations and to build and evolve a modern data platform using Snowflake and cloud-based EDW solutions.
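By way of illustration, here is a minimal sketch of the kind of Snowflake work this role centers on, assuming the official Snowflake Python connector (snowflake-connector-python); the account, credentials, and object names are placeholders, not details from this posting.

```python
# Minimal sketch: running a query against Snowflake from Python.
# All connection values below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account identifier
    user="my_user",            # placeholder
    password="...",            # use a secrets manager in practice
    warehouse="ANALYTICS_WH",  # placeholder warehouse
    database="EDW",            # placeholder database
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Example of the performance-minded metadata inspection the role describes:
    cur.execute("SELECT table_name, row_count FROM information_schema.tables LIMIT 10")
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```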
How You'll Impact Results:
Drive the evolution and architecture of scalable, secure, cloud-native data platforms
Design, build, and maintain data models, pipelines, and integration patterns across the data lake, data warehouse, and consumption layers
Lead deployment of long-term data products and infuse data and analytics capabilities across business and IT
Optimize data pipelines and warehouse performance for accuracy, accessibility, and speed
Collaborate cross-functionally to deliver data, experimentation, and analytics solutions
Implement systems to monitor data quality and ensure reliability and availability of Production data for downstream users, leadership teams, and business processes
Recommend and implement best practices for query performance, storage, and resource efficiency
Test and clearly document data assets, pipelines, and architecture to support usability and scale
Engage across project phases and serve as a key contributor in strategic data architecture initiatives
Your Qualifications That Will Ensure Success:
Required:
10+ years of experience in Information Technology / Data Engineering, including professional database and data warehouse development
Advanced proficiency in SQL, data modeling, and performance tuning
Experience in system configuration, security administration, and performance optimization
Deep experience with Snowflake and modern cloud data platforms (AWS, Azure, or GCP)
Familiarity with developing cloud data applications (AWS, Azure, Google Cloud) and/or standard CI/CD tools like Azure DevOps or GitHub
Strong analytical, problem-solving, and documentation skills
Proficiency with Microsoft Excel and common data analysis tools
Ability to troubleshoot technical issues and provide system support to non-technical users.
Preferred:
Experience integrating SAP ECC data into cloud-native platforms
Exposure to AI/ML, API development, or Boomi Atmosphere
Prior experience in consumer packaged goods (CPG), Food / Beverage industry, or manufacturing
Machine Learning Data Scientist
Data engineer job in Pittsburgh, PA
Length: 6 Month Contract to Start
Please, no agencies. Direct applicants must be currently authorized to work in the United States; no sponsorship available.
Job Description:
We are looking for a Data Scientist/Engineer with machine learning expertise and strong skills in Python, time-series modeling, and SCADA/industrial data. In this role, you will build and deploy ML models for forecasting, anomaly detection, and predictive maintenance using high-frequency sensor and operational data.
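As a rough illustration of the anomaly-detection work described above, here is a minimal sketch that flags outliers in a high-frequency sensor series using a rolling z-score; the column name, window size, and threshold are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: rolling z-score anomaly detection on sensor data.
import pandas as pd

def flag_anomalies(df: pd.DataFrame, col: str = "sensor_value",
                   window: int = 60, threshold: float = 3.0) -> pd.DataFrame:
    """Flag points more than `threshold` rolling std devs from the rolling mean."""
    rolling = df[col].rolling(window=window, min_periods=window)
    zscore = (df[col] - rolling.mean()) / rolling.std()
    out = df.copy()
    out["anomaly"] = zscore.abs() > threshold
    return out

# Usage with a hypothetical high-frequency series:
# ts = pd.DataFrame({"sensor_value": readings}, index=pd.date_range("2024-01-01", periods=len(readings), freq="s"))
# flagged = flag_anomalies(ts)
```

In production, this simple baseline would typically be replaced or complemented by the forecasting and predictive-maintenance models the role describes.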
Essential Duties and Responsibilities:
Develop ML models for time-series forecasting and anomaly detection
Build data pipelines for SCADA/IIoT data ingestion and processing
Perform feature engineering and signal analysis on time-series data
Deploy models in production using APIs, microservices, and MLOps best practices
Collaborate with data engineers and domain experts to improve data quality and model performance
Qualifications:
Strong Python skills
Experience working with SCADA systems or industrial data historians
Solid understanding of time-series analytics and signal processing
Experience with cloud platforms and containerization (AWS/Azure/GCP, Docker)
POST-OFFER BACKGROUND CHECK IS REQUIRED. Digital Prospectors is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other characteristic protected by law. Digital Prospectors affirms the right of all individuals to equal opportunity and prohibits any form of discrimination or harassment.
Come see why DPC has achieved:
4.9/5 Star Glassdoor rating and the only staffing company (< 1,000 employees) to be voted into the national Top 10 'Employees' Choice - Best Places to Work' by Glassdoor.
Voted 'Best Staffing Firm to Temp/Contract For' seven times by Staffing Industry Analysts, as well as a 'Best Company to Work For' by Forbes, Fortune, and Inc. magazine.
As you are applying, please join us in fostering diversity, equity, and inclusion by completing the Invitation to Self-Identify form today!
Job #18135
Data Engineer (IoT)
Data engineer job in Pittsburgh, PA
As an IoT Data Engineer at CurvePoint, you will design, build, and optimize the data pipelines that power our Wi-AI sensing platform. Your work will focus on reliable, low-latency data acquisition from constrained on-prem IoT devices, efficient buffering and streaming, and scalable cloud-based storage and training workflows.
You will own how raw sensor data (e.g., wireless CSI, video, metadata) moves from edge devices with limited disk and compute into durable, well-structured datasets used for model training, evaluation, and auditability. You will work closely with hardware, ML, and infrastructure teams to ensure our data systems are fast, resilient, and cost-efficient at scale.
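To make the buffering and backpressure problem concrete, here is a minimal sketch of a bounded edge-side buffer that flushes fixed-size batches and evicts the oldest samples under backpressure; the capacity, batch size, and upload hook are illustrative assumptions, not part of the actual platform.

```python
# Minimal sketch: bounded buffer with batched flushing for a constrained edge device.
import collections

class EdgeBuffer:
    def __init__(self, capacity: int = 10_000, batch_size: int = 500):
        # deque with maxlen evicts the oldest samples when full (backpressure policy)
        self.buf = collections.deque(maxlen=capacity)
        self.batch_size = batch_size

    def append(self, sample: dict) -> None:
        self.buf.append(sample)

    def flush(self, upload) -> int:
        """Drain full batches through `upload`; returns number of samples sent."""
        sent = 0
        while len(self.buf) >= self.batch_size:
            batch = [self.buf.popleft() for _ in range(self.batch_size)]
            upload(batch)  # placeholder for compress + transfer to the on-prem host
            sent += len(batch)
        return sent
```

A real pipeline would add compression, persistence across restarts, and retry logic for the edge-to-host-to-cloud hops described above.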
Duties and Responsibilities
Edge & On-Prem Data Acquisition
Design and improve data capture pipelines on constrained IoT devices and host servers (limited disk, intermittent connectivity, real-time constraints).
Implement buffering, compression, batching, and backpressure strategies to prevent data loss.
Optimize data transfer from edge → on-prem host → cloud.
Streaming & Ingestion Pipelines
Build and maintain streaming or near-real-time ingestion pipelines for sensor data (e.g., CSI, video, logs, metadata).
Ensure data integrity, ordering, and recoverability across failures.
Design mechanisms for replay, partial re-ingestion, and audit trails.
Cloud Data Pipelines & Storage
Own cloud-side ingestion, storage layout, and lifecycle policies for large time-series datasets.
Balance cost, durability, and performance across hot, warm, and cold storage tiers.
Implement data versioning and dataset lineage to support model training and reproducibility.
Training Data Enablement
Structure datasets to support efficient downstream ML training, evaluation, and experimentation.
Work closely with ML engineers to align data formats, schemas, and sampling strategies with training needs.
Build tooling for dataset slicing, filtering, and validation.
Reliability & Observability
Add monitoring, metrics, and alerts around data freshness, drop rates, and pipeline health.
Debug pipeline failures across edge, on-prem, and cloud environments.
Continuously improve system robustness under real-world operating conditions.
Cross-Functional Collaboration
Partner with hardware engineers to understand sensor behavior and constraints.
Collaborate with ML engineers to adapt pipelines as model and data requirements evolve.
Contribute to architectural decisions as the platform scales from pilots to production deployments.
Must Haves
Bachelor's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience).
3+ years of experience as a Data Engineer or Backend Engineer working with production data pipelines.
Strong Python skills; experience building reliable data processing systems.
Hands-on experience with streaming or near-real-time data ingestion (e.g., Kafka, Kinesis, MQTT, custom TCP/UDP pipelines).
Experience working with on-prem systems or edge/IoT devices, including disk, bandwidth, or compute constraints.
Familiarity with cloud storage and data lifecycle management (e.g., S3-like object stores).
Strong debugging skills across distributed systems.
Nice to Have
Experience with IoT or sensor data (RF/CSI, video, audio, industrial telemetry).
Familiarity with data compression, time-series formats, or binary data handling.
Experience supporting ML training pipelines or large-scale dataset management.
Exposure to containerized or GPU-enabled data processing environments.
Knowledge of data governance, retention, or compliance requirements.
Location
Pittsburgh, PA (hybrid preferred; some on-site work with hardware teams)
Salary
$110,000 - $135,000 / year (depending on experience and depth in streaming + IoT systems)
Data Engineer
Data engineer job in Philadelphia, PA
Data Engineer - Job Opportunity
Full time Permanent
Remote - East coast only
Please note this role is open to US citizens or Green Card holders only
We're looking for a Data Engineer to help build and enhance scalable data systems that power analytics, reporting, and business decision-making. This role is ideal for someone who enjoys solving complex technical challenges, optimizing data workflows, and collaborating across teams to deliver reliable, high-quality data solutions.
What You'll Do
Develop and maintain scalable data infrastructure, cloud-native workflows, and ETL/ELT pipelines supporting analytics and operational workloads.
Transform, model, and organize data from multiple sources to enable accurate reporting and data-driven insights.
Improve data quality and system performance by identifying issues, optimizing architecture, and enhancing reliability and scalability.
Monitor pipelines, troubleshoot discrepancies, and resolve data or platform issues, including participating in on-call support when needed.
Prototype analytical tools, automation solutions, and algorithms to support complex analysis and drive operational efficiency.
Collaborate closely with BI, Finance, and cross-functional teams to deliver robust and scalable data products.
Create and maintain clear, detailed documentation (configurations, specifications, test scripts, and project tracking).
Contribute to Agile development processes, engineering excellence, and continuous improvement initiatives.
What You Bring
Bachelor's degree in Computer Science or a related technical field.
2-4 years of hands-on SQL experience (Oracle, PostgreSQL, etc.).
2-4 years of experience with Java or Groovy.
2+ years working with orchestration and ingestion tools (e.g., Airflow, Airbyte).
2+ years integrating with APIs (SOAP, REST).
Experience with cloud data warehouses and modern ELT/ETL frameworks (e.g., Snowflake, Redshift, dbt) is a plus.
Comfortable working in an Agile environment.
Practical knowledge of version control and CI/CD workflows.
Experience with automation, including unit and integration testing.
Understanding of cloud storage solutions (e.g., S3, Blob Storage, Object Store).
Proactive mindset with strong analytical, logical-thinking, and consultative skills.
Ability to reason about design decisions and understand their broader technical impact.
Strong collaboration, adaptability, and prioritization abilities.
Excellent problem-solving and troubleshooting skills.
Data Engineer & Analytics (Reporting, Visualization) (US Citizens Only)
Data engineer job in Philadelphia, PA
1. Comfortable writing and tuning SQL queries on Vertica.
2. Can work with large datasets.
3. Good understanding of MicroStrategy basics like Attributes, Facts, and Hierarchies.
4. Skilled at translating complex technical outputs into simple, meaningful visuals and summaries.
5. Experience optimizing MicroStrategy reports for performance and scalability.
6. Experience creating data pipelines using cloud platforms
Azure Data Architect
Data engineer job in Malvern, PA
Hi,
I hope you are doing well!
We have an opportunity for an Azure Data Architect with one of our clients in Malvern, PA.
Please see the job details below and let me know if you would be interested in this role.
If interested, please send me a copy of your resume, contact details, availability, and a good time to connect with you.
Title: Azure Data Architect
Location: Malvern, PA
Terms: Long Term Contract
JOB DESCRIPTION:
Required Skills and Experience
Technical Expertise
Strong proficiency in Azure services: Data Factory, Synapse, Databricks, Data Lake, Power BI.
Experience with data modeling, ETL design, and data warehousing.
Knowledge of SQL, NoSQL, PySpark, and BI tools.
Architecture and Strategy
7+ years in data architecture roles; 3+ years with Azure data solutions.
Familiarity with Lakehouse architecture, Delta/Parquet formats, and data governance tools.
Soft Skills
Excellent communication and stakeholder management.
Ability to lead cross-functional teams and influence technical decisions.
Preferred
Experience in regulated industries (e.g., Financial Services).
Knowledge of Microsoft Fabric, Generative AI, and RAG-based architectures.
Education
Bachelor's or Master's degree in Computer Science, Information Systems, or related fields.
Certifications like Microsoft Certified: Azure Solutions Architect Expert or Azure Data Engineer Associate are highly desirable.
Thank you!
Amit Jha
Senior Recruiter | BeaconFire Inc.
📧 ***********************
Cloud Data Architect
Data engineer job in Pittsburgh, PA
Duquesne Light Company, headquartered in downtown Pittsburgh, is a leader in providing electric energy and has been at the forefront of the electric energy market, with a history rooted in technological innovation and superior customer service. Today, the company continues its role as a leader in the transmission and distribution of electric energy, providing a secure supply of reliable power to more than half a million customers in southwestern Pennsylvania.
Duquesne Light Company is committed to creating a culture of inclusion. We value and respect the unique differences and experiences of our employees. We believe that our differences lead to better collaboration, innovation and outcomes. We want you to join our team!
The role of Cloud Data Architect I is to expand the company's use of data as a strategic enabler of corporate goals and objectives. The Cloud Data Architect will achieve this by strategically designing, developing, and implementing data models for data stored in enterprise systems/platforms or curated from third parties, and by making that data accessible and available for business consumers to analyze and gain valuable insights. This individual will act as the primary advocate of data modeling best practices and lead innovation in adopting and leveraging cloud data technologies. Overtime and on-call availability as required.
Location: Hybrid (two days per week in office), downtown Pittsburgh, Pennsylvania
Responsibilities:
Strategy & Planning
Develop and deliver long-term strategic goals for data architecture vision and standards with data users, department managers, business partners, and other key stakeholders.
Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap.
Collaborate with third parties and business subject matter experts to enable company strategic imperatives that require secure and accessible data cloud capabilities or assist in curating data required to enrich models.
Collaborate with IT Enterprise Architecture, Enterprise Platforms, Business Intelligence, Data Engineering, Data Quality and Data Governance stakeholders to:
Develop processes for governing the identification, collection, and use of corporate metadata;
Take steps to assure metadata accuracy and validity.
Track data quality, completeness, redundancy, and improvement.
Conduct cloud consumption cost forecasting, usage requirements, proof of concepts, proof of business value, feasibility studies, and other tasks.
Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving.
Ensure that data strategies and architectures are in regulatory compliance.
Acquisition & Deployment
Liaise with vendors, cloud providers and service providers to select the solutions or services that best meet company goals.
Operational Management
Develop and promote data management methodologies and standards.
Select and implement the appropriate tools, software, applications, and systems to support data technology goals.
Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality.
Collaborate with scrum masters, project managers, and business unit leaders for all projects involving enterprise data.
Address data-related problems in regard to systems integration, compatibility, and multiple-platform integration.
Act as a leader and advocate of data management, including coaching, training, and career development to staff.
Develop and implement key components as needed to create testing criteria that guarantee the data architecture's fidelity and performance.
Document the data architecture and environment to maintain a current and accurate view of the larger data picture.
Identify and develop opportunities for data reuse, migration, or retirement.
Act as a technical leader to the organization for Data Architecture and Data Engineering best practices.
Education/Experience:
Bachelor's degree in computer science, information systems, computer engineering, or relevant discipline
Twelve (12) or more years of experience is required.
Advanced degrees in a related field preferred.
Relevant professional certifications preferred.
Knowledge & Experience
Utilities experience (oil, gas, and/or electric) strongly preferred.
Five (5) or more years' work experience as a data or information architect.
Hands-on experience with data architecting, data mining, large-scale data modeling, cloud data storage and analytics platforms.
Experience with Azure, AWS or Google data capabilities.
Experience with Databricks, Synapse, Foundry, Power BI, and/or Snowflake preferred.
Experience with enterprise platforms such as Oracle, Maximo, etc. preferred
Direct experience in implementing data solutions in the cloud.
Strong understanding of relational data structures, theories, principles, and practices.
Strong familiarity with metadata management and associated processes.
Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools.
Demonstrated expertise with repository creation, and data and information system life cycle methodologies.
Experience with business requirements analysis, entity relationship planning, database design, reporting structures, and so on.
Ability to manage data and metadata migration.
Experience with database platforms, including Oracle RDBMS.
Experience with GIS geospatial data and time-phased PI Historian or collector data preferred.
Understanding of Web services (SOAP, XML, UDDI, WSDL).
Object-oriented programming experience (e.g. using Java, J2EE, EJB, .NET, WebSphere, etc.).
Excellent client/user interaction skills to determine requirements.
Proven project management experience.
Good knowledge of applicable data privacy practices and laws.
Storm Roles
All Non-Union Employees will serve in storm roles as appropriate to their role and skillset.
EQUAL OPPORTUNITY EMPLOYER
Duquesne Light Holdings is committed to providing equal employment opportunity to all people in all aspects of the employment relationship, without discrimination because of race, age, sex, color, religion, national origin, disability, sexual orientation and gender identity or status as a Vietnam era or special disabled veteran or any other unlawful basis, as defined by applicable law, and fostering a workplace free of unlawful discrimination and retaliation. This policy affects decisions including, but not limited to, hiring, compensation, benefits, terms and conditions of employment, opportunities for promotion, transfer, layoffs, return from a layoff, training and development, and other privileges of employment. An integral part of Duquesne Light Holdings' commitment is to comply with all applicable federal, state and local laws concerning equal employment and affirmative action.
Duquesne Light Holdings is committed to offering an inclusive and accessible experience for all job seekers, including individuals with disabilities. Our goal is to foster an inclusive and accessible workplace where everyone has the opportunity to be successful.
If you need a reasonable accommodation to search for a job opening, apply for a position, or participate in the interview process, connect with us at *************** and describe the specific accommodation requested for a disability-related limitation.
Senior Data Engineer (Bank Tech)
Data engineer job in York, PA
Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who solve real problems and meet real customer needs. We are seeking Data Engineers who are passionate about marrying data with emerging technologies. As a Capital One Data Engineer, you'll have the opportunity to be on the forefront of driving a major transformation within Capital One.
What You'll Do:
Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies
Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
Utilize programming languages like Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift and Snowflake
Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community
Collaborate with digital product managers, and deliver robust cloud-based solutions that drive powerful experiences to help millions of Americans achieve financial empowerment
Perform unit tests and conduct reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance
Basic Qualifications:
Bachelor's Degree
At least 3 years of experience in application development (Internship experience does not apply)
At least 1 year of experience in big data technologies
Preferred Qualifications:
5+ years of experience in application development including Python, SQL, Scala, or Java
2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
2+ years of experience working on real-time data and streaming applications
2+ years of experience with NoSQL implementation (Mongo, Cassandra)
2+ years of data warehousing experience (Redshift or Snowflake)
3+ years of experience with UNIX/Linux including basic commands and shell scripting
2+ years of experience with Agile engineering practices
At this time, Capital One will not sponsor a new applicant for employment authorization for this position.
The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked.
McLean, VA: $158,600 - $181,000 for Senior Data Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter.
This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan.
Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website . Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
This role is expected to accept applications for a minimum of 5 business days. No agencies, please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Front End Engineer Tier 2
Data engineer job in Reading, PA
We're working with a large enterprise undergoing major digital transformation efforts, and they're looking to bring on a Front-End Engineer with strong Angular experience. This is a contract-to-hire role with a stable team, modern architecture, and growth potential in a lead or mentorship capacity.
This opportunity is perfect for someone who's comfortable wearing multiple hats: coding hands-on with Angular, mentoring junior team members, and participating in full lifecycle enterprise application development.
What You'll Be Doing
Develop high-performance, responsive web applications using Angular 14+, TypeScript, and modern front-end tools.
Work on modular front-end architecture (including micro-frontends and shell app integration).
Build and maintain shared UI libraries and reusable components.
Collaborate with internal stakeholders to gather business requirements and translate them into technical specs.
Lead and assign work to junior, onshore, and offshore developers as needed.
Write unit tests and follow Test-Driven Development (TDD) practices.
Participate in Agile ceremonies and contribute to continuous improvement efforts.
Monitor and support deployed applications, analyzing performance and addressing issues.
Ideal Background
5-7 years of front-end development experience.
Strong hands-on experience with Angular 14 or newer, TypeScript, HTML5, CSS, and JavaScript.
Experience with responsive/adaptive design for both desktop and mobile platforms.
Familiarity with Angular Microfrontend architecture, including Module Federation, Webpack, and shared library design.
Previous experience mentoring developers or leading small development teams is highly valued.
Proven experience across the software development lifecycle, from concept to deployment and support.
Understanding of Agile methodologies and best practices.
Experience conducting code reviews and guiding junior developers.
Nice-to-Have Skills
Exposure to backend tools such as Java, Spring, and REST APIs.
Experience with relational databases like Oracle, MS SQL Server, or iSeries DB2.
Familiarity with tools like NodeJS, Swagger, Postman, Bitbucket, JIRA, Confluence, Dynatrace, or Splunk.
Comfort with version control and CI/CD tools such as Maven, Git, Bamboo, or Artifactory.
Bonus if you have experience with Elasticsearch, UML, or performance monitoring tools.
Location & Schedule
Hybrid - Partial onsite expected (location and frequency shared in interview).
Long-term project work with contract-to-hire intent.
Must be eligible to convert without sponsorship.
Azure DevOps Engineer with P&C exp.
Data engineer job in Pittsburgh, PA
Responsibilities
The following are the day-to-day work activities:
CI/CD Pipeline Management: Design, implement, and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines for Guidewire applications using tools like TeamCity, GitLab CI, and others.
Infrastructure Automation: Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or CloudFormation.
Monitoring and Logging: Implement and manage monitoring and logging solutions to ensure system reliability, performance, and security.
Collaboration: Work closely with development, QA, and operations teams to streamline processes and improve efficiency.
Security: Enhance the security of the IT infrastructure and ensure compliance with industry standards and best practices.
Troubleshooting: Identify and resolve infrastructure and application issues, ensuring minimal downtime and optimal performance.
Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and procedures.
Requirements
Candidates must have the following mandatory skills for their profiles to be assessed. The must-have requirements are:
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience:
6-10 years of experience in a DevOps or systems engineering role.
Hands-on experience with cloud platforms (AWS, Azure, GCP).
Technical Skills:
Proficiency in scripting languages (e.g., Python, PowerShell). (2-3 years)
Experience with CI/CD tools (e.g., Jenkins, GitLab CI). (3-5 years)
Knowledge of containerization technologies (e.g., Docker, Kubernetes) - good to have.
Strong understanding of networking, security, and system administration. (3-5 years)
Familiarity with monitoring tools such as Dynatrace, Datadog, or Splunk.
Familiarity with Agile development methodologies.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and teamwork abilities.
Ability to work independently.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Software Engineer
Data engineer job in Malvern, PA
Day-to-Day Responsibilities:
Develop and deploy full-stack applications using AWS services (Lambda, S3, DynamoDB, ECS, Glue, Step Functions, and more).
Design, build, and maintain REST and GraphQL APIs and microservices using Python, Java, JavaScript, and Go.
Apply DevOps principles with CI/CD pipelines using Bamboo, Bitbucket, Git, and JIRA.
Monitor product health and troubleshoot production issues with tools like Honeycomb, Splunk, and CloudWatch.
Collaborate with stakeholders to gather requirements, present demos, and coordinate tasks across teams.
Resolve complex technical challenges and recommend enterprise-wide improvements.
Must-Haves:
Minimum 5 years of related experience in software development.
Proficient in AWS services, full-stack development, and microservices.
Experience with Python, Java, JavaScript, and Go.
Strong DevOps experience and familiarity with CI/CD pipelines.
Ability to learn new business domains and applications quickly.
Nice-to-Haves:
Experience with monitoring/observability tools like Honeycomb, Splunk, CloudWatch.
Familiarity with serverless and large-scale cloud architectures.
Agile or Scrum experience.
Strong communication and stakeholder collaboration skills.
SRE/DevOps w/ HashiCorp & Clojure Exp
Data engineer job in Philadelphia, PA
Locals Only!
SRE/DevOps w/ HashiCorp & Clojure Exp
Philadelphia, PA: 100% Onsite!
12+ Months
MUST: HashiCorp, Clojure
Role: Lead SRE initiatives, automating and monitoring cloud infrastructure to ensure reliable, scalable, and secure systems for eCommerce.
Required: Must Have:
AWS, Terraform, HashiCorp Stack (Nomad, Vault, Consul)
Programming in Python/Clojure
Automation, monitoring, and log centralization (Splunk)
Experience leading large-scale cloud infrastructure
Dexian stands at the forefront of Talent + Technology solutions with a presence spanning more than 70 locations worldwide and a team exceeding 10,000 professionals. As one of the largest technology and professional staffing companies and one of the largest minority-owned staffing companies in the United States, Dexian combines over 30 years of industry expertise with cutting-edge technologies to deliver comprehensive global services and support.
Dexian connects the right talent and the right technology with the right organizations to deliver trajectory-changing results that help everyone achieve their ambitions and goals. To learn more, please visit ********************
Dexian is an Equal Opportunity Employer that recruits and hires qualified candidates without regard to race, religion, sex, sexual orientation, gender identity, age, national origin, ancestry, citizenship, disability, or veteran status.
Senior Full Stack Developer
Data engineer job in Malvern, PA
Senior Full Stack Developer - Direct Hire
Must be a U.S. citizen or green card holder; visa sponsorship/transfer is not available.
Join a leading software provider dedicated to transforming the financial services industry as a full-time Senior Full Stack Developer. My client is looking for a detail-oriented individual with a strong background in JavaScript, modern React packages, and front- and back-end coding, someone with the confidence and ability to approach projects with both functionality and user experience in mind. The Full Stack Developer will help create risk management software primarily for credit unions and banks. The software integrates with a large community of financial services via REST, SOAP, and daily batch files, and development is done in an environment with the latest technologies, including .NET Core, React, Azure, and the latest libraries. As a Full Stack Developer, you are expected to use AI development tools such as Cursor, ChatGPT, and Code Rabbit daily to work efficiently and increase productivity.
Key Responsibilities
Broad experience designing, programming, and implementing large information systems
Ability to decompose complex problems independently into workable solutions
Excellent analytical and problem-solving skills
Excellent organization and time management skills
Adhere to SOLID design principles when designing and producing high quality code
Monitor overall project progress to ensure consistency with initial platform design
Conduct code review and pair programming sessions
Research/troubleshoot issues when in QA and/or production to help maintain a high-quality technology platform
Determine root cause for complex software issues
Participate in SCRUM team sessions, providing status updates of work
Utilize Cursor to generate code and unit tests, and review code prior to pushing to the code repository
Remain current on new technologies; evaluate and make recommendations as necessary
Other duties as assigned
Skills And Experience
7+ years' experience in designing, debugging and implementing software applications
5+ years' experience with .NET web applications - C# and .NET Core
4+ years' experience with database technology such as SQL Server including debugging queries, writing stored procedures and views
2+ years' experience with React and common React packages
Familiarity with TypeScript
Experience with GIT and familiarity with the use of Repositories in Azure DevOps
Use of a Caching Layer like Redis
Experience creating UI/UX Experiences with a modern JavaScript Framework
Independently learn new AI technologies and recommend their usage
Ready to make a significant impact at a dynamic organization? If you're passionate about development and delivering reliable solutions, we encourage you to apply now and take the next step in your career.
How To Apply
We'd love to see your resume, but we don't need it to have a conversation. Send us an email to ************************* and tell us why you're interested. Or feel free to email your resume. Please include Job #19660.
Check out all our open jobs: ************************
Java Software Engineer
Data engineer job in Pittsburgh, PA
About Us:
LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 700 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit ********************************
Job Title: Java Developer
Location: Pittsburgh, PA (4 days onsite/week)
Duration: FTE
Job description:
8 to 10 years of experience
Strong knowledge of Java and front-end UI technologies
Experience working with UI toolsets and programming languages: core JavaScript, Angular 11 or higher, JavaScript frameworks, CSS, and HTML
Experience with the Spring Framework and Hibernate, and proficiency with Spring Boot
Solid coding and troubleshooting experience with web services and RESTful APIs
Experience with and understanding of design patterns culminating in microservices development
Strong SQL skills for working with relational databases
Strong experience with SDLC, DevOps processes, and CI/CD tools (Git, etc.)
Strong problem solver with the ability to manage and lead the team to push the solution forward
Strong communication skills
Benefits/perks listed below may vary depending on the nature of your employment with LTIMindtree (“LTIM”):
Benefits and Perks:
Comprehensive Medical Plan Covering Medical, Dental, Vision
Short Term and Long-Term Disability Coverage
401(k) Plan with Company match
Life Insurance
Vacation Time, Sick Leave, Paid Holidays
Paid Paternity and Maternity Leave
The range displayed on each job posting reflects the minimum and maximum salary target for the position across all US locations. Within the range, individual pay is determined by work location and job level and additional factors including job-related skills, experience, and relevant education or training. Depending on the position offered, other forms of compensation may be provided as part of overall compensation like an annual performance-based bonus, sales incentive pay and other forms of bonus or variable compensation.
Disclaimer: The compensation and benefits information provided herein is accurate as of the date of this posting.
LTIMindtree is an equal opportunity employer that is committed to diversity in the workplace. Our employment decisions are made without regard to race, color, creed, religion, sex (including pregnancy, childbirth or related medical conditions), gender identity or expression, national origin, ancestry, age, family-care status, veteran status, marital status, civil union status, domestic partnership status, military service, handicap or disability or history of handicap or disability, genetic information, atypical hereditary cellular or blood trait, union affiliation, affectional or sexual orientation or preference, or any other characteristic protected by applicable federal, state, or local law, except where such considerations are bona fide occupational qualifications permitted by law.
Identity and Access Management - Software Engineering Lead
Data engineer job in Philadelphia, PA
About the role - As Engineering Lead for NeoID, Elsevier's next-generation Identity and Access Management (IAM) platform, you'll leverage your deep security expertise to architect, build, and evolve the authentication and authorization backbone for Elsevier's global products. You'll also lead and manage a team of 5 engineers, fostering their growth and ensuring delivery excellence. You'll have the opportunity to work with industry-standard protocols such as OAuth2, OIDC, and SAML, as well as healthcare's SMART on FHIR and EHR integrations.
About the team - This diverse team of engineers is entrusted with building Elsevier's next-generation Identity and Access Management (IAM) platform, evolving the authentication and authorization backbone for Elsevier's global products. The team is building a brand-new cybersecurity product that will provide authorization and authentication for all Elsevier products.
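For a sense of the protocol work involved, here is a minimal sketch of an OAuth2 client-credentials token request per RFC 6749, the kind of flow such a platform implements and serves; the token endpoint, client id, and scope are hypothetical placeholders, not Elsevier systems.

```python
# Minimal sketch: OAuth2 client_credentials grant (RFC 6749, section 4.4).
import requests

resp = requests.post(
    "https://auth.example.com/oauth2/token",  # placeholder token endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "my-service",            # placeholder client
        "client_secret": "...",               # store in a secrets manager
        "scope": "api.read",                  # placeholder scope
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]    # bearer token for API calls
```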
Qualifications
Current and extensive experience with at least one major IAM platform (KeyCloak, Auth0, Okta, or similar) - KeyCloak and Auth0 experience are strong pluses. Only candidates with this experience will be considered for this critical role.
Possess an in-depth security mindset, with proven experience designing and implementing secure authentication and authorization systems
Have an extensive understanding of OAuth2, OIDC and SAML protocols, including relevant RFCs and enterprise/server-side implementations
Familiarity with healthcare identity protocols, including SMART on FHIR and EHR integrations
Have current hands-on experience with AWS cloud services and infrastructure management. Proficiency in Infrastructure as Code (IaC) tools, especially Terraform
Strong networking skills, including network security, protocols, and troubleshooting
Familiarity with software development methodologies (Agile, Waterfall, etc.)
Current experience as a people manager, ideally of software and security professionals.
Experience with Java/J2EE, JavaScript, and related technologies, or willingness to learn and deepen expertise
Knowledge of data modeling, optimization, and secure data handling best practices
Accountabilities
Leading the design and implementation of secure, scalable IAM solutions, with a focus on OAuth2/OIDC and healthcare protocols such as SMART on FHIR and EHR integrations
Managing, mentoring and supporting a team of 5 engineers, fostering a culture of security, innovation, and technical excellence
Collaborating with product managers and stakeholders to define requirements and strategic direction for the platform, including healthcare and life sciences use cases
Writing and reviewing code, performing code reviews, and ensuring adherence to security and engineering best practices
Troubleshooting and resolving complex technical issues, providing expert guidance on IAM, security, and healthcare protocol topics
Contributing to architectural decisions and long-term platform strategy
Staying current with industry trends, emerging technologies, and evolving security threats in the IAM and healthcare space
Data Engineer
Data engineer job in Lancaster, PA
Contract-to-Hire (6 months)
Lancaster, PA
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and data infrastructure that support analytics, AI, and data-driven decision-making. This role is hands-on and focused on building reliable, well-modeled datasets across modern cloud and lakehouse platforms. You will partner closely with analytics, data science, and business teams to deliver high-quality data solutions.
Key Responsibilities
Design, build, and optimize batch and real-time data pipelines for ingestion, transformation, and delivery
Develop and maintain data models that support analytics, reporting, and machine learning use cases
Design scalable data architecture integrating structured, semi-structured, and unstructured data sources
Build and support ETL / ELT workflows using modern tools (e.g., dbt, Airflow, Databricks, Glue); a minimal Airflow sketch follows this list
Ingest and integrate data from multiple internal and external sources, including APIs, databases, and cloud services
Manage and optimize cloud-based data platforms (AWS, Azure, or GCP), including lakehouse technologies such as Snowflake or Databricks
Implement data quality, validation, governance, lineage, and monitoring processes
Support advanced analytics and machine learning data pipelines
Partner with analysts, data scientists, and stakeholders to deliver trusted, well-structured datasets
Continuously improve data workflows for performance, scalability, and cost efficiency
Contribute to documentation, standards, and best practices across the data engineering function
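As referenced above, here is a minimal orchestration sketch, assuming Apache Airflow 2.4+ (where DAG accepts a "schedule" argument); the DAG id and task bodies are illustrative placeholders, not details from this role.

```python
# Minimal sketch: a daily ingest -> transform pipeline in Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source API / database")  # placeholder task body

def transform():
    print("clean and model the ingested data")  # placeholder task body

with DAG(
    dag_id="daily_ingest_transform",  # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task  # transform runs only after ingest succeeds
```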
Required Qualifications
3-7 years of experience in data engineering or a related role
Strong proficiency in SQL and at least one programming language (Python, Scala, or Java)
Hands-on experience with modern data platforms (Snowflake, Databricks, or similar)
Experience building and orchestrating data pipelines in cloud environments
Working knowledge of cloud services (AWS, Azure, or GCP)
Experience with version control, CI/CD, and modern development practices
Strong analytical, problem-solving, and communication skills
Ability to work effectively in a fast-paced, collaborative environment
Preferred / Nice-to-Have
Experience with dbt, Airflow, or similar orchestration tools
Exposure to machine learning or advanced analytics pipelines
Experience implementing data governance or quality frameworks
Familiarity with SAP data platforms (e.g., BW, Datasphere, Business Data Cloud)
Experience using LLMs or AI-assisted tooling for automation, documentation, or data workflows
Relevant certifications in cloud, data platforms, or AI technologies
Senior Data Engineer
Data engineer job in Philadelphia, PA
Full-time Perm
Remote - EAST COAST ONLY
Role open to US Citizens and Green Card Holders only
We're looking for a Senior Data Engineer to lead the design, build, and optimization of modern data pipelines and cloud-native data infrastructure. This role is ideal for someone who thrives on solving complex data challenges, improving systems at scale, and collaborating across technical and business teams to deliver high-impact solutions.
What You'll Do
Architect, develop, and maintain scalable, secure data infrastructure supporting analytics, reporting, and operational workflows.
Design and optimize ETL/ELT pipelines to integrate data from diverse internal and external sources.
Prepare and transform structured and unstructured data to support modeling, reporting, and advanced analysis.
Improve data quality, reliability, and performance across platforms and workflows.
Monitor pipelines, troubleshoot discrepancies, and ensure accuracy and timely data delivery.
Identify architectural bottlenecks and drive long-term scalability improvements.
Collaborate with Product, BI, Finance, and engineering teams to build end-to-end data solutions.
Prototype algorithms, transformations, and automation tools to accelerate insights.
Lead cloud-native workflow design, including logging, monitoring, and storage best practices.
Create and maintain high-quality technical documentation.
Contribute to Agile ceremonies, engineering best practices, and continuous improvement initiatives.
Mentor teammates and guide adoption of data platform tools and patterns.
Participate in on-call rotation to maintain platform stability and availability.
What You Bring
Bachelor's degree in Computer Science or related technical field.
4+ years of advanced SQL experience (Oracle, PostgreSQL, etc.).
4+ years working with Java or Groovy.
3+ years integrating with SOAP or REST APIs.
2+ years with dbt and data modeling.
Strong understanding of modern data architectures, distributed systems, and performance optimization.
Experience with Snowflake or similar cloud data platforms (preferred).
Hands-on experience with Git, Jenkins, CI/CD, and automation/testing practices.
Solid grasp of cloud concepts and cloud-native engineering.
Excellent problem-solving, communication, and cross-team collaboration skills.
Ability to lead projects, own solutions end-to-end, and influence technical direction.
Proactive mindset with strong analytical and consultative abilities.
Distinguished Data Engineer- Bank Tech
Data engineer job in York, PA
Distinguished Data Engineers are individual contributors who strive to be diverse in thought so that we can fully visualize the problem space. At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate, and provide the most innovative solutions across organizational boundaries. Distinguished Engineers will significantly impact our trajectory and devise clear roadmaps to deliver next-generation technology solutions.
This is a horizontal Bank data organization chartered to accelerate data modernization across the bank by defining, building, and operating a unified, resilient, and compliant enterprise data platform, enabling bank domains to produce and leverage modern data for a modern bank. The position is focused on setting the technical vision, prototyping, and driving the most complex data architecture for the banking domains. In addition, you will partner closely with enterprise teams to develop highly resilient data platforms.
Deep technical experts and thought leaders that help accelerate adoption of the very best engineering practices, while maintaining knowledge on industry innovations, trends and practices
Visionaries, collaborating on Capital One's toughest issues, to deliver on business needs that directly impact the lives of our customers and associates
Role models and mentors, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
Evangelists, both internally and externally, helping to elevate the Distinguished Engineering community and establish themselves as a go-to resource on given technologies and technology-enabled capabilities
Responsibilities:
Build awareness, increase knowledge and drive adoption of modern technologies, sharing consumer and engineering benefits to gain buy-in
Strike the right balance between lending expertise and providing an inclusive environment where others' ideas can be heard and championed; leverage expertise to grow skills in the broader Capital One team
Promote a culture of engineering excellence, using opportunities to reuse and innersource solutions where possible
Effectively communicate with and influence key stakeholders across the enterprise, at all levels of the organization
Operate as a trusted advisor for a specific technology, platform or capability domain, helping to shape use cases and implementation in a unified manner
Lead the way in creating next-generation talent for Tech, mentoring internal talent and actively recruiting external talent to bolster Capital One's Tech talent
Basic Qualifications:
Bachelor's Degree
At least 7 years of experience in data engineering
At least 3 years of experience in data architecture
At least 2 years of experience building applications in AWS
Preferred Qualifications:
Master's Degree
9+ years of experience in data engineering
3+ years of data modeling experience
2+ years of experience with ontology standards for defining a domain
2+ years of experience using Python, SQL or Scala
1+ year of experience deploying machine learning models
3+ years of experience implementing big data processing solutions on AWS
Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.
The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked.
McLean, VA: $263,900 - $301,200 for Distinguished Data Engineer
Philadelphia, PA: $239,900 - $273,800 for Distinguished Data Engineer
Richmond, VA: $239,900 - $273,800 for Distinguished Data Engineer
Wilmington, DE: $239,900 - $273,800 for Distinguished Data Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter.
This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan.
Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website . Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
This role is expected to accept applications for a minimum of 5 business days. No agencies, please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries.
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
MLOps Engineer
Data engineer job in Philadelphia, PA
Role: MLOps Lead
Duration: Long Term
Skills:
4 - 7 years of experience in DevOps, MLOps, platform engineering, or cloud infrastructure.
Strong skills in containerization (Docker, Kubernetes), API hosting, and cloud-native services.
Experience with vector DBs (e.g., FAISS, Pinecone, Weaviate) and model hosting stacks.
Familiarity with logging frameworks, APM tools, tracing layers, and prompt/versioning logs.
Bonus: exposure to LangChain, LangGraph, LLM APIs, and retrieval-based architectures.
Responsibilities:
Set up and manage runtime environments for LLMs, vector DBs, and orchestration flows (e.g., LangGraph).
Support deployments in cloud, hybrid, and client-hosted environments.
Containerize systems for deployment (Docker, Kubernetes, etc.) and manage inference scaling.
Integrate observability tooling: prompt tracing, version logs, eval hooks, error pipelines.
Collaborate on RAG stack deployments (retriever, ranker, vector DB, toolchains); a minimal retrieval sketch follows this list.
Support CI/CD, secrets management, error triage, and environment configuration.
Contribute to platform-level IP, including reusable scaffolding and infrastructure accelerators.
Ensure systems are compliant with governance expectations and auditable (esp. in insurance contexts).
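As referenced above, here is a minimal sketch of the retrieval step in a RAG stack, assuming faiss-cpu is installed; the embedding dimension is a typical assumption, and the random vectors stand in for a real embedding model.

```python
# Minimal sketch: vector similarity search with FAISS, the retrieval core of a RAG stack.
import numpy as np
import faiss

dim = 384  # common sentence-embedding size (assumption)
docs = ["policy doc A", "claims runbook B", "underwriting guide C"]  # placeholder corpus
doc_vecs = np.random.rand(len(docs), dim).astype("float32")  # stand-in embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vecs)

query_vec = np.random.rand(1, dim).astype("float32")  # stand-in query embedding
distances, ids = index.search(query_vec, 2)           # top-2 nearest documents
print([docs[i] for i in ids[0]])  # candidates to pass to the ranker / LLM prompt
```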
Preferred Attributes:
Systems thinker with strong debugging skills.
Able to work across cloud, on-prem, and hybrid client environments.
Comfortable partnering with architects and engineers to ensure smooth delivery.
Proactive about observability, compliance, and runtime reliability.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Our culture - Our fuel
At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other.
People first - We make employees win.
Nurture leaders - We nurture from within.
Enjoy wins - We celebrate wins and create leaders.
Collaboration - A culture of collaboration and people-centricity.
Diversity - Committed to diversity, equity, and inclusion.
Fun - We help people have fun at work.
Junior Data Engineer
Data engineer job in Philadelphia, PA
Remote
12-Month Contract-to-Hire
$30-35/hr, depending on experience
We are seeking a Junior Data Engineer to support the development and maintenance of data pipelines and analytics solutions. The ideal candidate will work alongside senior engineers and analysts to help manage data infrastructure, build scalable data workflows, and support business intelligence and AI initiatives. This role is ideal for someone with a strong technical foundation and a passion for learning who is looking to grow in the field of data engineering and analytics.
Requirements:
Bachelor's Degree in Computer Science, Data Engineering, Information Systems, or a related field.
0-3 years of experience in a data engineering or analytics support role.
Proficiency in SQL and familiarity with relational databases.
Exposure to Python, R, or Spark for data processing.
Understanding of data modeling, ETL processes, and data warehousing concepts.
Willingness to learn and grow within a large-scale enterprise data environment.
Strong communication skills and ability to work with cross-functional teams.
Ability to follow documented processes and work under supervision.
Your Responsibilities Include:
Assist in building and maintaining data pipelines and workflows (a minimal sketch follows this list).
Support data ingestion, transformation, and integration tasks.
Help manage and document data schemas and metadata.
Collaborate with analysts and business users to understand data needs.
Participate in data quality checks and troubleshooting efforts.
Contribute to the automation of manual data processes.
Learn and support cloud-based data tools and platforms (e.g., Azure).
Stay current with new data technologies and best practices.
Participate in sprint planning and agile ceremonies with the team.
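As referenced above, here is a minimal sketch of the kind of junior-level ETL task this role supports: extract a CSV, apply a simple transformation, and load into a database. The file, table, and column names are illustrative placeholders.

```python
# Minimal sketch: a small extract -> transform -> load pipeline.
import sqlite3

import pandas as pd

def run_pipeline(csv_path: str = "customers.csv", db_path: str = "warehouse.db") -> int:
    df = pd.read_csv(csv_path)                          # extract
    df["email"] = df["email"].str.lower().str.strip()   # transform: normalize emails
    df = df.drop_duplicates(subset=["customer_id"])     # transform: dedupe on key
    with sqlite3.connect(db_path) as conn:              # load
        df.to_sql("dim_customer", conn, if_exists="replace", index=False)
    return len(df)  # rows loaded, useful for a basic data quality check
```

In an enterprise environment this same pattern would run under an orchestrator against cloud storage and a platform such as Azure rather than local files.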
Preferred Experience:
Exposure to cloud platforms (Azure preferred).
Experience with data visualization tools (e.g., Power BI).
Familiarity with CRM systems and customer data.
Understanding of the retail energy industry is a plus.
Certifications in data engineering or cloud technologies are a bonus.
What's in it for you?
A welcoming, team-oriented environment where you'll gain hands-on experience with a Fortune 100 company. Eight Eleven Group offers health, dental, and vision benefits, weekly pay, holiday paid time off, and sick leave.