Staff Software Engineer
**Our mission is to make higher education accessible and affordable for everyone.** We empower students with financial support and supercharge their ability to pay down their debt, so they can get on the right financial track, fast. We build tools that help people feel in control of their financial future, including:
+ **Private student loans** - low rates, people-first service, and flexible payments.
+ **Student loan refinancing** - break free from high-interest rates or monthly payments.
+ **Scholarships** - access to thousands of scholarships to help students pay less.
Earnies are committed to helping students live their best lives, free from the stress of student debt. If you're as passionate as we are about our mission, read more below, and let's build something great together!
Title: Staff Software Engineer
Duties: The Staff Software Engineer (Multiple Positions Open) at Earnest LLC in Oakland, CA will drive the technical strategy and execution for our engineering teams. Lead the development of a scalable, high-performance lending ecosystem from customer onboarding to checkout. Architect and build customer-centric financial products, ensuring a frictionless and optimized user experience and orchestrating large-scale financial transactions. Define and execute the technical vision and best practices for a high-performing engineering team. Lead architectural decisions to enhance scalability, reliability, and efficiency of the lending platform. Collaborate with Product, UX, and Business teams to align technology with strategic goals. Design, build, and maintain customer-facing lending applications using Node.js, TypeScript, React/Redux, Angular, Sequelize, PostgreSQL, and Docker. Develop and optimize high-quality, testable code, implementing unit and integration tests with Mocha, Chai, Sinon, and Sequelize. Ensure performance, security, and scalability through best-in-class software engineering practices. Identify and resolve defects through debugging, profiling, logging, log analysis, tracing, and FullStory session replays. Oversee code deployment to Staging and Production environments. Partner with Quality Engineers to address issues found in testing and improve automated testing coverage. Lead and participate in Agile ceremonies. Break down product requirements into engineering deliverables in Jira. Review and provide critical feedback on Product Requirements Documents, Epics, and User Stories, influencing the technical and business roadmap. Recommend alternative technical solutions to optimize delivery speed, enhance customer experience, and reduce costs. Maintain technical documentation. Contribute to Earnest's DevOps culture and participate in rotating on-call support for production applications.
Position is 100% remote. Salary: $207,585 per year.
Requirements: Bachelor's degree in Computer Science, Software Engineering, or a closely related field, plus 3 years of software development experience. The 3 years of experience must include 3 years of experience with each of the following: (1) building highly distributed microservices; (2) SQL databases, including PostgreSQL, and caching, performance, monitoring, and scalability; (3) server-side technologies, including Node.js, TypeScript, and JavaScript; and (4) client-side technologies, including React Native and Angular. Must also include two years of experience with: AWS or similar cloud-based infrastructure; and leading the architecture, design, development, and deployment of large-scale projects.
This notice is subject to Earnest LLC's employee referral program.
Interested candidates can apply online at *********************** [earnest.com] or send a resume to **************************** and reference job code 058.
A little about our pay philosophy: We take pride in compensating our employees fairly and equitably. The range shown reflects your potential base salary based on the role's location. The successful candidate's starting pay will also be determined by job-related qualifications, internal compensation, candidate location, and budget. This range may be modified in the future.
Pay Range
$207,585-$207,585 USD
**Earnest believes in enabling our employees to live their best lives. We offer a variety of perks and competitive benefits, including:**
+ Health, Dental, & Vision benefits plus savings plans
+ Mac computers + work-from-home stipend to set up your home office
+ Monthly internet and phone reimbursement
+ Employee Stock Purchase Plan
+ Restricted Stock Units (RSUs)
+ 401(k) plan to help you save for retirement plus a company match
+ Robust tuition reimbursement program
+ $1,000 travel perk on each Earnie-versary to anywhere in the world
+ Competitive days of annual PTO
+ Competitive parental leave
**What Makes an Earnie:**
At Earnest, our people bring our cultural principles to life. These principles define how we work, how we win, and what we expect of ourselves and each other:
+ **Every Second Counts** : Speed is our competitive advantage. Our customers need better solutions, and the faster we execute, the greater our chance of success.
+ **Choose To Do Hard Things** : We win by tackling the hard things that others avoid, fueled by grit and resilience.
+ **Pursue Excellence** : Great companies, teams, and individuals never settle and are proud of the work that they do. What's good enough today won't be good enough tomorrow. Excellence isn't a destination; it's a mindset of continuous improvement.
+ **Lead Together** : Our success comes from how we work together. Leadership is not about titles; it is about action. We take ownership, drive results, and move forward as a team.
+ **Don't Take Yourself Too Seriously** : We take our work seriously, not ourselves. The stakes are high, but a sense of humor keeps us grounded, creative, and resilient.
**At Earnest, we are committed to building an environment where our employees feel included, valued, and heard. Our belief is that a strong commitment to diversity, inclusion, equity, and belonging enables us to move forward with our mission. We are dedicated to adding new perspectives to the team and encourage anyone to apply if your experience is close to what we are looking for.**
_Earnest provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, sexual orientation, gender identity, veteran status, disability or genetics. Qualified applicants with criminal histories will be considered for the position in a manner consistent with the Fair Chance Ordinance._
Data Engineer
Houston, TX jobs
We are looking for talented and motivated Python Data Engineers to help expand our data assets in support of our analytical capabilities in a full-time role. This role will have the opportunity to interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver on a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliverables; drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality (a minimal sketch follows this list)
- Efficiently coordinate with the rest of our team in different locations
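A minimal sketch, for flavor, of a scrape-and-load step built on the stack named in the qualifications below (Requests, Beautiful Soup, Pandas); the URL, table layout, and column names are invented for illustration:

```python
import requests
import pandas as pd
from bs4 import BeautifulSoup

URL = "https://example.com/prices"  # hypothetical source, not a real endpoint


def scrape_prices(url: str) -> pd.DataFrame:
    """Fetch an HTML table and return it as a tidy DataFrame."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    rows = []
    for tr in soup.select("table tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) == 2:  # assumed two-column layout: symbol, price
            rows.append({"symbol": cells[0], "price": float(cells[1])})
    return pd.DataFrame(rows)


if __name__ == "__main__":
    print(scrape_prices(URL).describe())
```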
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, SOA architectures
- Proficient understanding of peer-reviewing, code versioning, and bug/issue tracking tools.
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
Founding Data Scientist (GTM)
San Francisco, CA jobs
An early-stage investment of ours is looking to make their first IC hire in data science. This company builds tools that help teams understand how their AI systems perform and improve them over time (and they already have a lot of enterprise customers).
We're looking for a Sr Data Scientist to lead analytics for sales, marketing, and customer success. The job is about finding insights in data, running analyses and experiments, and helping the business make better decisions.
Responsibilities:
Analyze data to improve how the company finds, converts, and supports customers
Create models that predict lead quality, conversion, and customer value
Build clear dashboards and reports for leadership
Work with teams across the company to answer key questions
Take initiative, communicate clearly, and dig into data to solve problems
Try new methods and tools to keep improving the company's GTM approach
Qualifications:
5+ years of related industry experience working with data and supporting business teams
Solid experience analyzing GTM or revenue-related data
Strong skills in SQL and modern analytics tools (Snowflake, Hex, dbt, etc.)
Comfortable owning data workflows, from cleaning and modeling to presenting insights
Able to work independently, prioritize well, and move projects forward without much direction
Clear thinker and communicator who can turn data into actionable recommendations
Adaptable and willing to learn new methods in a fast-paced environment
About Us:
Greylock is an early-stage investor in hundreds of remarkable companies including Airbnb, LinkedIn, Dropbox, Workday, Cloudera, Facebook, Instagram, Roblox, Coinbase, Palo Alto Networks, among others. More can be found about us here: *********************
How We Work:
We are full-time, salaried employees of Greylock and provide free candidate referrals/introductions to our active investments. We will contact anyone who looks like a potential match and request to schedule a call with you immediately.
Due to the selective nature of this service and the volume of applicants we typically receive from our job postings, a follow-up email will not be sent until a match is identified with one of our investments.
Please note: We are not recruiting for any roles within Greylock at this time. This job posting is for direct employment with a startup in our portfolio.
Data Engineer
Atlanta, GA jobs
No C2C
We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.
What You'll Do
Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and Zookeeper (a minimal consumer sketch follows this list)
Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
Build scalable, cloud-native, event-driven architectures that power AI-driven data products
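For a rough flavor of the streaming work above, here is a minimal consumer loop using the confluent-kafka Python client; the broker address, group id, and topic are placeholders, and a production pipeline would add Schema Registry validation and a sink such as PostgreSQL or S3:

```python
from confluent_kafka import Consumer

# Placeholder connection settings for illustration only.
conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "analytics-etl",
    "auto.offset.reset": "earliest",
}

consumer = Consumer(conf)
consumer.subscribe(["events"])  # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # block up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # A real pipeline would deserialize against the Schema Registry,
        # transform, and land the record downstream.
        print(msg.topic(), msg.partition(), msg.value())
finally:
    consumer.close()
```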
What You Bring
Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
Experience with automated data quality, monitoring, and observability tools
Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
Proficiency in Python, Scala, or Java for pipeline development and AI integrations
Experience with synthetic data generation, vector databases, or GenAI-powered data products
Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
Data Modeling
Melbourne, FL jobs
Must Have Technical/Functional Skills
• 5+ years of experience in data modeling, data architecture, or a similar role
• Proficiency in SQL and experience with relational databases such as Oracle, SQL Server, or PostgreSQL
• Experience with data modeling tools such as Erwin, IBM InfoSphere Data Architect, or similar
• Ability to communicate complex concepts clearly to diverse audiences
Roles & Responsibilities
• Design and develop conceptual, logical, and physical data models that support both operational and analytical needs
• Collaborate with business stakeholders to gather requirements and translate them into scalable data models
• Perform data profiling and analysis to understand data quality issues and identify opportunities for improvement
• Implement best practices for data modeling, including normalization, denormalization, and indexing strategies (see the sketch after this list)
• Lead data architecture discussions and present data modeling solutions to technical and non-technical audiences
• Mentor and guide junior data modelers and data architects within the team
• Continuously evaluate data modeling tools and techniques to enhance team efficiency and productivity
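A toy sketch of the normalization and indexing trade-offs mentioned above, using Python's built-in sqlite3; the schema is invented, and production work would target Oracle, SQL Server, or PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: customer attributes live in one place and are
# referenced by orders, rather than being repeated on every order row.
cur.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    region      TEXT NOT NULL
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    ordered_at  TEXT NOT NULL,
    amount      REAL NOT NULL
);
-- Indexing strategy: support the common access path (orders by customer).
-- A reporting layer might instead denormalize this join into a wide table.
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

# The index makes this lookup a seek instead of a full scan.
cur.execute(
    "SELECT o.order_id, c.name FROM orders o "
    "JOIN customer c USING (customer_id) WHERE o.customer_id = ?",
    (1,),
)
print(cur.fetchall())
```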
Base Salary Range: $100,000 - $150,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Platform Engineer / AI Workloads
Fremont, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable, and secure database infrastructure systems
Strong knowledge of distributed compute, data orchestration, distributed storage, and streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
AWS Data Engineer
Seattle, WA jobs
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data (a minimal orchestration sketch follows this list).
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
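As a minimal sketch of the orchestration side of this work, the snippet below starts an AWS Glue job with boto3 and polls it to a terminal state; the job name and region are placeholders:

```python
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")  # placeholder region

JOB_NAME = "nightly-orders-etl"  # hypothetical Glue job


def run_glue_job(job_name: str) -> str:
    """Start a Glue job run and poll until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
        if run["JobRunState"] in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return run["JobRunState"]
        time.sleep(30)  # avoid hammering the API while the job runs


if __name__ == "__main__":
    print(run_glue_job(JOB_NAME))
```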
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Platform Engineer / AI Workloads
San Mateo, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable, and secure database infrastructure systems
Strong knowledge of distributed compute, data orchestration, distributed storage, and streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Sr. Data Engineer
Dallas, TX jobs
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's Degree in Computer Science, Information Management, or related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold (see the sketch after this list)
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IoT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
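For a concrete flavor of the medallion-style pipeline work above, here is a minimal bronze-to-silver Structured Streaming sketch on Delta Lake; the paths, columns, and cleansing rules are invented, and the SparkSession is assumed to be provided by the Databricks runtime:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

BRONZE = "/mnt/lake/bronze/sensor_events"  # placeholder paths
SILVER = "/mnt/lake/silver/sensor_events"

# Incrementally read raw (bronze) records, apply light cleansing,
# and append the conformed result to the silver layer.
(
    spark.readStream.format("delta").load(BRONZE)
    .filter(F.col("reading").isNotNull())          # drop unusable records
    .dropDuplicates(["device_id", "event_time"])   # idempotent re-ingestion
    .withColumn("ingested_at", F.current_timestamp())
    .writeStream.format("delta")
    .option("checkpointLocation", SILVER + "/_checkpoint")
    .outputMode("append")
    .start(SILVER)
)
```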
Nice to have:
Databricks Data Engineering Associate or Professional Certification (2023 or later)
Data Engineer
Irvine, CA jobs
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc and I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, so that data scientists can focus on experimentation and the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks (a minimal sketch follows this list).
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
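A minimal sketch of the MLflow tracking workflow referenced above; the tracking URI, experiment name, parameters, and metric value are placeholders:

```python
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical server
mlflow.set_experiment("demand-forecast")                # hypothetical experiment

with mlflow.start_run(run_name="retrain-2024-06"):
    params = {"n_estimators": 200, "max_depth": 8}
    mlflow.log_params(params)

    # ... model training would happen here ...
    rmse = 12.3  # stand-in metric from a held-out set

    mlflow.log_metric("rmse", rmse)
    # Registering the model would make rollback a one-line registry call:
    # mlflow.sklearn.log_model(model, "model",
    #                          registered_model_name="demand-forecast")
```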
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
A willingness to mentor engineers on best practices in containerized data engineering and MLOps.
Data Conversion Engineer
Charlotte, NC jobs
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are an integral part in ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn database structure, business logic and combine all knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Manage multiple projects and conversion implementations
Proactively troubleshoot and solve problems with limited supervision
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication - ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front-end web interfaces (HTML5, JavaScript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Data Platform Engineer / AI Workloads
Santa Rosa, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable, and secure database infrastructure systems
Strong knowledge of distributed compute, data orchestration, distributed storage, and streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Data Governance Engineer
Phoenix, AZ jobs
Role: Data Governance Engineer
Experience Required - 6+ Years
Must Have Technical/Functional Skills
• Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience.
• 2 - 5 years of Data Quality Management experience.
• Intermediate competency in SQL & Python or related programming language.
• Strong familiarity with data architecture and/or data modeling concepts
• 2 - 5 years of experience with Agile or SAFe project methodologies
Roles & Responsibilities
• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
• Identify data quality issues, perform root-cause analysis of data quality issues, and drive remediation of audit and regulatory feedback.
• Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business.
• Responsible for holistic platform data quality monitoring, including but not limited to critical data elements (see the sketch after this list).
• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
• Influence and contribute to strategic improvements to data assessment processes and analytical tools.
• Responsible for monitoring data quality issues, communicating issues, and driving resolution.
• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
• Subject matter expertise on multiple platforms.
• Responsible to partner with the Data Steward Manager in developing and managing the data compliance roadmap.
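As an illustration of critical-data-element monitoring of the sort described above, here is a minimal pandas sketch; the element names and quality rules are invented for the example:

```python
import pandas as pd

# Hypothetical critical data elements (CDEs) and their quality rules.
CDE_RULES = {
    "account_id": lambda s: s.notna() & (s.astype(str).str.len() == 10),
    "balance": lambda s: s.notna() & (s >= 0),
}


def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rule pass rate for each critical data element."""
    results = []
    for col, rule in CDE_RULES.items():
        passed = rule(df[col]).mean() if col in df else 0.0
        results.append({"cde": col, "pass_rate": round(float(passed), 4)})
    return pd.DataFrame(results)


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"account_id": ["1234567890", None], "balance": [100.0, -5.0]}
    )
    print(quality_report(sample))  # each CDE passes on 1 of 2 rows
```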
Generic Managerial Skills, If any
• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable to bring in ideas, information, suggestions, and expertise from others outside & inside the immediate team.
• Communication: Influences and holds others accountable and has ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way.
Interested candidates, please send your updated resume to *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Principal Software Engineer
South San Francisco, CA jobs
This is a full-time role with a client of Dinan & Associates. This role is with an established company and includes excellent health care and other benefits.
Role: Principal / Senior Principal Software Engineer
Industry: Biotechnology / Pharmaceutical R&D
Location: San Francisco Bay Area (Hybrid)
The Organization We are a leading global biotechnology company driven to innovate and ensure access to healthcare for generations to come. Our goal is to create a healthier future and more time for patients with their loved ones.
The Position Advances in AI, data, and computational sciences are transforming drug discovery and development. Our Research and Early Development organizations have demonstrated how these technologies accelerate R&D, leveraging data and novel computational models to drive impact.
Our Computational Sciences group is a strategic, unified team dedicated to harnessing the transformative power of data and Artificial Intelligence (AI) to assist scientists in delivering innovative medicines for patients worldwide. Within this group, the Data and Digital Solutions team leads the modernization of our computational and data ecosystems by integrating digital technologies to empower stakeholders, advance data-driven science, and accelerate decision-making.
The Role The Solutions team develops modernized and interconnected computational and data ecosystems. These are foundational to building solutions that accelerate the work done by Computational and Bench Scientists and enable ML/AI tool creation and adoption. Our team specializes in building Data Pipelines and Applications for data acquisition, collection, storage, transformation, linkage, and sharing.
As a Software Engineer in the Solutions Engineering capability, you will work closely with Data Engineers, Product Leaders, and Tech/ML Ops, as well as directly with key partners including Computational Scientists and Research Scientists. You will build robust and scalable systems that unlock the potential of diverse scientific data, accelerating the discovery and development of life-changing treatments.
Key Responsibilities
Technical Leadership: Provide strategic and tactical technical leadership for ongoing initiatives. Identify new opportunities with an eye for consolidation, deprecation, and building common solutions.
System Design: Responsible for technical excellence, ensuring solutions are innovative, best-in-class, and integrated by delivering data flows and pipelines across key domains like Research Biology, Drug Discovery, and Translational Medicine.
Architecture: Learn, deeply understand, and improve Data Workflows, Application Architecture, and Data Ecosystems by leveraging standard patterns (layered architecture, microservices, event-driven, multi-tenancy).
Collaboration: Understand and influence technical decisions around data workflows and application development while working collaboratively with key partners.
AI/ML Integration: Integrate diverse sets of data to power AI/ML and Natural Language Search, enabling downstream teams working on Workflows, Visualization, and Analytics. Facilitate the implementation of AI models.
Who You Are
Education: Bachelor's or Master's degree in Computer Science or similar technical field, or equivalent experience.
Experience:
7+ years of experience in software engineering (Principal Software Engineer level).
12+ years of experience (Sr. Principal Software Engineer level).
Full Stack Expertise: Deep experience in full-stack development is required. Strong skills in building Front Ends using JavaScript, React (or similar libraries) as well as Backends using high-level languages like Python or Java.
Data & Cloud: Extensive experience with Databases, Data Analytics (SQL/NoSQL, ETL, ELT), and APIs (REST, GraphQL). Extensive experience working on cloud-native architectures in public clouds (ideally AWS) is preferred.
Engineering Best Practices: Experience building data applications that are highly reliable, scalable, performant, secure, and robust. You adopt and champion Open Source, Cloud First, API First, and AI First approaches.
Communication: Outstanding communication skills, capable of articulating technical concepts clearly to diverse audiences, including executives and globally distributed technical teams.
Mentorship: Ability to provide technical mentorship to junior developers and foster professional growth.
Domain Knowledge (Preferred): Ideally, you are a full-stack engineer with domain knowledge in biology, chemistry, drug discovery, translational medicine, or a related scientific discipline.
Compensation & Benefits
Competitive salary range commensurate with experience (Principal and Senior Principal levels available).
Discretionary annual bonus based on individual and company performance.
Comprehensive benefits package.
Relocation benefits are available.
Work Arrangement
Onsite presence on the San Francisco Bay Area campus is expected at least 3 days a week.
Engineer III, Software Development
New York, NY jobs
S&P Global Ratings
The Role: Software Engineer
The Team: The team is responsible for building external customer-facing websites using emerging tools and technologies. The team works in an environment that offers ample opportunity to apply creative ideas to complex problems for the Commercial team.
The Impact: You will have the opportunity every single day to work with people from a wide variety of backgrounds and will develop a close team dynamic with coworkers from around the globe. You will make meaningful contributions to building solutions for user interfaces, web services, APIs, and data processing.
What's in it for you:
Build a career with a global company.
Grow and improve your skills by working on enterprise-level products and new technologies.
Enjoy an attractive benefits package including medical benefits, gym discounts, and corporate benefits.
Ongoing education through participation in conferences and training.
Access to the most interesting information technologies.
Responsibilities:
Drive the development of strategic initiatives and BAU in a timely manner, collaborating with stakeholders.
Set priorities and coordinate workflows to efficiently contribute to S&P objectives.
Promote outstanding customer service, high performance, teamwork, and accountability.
Define roles and responsibilities with clear goals and processes.
Contribute to S&P enterprise architecture and strategic roadmaps.
Develop agile practices with continuous development, integration, and deployment.
Collaborate with global technology development teams and cross-functional teams.
What We're Looking For:
Bachelor's / Master's Degree in Computer Science, Data Science, or equivalent.
4+ years of experience in a related role.
Excellent communication and interpersonal skills.
Strong development skills, specifically in ReactJs, Java and related technologies.
Ability to work in a collaborative environment.
Right to work requirements: This role is limited to candidates with indefinite right to work within the USA.
Compensation/Benefits Information (US Applicants Only):
S&P Global states that the anticipated base salary range for this position is $90,000 - $120,000. Final base salary for this role will be based on the individual's geographical location as well as experience and qualifications for the role.
In addition to base compensation, this role is eligible for an annual incentive plan. This role is not eligible for additional compensation such as an annual incentive bonus or sales commission plan.
This role is eligible to receive additional S&P Global benefits. For more information on the benefits we provide to our employees, please click here.
SOC Engineer
Foster City, CA jobs
Source One is a consulting services company and we're currently looking for the following individuals to work for an on-demand, autonomous ride-hailing company in Foster City, CA.
** We are unable to work with third party companies or offer visa sponsorship for this role.
Title: SOC Engineer (contract)
Pay Rate: $94.25/hr (W-2)
Hybrid: 3 days/week on-site
Description: SOC Engineers to help enhance the company's security posture by driving automation and conducting proactive threat hunting. The ideal candidates have a strong InfoSec background with deep experience in SIEM and SOAR platforms, including rule and playbook development, along with proficiency in Python scripting for automation.
There are two positions: One role focused more on the SIEM side (Elastic is what they use, but Splunk ok), and the other role focused more on automation for detection.
Key Responsibilities:
- SIEM and SOAR Platform Management: Maintain our SIEM and SOAR platforms to ensure optimal performance and effectiveness in detecting and responding to security threats. Develop and fine-tune detection and correlation rules, dashboards, and reports within the SIEM to accurately detect anomalous activities. Create, manage, and optimize SOAR playbooks to automate incident response processes and streamline security operations.
- Automation and Scripting: Utilize Python scripting to develop custom integrations and automate repetitive tasks within the SOC. Build and maintain automation workflows to enhance the efficiency of threat detection, alert triage, and incident response. Integrate various security tools and threat intelligence feeds with our SIEM and SOAR platforms using APIs and custom scripts. (A minimal sketch follows this list.)
- Incident Response and Threat Hunting: Conduct proactive threat hunting to identify potential security gaps and indicators of compromise. Analyze security alerts and data from various sources to identify and respond to potential security incidents.
- Collaboration and Documentation: Collaborate with Information Security team members and other teams to enhance the overall security of the organization. Create and maintain clear and comprehensive documentation for detection rules, automation workflows, and incident response procedures.
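For flavor, a minimal sketch of the Python automation described above: enrich an alert against a threat-intel lookup and open a SOAR incident on a hit. Both endpoints and payload shapes are hypothetical, not the actual API of Cortex XSOAR or any specific platform:

```python
import requests

TI_API = "https://ti.example.com/v1/ip"              # hypothetical intel feed
SOAR_API = "https://soar.example.com/api/incidents"  # hypothetical SOAR endpoint


def enrich_and_escalate(alert: dict) -> None:
    """Look up the source IP against threat intel; escalate on a hit."""
    ip = alert["src_ip"]
    verdict = requests.get(f"{TI_API}/{ip}", timeout=10).json()

    if verdict.get("malicious"):
        incident = {
            "title": f"Malicious IP {ip} observed",
            "severity": "high",
            "details": {"alert": alert, "intel": verdict},
        }
        requests.post(SOAR_API, json=incident, timeout=10).raise_for_status()


if __name__ == "__main__":
    enrich_and_escalate({"src_ip": "203.0.113.7", "rule": "ssh-bruteforce"})
```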
Top Skills:
- SIEM: InfoSec background; incident response/threat hunting; rule creation (some query language experience needed)
- SOAR/Automation: Python automation, big data, systems. Cortex XSOAR is pretty established - maintaining existing playbooks, logic changes, bug fixes
Required:
- 6+ years of experience in a Security Operations Center (SOC) environment or a similar cybersecurity role
- Hands-on experience with managing and configuring SIEM platforms (e.g., Elastic SIEM, Splunk, QRadar, Microsoft Sentinel)
- Demonstrable experience with SOAR platforms (e.g., Palo Alto Cortex XSOAR, Splunk SOAR) and playbook development
- Proficiency in Python for scripting and automation of security tasks
- Strong understanding of incident response methodologies, threat intelligence, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST)
- Excellent analytical and problem-solving skills with the ability to work effectively in a fast-paced environment
Preferred:
- Relevant industry certifications such as CISSP, GCIH, or similar
- Experience with cloud security and environmental constructs (AWS, Azure, GCP)
- Familiarity with other scripting languages (e.g., PowerShell, Bash)
- Knowledge of network and endpoint security solutions
Sr. Forward Deployed Engineer (Palantir)
Dallas, TX jobs
At Trinity Industries, we don't just build railcars and deliver logistics - we shape the future of industrial transportation and infrastructure. As a Senior Forward Deployed Engineer, you'll be on the front lines deploying Palantir Foundry solutions directly into our operations, partnering with business leaders and frontline teams to transform complex requirements into intuitive, scalable solutions. Your work will streamline manufacturing, optimize supply chains, and enhance safety across our enterprise. This is more than a coding role - it's an opportunity to embed yourself in the heart of Trinity's mission, solving real‑world challenges that keep goods and people moving across North America.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
End-to-End Solution Delivery:
Autonomously lead the design, development, and deployment of scalable data pipelines, full applications, and workflows in Palantir Foundry, integrating with cloud platforms (e.g., AWS, Azure, GCP) and external sources (e.g., Snowflake, Oracle, REST APIs). Ensure solutions are reliable, secure, and compliant with industry standards (e.g., GDPR, SOX), while handling ambiguity and delivering on-time results in high-stakes environments. Demonstrate deep expertise in Foundry's ecosystem to independently navigate and optimize complex builds
Full Application and Workflow Development:
Build comprehensive, end-to-end applications and automated workflows using Foundry modules such as Workshop, Slate, Quiver, Contour, and Pipeline Builder. Focus on creating intuitive, interactive user experiences that integrate front-end interfaces with robust back-end logic, enabling seamless operational tools like real-time supply chain monitoring systems or AI-driven decision workflows, going beyond data models to deliver fully functional, scalable solutions
Data Modeling and Transformation for Advanced Analytics:
Architect robust data models and ontologies in Foundry to standardize and integrate complex datasets from manufacturing and logistics sources. Develop reusable transformation logic using PySpark, SQL, and Foundry tools (e.g., Pipeline Builder, Code Repositories) to cleanse, enrich, and prepare data for advanced analytics, enabling predictive modeling, AI-driven insights, and operational optimizations like cost reductions or efficiency gains. Focus on creating semantic integrity across domains to support proactive problem-solving and "game-changing" outcomes (see the transform sketch after this list)
Dashboard Development and Visualization:
Build interactive dashboards and applications using Foundry modules (e.g., Workshop, Slate, Quiver, Contour) to provide real-time KPIs, trends, and visualizations for business stakeholders. Leverage these tools to transform raw data into actionable insights, such as supply chain monitoring or performance analytics, enhancing decision-making and user adoption
AI Integration and Impact:
Elevate business transformation by designing and implementing AIP pipelines and integrations that harness AI/ML for high-impact applications, such as predictive analytics in leasing & logistics, anomaly detection in manufacturing, or automated decision-making in supply chains. Drive transformative innovations through AIP's capabilities, integrating Large Language Models (LLMs), TensorFlow, PyTorch, or external APIs to deliver bottom-line results
Leadership and Collaboration:
Serve as a lead FDE on the team, collaborating with team members through hands-on guidance, code reviews, workshops, and troubleshooting. Lead by example in fostering a culture of efficient Foundry building and knowledge-sharing to scale team capabilities
Business Domain Strategy and Innovation:
Deeply understand Trinity's industrial domains (e.g., leasing financials, manufacturing processes, supply chain logistics) to identify stakeholder needs better than they do themselves. Propose and implement disruptive solutions that drive long-term productivity, retention, and business transformation, incorporating interoperable cloud IDEs such as Databricks for complementary data processing and analytics workflows
Collaboration and Stakeholder Engagement:
Work cross-functionally with senior leadership and teams to gather requirements, validate solutions, and ensure trustworthiness in high-stakes projects
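For a concrete flavor of the pipeline work above, here is a minimal Foundry Code Repositories transform using the transforms-python API; the dataset paths, columns, and cleansing rules are invented for illustration:

```python
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Trinity/pipelines/clean/railcar_orders"),   # hypothetical path
    raw=Input("/Trinity/pipelines/raw/railcar_orders"),  # hypothetical path
)
def clean_orders(raw):
    # Conform raw ERP extracts so the ontology can link orders to railcars.
    return (
        raw.filter(F.col("order_id").isNotNull())
        .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
        .dropDuplicates(["order_id"])
    )
```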
What you'll bring:
Bachelor's degree in Computer Science, Engineering, Data Science, Financial Engineering, Econometrics, or a related field required (Master's preferred)
8+ years of hands-on experience in data engineering, with at least 4 years specializing in Palantir Foundry (e.g., Ontology, Pipelines, AIP, Workshop, Slate), demonstrating deep, autonomous proficiency in building full applications and workflows
Proven expertise in Python, PySpark, SQL, and building scalable ETL workflows, with experience integrating with interoperable cloud IDEs such as Databricks
Demonstrated ability to deliver end-to-end solutions independently, with strong evidence of quantifiable impacts (e.g., "Built pipeline reducing cloud services expenditures by 30%")
Strong business acumen in industrial domains like manufacturing, commercial leasing, supply chain, or logistics, with examples of proactive innovations
Experience collaborating with team members and leadership in technical environments
Excellent problem-solving skills, with a track record of handling ambiguity and driving results in fast-paced settings
Preferred Qualifications
Certifications in Palantir Foundry (e.g., Foundry Data Engineer, Application Developer)
Experience with AI/ML integrations (e.g., TensorFlow, PyTorch, LLMs) within Foundry AIP for predictive analytics
Familiarity with CI/CD tools and cloud services (e.g., AWS, Azure, Google Cloud).
Strongly Desired: Hands-on experience with enterprise visualization platforms such as Qlik, Tableau, or PowerBI to enhance dashboard development and analytics delivery (not required but a significant plus for integrating with Foundry tools).
GCP Engineer with BigQuery, PySpark
Phoenix, AZ jobs
Job Title: GCP Engineer with BigQuery, PySpark
Experience Required - 7+ Years
Must Have Technical/Functional Skills
GCP Engineer with BigQuery, PySpark, and Python experience
Roles & Responsibilities
· 6+ years of professional experience with at least 4+ years of GCP Data Engineer experience
· Experience working on GCP application Migration for large enterprise
· Hands on Experience with Google Cloud Platform (GCP)
· Extensive experience with ETL/ELT tools and data transformation frameworks
· Working knowledge of data storage solutions like BigQuery or Cloud SQL
· Solid skills in data orchestration tools like Airflow or Cloud Workflows
· Familiarity with Agile development methods
· Hands-on experience with Spark, Python, and PySpark APIs (see the sketch after this list)
· Knowledge of various shell scripting tools
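As a minimal sketch of the BigQuery-plus-PySpark work above: read a table through the spark-bigquery connector, aggregate, and write the result back. Project, dataset, and table names are placeholders, and the connector is assumed to be on the classpath (as it is on Dataproc):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-aggregate").getOrCreate()

# Hypothetical table names for illustration.
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")
    .load()
)

daily = events.groupBy("event_date").count()

(
    daily.write.format("bigquery")
    .option("table", "my-project.analytics.daily_counts")
    .option("writeMethod", "direct")  # Storage Write API; needs a recent connector
    .save()
)
```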
Salary Range - $90,000 to $120,000 per year
Interested candidates, please send your updated resume to *******************
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Windchill Sr. Developer
Greensboro, NC jobs
Must Have Technical/Functional Skills:
Strong knowledge of Windchill architecture and PLM processes
Design, develop, and implement PTC Windchill PLM solutions.
Customization and configuration of Windchill PLM software to meet business requirements.
Integrate Windchill PLM with other enterprise systems and applications using ESI/ERP Connector.
Provide technical support and troubleshooting for Windchill PLM solutions.
CAD Integration support and User support activities
Upgrade and maintenance of Windchill PLM systems.
Knowledge of migration activities such as PTC WBM or third-party tools.
Familiarity with database management and SQL - Oracle/PostgreSQL
System/Business Administration in Windows platform.
Roles & Responsibilities:
Work closely with the customer on maintenance of PLM Windchill, along with leading the integration initiative for the ERP rollout project.
Salary Range: $94,000-$130,000 a year
Java Software Engineer
Iselin, NJ jobs
Job Information:
Functional Title - Assistant Vice President, Java Software Development Engineer
Department - Technology
Corporate Level - Assistant Vice President
Report to - Director, Application Development
Expected full-time salary range between $125,000 - $145,000 + variable compensation + 401(k) match + benefits
Job Description:
This position is with CLS Technology. The primary responsibilities of the job will be
(a) Hands-on software application development
(b) Level 3 support
Duties, Responsibilities, and Deliverables:
Develop scalable, robust applications utilizing appropriate design patterns, algorithms and Java frameworks
Collaborate with Business Analysts, Application Architects, Developers, QA, Engineering, and Technology Vendor teams for design, development, testing, maintenance and support
Adhere to CLS SDLC process and governance requirements and ensure full compliance of these requirements
Plan, implement and ensure that delivery milestones are met
Provide solutions using design patterns, common techniques, and industry best practices that meet the typical challenges/requirements of a financial application including usability, performance, security, resiliency, and compatibility
Proactively recognize system deficiencies and implement effective solutions
Participate in, contribute to, and assimilate changes, enhancements, requirements (functional and non-functional), and requirements traceability
Apply significant knowledge of industry trends and developments to improve CLS in-house practices and services
Provide Level-3 support. Provide application knowledge and training to Level-2 support teams
Experience Requirements:
5+ years of hands-on application development and testing experience with proficient knowledge of core Java and JEE technologies such as JDBC, JAXB, and related Java/Web technologies
Knowledge of Python, Perl, Unix shell scripting is a plus
Expert hands-on experience with SQL and with at least one DBMS such as IBM DB2 (preferred) or Oracle is a strong plus
Expert knowledge of and experience in securing web applications, secure coding practices
Hands-on knowledge of application resiliency, performance tuning, technology risk management is a strong plus
Hands-on knowledge of messaging middleware such as IBM MQ (preferred) or TIBCO EMS, and application servers such as WebSphere, or WebLogic
Knowledge of SWIFT messaging, payments processing, FX business domain is a plus
Hands-on knowledge of CI/CD practices and DevOps toolsets such as JIRA, GIT, Ant, Maven, Jenkins, Bamboo, Confluence, and ServiceNow.
Hands-on knowledge of MS Office toolset including MS-Excel, MS-Word, PowerPoint, and Visio
Proven track record of successful application delivery to production and effective Level-3 support.
Success factors: In addition, the person selected for the job will
Have strong analytical, written and oral communication skills with a high self-motivation factor
Possess excellent organization skills to manage multiple tasks in parallel
Be a team player
Have the ability to work on complex projects with globally distributed teams and manage tight delivery timelines
Have the ability to smoothly handle high stress application development and support environments
Strive continuously to improve stakeholder management for end-to-end application delivery and support
Qualification Requirements:
Bachelor's Degree
Minimum 5 years of experience in Information Technology