
DevOps Engineer jobs at TSYS

- 4815 jobs
  • Databricks Platform Engineer

    Edward Jones (4.5 company rating)

    Tempe, AZ

    Innovate here. And see your ideas come to life. It's an exciting time to work in tech at Edward Jones. We are making massive investments in emerging technologies to improve how we work with our clients and with each other. Relationships are the focus of our business model, and working in Technology here means using your skills to build, deliver, and maintain the technologies that enable us to deepen and support those relationships. The best part? We develop and create our own industry-leading solutions internally, and you can be a part of it: working with emerging technologies, creating platforms, programs, and experiences that change how we work together and support our client-first focus, and changing the future of our firm, the industry, and the advisor-client relationship.

    Job Overview
    Position Schedule: Full-Time. This job posting is anticipated to remain open for 30 days, from 10-Dec-2025; it may close early due to the volume of applicants.

    About the Opportunity: Are you passionate about enabling advanced analytics, data science, and AI? Do you thrive at the intersection of cloud engineering and transformative business initiatives? If so, the Data Platform Engineering team is seeking an experienced Databricks Platform Engineer to help drive our enterprise data modernization strategy. This opportunity positions you at the forefront of our mission to accelerate advanced analytics, data science, and machine learning across the enterprise by building, managing, and evolving our Databricks platform on Azure. As a Databricks Platform Engineer, you will architect, implement, and optimize our Azure Databricks environment, supporting mission-critical analytics projects across multiple business units. You'll directly enable high-value initiatives while enhancing platform reliability, scalability, and governance. You will collaborate with data engineering, data science, analytics, and application teams to empower business innovation, drive technical excellence, and support key transformation initiatives.

    What you will do:
    - Platform Architecture & Management: Design, implement, and manage scalable, secure Databricks platforms, including workspace/cluster provisioning, lifecycle management, and resource optimization using Delta Lake and Photon to ensure performance, reliability, and cost efficiency.
    - Infrastructure Automation & Monitoring: Automate provisioning and deployment using Terraform, GitLab CI/CD, and scripting (Python, Bash, PowerShell) while building observability dashboards to monitor platform health, performance, and costs.
    - Security & Governance: Implement and enforce security policies, access controls, and compliance requirements using Unity Catalog and Databricks security features to maintain enterprise-grade data protection.
    - Collaboration & Troubleshooting: Partner with data scientists, ML engineers, and product teams to deploy data pipelines and analytical services while resolving cluster configuration issues, job failures, and performance bottlenecks.
    - Platform Innovation & Support: Evaluate and adopt emerging Databricks capabilities, document architecture and best practices, mentor junior engineers, and participate in an on-call rotation for rapid incident response and platform stability.

    You will also participate in the Databricks platform on-call support rotation, providing expert assistance and rapid response for high-priority incidents, escalations, and critical service interruptions. This is a vital component of our commitment to platform stability and end-user success.

    Benefits: Edward Jones' compensation and benefits package includes medical and prescription drug, dental, vision, voluntary benefits (such as accident, hospital indemnity, and critical illness), short- and long-term disability, basic life, and basic AD&D coverage. Short- and long-term disability, basic life, and basic AD&D coverage are provided at no cost to associates. Edward Jones offers a 401(k) retirement plan and tax-advantaged accounts: a health savings account and a flexible spending account. Edward Jones observes ten paid holidays and provides 15 days of vacation for new associates beginning on January 1 of each year, as well as sick time, personal days, and a paid day for volunteerism. Associates may be eligible for bonuses and profit sharing. All associates are eligible for the firm's Employee Assistance Program. For more information on the benefits available to Edward Jones associates, please visit our benefits page.

    Hiring Minimum: $99,200. Hiring Maximum: $168,900.

    Position Requirements:
    - Bachelor's degree in Computer Science, Information Systems, or another applicable field is preferred.
    - 5+ years of experience in systems administration.
    - Minimum 3 years of hands-on experience designing, deploying, managing, and supporting Azure Databricks, Azure Data Lake Storage (ADLS), and related data solutions as a Platform Engineer/Administrator, with deep knowledge of Azure cloud infrastructure, resource management, security, and networking.
    - Proficiency in automation and scripting tools, including Terraform (IaC), Azure CLI, Databricks CLI, PowerShell, Python, REST APIs, and SQL, for infrastructure provisioning, deployment, and platform management.
    - CI/CD and version control expertise with Git, GitHub Actions, and related tools to automate data and infrastructure workflows, ensuring reliable and repeatable deployment processes.
    - Preferred: experience in performance tuning and optimization of Azure ADLS and Databricks clusters, cost optimization strategies, and familiarity with data ingestion tools (Qlik Replicate, Apache Kafka, Matillion ELT, Snowflake).
    - Agile development experience working collaboratively in cross-functional product teams, with exposure to iterative development practices and modern collaboration frameworks.
    - Excellent communication skills, with the ability to articulate complex system designs, architectural patterns, and technical solutions clearly to stakeholders across all levels of leadership, from technical teams to executives.

    Candidates who live within a commutable distance of our Tempe, AZ and St. Louis, MO home office locations are expected to work in the office three days per week, with preference for Tuesday through Thursday. Current internal home-based associates: while this role is posted as hybrid, if selected and accepted, you may retain your home-based status. Edward Jones intends in good faith to continue offering the role as home-based, though future business or regulatory needs may require on-site work.

    Awards & Accolades: At Edward Jones, we are building a place where everyone feels like they belong. We're proud of our associates' contributions to the firm and the recognitions we have received in the U.S. and Canada.

    About Us: Join a financial services firm where your contributions are valued. Edward Jones is a Fortune 500¹ company where people come first. With over 9 million clients and 20,000 financial advisors across the U.S. and Canada, we're proud to be privately owned, placing the focus on our clients rather than shareholder returns. Behind everything we do is our purpose: we partner for positive impact to improve the lives of our clients and colleagues, and together, better our communities and society. We are an innovative, flexible, and inclusive organization that attracts, develops, and inspires performance excellence and a sense of belonging. People are at the center of our partnership. Edward Jones associates are seen, heard, respected, and supported. This is what we believe makes us the best place to start or build your career. View our Purpose, Inclusion and Citizenship Report.

    ¹Fortune 500, published June 2024, data as of December 2023. Compensation provided for using, not obtaining, the rating.

    Edward Jones does not discriminate on the basis of race, color, gender, religion, national origin, age, disability, sexual orientation, pregnancy, veteran status, genetic information, or any other basis prohibited by applicable law. #LI-HO
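    To give a flavor of the provisioning automation this role describes, here is a minimal sketch of building a request body for the Databricks Clusters API (POST /api/2.1/clusters/create). The function name, cluster name, and sizing values are hypothetical; check the Databricks REST API reference before using this shape against a real workspace.

```python
import json

def cluster_create_payload(name, spark_version, node_type,
                           min_workers, max_workers, idle_minutes=60):
    # Assemble the JSON body expected by the Databricks Clusters API.
    return {
        "cluster_name": name,
        "spark_version": spark_version,        # Databricks Runtime version string
        "node_type_id": node_type,             # Azure VM SKU backing each node
        "autoscale": {"min_workers": min_workers, "max_workers": max_workers},
        "autotermination_minutes": idle_minutes,  # cost control: stop idle clusters
    }

payload = cluster_create_payload("analytics-etl", "15.4.x-scala2.12",
                                 "Standard_DS3_v2", 2, 8)
print(json.dumps(payload, indent=2))
```

    In practice this payload would be posted with a bearer token, or the equivalent cluster would be declared in Terraform so that provisioning stays version-controlled and repeatable.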
    $99.2k-168.9k yearly 10h ago
  • Principal Software Engineer

    Dinan & Associates (4.1 company rating)

    South San Francisco, CA

    This is a full-time role with a client of Dinan & Associates. This role is with an established company and includes excellent health care and other benefits.

    Role: Principal / Senior Principal Software Engineer
    Industry: Biotechnology / Pharmaceutical R&D
    Location: San Francisco Bay Area (Hybrid)

    The Organization: We are a leading global biotechnology company driven to innovate and ensure access to healthcare for generations to come. Our goal is to create a healthier future and more time for patients with their loved ones.

    The Position: Advances in AI, data, and computational sciences are transforming drug discovery and development. Our Research and Early Development organizations have demonstrated how these technologies accelerate R&D, leveraging data and novel computational models to drive impact. Our Computational Sciences group is a strategic, unified team dedicated to harnessing the transformative power of data and Artificial Intelligence (AI) to assist scientists in delivering innovative medicines for patients worldwide. Within this group, the Data and Digital Solutions team leads the modernization of our computational and data ecosystems by integrating digital technologies to empower stakeholders, advance data-driven science, and accelerate decision-making.

    The Role: The Solutions team develops modernized and interconnected computational and data ecosystems. These are foundational to building solutions that accelerate the work done by computational and bench scientists and enable ML/AI tool creation and adoption. Our team specializes in building data pipelines and applications for data acquisition, collection, storage, transformation, linkage, and sharing. As a Software Engineer in the Solutions Engineering capability, you will work closely with Data Engineers, Product Leaders, and Tech/ML Ops, as well as directly with key partners including Computational Scientists and Research Scientists. You will build robust and scalable systems that unlock the potential of diverse scientific data, accelerating the discovery and development of life-changing treatments.

    Key Responsibilities:
    - Technical Leadership: Provide strategic and tactical technical leadership for ongoing initiatives. Identify new opportunities with an eye for consolidation, deprecation, and building common solutions.
    - System Design: Take responsibility for technical excellence, ensuring solutions are innovative, best-in-class, and integrated by delivering data flows and pipelines across key domains like Research Biology, Drug Discovery, and Translational Medicine.
    - Architecture: Learn, deeply understand, and improve data workflows, application architecture, and data ecosystems by leveraging standard patterns (layered architecture, microservices, event-driven, multi-tenancy).
    - Collaboration: Understand and influence technical decisions around data workflows and application development while working collaboratively with key partners.
    - AI/ML Integration: Integrate diverse sets of data to power AI/ML and natural-language search, enabling downstream teams working on workflows, visualization, and analytics. Facilitate the implementation of AI models.

    Who You Are:
    - Education: Bachelor's or Master's degree in Computer Science or a similar technical field, or equivalent experience.
    - Experience: 7+ years of software engineering experience (Principal Software Engineer level); 12+ years (Senior Principal Software Engineer level).
    - Full-Stack Expertise: Deep experience in full-stack development is required, with strong skills in building front ends using JavaScript and React (or similar libraries) as well as backends in high-level languages like Python or Java.
    - Data & Cloud: Extensive experience with databases and data analytics (SQL/NoSQL, ETL, ELT) and with APIs (REST, GraphQL). Extensive experience with cloud-native architectures in public clouds (ideally AWS) is preferred.
    - Engineering Best Practices: Experience building data applications that are highly reliable, scalable, performant, secure, and robust. You adopt and champion Open Source, Cloud First, API First, and AI First approaches.
    - Communication: Outstanding communication skills, capable of articulating technical concepts clearly to diverse audiences, including executives and globally distributed technical teams.
    - Mentorship: Ability to provide technical mentorship to junior developers and foster professional growth.
    - Domain Knowledge (Preferred): Ideally, you are a full-stack engineer with domain knowledge in biology, chemistry, drug discovery, translational medicine, or a related scientific discipline.

    Compensation & Benefits: Competitive salary commensurate with experience (Principal and Senior Principal levels available); discretionary annual bonus based on individual and company performance; comprehensive benefits package; relocation benefits available.

    Work Arrangement: Onsite presence on the San Francisco Bay Area campus is expected at least 3 days a week.
    $168k-227k yearly est. 2d ago
  • Sr Software Engineer

    CTC (4.6 company rating)

    Georgetown, KY

    Required Qualifications:
    - Excellent communication and collaboration skills
    - Ability and desire to learn new technologies and support the continuous improvement of the team's processes
    - Working knowledge of developing C# .NET web applications (.NET Core is a plus)
    - Proficiency in at least one of the following: React, Angular, the MVC framework
    - Knowledge of Agile development methodologies, especially Scrum
    - Must be prepared to show and/or discuss examples of previously developed systems/applications
    - Experience designing relational databases

    Additional Beneficial Qualifications:
    - Experience using AI tools to enhance software development work
    - Experience developing applications using a microservices architecture
    - Experience working as part of a Scrum team
    - Experience building web applications using Amazon Web Services (AWS) cloud services (or other cloud platforms)
    - Proficiency in SQL Server and DocumentDB/MongoDB
    - Automotive or other manufacturing/engineering industry experience
    $105k-141k yearly est. 1d ago
  • Java Software Engineer

    CLS Group (4.8 company rating)

    Iselin, NJ

    Job Information:
    - Functional Title: Assistant Vice President, Java Software Development Engineer
    - Department: Technology
    - Corporate Level: Assistant Vice President
    - Reports to: Director, Application Development
    - Expected full-time salary range: $125,000-$145,000 + variable compensation + 401(k) match + benefits

    Job Description: This position is with CLS Technology. The primary responsibilities of the job will be (a) hands-on software application development and (b) Level 3 support.

    Duties, Responsibilities, and Deliverables:
    - Develop scalable, robust applications utilizing appropriate design patterns, algorithms, and Java frameworks
    - Collaborate with Business Analysts, Application Architects, Developers, QA, Engineering, and Technology Vendor teams for design, development, testing, maintenance, and support
    - Adhere to CLS SDLC process and governance requirements and ensure full compliance with these requirements
    - Plan, implement, and ensure that delivery milestones are met
    - Provide solutions using design patterns, common techniques, and industry best practices that meet the typical challenges/requirements of a financial application, including usability, performance, security, resiliency, and compatibility
    - Proactively recognize system deficiencies and implement effective solutions
    - Participate in, contribute to, and assimilate changes, enhancements, requirements (functional and non-functional), and requirements traceability
    - Apply significant knowledge of industry trends and developments to improve CLS in-house practices and services
    - Provide Level 3 support, and provide application knowledge and training to Level 2 support teams

    Experience Requirements:
    - 5+ years of hands-on application development and testing experience, with proficient knowledge of core Java and JEE technologies such as JDBC and JAXB, and Java/web technologies
    - Knowledge of Python, Perl, or Unix shell scripting is a plus
    - Expert hands-on experience with SQL and with at least one DBMS such as IBM DB2 (preferred) or Oracle is a strong plus
    - Expert knowledge of and experience in securing web applications and secure coding practices
    - Hands-on knowledge of application resiliency, performance tuning, and technology risk management is a strong plus
    - Hands-on knowledge of messaging middleware such as IBM MQ (preferred) or TIBCO EMS, and application servers such as WebSphere or WebLogic
    - Knowledge of SWIFT messaging, payments processing, and the FX business domain is a plus
    - Hands-on knowledge of CI/CD practices and DevOps toolsets such as JIRA, Git, Ant, Maven, Jenkins, Bamboo, Confluence, and ServiceNow
    - Hands-on knowledge of the MS Office toolset, including Excel, Word, PowerPoint, and Visio
    - Proven track record of successful application delivery to production and effective Level 3 support

    Success Factors: In addition, the person selected for the job will:
    - Have strong analytical, written, and oral communication skills with high self-motivation
    - Possess excellent organization skills to manage multiple tasks in parallel
    - Be a team player
    - Have the ability to work on complex projects with globally distributed teams and manage tight delivery timelines
    - Have the ability to smoothly handle high-stress application development and support environments
    - Strive continuously to improve stakeholder management for end-to-end application delivery and support

    Qualification Requirements: Bachelor's degree; minimum 5 years of experience in Information Technology.
    $125k-145k yearly 3d ago
  • Data Engineer

    Hedge Fund (4.3 company rating)

    Houston, TX

    We are looking for talented and motivated Python Data Engineers to help expand our data assets in support of our analytical capabilities in a full-time role. This role will have the opportunity to interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver on a wide range of data-related needs.

    What you will do:
    - Translate business requirements into technical deliveries; drive out requirements for data ingestion and access
    - Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
    - Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
    - Efficiently coordinate with the rest of our team in different locations

    Qualifications:
    - 6+ years of enterprise-level coding experience with Python
    - Computer Science, MIS, or related degree
    - Familiarity with the Pandas and NumPy packages
    - Experience with data engineering and building data pipelines
    - Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
    - Strong understanding of object-oriented design, design patterns, and SOA architectures
    - Proficient understanding of peer review, code versioning, and bug/issue tracking tools
    - Strong communication skills
    - Familiarity with containerization solutions like Docker and Kubernetes is a plus
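    As a flavor of the Python ETL toolkit work described in this posting, here is a minimal sketch of a reusable record-cleaning step. The schema, field names, and date format are hypothetical, for illustration only.

```python
from datetime import datetime

def clean_trade_record(raw: dict) -> dict:
    # Normalize one raw trade record into the shape downstream analytics expect.
    # (Hypothetical schema; a real toolkit would also validate and log rejects.)
    return {
        "symbol": raw["symbol"].strip().upper(),
        "price": float(raw["price"]),
        "quantity": int(raw["quantity"]),
        # store timestamps in one canonical ISO format
        "traded_at": datetime.strptime(raw["traded_at"], "%m/%d/%Y").date().isoformat(),
    }

rows = [{"symbol": " aapl ", "price": "187.5", "quantity": "100",
         "traded_at": "12/10/2025"}]
cleaned = [clean_trade_record(r) for r in rows]
print(cleaned[0])
```

    Standardizing small steps like this into a shared library is what "consolidation of core functionality" tends to mean in practice: every pipeline calls the same cleaner instead of re-implementing it.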
    $99k-127k yearly est. 4d ago
  • Windchill Sr. Developer

    Tata Consultancy Services (4.3 company rating)

    Greensboro, NC

    Must-Have Technical/Functional Skills:
    - Strong knowledge of Windchill architecture and PLM processes
    - Design, develop, and implement PTC Windchill PLM solutions
    - Customization and configuration of Windchill PLM software to meet business requirements
    - Integrate Windchill PLM with other enterprise systems and applications using the ESI/ERP Connector
    - Provide technical support and troubleshooting for Windchill PLM solutions
    - CAD integration support and user support activities
    - Upgrade and maintenance of Windchill PLM systems
    - Knowledge of migration activities using PTC WBM or third-party tools
    - Familiarity with database management and SQL (Oracle/PostgreSQL)
    - System/business administration on the Windows platform

    Roles & Responsibilities: Work closely with the customer on maintenance of PLM Windchill, along with leading the integration initiative for the ERP rollout project.

    Salary Range: $94,000-$130,000 a year #LI-CM2
    $94k-130k yearly 4d ago
  • Data Engineer

    RSM Solutions, Inc. (4.4 company rating)

    Irvine, CA

    Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn; I appreciate it. If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Because of this, I actually write JDs myself... no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia... especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.

    As with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very important, but we also have some social fit characteristics that matter. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job', so the ability to jump in and help is needed for success in this role.

    This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available. I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder.

    The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house. You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business sees rapid, reliable predictive insight.

    Here are some of the main responsibilities:
    - Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
    - Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
    - Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
    - Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error-budget metrics to enable proactive alerting.
    - Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
    - Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
    - Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
    - Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
    - Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
    - Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
    - Mentor engineers on best practices in containerized data engineering and MLOps.

    Here is what we are seeking:
    - At least 6 years of experience building data pipelines in Spark or equivalent.
    - At least 2 years deploying workloads on Kubernetes/Kubeflow.
    - At least 2 years of experience with MLflow or similar experiment-tracking tools.
    - At least 6 years of experience with T-SQL and Python/Scala for Spark.
    - At least 6 years of PowerShell/.NET scripting.
    - At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
    - Kubernetes CKA/CKAD, Azure Data Engineer (DP-203), or MLOps-focused certifications (e.g., Kubeflow or MLflow) would be great to see.
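    The schema-drift checks this role calls for can be sketched in a few lines of plain Python: compare the column-to-type mapping a pipeline expects against what it actually received, and report every difference. The column names and type strings below are hypothetical.

```python
def detect_schema_drift(expected: dict, observed: dict) -> list:
    # Compare an expected column->type mapping against what a pipeline received
    # and return human-readable drift findings (illustrative helper only).
    findings = []
    for col, typ in expected.items():
        if col not in observed:
            findings.append(f"missing column: {col}")
        elif observed[col] != typ:
            findings.append(f"type change: {col} {typ} -> {observed[col]}")
    for col in observed:
        if col not in expected:
            findings.append(f"unexpected column: {col}")
    return findings

expected = {"customer_id": "bigint", "amount": "decimal", "ts": "timestamp"}
observed = {"customer_id": "bigint", "amount": "string", "ts": "timestamp",
            "region": "string"}
print(detect_schema_drift(expected, observed))
# ['type change: amount decimal -> string', 'unexpected column: region']
```

    In a pipeline this check would run before each load, with any non-empty findings list pushed to alerting (e.g., a Prometheus counter) so drift is caught before it corrupts training data.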
    $111k-166k yearly est. 5d ago
  • Data Platform Engineer / AI Workloads

    The Crypto Recruiters (3.3 company rating)

    Santa Rosa, CA

    We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.

    Your Rhythm:
    - Design, build, and maintain data infrastructure systems, such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
    - Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
    - Tackle complex challenges in distributed systems, databases, and AI infrastructure
    - Collaborate with technical leadership to define and refine the product roadmap
    - Write high-quality, well-tested, and maintainable code
    - Contribute to the open-source community and engage with developers in the space

    Your Vibe:
    - 5+ years of experience designing and building distributed database systems
    - Expertise in building and operating scalable, reliable, and secure database infrastructure systems
    - Strong knowledge of distributed compute, data orchestration, distributed storage, and streaming infrastructure
    - Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB
    - Programming skills in Python
    - Passion for building developer tools and scalable infrastructure

    Our Vibe:
    - Relaxed work environment
    - 100% paid, top-of-the-line health care benefits
    - Full ownership, no micromanagement
    - Strong equity package
    - 401(k)
    - Unlimited vacation
    - An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
    $128k-181k yearly est. 4d ago
  • Data Platform Engineer / AI Workloads

    The Crypto Recruiters (3.3 company rating)

    San Francisco, CA

    We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.

    Your Rhythm:
    - Design, build, and maintain data infrastructure systems, such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
    - Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
    - Tackle complex challenges in distributed systems, databases, and AI infrastructure
    - Collaborate with technical leadership to define and refine the product roadmap
    - Write high-quality, well-tested, and maintainable code
    - Contribute to the open-source community and engage with developers in the space

    Your Vibe:
    - 5+ years of experience designing and building distributed database systems
    - Expertise in building and operating scalable, reliable, and secure database infrastructure systems
    - Strong knowledge of distributed compute, data orchestration, distributed storage, and streaming infrastructure
    - Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB
    - Programming skills in Python
    - Passion for building developer tools and scalable infrastructure

    Our Vibe:
    - Relaxed work environment
    - 100% paid, top-of-the-line health care benefits
    - Full ownership, no micromanagement
    - Strong equity package
    - 401(k)
    - Unlimited vacation
    - An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
    $127k-180k yearly est. 4d ago
  • Engineer III, Software Development

    S&P Global (4.3 company rating)

    New York, NY

    S&P Global Ratings

    The Role: Software Engineer

    The Team: The team is responsible for building external customer-facing websites using emerging tools and technologies. The team works in a dynamic environment that offers ample opportunities to apply creative ideas to complex problems for the Commercial team.

    The Impact: You will have the opportunity every single day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. You will be making meaningful contributions in building solutions for user interfaces, web services, APIs, and data processing.

    What's in it for you:
    - Build a career with a global company.
    - Grow and improve your skills by working on enterprise-level products and new technologies.
    - Enjoy an attractive benefits package including medical benefits, gym discounts, and corporate benefits.
    - Ongoing education through participation in conferences and training.
    - Access to the most interesting information technologies.

    Responsibilities:
    - Drive the development of strategic initiatives and BAU in a timely manner, collaborating with stakeholders.
    - Set priorities and coordinate workflows to efficiently contribute to S&P objectives.
    - Promote outstanding customer service, high performance, teamwork, and accountability.
    - Define roles and responsibilities with clear goals and processes.
    - Contribute to S&P enterprise architecture and strategic roadmaps.
    - Develop agile practices with continuous development, integration, and deployment.
    - Collaborate with global technology development teams and cross-functional teams.

    What We're Looking For:
    - Bachelor's/Master's degree in Computer Science, Data Science, or equivalent.
    - 4+ years of experience in a related role.
    - Excellent communication and interpersonal skills.
    - Strong development skills, specifically in ReactJS, Java, and related technologies.
    - Ability to work in a collaborative environment.

    Right to work requirements: This role is limited to candidates with an indefinite right to work within the USA.

    Compensation/Benefits Information (US Applicants Only): S&P Global states that the anticipated base salary range for this position is $90,000-$120,000. Final base salary for this role will be based on the individual's geographical location as well as experience and qualifications for the role. In addition to base compensation, this role is eligible for an annual incentive plan, but not for additional compensation such as a sales commission plan. This role is eligible to receive additional S&P Global benefits. For more information on the benefits we provide to our employees, please click here.
    $90k-120k yearly 2d ago
  • SOC Engineer

    Source One Technical Solutions 4.3 company rating

    Foster City, CA jobs

    Source One is a consulting services company, and we're currently looking for the following individuals to work for an on-demand, autonomous ride-hailing company in Foster City, CA. Note: we are unable to work with third-party companies or offer visa sponsorship for this role. Title: SOC Engineer (contract) Pay Rate: $94.25/hr (W-2) Hybrid: 3 days/week on-site Description: We are seeking SOC Engineers to help enhance the company's security posture by driving automation and conducting proactive threat hunting. The ideal candidates have a strong InfoSec background with deep experience in SIEM and SOAR platforms, including rule and playbook development, along with proficiency in Python scripting for automation. There are two positions: one role focused more on the SIEM side (they use Elastic, but Splunk experience is acceptable), and the other focused more on automation for detection. As a SOC Engineer, you'll: - Develop and fine-tune detection and correlation rules, dashboards, and reports within the SIEM to accurately detect anomalous activities. - Create, manage, and optimize SOAR playbooks to automate incident response processes and streamline security operations. - Utilize Python scripting to develop custom integrations and automate repetitive tasks within the SOC. - Build and maintain automation workflows to enhance the efficiency of threat detection, alert triage, and incident response. - Integrate various security tools and threat intelligence feeds with our SIEM and SOAR platforms using APIs and custom scripts. - Conduct proactive threat hunting to identify potential security gaps and indicators of compromise. - Analyze security alerts and data from various sources to identify and respond to potential security incidents. - Collaborate with Information Security team members and other teams to enhance the overall security of the organization. - Create and maintain clear and comprehensive documentation for detection rules, automation workflows, and incident response procedures.
Key Responsibilities: the duties above group into four areas - SIEM and SOAR Platform Management (maintain our SIEM and SOAR platforms to ensure optimal performance and effectiveness in detecting and responding to security threats, with the rule, dashboard, and playbook development described above), Automation and Scripting, Incident Response and Threat Hunting, and Collaboration and Documentation.
Top Skills: - SIEM: InfoSec background; incident response/threat hunting; rule creation (some query-language experience needed). - SOAR/Automation: Python automation; big data; systems. Cortex XSOAR is well established here - the work involves maintaining existing playbooks, logic changes, and bug fixes. Required: - 6+ years of experience in a Security Operations Center (SOC) environment or a similar cybersecurity role - Hands-on experience with managing and configuring SIEM platforms (e.g., Elastic SIEM, Splunk, QRadar, Microsoft Sentinel) - Demonstrable experience with SOAR platforms (e.g., Palo Alto Cortex XSOAR, Splunk SOAR) and playbook development - Proficiency in Python for scripting and automation of security tasks - Strong understanding of incident response methodologies, threat intelligence, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST) - Excellent analytical and problem-solving skills with the ability to work effectively in a fast-paced environment Preferred: - Relevant industry certifications such as CISSP, GCIH, or similar - Experience with cloud security and cloud environments (AWS, Azure, GCP) - Familiarity with other scripting languages (e.g., PowerShell, Bash) - Knowledge of network and endpoint security solutions
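The SIEM detection-rule work this posting centers on (thresholded correlation over event streams) can be sketched in plain Python. The event shape, field names, and threshold below are illustrative assumptions, not the client's actual SIEM schema:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative event shape: (timestamp, username, outcome). These are
# assumed field names, not the actual SIEM schema.
def detect_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    """Flag users with >= `threshold` failed logins inside a sliding
    `window` -- the logic a SIEM correlation rule expresses declaratively."""
    failures = defaultdict(list)
    alerts = set()
    for ts, user, outcome in sorted(events):
        if outcome != "failure":
            continue
        bucket = failures[user]
        bucket.append(ts)
        # Drop failures that fell out of the sliding window.
        while bucket and ts - bucket[0] > window:
            bucket.pop(0)
        if len(bucket) >= threshold:
            alerts.add(user)
    return alerts
```

For example, five failures for one user inside ten minutes would flag that user, while a couple of scattered failures for another user would not.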
    $94.3 hourly 3d ago
  • Data Platform Engineer / AI Workloads

    The Crypto Recruiters 3.3 company rating

    Fremont, CA jobs

    We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you. Your Rhythm: Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient Tackle complex challenges in distributed systems, databases, and AI infrastructure Collaborate with technical leadership to define and refine the product roadmap Write high-quality, well-tested, and maintainable code Contribute to the open-source community and engage with developers in the space Your Vibe: 5+ years of experience designing and building distributed database systems Expertise in building and operating scalable, reliable and secure database infrastructure systems Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB. Programming skills in Python Passion for building developer tools and scalable infrastructure Our Vibe: Relaxed work environment 100% paid, top-of-the-line health care benefits Full ownership, no micromanagement Strong equity package 401K Unlimited vacation An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
    $127k-179k yearly est. 4d ago
  • Sr. Forward Deployed Engineer (Palantir)

    Trinity Industries, Inc. 4.5 company rating

    Dallas, TX jobs

    At Trinity Industries, we don't just build railcars and deliver logistics - we shape the future of industrial transportation and infrastructure. As a Senior Forward Deployed Engineer, you'll be on the front lines deploying Palantir Foundry solutions directly into our operations, partnering with business leaders and frontline teams to transform complex requirements into intuitive, scalable solutions. Your work will streamline manufacturing, optimize supply chains, and enhance safety across our enterprise. This is more than a coding role - it's an opportunity to embed yourself in the heart of Trinity's mission, solving real‑world challenges that keep goods and people moving across North America. Join our team today and be a part of Delivering Goods for the Good of All! What you'll do: End-to-End Solution Delivery: Autonomously lead the design, development, and deployment of scalable data pipelines, full applications, and workflows in Palantir Foundry, integrating with cloud platforms (e.g., AWS, Azure, GCP) and external sources (e.g., Snowflake, Oracle, REST APIs). Ensure solutions are reliable, secure, and compliant with industry standards (e.g., GDPR, SOX), while handling ambiguity and delivering on-time results in high-stakes environments. Demonstrate deep expertise in Foundry's ecosystem to independently navigate and optimize complex builds Full Application and Workflow Development: Build comprehensive, end-to-end applications and automated workflows using Foundry modules such as Workshop, Slate, Quiver, Contour, and Pipeline Builder. 
Focus on creating intuitive, interactive user experiences that integrate front-end interfaces with robust back-end logic, enabling seamless operational tools like real-time supply chain monitoring systems or AI-driven decision workflows, going beyond data models to deliver fully functional, scalable solutions Data Modeling and Transformation for Advanced Analytics: Architect robust data models and ontologies in Foundry to standardize and integrate complex datasets from manufacturing and logistics sources. Develop reusable transformation logic using PySpark, SQL, and Foundry tools (e.g., Pipeline Builder, Code Repositories) to cleanse, enrich, and prepare data for advanced analytics, enabling predictive modeling, AI-driven insights, and operational optimizations like cost reductions or efficiency gains. Focus on creating semantic integrity across domains to support proactive problem-solving and "game-changing" outcomes Dashboard Development and Visualization: Build interactive dashboards and applications using Foundry modules (e.g., Workshop, Slate, Quiver, Contour) to provide real-time KPIs, trends, and visualizations for business stakeholders. Leverage these tools to transform raw data into actionable insights, such as supply chain monitoring or performance analytics, enhancing decision-making and user adoption AI Integration and Impact: Elevate business transformation by designing and implementing AIP pipelines and integrations that harness AI/ML for high-impact applications, such as predictive analytics in leasing & logistics, anomaly detection in manufacturing, or automated decision-making in supply chains. Drive transformative innovations through AIP's capabilities, integrating Large Language Models (LLMs), TensorFlow, PyTorch, or external APIs to deliver bottom-line results Leadership and Collaboration: Serve as a lead FDE on the team, collaborating with team members through hands-on guidance, code reviews, workshops, and troubleshooting. 
Lead by example in fostering a culture of efficient Foundry building and knowledge-sharing to scale team capabilities Business Domain Strategy and Innovation: Deeply understand Trinity's industrial domains (e.g., leasing financials, manufacturing processes, supply chain logistics) to identify stakeholder needs better than they do themselves. Propose and implement disruptive solutions that drive long-term productivity, retention, and business transformation, incorporating interoperable cloud IDEs such as Databricks for complementary data processing and analytics workflows Collaboration and Stakeholder Engagement: Work cross-functionally with senior leadership and teams to gather requirements, validate solutions, and ensure trustworthiness in high-stakes projects What you'll bring: Bachelor's degree in Computer Science, Engineering, Data Science, Financial Engineering, Econometrics, or a related field required (Master's preferred) 8 plus years of hands-on experience in data engineering, with at least 4 years specializing in Palantir Foundry (e.g., Ontology, Pipelines, AIP, Workshop, Slate), demonstrating deep, autonomous proficiency in building full applications and workflows Proven expertise in Python, PySpark, SQL, and building scalable ETL workflows, with experience integrating with interoperable cloud IDEs such as Databricks Demonstrated ability to deliver end-to-end solutions independently, with strong evidence of quantifiable impacts (e.g., "Built pipeline reducing cloud services expenditures by 30%") Strong business acumen in industrial domains like manufacturing, commercial leasing, supply chain, or logistics, with examples of proactive innovations Experience collaborating with team members and leadership in technical environments Excellent problem-solving skills, with a track record of handling ambiguity and driving results in fast-paced settings Preferred Qualifications Certifications in Palantir Foundry (e.g., Foundry Data Engineer, Application Developer) 
Experience with AI/ML integrations (e.g., TensorFlow, PyTorch, LLMs) within Foundry AIP for predictive analytics Familiarity with CI/CD tools and cloud services (e.g., AWS, Azure, Google Cloud). Strongly Desired: Hands-on experience with enterprise visualization platforms such as Qlik, Tableau, or Power BI to enhance dashboard development and analytics delivery (not required but a significant plus for integrating with Foundry tools).
    $86k-115k yearly est. 4d ago
  • Data Conversion Engineer

    Paymentus 4.5 company rating

    Charlotte, NC jobs

    Summary/Objective Are you looking to work at a high-growth, innovative, and purpose-driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded. About the Role The Data Conversion Engineer serves as a key component of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are integral to ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities Develop data conversion procedures using SQL, Java and Linux scripting Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications Develop new specifications to satisfy new customers and products Serve as the primary point of contact/driver for all technical related conversion activities Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates Maintain and update technical conversion documentation to share with internal and external clients and partners Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills Adapt and creatively solve encountered problems under high stress and tight deadlines Learn database structure, business logic and combine all knowledge to improve processes Be flexible Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication Management of multiple projects and conversion implementations Ability to proactively troubleshoot and solve problems with limited supervision Qualifications B.S. Degree in Computer Science or comparable experience Strong knowledge of Linux and the command line interface Exceptional SQL skills Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.) 
Familiarity with various online banking applications and understanding of third-party integrations is a plus Effective written and verbal communication skills Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues Communication: ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels Strong attention to detail Proficiency with raw data, analytics, or data reporting tools Preferred Skills Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries Experience with front-end web interfaces (HTML5, JavaScript, CSS3) Cloud technologies (AWS, GCP, Azure) Work Environment This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones. Physical Demands This role requires sitting or standing at a computer workstation for extended periods of time. Position Type/Expected Hours of Work This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand. Travel No travel is required for this position. Other Duties Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice. Equal Opportunity Statement Paymentus is an equal opportunity employer.
We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment. Reasonable Accommodation Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
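The conversion-mapper duty this listing describes, interpreting incoming bill-payment data and reshaping it to match a platform specification, can be sketched as a field-mapping table plus per-field transforms. The field names and rules below are hypothetical illustrations, not Paymentus' actual spec:

```python
# Hypothetical source -> target field mapping with per-field transforms.
# A real conversion spec would come from the platform's documentation.
MAPPING = {
    "acct_no": ("account_number", str.strip),
    "amt_due": ("amount_due_cents", lambda s: int(round(float(s) * 100))),
    "due_dt":  ("due_date", lambda s: s[:10]),  # keep the ISO date portion
}

def convert_record(source_row):
    """Map one incoming record onto the target specification,
    silently skipping source fields the spec does not define."""
    target = {}
    for src_field, value in source_row.items():
        if src_field in MAPPING:
            tgt_field, transform = MAPPING[src_field]
            target[tgt_field] = transform(value)
    return target
```

A record like `{"acct_no": " 123 ", "amt_due": "42.50", "due_dt": "2024-06-01T00:00:00"}` would come out as a trimmed account number, an integer cent amount, and a bare date, with unmapped source fields dropped.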
    $82k-114k yearly est. 5d ago
  • AWS Data Engineer

    Tata Consultancy Services 4.3 company rating

    Seattle, WA jobs

    Must Have Technical/Functional Skills: We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications. Roles & Responsibilities: • Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data. • ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems. • Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data. • Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data. • Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations. • Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health. • Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance. Base Salary Range: $100,000 - $130,000 per annum TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
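The extract-transform-load flow outlined in the responsibilities above can be sketched end to end in plain Python; in an AWS Glue job the same steps would run over S3 objects via Spark. The CSV columns and cleansing rules here are illustrative assumptions:

```python
import csv
import io

def run_etl(raw_csv):
    """Minimal ETL sketch: extract rows from CSV text, transform them
    (cleanse and derive a field), and load into an in-memory 'warehouse'."""
    # Extract: parse the raw CSV into dict rows.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    warehouse = []
    for row in rows:
        # Transform: data-quality rule -- reject rows missing the key.
        if not row.get("order_id"):
            continue
        warehouse.append({
            "order_id": row["order_id"].strip(),
            "region": row["region"].strip().upper(),
            "total_cents": int(row["qty"]) * int(row["unit_price_cents"]),
        })
    # Load: a real pipeline would write to Redshift or back to S3 here.
    return warehouse
```

The same shape scales up: extraction becomes reading S3 partitions, the transform becomes Spark/Glue logic, and the load targets Redshift, with monitoring wrapped around each stage.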
    $100k-130k yearly 1d ago
  • Kubernetes Engineer

    Tata Consultancy Services 4.3 company rating

    Plano, TX jobs

    Hands-on experience with Kubernetes engineering and development. Minimum 5-7+ years of experience in working with hybrid Infra architectures Experience in analyzing the architecture of On Prem Infrastructure for Applications (Network, Storage, Processing, Backup/DR etc). Strong understanding of Infrastructure capacity planning, monitoring, upgrades, IaC automation using Terraform and Ansible, and CI/CD using Jenkins/GitHub Actions. Experience working with engineering teams to define best practices and processes as appropriate to support the entire infrastructure lifecycle - Plan, Build, Deploy, and Operate - such as automating lifecycle activities: self-service, orchestration and provisioning, configuration management. Experience defining infrastructure direction. Drive continuous improvement including design, and standardization of process and methodologies. Experience assessing feasibility, complexity and scope of new capabilities and solutions Base Salary Range: $100,000 - $110,000 per annum TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
    $100k-110k yearly 1d ago
  • Distributed Systems Engineer / AI Workloads

    The Crypto Recruiters 3.3 company rating

    San Jose, CA jobs

    We are actively searching for a Distributed Systems Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you. Our office is located in downtown SF and we collaborate two days a week onsite. Your Rhythm: Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient Tackle complex challenges in distributed systems, databases, and AI infrastructure Collaborate with technical leadership to define and refine the product roadmap Write high-quality, well-tested, and maintainable code Contribute to the open-source community and engage with developers in the space Your Vibe: 3+ years of professional distributed database systems experience Expertise in building and operating scalable, reliable and secure database infrastructure systems Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB. Programming skills in Python Passion for building developer tools and scalable infrastructure Available to collaborate onsite 2 days a week Our Vibe: Relaxed work environment 100% paid, top-of-the-line health care benefits Full ownership, no micromanagement Strong equity package 401K Unlimited vacation An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
    $101k-138k yearly est. 2d ago
  • GCP Engineer with BigQuery, PySpark

    Tata Consultancy Services 4.3 company rating

    Phoenix, AZ jobs

    Job Title : GCP Engineer with BigQuery, PySpark Experience Required - 7+ Years Must Have Technical/Functional Skills GCP Engineer with BigQuery, PySpark and Python experience Roles & Responsibilities · 6+ years of professional experience with at least 4+ years of GCP Data Engineer experience · Experience working on GCP application migration for large enterprises · Hands-on experience with Google Cloud Platform (GCP) · Extensive experience with ETL/ELT tools and data transformation frameworks · Working knowledge of data storage solutions like BigQuery or Cloud SQL · Solid skills in data orchestration tools like Airflow or Cloud Workflows. · Familiarity with Agile development methods. · Hands-on experience with Spark, Python, and PySpark APIs. Knowledge of various shell scripting tools Salary Range - $90,000 to $120,000 per year Interested candidates, please share your updated resume at ******************* TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
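The PySpark transformation work this role involves can be illustrated with a group-by aggregation. The sketch below shows the logic in plain Python (in PySpark it would be roughly `df.groupBy("region").agg(F.sum("amount"))`); the column names are illustrative:

```python
from collections import defaultdict

def sum_by_region(rows):
    """Aggregate amount per region -- the same logic as a PySpark
    groupBy/agg over a DataFrame, written over plain dict rows."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += row["amount"]
    return dict(totals)
```

In a real pipeline the rows would come from a BigQuery table or GCS files loaded into a Spark DataFrame, and the aggregation would run distributed across executors rather than in a single loop.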
    $90k-120k yearly 4d ago
  • Distributed Systems Engineer / AI Workloads

    The Crypto Recruiters 3.3 company rating

    Sonoma, CA jobs

    We are actively searching for a Distributed Systems Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you. Our office is located in downtown SF and we collaborate two days a week onsite. Your Rhythm: Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient Tackle complex challenges in distributed systems, databases, and AI infrastructure Collaborate with technical leadership to define and refine the product roadmap Write high-quality, well-tested, and maintainable code Contribute to the open-source community and engage with developers in the space Your Vibe: 3+ years of professional distributed database systems experience Expertise in building and operating scalable, reliable and secure database infrastructure systems Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB. Programming skills in Python Passion for building developer tools and scalable infrastructure Available to collaborate onsite 2 days a week Our Vibe: Relaxed work environment 100% paid, top-of-the-line health care benefits Full ownership, no micromanagement Strong equity package 401K Unlimited vacation An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
    $101k-139k yearly est. 2d ago
  • Data Governance Engineer

    Tata Consultancy Services 4.3 company rating

    Phoenix, AZ jobs

    Role: Data Governance Engineer Experience Required - 6+ Years Must Have Technical/Functional Skills • Understanding of Data Management and Data Governance concepts (metadata, lineage, data quality, etc.) and prior experience. • 2 - 5 years of Data Quality Management experience. • Intermediate competency in SQL & Python or related programming language. • Strong familiarity with data architecture and/or data modeling concepts • 2 - 5 years of experience with Agile or SAFe project methodologies Roles & Responsibilities • Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others. • Identify data quality issues, perform root-cause analysis of data quality issues and drive remediation of audit and regulatory feedback. • Develop deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, providing education to teams regarding policy implications for business. • Responsible for holistic platform data quality monitoring, including but not limited to critical data elements. • Collaborate with and influence product managers to ensure all new use cases are managed according to policies. • Influence and contribute to strategic improvements to data assessment processes and analytical tools. • Responsible for monitoring data quality issues, communicating issues, and driving resolution. • Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams. • Subject matter expertise on multiple platforms. • Responsible for partnering with the Data Steward Manager in developing and managing the data compliance roadmap. Generic Managerial Skills, If any • Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems.
Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions. • Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals, accountable for bringing in ideas, information, suggestions, and expertise from others outside and inside the immediate team. • Communication: Influences and holds others accountable and has the ability to convince others. Identifies the specific data governance requirements and is able to communicate clearly and in a compelling way. Interested candidates, please share your updated resume at ******************* Salary Range - $100,000 to $120,000 per year TCS Employee Benefits Summary: Discretionary Annual Incentive. Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans. Family Support: Maternal & Parental Leaves. Insurance Options: Auto & Home Insurance, Identity Theft Protection. Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement. Time Off: Vacation, Time Off, Sick Leave & Holidays. Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
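The data quality monitoring responsibility described above, tracking completeness of critical data elements, can be sketched as a simple rule evaluation in Python. The threshold and field list are illustrative assumptions:

```python
def completeness(rows, critical_fields, threshold=0.95):
    """Return the critical fields whose non-null rate falls below
    `threshold` -- the kind of rule a data-quality monitor evaluates
    on each run, feeding issue tracking and remediation."""
    failing = {}
    total = len(rows)
    for field in critical_fields:
        populated = sum(1 for r in rows if r.get(field) not in (None, ""))
        rate = populated / total if total else 0.0
        if rate < threshold:
            failing[field] = round(rate, 3)
    return failing
```

A monitor built this way would run per platform and per critical data element, with the failing fields routed to the data steward process for root-cause analysis.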
    $100k-120k yearly 1d ago

Learn more about TSYS jobs