Job Title: Engineer
Job Type: Contract
Founded in 1996, CTC is a global IT services, consulting, and business solutions partner dedicated to helping organizations innovate, optimize, and grow. With over 2,000 professionals worldwide, we support more than 100 clients in transforming complex challenges into lasting competitive advantages.
What you'll be doing
Participate in daily written and verbal reports up to Senior Manager level.
Develop process plans for battery production lines within the Battery PE department with direct support from the Manager.
Work in a collaborative team environment (supporting other Engineers, the Construction Management Group, Technicians, Vendors, General Contractors, etc.) to achieve project milestones.
Manage equipment procurement, installation planning through execution, commissioning, pre-production trials, and launch of battery production lines.
Follow the advanced cleanliness protocols (clean room) necessary for your assigned area of battery manufacturing.
Lead and/or support equipment trials at vendors prior to shipping and on the manufacturing floor after installation, ensuring all targets are met for safety, quality, and productivity.
Identify punchlist items, perform root cause analysis, and manage countermeasures.
Summarize KPIs, manage mass-production handover, and support production issues.
Engage cross-functional teams (domestic and international) to discuss open items and key project schedules/milestones.
Create and maintain detailed schedules for assigned areas.
Maintain budgets for assigned areas.
Complete internal and external training.
What you bring
Bachelor's degree or higher in Engineering or similar technical field.
3+ years of equivalent professional experience in a manufacturing environment.
Experience in production preparation and execution of capital projects, preferably from initial strategy planning to the start of mass production.
Project management experience dealing with capital investments greater than $0.5M.
Experience in creating and maintaining detailed schedules including milestone achievement for manufacturing equipment installation.
Ability to read and interpret 2D drawings including building facilities and process equipment drawings.
Proficiency with Microsoft Office products (Word, Excel, PowerPoint, etc.).
Proficiency with 2D and 3D drafting software such as AutoCAD.
PLC experience, including the ability to read, interpret, and modify programs.
Ability for business travel, both domestic and international up to 10%.
Ability to work weekends, holidays, and shutdown periods (such as July and December shutdown) as needed, based on project condition and schedule.
SQL / Oracle or other database experience.
Experience with Cisco networks and maintaining a machine network.
Experience using Ignition for programming Andon visualization and other tools.
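The SQL/database requirement above often comes down to summarizing production KPIs from line data. As an illustrative sketch only (the table, station names, and first-pass-yield KPI are assumptions, not this employer's actual schema), a query of that kind can be run against Python's built-in sqlite3:

```python
import sqlite3

# Illustrative only: toy production-results table and a first-pass-yield
# (FPY) summary per station. Schema and station names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (station TEXT, passed INTEGER)")
conn.executemany(
    "INSERT INTO results VALUES (?, ?)",
    [("coating", 1), ("coating", 1), ("coating", 0), ("stacking", 1)],
)

# FPY = 100 * units passed / units attempted, grouped by station.
rows = conn.execute(
    "SELECT station, 100.0 * SUM(passed) / COUNT(*) AS fpy "
    "FROM results GROUP BY station ORDER BY station"
).fetchall()
print(rows)
```

The same GROUP BY aggregation pattern carries over to Oracle or any other relational database named in the listing.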
Added bonus if you have
Mitsubishi PLC Programming experience
SCADA Systems experience
MES experience
Experience with Kepware software or similar SCADA Data systems
Cloud Engineer
Chicago, IL jobs
Winner of Built In's 2025 Best Places to Work in Chicago & Midsize Companies to Work For in Chicago
ABN AMRO Clearing USA LLC (AAC-USA) is a subsidiary of ABN AMRO Clearing Bank N.V. We are a global clearing firm that provides an integrated suite of financial services to professional trading participants in the global financial market.
The core service offering consists of clearing, execution, stock borrowing and lending, and settlement. AAC-USA has global reach through direct and indirect clearing memberships to over 90 of the world's leading exchanges. Our international network provides comprehensive market access to exchange-listed instruments such as stocks, futures, and options. ABN AMRO Clearing USA LLC is a member of FINRA, NFA, FIA, and SIPC.
Job Overview:
The AWS Cloud Platform Engineer is part of the Public Cloud Platform team. This is a global team that works on establishing the right foundation for new workloads on AWS to build upon.
The team consists of six other AWS Cloud Platform Engineers who work on making a safe, compliant, and consumable platform. You will work with other engineers and colleagues to help support the move from an on-premises way of working to the AWS cloud DevOps way of working while expanding the functionality of the AWS platform.
We're looking for an AWS Cloud Platform Engineer with advanced Python software engineering skills: someone who thrives on challenges and is eager to join a global team shaping the future of cloud technology. In this role, you'll help design and evolve a secure, automated, and scalable AWS platform that enables teams worldwide to innovate faster and smarter, as well as support local software engineers with the migration of their workloads to the AWS cloud.
This position is based in our Chicago office and requires a minimum of three days per week onsite.
Job Responsibilities:
Design and evolve AWS landing zones that serve as the secure, automated foundation for global workloads.
Collaborate with a world-class team of AWS engineers to deliver a safe, scalable, and developer-friendly cloud platform.
Build powerful automations using Everything as Code and Infrastructure as Code components to reduce ticket-based operations and accelerate innovation through your software engineering practices.
Drive the cloud-first transformation by guiding teams from on-premise to AWS and DevOps practices.
Continuously innovate: expand platform capabilities and introduce cutting-edge solutions to meet business needs.
Implement governance frameworks for automated compliance checks against internal risk standards.
Standardize infrastructure components to ensure consistency and reliability across workloads.
Develop automated testing strategies to guarantee quality and resilience.
Enhance CI/CD pipelines for faster, safer deployments.
Scale the AWS platform globally, expanding into new regions to support business growth.
Lead migration efforts from on-premises to AWS cloud in line with strategic roadmaps.
Collaborate with developers to translate software requirements into robust cloud solutions.
Act as the local ambassador for the department, communicating effectively with stakeholders and representing the team.
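The governance responsibility above (automated compliance checks against internal risk standards) typically starts with validating resource metadata before deployment. The following is a minimal sketch of that idea in plain Python; the required tag names and resource shapes are hypothetical, not AAC-USA's actual policy:

```python
# Hypothetical internal standard: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "cost-center", "data-classification"}

def check_tag_compliance(resources):
    """Return a list of (resource_id, missing_tags) violations."""
    violations = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            violations.append((res["id"], sorted(missing)))
    return violations

# Toy inventory: one compliant resource, one missing a required tag.
resources = [
    {"id": "s3-bucket-1", "tags": {"owner": "platform", "cost-center": "42"}},
    {"id": "vpc-1", "tags": {"owner": "net", "cost-center": "7",
                             "data-classification": "internal"}},
]
print(check_tag_compliance(resources))
```

In practice a check like this would run inside a CI/CD pipeline or an AWS CDK aspect so non-compliant infrastructure never reaches an account.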
Basic Qualifications:
5+ years of AWS experience with a deep understanding of building complex cloud infrastructure using AWS CDK. Hands-on experience with a wide variety of AWS services (networking, governance, integration, compute) is required.
5+ years of strong software development skills in Python. Clean code is not a mystery to you. We are an AWS CDK-first company; experience in Terraform is appreciated, but strong software development skills are required.
2+ years with AWS serverless services (Step Functions, Lambda, SNS)
Good understanding of IT networking
Professional experience using CI/CD pipelines (Jenkins, AWS CodeBuild, and CodePipeline)
Professional experience with automated software testing
Proficiency in Git
Good understanding of observability tooling (e.g., Splunk)
Degree in computer science, or equivalent experience
Strong written and verbal communication skills
Ability to operate independently as part of a larger team
Experience with stakeholder interaction/education
Preferred Qualifications:
Financial services industry experience
Experience working on global teams
Annual base salary, not including discretionary bonus eligibility: $110,000
Perks and Benefits
As a global leader in financial services, we rely on the strengths of our employees to deliver their best work for our clients. We invest back in our employees by offering a host of benefits and perks.
Competitive health benefit offerings, including choice of three medical plans through BCBS-IL, dental, vision and flexible spending accounts
Complimentary annual membership to One Medical as well as an EAP
Robust 401(k) Plan with a generous match and vesting schedule
Use-it-or-lose-it pre-tax commuter benefits, corporate Divvy memberships, and employer-paid benefits such as term life, AD&D, and disability insurance
Generous paid time off, sick days, a robust holiday schedule and parental leave plans.
Monthly wellness subsidy used towards wellness activities
Flexible hybrid work schedules
Open communication including regular Town Hall meetings with the Management Team
Forward-thinking, culture-based organization with collaborative teams that promote diversity, equity and inclusion
Free coffee & tea and “bagel Wednesday”
Employee-led Social and Philanthropy Committee to bring awareness and fun to the employees
Awesome office space with a large kitchen/meeting gathering area - including a foosball and ping pong table
Private, well-equipped Mother's room
Office is conveniently located in the Chicago Loop Financial District - close to CTA and Metra
Well maintained building (an architectural “masterpiece”) and a part of Chicago history - also includes a robust business center with a café, game-room and a shared rooftop terrace with green space
This information is intended as a summary of potential benefits only. Eligibility for the plans and programs listed here depends on the nature of employment, length of service and other factors. Actual coverage is governed by supporting summary plan descriptions and related policies.
ABN AMRO Clearing USA (AAC-USA) is proud to be an equal opportunity employer. AAC-USA celebrates diversity and does not discriminate on the basis of actual or perceived race; creed; color; religion; alienage or national origin; ancestry; citizenship status; age; disability or handicap; sex; marital status; veteran status; sexual orientation; status as a survivor of domestic violence, sex offenses, or stalking; genetic predisposition or carrier status; gender identity or expression; or any other characteristic protected by applicable federal, state or local laws.
We cultivate a culture of inclusion for all employees that respects individual strengths, views and experiences. This enables us to be a better and stronger team - one that makes better decisions and delivers better business results. AAC-USA participates in the E-Verify program in the US.
Notice and Disclaimer to External Recruitment Agencies:
External recruitment agencies must have a signed agreement with ABN AMRO Clearing USA, LLC (“AAC-USA”) prior to submitting a resume to fill a vacancy. In addition, a recruitment agency may only submit a resume when invited in writing by an AAC-USA Talent Acquisition Specialist to join the search to fill a specific vacancy. Any resume or other candidate information sent to AAC-USA not in accordance with the foregoing procedure will be considered exclusive property of AAC-USA, with respect to which no placement fee will be due, owing or paid to the sender.
Neo4j Engineer
Summit, NJ jobs
Must Have Technical/Functional Skills
Neo4j, Graph Data Science, Cypher, Python, Graph Algorithms, Bloom, GraphXR, Cloud, Kubernetes, ETL
Roles & Responsibilities
Design and implement graph-based data models using Neo4j.
Develop Cypher queries and procedures for efficient graph traversal and analysis.
Apply Graph Data Science algorithms for community detection, centrality, and similarity.
Integrate Neo4j with enterprise data platforms and APIs.
Collaborate with data scientists and engineers to build graph-powered applications.
Optimize performance and scalability of graph queries and pipelines.
Support deployment and monitoring of Neo4j clusters in cloud or on-prem environments.
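The responsibilities above center on composing Cypher for efficient, bounded graph traversal. As a small illustrative sketch (the `Account` label, `TRANSFERRED_TO` relationship, and helper function are hypothetical, not the client's actual model), a parameterized variable-length traversal query might be built like this:

```python
def build_neighbors_query(label: str, rel: str, max_hops: int) -> str:
    """Compose a parameterized Cypher query that finds all nodes reachable
    from a start node within max_hops traversals of one relationship type."""
    # Guard against injecting arbitrary text into the non-parameterizable
    # parts of the query (labels and relationship types).
    if not (label.isidentifier() and rel.isidentifier() and max_hops >= 1):
        raise ValueError("invalid query parts")
    return (
        f"MATCH (n:{label} {{id: $node_id}})-[:{rel}*1..{max_hops}]->(m) "
        "RETURN DISTINCT m.id AS neighbor"
    )

q = build_neighbors_query("Account", "TRANSFERRED_TO", 2)
print(q)
```

With the official Neo4j Python driver, `q` would then be executed via `session.run(q, node_id=...)`, keeping the node id as a query parameter so the plan cache is reused.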
Salary Range: $110,000 - $140,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Founding Backend Engineer / DeFi / EVM
Fremont, CA jobs
We are actively searching for a Founding Senior Backend Engineer to join our team on a permanent basis. In this position you will lead the design and development of our backend infrastructure that powers our protocol (think everything off-chain). If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our backend infrastructure
Build and maintain low latency indexing infrastructure
Build and maintain our offline and online data analytic pipelines that power our trading and risk engines
Lead code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
6+ years of professional software engineering experience
2+ years of experience working on backends interacting with blockchains in production environments
5+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Strong knowledge around DeFi products
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
AWS Cloud Engineer
Plano, TX jobs
AWS data services (S3, Glue, Redshift, Athena, Lambda, Step Functions, Kinesis, etc.).
Unity Catalog, PySpark, AWS Glue, Lambda, Step Functions, and Apache Airflow
Programming skills in Python, Scala, or PySpark for data processing and automation
Expertise in SQL and experience with relational and NoSQL databases (e.g., RDS, DynamoDB).
Data Pipeline Development: Design, develop, and optimize ETL/ELT pipelines using AWS & Databricks services such as Unity Catalog, PySpark, AWS Glue, Lambda, Step Functions, and Apache Airflow.
Data Integration: Integrate data from various sources, including relational databases, APIs, and streaming data, ensuring high data quality and consistency.
Cloud Infrastructure Management: Build and manage scalable, secure, and cost-efficient data infrastructure using AWS services like S3, Redshift, Athena, and RDS.
Data Modeling: Create and maintain data models to support analytics and reporting requirements, ensuring efficient querying and storage.
Performance Optimization: Monitor and optimize the performance of data pipelines, databases, and queries to meet SLAs and reduce costs.
Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable business insights.
Security and Compliance: Implement best practices for data security, encryption, and compliance with regulations such as GDPR, CCPA, or ITAR.
Automation: Automate repetitive tasks and processes using scripting (Python, Bash) and Infrastructure as Code (e.g., Terraform, AWS CloudFormation).
Agile Development: Build and optimize continuous integration and continuous deployment (CI/CD) pipelines to enable rapid and reliable software releases using Gitlab in an Agile environment.
Monitoring and Troubleshooting: Set up monitoring and alerting for data pipelines and infrastructure, and troubleshoot issues to ensure high availability.
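The pipeline-development and data-quality responsibilities above boil down to transform steps like deduplication and type normalization. The sketch below shows that logic in plain Python under assumed record shapes (`order_id`, `amount`, `updated_at` are illustrative column names); in the actual role the same step would typically run as a PySpark job in AWS Glue or Databricks:

```python
def clean_orders(rows):
    """Deduplicate by order_id (keeping the latest record by updated_at)
    and cast the string amount field to float - a typical ETL cleanup step."""
    latest = {}
    for row in rows:
        oid = row["order_id"]
        if oid not in latest or row["updated_at"] > latest[oid]["updated_at"]:
            latest[oid] = row
    return [
        {**r, "amount": float(r["amount"])}
        for r in sorted(latest.values(), key=lambda r: r["order_id"])
    ]

# Toy input with a duplicate order_id and string-typed amounts.
raw = [
    {"order_id": 1, "amount": "19.99", "updated_at": "2024-01-02"},
    {"order_id": 1, "amount": "21.50", "updated_at": "2024-01-03"},
    {"order_id": 2, "amount": "5.00",  "updated_at": "2024-01-01"},
]
result = clean_orders(raw)
print(result)
```

In PySpark the equivalent would be a window over `order_id` ordered by `updated_at` plus a `cast`, but the dedupe-then-normalize shape is the same.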
Base Salary Range: $100,000 - $120,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
GenAI Engineer
Alpharetta, GA jobs
At this time, we are unable to offer visa sponsorship for this position. Candidates must be legally authorized to work for any employer in the United States on a full-time basis without the need for current or future immigration sponsorship.
RESPONSIBILITIES
Design, develop, implement, test, and maintain Generative AI models and agentic workflows using GCP Vertex AI, AWS Bedrock, and Snowflake Cortex.
Build and integrate Retrieval Augmented Generation (RAG) systems, to ground the responses of AI solutions with up-to-date and relevant data.
Fine-tune and evaluate foundation models using both proprietary and open-source technology.
Develop agentic architectures using tools like LangGraph, CrewAI, AutoGen, and others to orchestrate multi-step reasoning, planning, and tool use.
Build and optimize AI agents that can interface with MCP servers, APIs, databases, take contextual actions, and autonomously execute business workflows.
Collaborate closely with data scientists, data engineers, and stakeholders to understand requirements and deliver AI solutions that meet business needs.
Implement best practices for model deployment, monitoring, validation, and retraining on GCP Vertex AI, AWS Bedrock, and Snowflake Cortex.
Work with cross-functional IT and business teams in an Agile environment to deliver successful AI solutions.
Document processes, models, and configurations for knowledge sharing and compliance.
Stay current with the latest in generative AI research and translate breakthroughs into applied business solutions.
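The RAG responsibility above (grounding model responses in up-to-date data) reduces to embedding-based retrieval plus prompt assembly. A minimal sketch follows, with hand-made toy embedding vectors standing in for a real embedding model and vector database such as those named later in the listing:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, k=1):
    """Rank documents by similarity to the query embedding; keep the top k."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return ranked[:k]

# Toy corpus: the vectors are fabricated stand-ins for real embeddings.
corpus = [
    {"text": "Refund policy: 30 days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Shipping takes 5 days.",  "vec": [0.1, 0.9, 0.2]},
]
hits = retrieve([0.8, 0.2, 0.1], corpus, k=1)
prompt = "Answer using only this context:\n" + hits[0]["text"]
print(prompt)
```

Production RAG on Vertex AI, Bedrock, or Snowflake Cortex replaces the toy vectors with managed embeddings and the sort with an approximate-nearest-neighbor index, but the retrieve-then-ground flow is the same.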
Must Haves:
Bachelor's degree in Computer Science, Engineering, or a related field.
4+ years of experience in AI/ML engineering, with a focus on Generative AI.
Proficiency in programming languages such as Python
Strong understanding of Generative AI models (e.g., GPT, Transformer architectures) and experience in distilling, tuning and training them.
Familiarity with Retrieval Augmented Generation (RAG) techniques and their implementation.
Experience with agentic AI concepts and developing autonomous AI workflows.
Hands-on experience with GCP Vertex AI, AWS Bedrock + SageMaker, and Snowflake Cortex platforms and their AI/ML capabilities.
Experience with machine learning frameworks (TensorFlow, PyTorch, Hugging Face Transformers).
Experience with AI frameworks such as LangChain, LlamaIndex, LangGraph, CrewAI, AutoGen, etc.
Experience with Natural Language Processing (NLP) and building conversational AI agents.
Experience with vector databases (e.g., Pinecone, MongoDB Atlas, OpenSearch, pgvector, FAISS, Qdrant, etc.)
Experience building production-grade AI/ML systems at scale.
Knowledge of MLOps practices, including model deployment and lifecycle management.
Excellent problem-solving and analytical skills.
Excellent communication and collaboration skills.
Bonus Attributes:
Familiarity with Prompt Engineering, RLHF, and model evaluation techniques.
Understanding of AI governance, safety, and responsible AI principles.
Understanding of reinforcement learning and its application in agentic AI.
Familiarity with big data technologies (Apache Spark, Kafka)
Experience with CI/CD tools and automation for AI/ML workflows.
Familiarity with Snowflake's cloud integrations.
Experience with real-time data processing and streaming analytics.
Publications or contributions to open-source projects in the AI/ML field.
Abilities:
Ability to work proactively with a high level of initiative and accuracy.
Ability to manage multiple assignments effectively and meet established deadlines.
Strong interpersonal skills to interact professionally with staff and stakeholders.
Excellent organizational skills and attention to detail.
Critical thinking ability ranging from moderately to highly complex tasks.
Flexibility in adapting to changing business needs and priorities.
Ability to work creatively and independently with minimal supervision.
Ability to utilize experience and judgment in accomplishing goals.
Experience in navigating organizational structures and collaborating across teams.
Travel Required: 2%
Physical Demands:
Standing/ Walking - minimal level
Sitting - moderate to high level
Lifting - up to 15 lbs.
Visual Concentration - high level
Work Environment - typical office environment.
Position Type and Expected Hours of Work: Full Time
Disclaimer:
The above statement is intended to describe the general nature and level of work being performed. It is not intended to be an exhaustive list of responsibilities, duties and skills required.
AIOps Engineer
Alpharetta, GA jobs
AI Ops Engineer
(Exempt)
Enterprise AI/ML Organization
At this time, we are unable to offer visa sponsorship for this position. Candidates must be legally authorized to work for any employer in the United States on a full-time basis without the need for current or future immigration sponsorship.
OVERVIEW
We are looking for an experienced AI Ops Engineer to support our AI and ML initiatives, including GenAI platform development, deployment automation, and infrastructure optimization. You will play a critical role in building and maintaining scalable, secure, and observable systems that power RAG solutions, model training platforms, and agentic AI workflows across the enterprise.
RESPONSIBILITIES
Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DB, embedding and reranking models, governance and observability systems, and guardrails).
Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP), using infrastructure-as-code tools (strong preference for Terraform).
Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas, etc.).
Monitor and optimize performance, availability, and cost of AI workloads, using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
Implement security best practices including secrets management, model access control, data encryption, and audit logging for AI pipelines.
Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).
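The monitoring responsibility above (tracking performance, availability, and cost of AI workloads) is ultimately about turning metric streams into alerts. As a toy sketch only (the window size, threshold, and metric are illustrative assumptions; real deployments would use Prometheus, Grafana, or Datadog as the listing says), a rolling-average latency alert could look like:

```python
from collections import deque

class LatencyMonitor:
    """Toy observability sketch: flag when the rolling average inference
    latency over the last `window` samples exceeds a threshold."""

    def __init__(self, window=5, threshold_ms=200.0):
        self.samples = deque(maxlen=window)   # keeps only the newest samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms):
        """Add one sample; return True if an alert should fire."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms

mon = LatencyMonitor(window=3, threshold_ms=100.0)
print([mon.record(x) for x in [80, 90, 150, 180, 200]])
```

A real Prometheus alerting rule expresses the same idea declaratively (`avg_over_time(...) > threshold`) rather than in application code.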
Must Haves:
4+ years of DevOps, AI Ops, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
Proficient in scripting languages like Python and Bash (Go or similar is a nice plus).
Experience with monitoring, logging, and alerting systems for AI/ML workloads.
Deep understanding of Kubernetes and container lifecycle management.
Bonus Attributes:
Exposure to AI Ops tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
Familiarity with prompt engineering, model fine-tuning, and inference serving.
Experience with secure AI deployment and compliance frameworks.
Knowledge of model versioning, drift detection, and scalable rollback strategies.
Abilities:
Ability to work with a high level of initiative, accuracy, and attention to detail.
Ability to prioritize multiple assignments effectively. Ability to meet established deadlines.
Ability to successfully, efficiently, and professionally interact with staff and customers.
Excellent organization skills.
Critical thinking ability ranging from moderately to highly complex.
Flexibility in meeting the business needs of the customer and the company.
Ability to work creatively and independently with latitude and minimal supervision.
Ability to utilize experience and judgment in accomplishing assigned goals.
Experience in navigating organizational structure.
Travel Required: 2%
Physical Demands:
Standing/ Walking - minimal level
Sitting - moderate to high level
Lifting - up to 15 lbs.
Visual Concentration - high level
Work Environment - typical office environment.
Position Type and Expected Hours of Work: Full Time
Disclaimer:
The above statement is intended to describe the general nature and level of work being performed. It is not intended to be an exhaustive list of responsibilities, duties and skills required.
GCP Engineer with BigQuery, PySpark
Phoenix, AZ jobs
Job Title: GCP Engineer with BigQuery, PySpark
Experience Required - 7+ Years
Must Have Technical/Functional Skills
GCP Engineer with BigQuery, PySpark, and Python experience
Roles & Responsibilities
6+ years of professional experience with at least 4+ years of GCP Data Engineer experience
Experience working on GCP application migration for a large enterprise
Hands-on experience with Google Cloud Platform (GCP)
Extensive experience with ETL/ELT tools and data transformation frameworks
Working knowledge of data storage solutions like BigQuery or Cloud SQL
Solid skills in data orchestration tools like Airflow or Cloud Workflows
Familiarity with Agile development methods
Hands-on experience with Spark, Python, and PySpark APIs
Knowledge of various shell scripting tools
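Much of the BigQuery work described above is composing parameterized Standard SQL from Python. The following is an illustrative sketch only: the table path, column names, and helper are hypothetical, and in practice the query would be submitted through the `google-cloud-bigquery` client with named query parameters.

```python
def daily_revenue_query(table: str, day: str):
    """Compose a BigQuery Standard SQL aggregation with a named @day
    parameter (illustrative table and column names)."""
    sql = (
        "SELECT product_id, SUM(amount) AS revenue "
        f"FROM `{table}` "
        "WHERE DATE(event_ts) = @day "          # named parameter, not f-string
        "GROUP BY product_id ORDER BY revenue DESC"
    )
    return sql, {"day": day}

sql, params = daily_revenue_query("proj.sales.orders", "2024-06-01")
print(sql)
```

Keeping the date as a named parameter (rather than interpolating it into the string) avoids injection issues and lets BigQuery reuse cached query plans.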
Salary Range - $90,000 to $120,000 per year
Interested candidates, please share your updated resume to *******************.
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Founding Backend Engineer / DeFi / EVM
Santa Rosa, CA jobs
We are actively searching for a Founding Senior Backend Engineer to join our team on a permanent basis. In this position you will lead the design and development of our backend infrastructure that powers our protocol (think everything off-chain). If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our backend infrastructure
Build and maintain low latency indexing infrastructure
Build and maintain our offline and online data analytic pipelines that power our trading and risk engines
Lead code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
6+ years of professional software engineering experience
2+ years of experience working on backends interacting with blockchains in production environments
5+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Strong knowledge around DeFi products
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Founding Backend Engineer / DeFi / EVM
San Francisco, CA jobs
We are actively searching for a Founding Senior Backend Engineer to join our team on a permanent basis. In this position you will lead the design and development of our backend infrastructure that powers our protocol (think everything off-chain). If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our backend infrastructure
Build and maintain low latency indexing infrastructure
Build and maintain our offline and online data analytic pipelines that power our trading and risk engines
Lead code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
6+ years of professional software engineering experience
2+ years of experience working on backends interacting with blockchains in production environments
5+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Strong knowledge around DeFi products
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Founding Backend Engineer / DeFi / EVM
San Jose, CA jobs
We are actively searching for a Founding Senior Backend Engineer to join our team on a permanent basis. In this position you will lead the design and development of our backend infrastructure that powers our protocol (think everything off-chain). If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our backend infrastructure
Build and maintain low latency indexing infrastructure
Build and maintain our offline and online data analytic pipelines that power our trading and risk engines
Lead code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
6+ years of professional software engineering experience
2+ years of experience working on backends interacting with blockchains in production environments
5+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Strong knowledge around DeFi products
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Senior DevOps Engineer
Cary, NC jobs
Job Title Senior DevOps Engineer
Corporate Title Assistant Vice President
Who we are
In short: an essential part of Deutsche Bank's technology solution, developing applications for key business areas.
Our Technologists drive Cloud, Cyber and business technology strategy while transforming it within a robust, hands-on engineering culture. Learning is a key element of our people strategy, and we have a variety of options for you to develop professionally. Our approach to the future of work champions flexibility and is rooted in the understanding that there have been dramatic shifts in the ways we work.
Having first established a presence in the Americas in the 19th century, Deutsche Bank opened its US technology center in Cary, North Carolina in 2009. Learn more about us here.
Overview
As a Senior DevOps Engineer in the Archiving Tech product family, you will design and deliver innovative solutions that power our Archival ecosystem. You will drive the development of a cutting-edge platform that ensures scalability, security, and reliability for critical business processes. In this role, you will collaborate across teams, translate business needs into technical outcomes, and directly influence the future of our technology and culture. Your contributions will enable seamless delivery of business goals while advancing our digital transformation strategy.
What We Offer You
A diverse and inclusive environment that embraces change, innovation, and collaboration
A hybrid working model with up to 60% work from home, allowing for in-office / work from home flexibility, generous vacation, personal and volunteer days, a commitment to Corporate Social Responsibility
Employee Resource Groups support an inclusive workplace for everyone and promote community engagement
Access to a strong network of Communities of Practice connecting you to colleagues with shared interests and values
Competitive compensation packages including health and wellbeing benefits, retirement savings plans, parental leave, and family building benefits, educational resources, matching gift and volunteer programs
What You'll Do
Managing and supporting Global Control Programme (GCP) cloud infrastructure
Maintain and update continuous integration/continuous delivery (CI/CD) pipelines; monitor system health and respond to issues and incidents
Work in an interdisciplinary and independent manner, with precision under tight deadlines
Apply a solution-oriented approach and quick thinking
Apply a good understanding of agile methodologies
Skills You'll Need
Bachelor's degree or equivalent required
Deep understanding of the cloud and ability to troubleshoot cloud/network issues
Excellent knowledge and expertise in enterprise production workloads related to one of the major cloud providers
In-depth knowledge of CI/CD pipelines and experience with one of the common tools like Jenkins, TeamCity, or GitHub
Deep experience with Kubernetes and Istio, or other container management tools (e.g., Consul and Nomad)
Skills That Will Help You Excel
Excellent communication skills, both written and spoken
Experience with task automation
Experience with tools and languages such as Terraform, Helm, shell scripting, Python, and/or .NET Core
Expectations
It is the Bank's expectation that employees hired into this role will work in the Cary office in accordance with the Bank's hybrid working model.
Deutsche Bank provides reasonable accommodations to candidates and employees with a substantiated need based on disability and/or religion.
The salary range for this position in Cary is $100,000 to $142,250. Actual salaries may be based on a number of factors including, but not limited to, a candidate's skill set, experience, education, work location and other qualifications. Posted salary ranges do not include incentive compensation or any other type of remuneration.
Deutsche Bank Benefits
At Deutsche Bank, we recognize that our benefit programs have a profound impact on our colleagues. That's why we are focused on providing benefits and perks that enable our colleagues to live authentically and be their whole selves, at every stage of life. We provide access to physical, emotional, and financial wellness benefits that allow our colleagues to stay financially secure and strike a balance between work and home. Click here to learn more!
Learn more about your life at Deutsche Bank through the eyes of our current employees ***************************
The California Consumer Privacy Act outlines how companies can use personal information. If you are interested in receiving a copy of Deutsche Bank's California Privacy Notice, please email ****************.
#LI-HYBRID
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively.
Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Click these links to view Deutsche Bank's Equal Opportunity Policy Statement and the following notices: EEOC Know Your Rights; Employee Rights and Responsibilities under the Family and Medical Leave Act; and Employee Polygraph Protection Act.
Data Engineer
Houston, TX jobs
We are looking for a talented and motivated Python Data Engineer to help expand our data assets in support of our analytical capabilities in a full-time role. You will interface directly with our traders, analysts, researchers, and data scientists to gather requirements and deliver on a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliveries. Drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficiency with peer review, code versioning, and bug/issue tracking tools
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
Network Deployment Engineer - Associate
Jacksonville, FL jobs
Job Title Network Deployment Engineer
Corporate Title Associate
As a Network Deployment Engineer, you will be responsible for designing and building new network environments that support critical business operations. You will manage network changes, troubleshoot issues using diagnostic tools, and coordinate with carriers and vendors for installations. Your contributions will directly impact the performance, scalability, and reliability of the organization's network infrastructure. You will also support both routine changes and large-scale projects across data centers, trading floors, and branch networks.
What We Offer You
A diverse and inclusive environment that embraces change, innovation, and collaboration
A hybrid working model, allowing for in-office / work from home flexibility, generous vacation, personal and volunteer days
Employee Resource Groups support an inclusive workplace for everyone and promote community engagement
Competitive compensation packages including health and wellbeing benefits, retirement savings plans, parental leave, and family building benefits
Educational resources, matching gift and volunteer programs
What You'll Do
Design and implement network solutions by leveraging existing standards and deploying new technologies as required
Plan, document, and execute network changes across various environments
Troubleshoot network issues using diagnostic tools such as Wireshark and coordinate with vendors for resolution
Test and validate voice and data circuits, liaising with carriers for installations and upgrades
Support network deployments for new locations and global initiatives involving high-volume financial systems
Contribute to the rollout of new features and functionality across backbone, data center, and branch networks
Skills You'll Need
Knowledge of routing protocols such as Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Border Gateway Protocol (BGP), and Routing Information Protocol (RIP)
Experience with network hardware including F5 load balancers, Cisco routers, switches, and firewalls
Familiarity with network security concepts including firewalls and proxy servers
Certified technical engineer (e.g., Cisco Certified Network Professional)
Hands-on experience in enterprise network environments, preferably within financial institutions
Skills That Will Help You Excel
Understanding of Information Technology Infrastructure Library (ITIL) principles and service management practices
Operational knowledge of loop prevention protocols such as Spanning Tree Protocol
Experience with Internet Protocol Telephony (IPT) implementation and troubleshooting
Familiarity with Layer 2 protocols including Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), VLAN Trunking Protocol (VTP), IEEE 802.1Q, Flex Link, and Link Aggregation Control Protocol (LACP)
Proficiency in Cisco operating systems such as Internetwork Operating System (IOS) and Nexus Operating System (NX-OS)
Expectations
It is the Bank's expectation that employees hired into this role will work in the Jacksonville office in accordance with the Bank's hybrid working model.
Deutsche Bank provides reasonable accommodations to candidates and employees with a substantiated need based on disability and/or religion.
The salary range for this position in Jacksonville is $60,000 to $86,000. Actual salaries may be based on a number of factors including, but not limited to, a candidate's skill set, experience, education, work location and other qualifications. Posted salary ranges do not include incentive compensation or any other type of remuneration.
Deutsche Bank Benefits
At Deutsche Bank, we recognize that our benefit programs have a profound impact on our colleagues. That's why we are focused on providing benefits and perks that enable our colleagues to live authentically and be their whole selves, at every stage of life. We provide access to physical, emotional, and financial wellness benefits that allow our colleagues to stay financially secure and strike a balance between work and home. Click here to learn more!
Learn more about your life at Deutsche Bank through the eyes of our current employees ***************************
The California Consumer Privacy Act outlines how companies can use personal information. If you are interested in receiving a copy of Deutsche Bank's California Privacy Notice, please email ****************.
#LI-HYBRID
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively.
Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Click these links to view Deutsche Bank's Equal Opportunity Policy Statement and the following notices: EEOC Know Your Rights; Employee Rights and Responsibilities under the Family and Medical Leave Act; and Employee Polygraph Protection Act.
Data Engineer
Irvine, CA jobs
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it.
If you have read my postings in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc. I have been recruiting technical talent for more than 23 years and have been in the tech space since the 1990s. Because of this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused: orchestrating deployment and MLflow, orchestrating and using data on the clusters, and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, so that data scientists can focus on experimentation while the business sees rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
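To make the data-quality responsibility above concrete, here is a minimal, hypothetical sketch of an automated schema-drift check. The function name and the sample schemas are illustrative assumptions, not part of the client's actual codebase:

```python
# Hypothetical schema-drift check: compare the column->type mapping a
# pipeline expects against what a batch actually delivered, and report
# anomalies that could feed an alerting or incident-response process.

def check_drift(expected: dict, observed: dict) -> dict:
    missing = sorted(set(expected) - set(observed))   # columns dropped upstream
    added = sorted(set(observed) - set(expected))     # new, ungoverned columns
    retyped = sorted(c for c in expected
                     if c in observed and expected[c] != observed[c])
    return {"missing": missing, "added": added, "retyped": retyped,
            "ok": not (missing or added or retyped)}

expected = {"user_id": "int", "amount": "decimal", "ts": "timestamp"}
observed = {"user_id": "int", "amount": "float", "ts": "timestamp", "note": "str"}
report = check_drift(expected, observed)
# report flags "amount" as retyped and "note" as added, so "ok" is False
```

In practice a check like this would run as a pipeline step, with the report exported as Prometheus metrics or pushed to an alerting channel.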
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
Mentor engineers on best practices in containerized data engineering and MLOps.
Data Conversion Engineer
Charlotte, NC jobs
Summary/Objective
Are you looking to work at a high growth, innovative, and purpose driven FinTech company? If so, you'll love Paymentus. Recognized by Deloitte as one of the fastest growing companies in North America, Paymentus is the premier provider of innovative, reliable, and secure electronic bill payment and presentment for more than 1700 clients. We are a SaaS provider that enables companies to help their customers simplify their financial lives. We do that by making it easier for consumers and businesses to pay bills, plus move and manage money to achieve strong financial health. We continually build upon a massively scalable platform, supporting thousands of businesses and millions of transactions on a daily basis. We're looking for high performers to join our team who excel in their expertise and who can transform plans into action. You'll have the opportunity to grow in an environment where intelligence, innovation, and leadership are valued and rewarded.
About the Role
The Data Conversion Engineer is a key member of the Platform Integrations team, providing technical support and guidance on data conversion projects. Conversions are integral to ensuring adherence to Paymentus' standards for a successful launch. This role is essential to ensure all bill payment data converts properly and efficiently onto the Paymentus platform.
Responsibilities
Develop data conversion procedures using SQL, Java and Linux scripting
Augment and automate existing manual procedures to optimize accuracy and reduce time for each conversion
Develop and update conversion mappers to interpret incoming data and manipulate it to match Paymentus' specifications
Develop new specifications to satisfy new customers and products
Serve as the primary point of contact/driver for all technical related conversion activities
Review conversion calendar and offer technical support and solutions to meet deadlines and contract dates
Maintain and update technical conversion documentation to share with internal and external clients and partners
Work in close collaboration with implementation, integration, product and development teams using exceptional communication skills
Adapt and creatively solve encountered problems under high stress and tight deadlines
Learn the database structure and business logic, and combine that knowledge to improve processes
Be flexible
Monitor new client conversions and existing client support if needed; provide daily problem solving, coordination, and communication
Management of multiple projects and conversion implementations
Ability to proactively troubleshoot and solve problems with limited supervision
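As an illustration of the conversion-mapper responsibility above, here is a minimal, hypothetical Python sketch that normalizes an incoming client record to a target specification. The field names and the spec itself are assumptions for illustration, not Paymentus' actual format:

```python
# Hypothetical conversion mapper: translate one raw billing record from a
# client's export format into a normalized target specification.

from datetime import datetime

def map_record(raw: dict) -> dict:
    return {
        # normalize account identifiers (trim whitespace, uppercase)
        "account_id": raw["acct_no"].strip().upper(),
        # store currency as integer cents to avoid float rounding issues
        "amount_cents": round(float(raw["amt_due"]) * 100),
        # normalize MM/DD/YYYY into ISO 8601
        "due_date": datetime.strptime(raw["due"], "%m/%d/%Y").date().isoformat(),
    }

raw = {"acct_no": " ab123 ", "amt_due": "45.50", "due": "07/04/2025"}
mapped = map_record(raw)
# mapped == {"account_id": "AB123", "amount_cents": 4550, "due_date": "2025-07-04"}
```

A real conversion would run mappers like this over bulk extracts, with validation and reconciliation steps before loading to the platform.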
Qualifications
B.S. Degree in Computer Science or comparable experience
Strong knowledge of Linux and the command line interface
Exceptional SQL skills
Experience with logging/monitoring tools (AWS Cloudwatch, Splunk, ELK, etc.)
Familiarity with various online banking applications and understanding of third-party integrations is a plus
Effective written and verbal communication skills
Problem Solver - recognizes the need to resolve issues quickly and effectively, uses logic to solve problems; identifies problems and brings forward multiple solution options; knows who/when to involve appropriate people when troubleshooting issues
Communication; ability to use formal and informal written and/or verbal communication channels to inform others; articulates ideas and thoughts clearly both verbally and in writing
Dynamic and self-motivated; able to work on their own initiative and deliver the objectives required to maintain service levels
Strong attention to detail
Proficiency with raw data, analytics, or data reporting tools
Preferred Skills
Background in the Payments, Banking, E-Commerce, Finance and/or Utility industries
Experience with front end web interfaces (HTML5, Javascript, CSS3)
Cloud technologies (AWS, GCP, Azure)
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment such as laptop computers, photocopiers and smartphones.
Physical Demands
This role requires sitting or standing at a computer workstation for extended periods of time.
Position Type/Expected Hours of Work
This is a full-time position. Days and hours of work are Monday through Friday, 40 hours a week. Occasional evening and weekend work may be required as job duties demand.
Travel
No travel is required for this position.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Equal Opportunity Statement
Paymentus is an equal opportunity employer. We enthusiastically accept our responsibility to make employment decisions without regard to race, religious creed, color, age, sex, sexual orientation, national origin, ancestry, citizenship status, religion, marital status, disability, military service or veteran status, genetic information, medical condition including medical characteristics, or any other classification protected by applicable federal, state, and local laws and ordinances. Our management is dedicated to ensuring the fulfillment of this policy with respect to hiring, placement, promotion, transfer, demotion, layoff, termination, recruitment advertising, pay, and other forms of compensation, training, and general treatment during employment.
Reasonable Accommodation
Paymentus recognizes and supports its obligation to endeavor to accommodate job applicants and employees with known physical or mental disabilities who are able to perform the essential functions of the position, with or without reasonable accommodation. Paymentus will endeavor to provide reasonable accommodations to otherwise qualified job applicants and employees with known physical or mental disabilities, unless doing so would impose an undue hardship on the Company or pose a direct threat of substantial harm to the employee or others. An applicant or employee who believes he or she needs a reasonable accommodation of a disability should discuss the need for possible accommodation with the Human Resources Department, or his or her direct supervisor.
Senior Data Engineer
New York, NY jobs
Our client is a growing FinTech software company headquartered in New York, NY. They have several hundred employees and are in growth mode.
They are currently looking for a Senior Data Engineer w/ 6+ years of overall professional experience. Qualified candidates will have hands-on experience with Python (6 years), SQL (6 years), DBT (3 years), AWS (Lambda, Glue), Airflow and Snowflake (3 years). BSCS and good CS fundamentals.
The Senior Data Engineer will work in a collaborative team environment and will be responsible for building, optimizing, and scaling ETL data pipelines, DBT models, and data warehousing. Excellent communication and organizational skills are expected.
This role features competitive base salary, equity, 401(k) with company match and many other attractive perks. Please send your resume to ******************* for immediate consideration.
Data Engineer
Seattle, WA jobs
Extensive knowledge of various Azure services.
Strong knowledge of data warehouse concepts.
Extensive hands-on experience working with data using SQL.
Ability to write complex queries in line with the business requirements.
Experience developing data pipelines in Databricks.
Prior experience with a programming language (Python preferred).
Hands-on experience using version control systems like Git.
Experience with creating project plans, technical design documents, and training documents, and presenting solutions to end users.
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Distinguished Engineer (Card Tech)
Washington, DC jobs
As a Distinguished Engineer at Capital One, you will be a part of a community of technical experts working to define the future of banking in the cloud. You will work alongside our talented team of developers, machine learning experts, product managers and people leaders. Our Distinguished Engineers are leading experts in their domains, helping devise practical and reusable solutions to complex problems. You will drive innovation at multiple levels, helping optimize business outcomes while driving towards strong technology solutions.
At Capital One, we believe diversity of thought strengthens our ability to influence, collaborate and provide the most innovative solutions across organizational boundaries. You will promote a culture of engineering excellence, and strike the right balance between lending expertise and providing an inclusive environment where the ideas of others can be heard and championed. You will lead the way in creating next-generation talent for Capital One Tech, mentoring internal talent and actively recruiting to keep building our community.
Distinguished Engineers are expected to lead through technical contribution. You will operate as a trusted advisor for our key technologies, platforms and capability domains, creating clear and concise communications, code samples, blog posts and other material to share knowledge both inside and outside the organization. You will specialize in a particular subject area, but your input and impact will be sought and expected throughout the organization.
In this role, you will partner with the Capital One Travel Data & Personalization team to architect the next generation of our Travel Intelligence Platform, serving as the primary architect for the foundational data schemas and orchestration layers that power our entire travel ecosystem. Your mission is to move beyond reactive tools by establishing a holistic Agentic AI framework: a vision where reasoning-based agents bridge the gap between high-scale data orchestration and real-world execution. You will be responsible for building an ecosystem that simultaneously enables classic machine learning model development for predictive insights while scaling agentic systems to automate complex workflows. This enables proactive journey management, where AI anticipates needs and prepares recommendations while incorporating human-in-the-loop approval when necessary. A key component of your strategy will be designing our growth marketing engine, where you will lead the strategic selection and architectural integration of our technology stack to ensure seamless, hyper-personalized, and value-optimized customer acquisition. By unifying these intelligent systems with a robust and scalable data architecture, you will transform how millions of customers discover, book, and experience travel within our premium portfolio.
If you are ready to provide thought leadership and build engineering excellence across Capital One's engineering teams, come join us in our mission to change banking for good.
Key responsibilities:
Articulate and evangelize a bold technical vision for your domain
Decompose complex problems into practical and operational solutions
Ensure the quality of technical design and implementation
Serve as an authoritative expert on non-functional system characteristics, such as performance, scalability and operability
Continue learning and injecting advanced technical knowledge into our community
Handle several projects simultaneously, balancing your time to maximize impact
Act as a role model and mentor within the tech community, helping to coach and strengthen the technical expertise and know-how of our engineering and product community
Basic Qualifications:
Bachelor's Degree
At least 9 years of experience in software engineering (including solution architecture and enterprise design patterns).
At least 7 years of experience in Data Engineering or Machine Learning Engineering, specifically building and maintaining large-scale production data pipelines (PySpark, SQL, Airflow).
At least 5 years of experience in Cloud Native Architecture (AWS preferred), including container orchestration (Kubernetes/KubeFlow) and serverless design.
At least 3 years of experience in a Technical Leadership role (Principal, Staff, or Distinguished Engineer), leading cross-functional initiatives without direct reporting lines.
Preferred Qualifications:
10+ years of experience in Software Engineering and Solution Architecture with a focus on high-scale data systems.
3+ years of experience in Machine Learning Engineering, specifically implementing LLMs or Agentic workflows in production.
Deep expertise in Data Design (Delta Lake, Snowflake, or Iceberg) and Streaming (Kafka).
Proven track record of building MarTech or Personalization engines that scale to millions of users.
Strong communication skills with the ability to translate "Agentic AI" concepts into business value for CX & Growth Marketing leadership.
Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.
The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed upon number of hours to be regularly worked.
New York, NY: $287,800 - $328,500 for Distinguished Engineer
McLean, VA: $263,900 - $301,200 for Distinguished Engineer
Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter.
This role is also eligible to earn performance based incentive compensation, which may include cash bonus(es) and/or long term incentives (LTI). Incentives could be discretionary or non discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being. Learn more at the Capital One Careers website . Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level.
This role is expected to accept applications for a minimum of 5 business days. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations.
For technical support or questions about Capital One's recruiting process, please send an email to
Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site.
Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
Data Governance Engineer
Location: Phoenix, AZ
Role: Data Governance Engineer
Experience Required: 6+ years
Must Have Technical/Functional Skills
• Understanding of, and prior experience with, Data Management and Data Governance concepts (metadata, lineage, data quality, etc.).
• 2 - 5 years of Data Quality Management experience.
• Intermediate competency in SQL and Python or a related programming language.
• Strong familiarity with data architecture and/or data modeling concepts.
• 2 - 5 years of experience with Agile or SAFe project methodologies.
Roles & Responsibilities
• Assist in identifying data-related risks and associated controls for key business processes. Risks relate to Record Retention, Data Quality, Data Movement, Data Stewardship, Data Protection, Data Sharing, among others.
• Identify data quality issues, perform root-cause analysis, and drive remediation of audit and regulatory feedback.
• Develop a deep understanding of key enterprise data-related policies and serve as the policy expert for the business unit, educating teams on policy implications for the business.
• Responsible for holistic platform data quality monitoring, including but not limited to critical data elements.
• Collaborate with and influence product managers to ensure all new use cases are managed according to policies.
• Influence and contribute to strategic improvements to data assessment processes and analytical tools.
• Responsible for monitoring data quality issues, communicating issues, and driving resolution.
• Support current regulatory reporting needs via existing platforms, working with upstream data providers, downstream business partners, as well as technology teams.
• Provide subject matter expertise across multiple platforms.
• Partner with the Data Steward Manager to develop and manage the data compliance roadmap.
Generic Managerial Skills
• Drives Innovation & Change: Provides systematic and rational analysis to identify the root cause of problems. Is prepared to challenge the status quo and drive innovation. Makes informed judgments, recommends tailored solutions.
• Leverages Team - Collaboration: Coordinates efforts within and across teams to deliver goals; accountable for bringing in ideas, information, suggestions, and expertise from others inside and outside the immediate team.
• Communication: Influences and holds others accountable and is able to convince others. Identifies specific data governance requirements and communicates them clearly and compellingly.
Interested candidates, please share your updated resume at *******************
Salary Range - $100,000 to $120,000 per year
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits, Certification & Training Reimbursement.
Time Off: Vacation, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.