Staff Software Engineer
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose - to uplift everyone, everywhere by being the best way to pay and be paid.
Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Visa's Technology Organization is a community of problem solvers and innovators reshaping the future of commerce. We operate the world's most sophisticated processing networks, capable of handling more than 65,000 secure transactions a second across 80 million merchants, 15,000 financial institutions, and billions of everyday people. Working with us, you'll get to work on complex distributed systems and solve massive-scale problems centered on new payment flows, business and data solutions, cyber security, and B2C platforms.
The Opportunity:
We are looking for versatile, curious, and energetic Software Engineers who embrace solving complex challenges on a global scale. As a Visa Software Engineer, you will be an integral part of a multi-functional development team inventing, designing, building, and testing software products that reach a truly global customer base. While building components of powerful payment technology, you will get to see your efforts shaping the digital future of monetary transactions.
The Work itself:
Design code and systems that touch 40% of the world's population while influencing Visa's internal standards for scalability, security, and reusability
Collaborate multi-functionally to create design artifacts and develop best-in-class software solutions for multiple Visa technical offerings
Actively contribute to product quality improvements, valuable service technology, and new business flows in diverse agile squads
Develop robust and scalable products intended for a myriad of customers, including end-user merchants, B2B, and business-to-government solutions.
Leverage innovative technologies to build the next generation of Payment Services, Transaction Platforms, Real-Time Payments, and Buy Now Pay Later Technology
Make a difference on a global or local scale through mentorship and continued learning
Essential Functions:
Demonstrates relevant technical working knowledge to understand requirements.
Identifies and contributes solution strategies to team members that improve the design and functionality of interface features across one or more project features, under minimal guidance.
Applies standard processes on the use of programming languages (e.g. HTML, C++, Java) to write code that fulfills website modification requests and technical requirements.
Collaborates with others to support the piloting of new technology capabilities and features that enhance the user website experience across e-commerce products.
Analyzes bugs for simple issues and applies debugging tools to verify assumptions.
The Skills You Bring:
Energy and Experience: A growth mindset that is curious and passionate about technologies and enjoys challenging projects on a global scale
Challenge the Status Quo: Comfort in pushing the boundaries, 'hacking' beyond traditional solutions
Language Expertise: Expertise in one or more general development languages (e.g., Java, C#, C++)
Builder: Experience building and deploying modern services and web applications with quality and scalability
Learner: Constant drive to learn new technologies such as Angular, React, Kubernetes, Docker, etc.
Partnership: Experience collaborating with Product, Test, Dev-ops, and Agile/Scrum teams
We do not expect that any single candidate would fulfill all of these characteristics. For instance, we have exciting team members who are really focused on building scalable systems but hadn't worked with payments technology or web applications before joining Visa.
This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.
Qualifications
Basic Qualifications
5+ years of relevant work experience with a Bachelor's Degree or at least 2 years of work experience with an Advanced Degree (e.g., Master's, MBA, JD, MD) or 0 years of work experience with a PhD, OR 8+ years of relevant work experience.
Preferred Qualifications
6 or more years of work experience with a Bachelor's Degree or 4 or more years of relevant experience with an Advanced Degree (e.g., Master's, MBA, JD, MD) or up to 3 years of relevant experience with a PhD
Technical Knowledge: Ability to understand requirements and apply standard processes for programming languages (e.g., HTML, C++, Java).
Software Development Skills:
Expertise in one or more general-purpose languages (Java, C#, C++).
Experience building and deploying modern services and web applications with quality and scalability.
Debugging & Problem-Solving:
Ability to analyze bugs for simple issues and use debugging tools effectively.
Collaboration:
Experience working with Product, Test, DevOps, and Agile/Scrum teams.
Ability to create design artifacts and contribute to best-in-class solutions.
Growth Mindset:
Curiosity and passion for technology.
Comfort with challenging projects on a global scale.
Innovation:
Willingness to challenge the status quo and explore beyond traditional solutions.
Continuous Learning:
Drive to learn new technologies such as Angular, React, Kubernetes, Docker.
Additional Information
Work Hours: Varies upon the needs of the department.
Travel Requirements: This position requires travel 5-10% of the time.
Mental/Physical Requirements: This position will be performed in an office setting. The position will require the incumbent to sit and stand at a desk, communicate in person and by telephone, frequently operate standard office equipment, such as telephones and computers.
Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
Visa will consider for employment qualified applicants with criminal histories in a manner consistent with applicable local law, including the requirements of Article 49 of the San Francisco Police Code.
U.S. APPLICANTS ONLY: The estimated salary range for a new hire into this position is 117,000.00 to 169,500.00 USD per year, which may include potential sales incentive payments (if applicable). Salary may vary depending on job-related factors which may include knowledge, skills, experience, and location. In addition, this position may be eligible for bonus and equity. Visa has a comprehensive benefits package for which this position may be eligible that includes Medical, Dental, Vision, 401 (k), FSA/HSA, Life Insurance, Paid Time Off, and Wellness Program.
Engineer III, Software Development
New York, NY
S&P Global Ratings
The Role: Software Engineer
The Team: The team is responsible for building external customer-facing websites using emerging tools and technologies. The team works in a dynamic environment that offers ample opportunity to apply creative ideas to complex problems for the Commercial team.
The Impact: You will have the opportunity every single day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe. You will be making a meaningful contribution to building solutions for user interfaces, web services, APIs, and data processing.
What's in it for you:
Build a career with a global company.
Grow and improve your skills by working on enterprise-level products and new technologies.
Enjoy an attractive benefits package including medical benefits, gym discounts, and corporate benefits.
Ongoing education through participation in conferences and training.
Access to the most interesting information technologies.
Responsibilities:
Drive the development of strategic initiatives and BAU in a timely manner, collaborating with stakeholders.
Set priorities and coordinate workflows to efficiently contribute to S&P objectives.
Promote outstanding customer service, high performance, teamwork, and accountability.
Define roles and responsibilities with clear goals and processes.
Contribute to S&P enterprise architecture and strategic roadmaps.
Develop agile practices with continuous development, integration, and deployment.
Collaborate with global technology development teams and cross-functional teams.
What We're Looking For:
Bachelor's / Master's Degree in Computer Science, Data Science, or equivalent.
4+ years of experience in a related role.
Excellent communication and interpersonal skills.
Strong development skills, specifically in ReactJS, Java, and related technologies.
Ability to work in a collaborative environment.
Right to work requirements: This role is limited to candidates with indefinite right to work within the USA.
Compensation/Benefits Information (US Applicants Only):
S&P Global states that the anticipated base salary range for this position is $90,000 - $120,000. Final base salary for this role will be based on the individual's geographical location as well as experience and qualifications for the role.
In addition to base compensation, this role is eligible for an annual incentive plan. This role is not eligible for additional compensation such as a sales commission plan.
This role is eligible to receive additional S&P Global benefits. For more information on the benefits we provide to our employees, please refer to the S&P Global benefits page.
Principal Software Engineer
South San Francisco, CA
This is a full-time role with a client of Dinan & Associates. This role is with an established company and includes excellent health care and other benefits.
Role: Principal / Senior Principal Software Engineer
Industry: Biotechnology / Pharmaceutical R&D
Location: San Francisco Bay Area (Hybrid)
The Organization We are a leading global biotechnology company driven to innovate and ensure access to healthcare for generations to come. Our goal is to create a healthier future and more time for patients with their loved ones.
The Position Advances in AI, data, and computational sciences are transforming drug discovery and development. Our Research and Early Development organizations have demonstrated how these technologies accelerate R&D, leveraging data and novel computational models to drive impact.
Our Computational Sciences group is a strategic, unified team dedicated to harnessing the transformative power of data and Artificial Intelligence (AI) to assist scientists in delivering innovative medicines for patients worldwide. Within this group, the Data and Digital Solutions team leads the modernization of our computational and data ecosystems by integrating digital technologies to empower stakeholders, advance data-driven science, and accelerate decision-making.
The Role The Solutions team develops modernized and interconnected computational and data ecosystems. These are foundational to building solutions that accelerate the work done by Computational and Bench Scientists and enable ML/AI tool creation and adoption. Our team specializes in building Data Pipelines and Applications for data acquisition, collection, storage, transformation, linkage, and sharing.
As a Software Engineer in the Solutions Engineering capability, you will work closely with Data Engineers, Product Leaders, and Tech/ML Ops, as well as directly with key partners including Computational Scientists and Research Scientists. You will build robust and scalable systems that unlock the potential of diverse scientific data, accelerating the discovery and development of life-changing treatments.
Key Responsibilities
Technical Leadership: Provide strategic and tactical technical leadership for ongoing initiatives. Identify new opportunities with an eye for consolidation, deprecation, and building common solutions.
System Design: Responsible for technical excellence, ensuring solutions are innovative, best-in-class, and integrated by delivering data flows and pipelines across key domains like Research Biology, Drug Discovery, and Translational Medicine.
Architecture: Learn, deeply understand, and improve Data Workflows, Application Architecture, and Data Ecosystems by leveraging standard patterns (layered architecture, microservices, event-driven, multi-tenancy).
Collaboration: Understand and influence technical decisions around data workflows and application development while working collaboratively with key partners.
AI/ML Integration: Integrate diverse sets of data to power AI/ML and Natural Language Search, enabling downstream teams working on Workflows, Visualization, and Analytics. Facilitate the implementation of AI models.
Who You Are
Education: Bachelor's or Master's degree in Computer Science or similar technical field, or equivalent experience.
Experience:
7+ years of experience in software engineering (Principal Software Engineer level).
12+ years of experience (Sr. Principal Software Engineer level).
Full Stack Expertise: Deep experience in full-stack development is required. Strong skills in building Front Ends using JavaScript, React (or similar libraries) as well as Backends using high-level languages like Python or Java.
Data & Cloud: Extensive experience with Databases, Data Analytics (SQL/NoSQL, ETL, ELT), and APIs (REST, GraphQL). Extensive experience working on cloud-native architectures in public clouds (ideally AWS) is preferred.
Engineering Best Practices: Experience building data applications that are highly reliable, scalable, performant, secure, and robust. You adopt and champion Open Source, Cloud First, API First, and AI First approaches.
Communication: Outstanding communication skills, capable of articulating technical concepts clearly to diverse audiences, including executives and globally distributed technical teams.
Mentorship: Ability to provide technical mentorship to junior developers and foster professional growth.
Domain Knowledge (Preferred): Ideally, you are a full-stack engineer with domain knowledge in biology, chemistry, drug discovery, translational medicine, or a related scientific discipline.
Compensation & Benefits
Competitive salary range commensurate with experience (Principal and Senior Principal levels available).
Discretionary annual bonus based on individual and company performance.
Comprehensive benefits package.
Relocation benefits are available.
Work Arrangement
Onsite presence on the San Francisco Bay Area campus is expected at least 3 days a week.
Senior Embedded Software Engineer
Palo Alto, CA
Source One is a consulting services company and we're currently looking for the following individual to work as a consultant with our direct client, an autonomous mobility solutions company in Palo Alto, CA.
No Third-Party, No Corp to Corp, No Sponsorship
Title: Vehicle Software Platform Engineer
Location: Palo Alto, CA
Onsite: Mon-Fri, 40 hours
Contract Duration: 6 months with likely extension
Pay Rate: $120 - $140 hourly (w2)
Job description
Our partner is helping our client find an experienced Vehicle Software Platform Engineer to join its team developing a scalable, data-driven approach to autonomous and assisted driving.
In this role, you will focus on developing robust, sophisticated software platforms and tooling that underpin the functionality of modern vehicles.
We're looking for a candidate with a strong software development background in embedded, robotics, or automotive systems and the ability to work hands-on in a fast-paced, collaborative, and intercultural environment.
As a Vehicle Software Platform Engineer, you'll:
Work with the team to design, implement, test, and integrate features into the AD/ADAS vehicle platform.
Set up or adapt build flows and other relevant tooling.
Be excited about working hands-on in a fast-paced environment on software closely connected to operating systems, compute hardware, sensors, and vehicles.
Be ready to dive in and learn across the technology stack and leverage experience to develop solutions with sound design principles, extensibility, and safety in mind.
Ideal candidate profile
Excellent understanding of embedded software and systems (automotive, aerospace, robotics, etc.) and related interfaces (Ethernet, CAN, etc.).
Experience with system software development (e.g. drivers, filesystems, sockets) on Linux and/or QNX.
Required skills
Bachelor's or Master's degree in Computer Science, Engineering, or a related field highly preferred
3-5+ years of relevant work experience
Proven track record of shipping software to production in our domain or a nearby one (e.g., automotive, aerospace, defense, robotics)
Strong C++ and Python programming skills
Strong debugging and troubleshooting skills
Generalist attitude with proven ability to dive deep fast and willingness to learn continuously
C++ Developer
New York, NY
The Role:
We are seeking exceptional C++ Technologists to join our team to further enhance and build within our trading infrastructure.
What You'll Do:
Write high-performance C++ code
Enhance our next-generation trading platform
Implement mission-critical trading infrastructure
What You'll Bring:
A minimum of 2 years of experience writing high-performance C++
Expertise in modern C++ (C++17/20, etc.)
In-depth understanding of network programming and distributed computing
Market Data Knowledge
Strong knowledge of Unix/Linux fundamentals
Solid grasp of data structures and algorithms
Management Applications Development
Tallahassee, FL
Primary Responsibilities:
Work with software developers, business analysts, data analysts, and other technical and non-technical subject-matter experts to coordinate and facilitate work.
Work with various technical teams (DevOps, DBAs, Network Administrators, Enterprise Development Architects, PMO, etc.) to assist in resolving issues or barriers with applications.
Effectively identify change and use appropriate protocols to manage and communicate this change effectively.
Effectively coordinate resources and assignments among project assignees and ensure work is assigned to the appropriate team members and that service levels are met.
Adhere to the DEP project management methodology, standards, policies, and procedures, as well as technical standards and policies relevant to assigned user stories or tasks.
Manage relationships with DEP program area business partners and develop strong, collaborative relationships with customers to achieve positive project outcomes.
Demonstrate strong relationship and interpersonal skills when working with technical staff, program staff, and the vendor community.
Lead requirements definition meetings with DEP customers.
Gather user requirements through joint requirement-gathering sessions, workshops, questionnaires, surveys, site visits, workflow storyboards, and other methods.
Translate user requirements into documentation that developers and other project team members can readily understand.
Facilitate the negotiation of requirements among multiple stakeholders.
Analyze gathered data and develop solutions or alternative methods of proceeding.
Create Visio process maps, requirements traceability matrices, use cases, test cases, and other needed business-analysis documentation.
Facilitate design sessions with the implementation team to define the solution.
Deliver elements of systems design, including data migration rules, business rules, wireframes, or other detailed deliverables.
Assist in business process redesign and documentation as needed.
Lead and/or participate in systems-testing activities.
Required Qualifications:
5+ years' experience in IT project management, specifically managing medium-to-large scale software application development projects.
5+ years' experience in managing multiple projects concurrently.
In-depth knowledge of the principles, theories, practices and techniques for managing the activities related to planning, managing and implementing software projects and programs.
Documented and proven ability to formulate project plans for managing and monitoring progress on software development projects; to think logically and to analyze and solve problems; to compile, organize, and analyze data; and to evaluate and monitor projects, plans, and schedules and implement corrective action plans.
Solid understanding of software development lifecycle methodologies (e.g., waterfall, iterative, agile)
Strong customer service orientation
Ability to be creative, use sound judgment, and display foresight to identify potential problems in design/specifications and assigned application software systems
Ability to establish and maintain effective working relationships with others
Ability to work independently
Ability to determine work priorities and ensure proper completion of work assignments
Excellent interpersonal, collaborative, oral, and written communication skills
Ability to write technical, business, and plain-language documents and emails with great attention to detail in all written communications
Ability to work well under pressure and meet deadlines without sacrificing quality
Preferred Qualifications:
Project Management Professional (PMP) certification
Experience developing and maintaining detailed project schedules using Microsoft Project
Familiarity with environmental regulatory business processes and practices
Knowledge and understanding of DEP's technical environment
Education:
Bachelor's Degree in Computer Science, Information Systems or other Information Technology major, or equivalent work experience.
Windows Mobile App Developer
Seattle, WA
Must Have Technical/Functional Skills:
• 10+ years of experience as a Windows Mobile App Developer
• Strong Experience in Universal Windows Platform (UWP) Mobile App Development
• Strong experience with C#, XAML, MVVM, and object-oriented programming
• Familiarity with service-oriented architecture and software testing
• Strong domain knowledge in Manufacturing Industry / Automotive / Aerospace Industry
• Experience in developing software with Agile methodologies
Roles & Responsibilities:
• Design, build, and maintain advanced applications for the Windows platform
• Work with outside data sources and APIs; unit-test code for robustness, including edge cases, usability, and general reliability
• Identify and analyze user requirements; prioritize, assign, and execute tasks throughout the software development life cycle
• Develop mobile applications, write well-designed, efficient code, review, test, and debug team members' code, and design database architecture
Base Salary Range: $110,000 - $140,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Sr. GitLab Engineer
Dallas, TX
Job Title: GitLab Engineer
Duration: 6+ Months
Required Skills and Experience:
8+ years of experience in DevOps or Platform Engineering roles
3+ years of hands-on experience with GitLab CI/CD, GitLab Runners, and GitLab administration
Proficiency with scripting and automation (e.g., Bash, Python, or Go)
Experience with infrastructure-as-code tools (Terraform, Ansible, etc.)
Solid understanding of containerization (Docker) and orchestration (Kubernetes)
Familiarity with cloud platforms (AWS, GCP, Azure) and cloud-native tooling
Strong communication skills and a track record of cross-team collaboration
Knowledge of JFrog Artifactory, BitBucket / GIT, SVN and other SCM tools
Knowledge of desired state configuration, automated deployment, continuous integration, and release engineering tools like Puppet, Chef, Jenkins, Bamboo, Maven, Ant, etc.
Responsibilities:
Plan and execute the end-to-end migration from Jenkins and Automic to GitLab CI/CD
Configure and manage GitLab Runners, Groups, Projects, and Permissions at scale
Harden GitLab for enterprise usage (SAML/SSO, LDAP, RBAC, backup/restore)
Design, implement, and optimize complex GitLab CI/CD pipelines using YAML best practices
Implement multi-stage, parallel, and conditional workflows for build, test, security scan, and deploy
Integrate static code analysis, security scanning (SAST/DAST), and container scanning into pipelines
Analyze existing Jenkins freestyle and scripted pipelines; translate them to GitLab CI/CD syntax
Migrate Automic workflows/jobs into GitLab, orchestrating dependencies and scheduling
Reduce pipeline execution time through caching, artifact reuse, and pipeline templating
Lead the migration of container build and deployment processes from Docker to Podman/Buildah
Author GitLab CI templates for Podman-based image builds and registries
Ensure the security and compliance of container images (signing, vulnerability scanning)
Leverage Terraform, Ansible, or similar to provision and manage self-hosted GitLab and runners
Implement GitOps practices to manage infrastructure and environment configurations
Automate operational tasks and incident remediation via pipelines and scripts
Partner with application teams to onboard them onto GitLab workflows and best practices
Develop and maintain clear runbooks, wiki pages, and pipeline templates
Conduct workshops, brown-bags, and training sessions to evangelize GitLab CI/CD
Integrate monitoring (Prometheus/Grafana, ELK) for GitLab health and pipeline performance
Implement policies and guardrails to ensure code quality, compliance, and security posture
Troubleshoot and resolve CI/CD or migration-related incidents in a timely manner
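For illustration only, a minimal, hypothetical .gitlab-ci.yml sketch of the multi-stage, parallel, and conditional pipeline shape described above. The job names, Maven commands, and run-sast-scan invocation are placeholders, not an actual team configuration:

```yaml
# Hypothetical pipeline: stages run in order; jobs within a stage run in parallel.
stages: [build, test, scan, deploy]

default:
  # Cache dependencies between pipelines to reduce execution time
  cache:
    key: "$CI_COMMIT_REF_SLUG"
    paths: [.m2/repository/]

build-app:
  stage: build
  script: mvn package -DskipTests
  artifacts:
    paths: [target/]          # artifact reuse: downstream jobs consume the build output

unit-tests:
  stage: test
  script: mvn test

sast-scan:
  stage: scan
  script: run-sast-scan .     # placeholder for a SAST tool invocation

deploy-prod:
  stage: deploy
  script: ./deploy.sh
  rules:
    # Conditional workflow: deploy only from the default branch
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```

The same shape extends to shared CI templates via include:, which is one common way to standardize pipelines across onboarded teams.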
Preferred:
A BS in Computer Science or equivalent work experience with good scripting/programming skills
GitLab Certified CI/CD Specialist or GitLab Administrator Certification
Contributions to the GitLab open-source project or similar tooling
Prior software experience with build management, configuration management and/or quality testing
Experience with SCM practices including Agile, continuous integration (CI) and continuous deployment (CD)
Data Engineer
Houston, TX
We are looking for a talented and motivated Python Data Engineer to help expand our data assets in support of our analytical capabilities in a full-time role. This role will have the opportunity to interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliveries. Drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficient understanding of peer review, code versioning, and bug/issue-tracking tools
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
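As a rough sketch of the scraping-and-transformation pattern the qualifications above describe: the listing names Requests and Beautiful Soup, but the standard-library html.parser stands in here so the example is self-contained, and the price-table HTML is made up for illustration.

```python
from html.parser import HTMLParser


class CellCollector(HTMLParser):
    """Collects text from <td> cells -- a stdlib stand-in for the
    Requests + Beautiful Soup scraping flow mentioned above."""

    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        # Keep only non-empty text found inside a cell
        if self.in_cell and data.strip():
            self.cells.append(data.strip())


def scrape_rows(html, width=2):
    """Parse an HTML snippet and group cell text into fixed-width rows."""
    parser = CellCollector()
    parser.feed(html)
    cells = parser.cells
    return [cells[i:i + width] for i in range(0, len(cells), width)]


# Illustrative input; a real pipeline would fetch this over HTTP first
sample = ("<table><tr><td>WTI</td><td>78.15</td></tr>"
          "<tr><td>Brent</td><td>82.40</td></tr></table>")
rows = scrape_rows(sample)
```

In practice the parsed rows would then be loaded into Pandas for cleaning and handed to the ETL toolkit the posting mentions.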
Data Engineer
Atlanta, GA
No C2C
We're looking for a hands-on Data Engineer to help build, scale, and fine-tune real-time data systems using Kafka, AWS, and a modern data stack. In this role, you'll work deeply with streaming data, ETL, distributed systems, and PostgreSQL to power analytics, product innovation, and AI-driven use cases. You'll also get to work with AI/ML frameworks, automation, and MLOps tools to support advanced modeling and a highly responsive data platform.
What You'll Do
Design and build real-time streaming pipelines using Kafka, Confluent Schema Registry, and Zookeeper
Build and manage cloud-based data workflows using AWS services like Glue, EMR, EC2, and S3
Optimize and maintain PostgreSQL and other databases with strong schema design, advanced SQL, and performance tuning
Integrate AI and ML frameworks (TensorFlow, PyTorch, Hugging Face) into data pipelines for training and inference
Automate data quality checks, feature generation, and anomaly detection using AI-powered monitoring and observability tools
Partner with ML engineers to deploy, monitor, and continuously improve machine learning models in both batch and real-time pipelines using tools like MLflow, SageMaker, Airflow, and Kubeflow
Experiment with vector databases and retrieval-augmented generation (RAG) pipelines to support GenAI and LLM initiatives
Build scalable, cloud-native, event-driven architectures that power AI-driven data products
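The automated anomaly-detection step above can be sketched with a rolling z-score check. This is a minimal standard-library illustration of the idea, not the team's actual tooling; in the real pipeline such a check would sit behind a Kafka consumer and feed the monitoring stack.

```python
from collections import deque
from statistics import mean, pstdev


class AnomalyDetector:
    """Flags values that deviate sharply from a rolling window --
    a simple sketch of automated data-quality monitoring."""

    def __init__(self, window=20, threshold=3.0, warmup=5):
        self.window = deque(maxlen=window)
        self.threshold = threshold
        self.warmup = warmup

    def observe(self, value):
        """Return True if value is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= self.warmup:
            mu, sigma = mean(self.window), pstdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous


# Simulated stream: steady readings, then a spike the check should catch
detector = AnomalyDetector()
stream = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 55.0]
flags = [detector.observe(v) for v in stream]
```

A production version would emit these flags as metrics or alerts rather than return values, but the windowed-statistics core is the same.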
What You Bring
Bachelor's degree in Computer Science, Engineering, Math, or a related technical field
3+ years of hands-on data engineering experience with Kafka (Confluent or open-source) and AWS
Experience with automated data quality, monitoring, and observability tools
Strong SQL skills and solid database fundamentals with PostgreSQL and both traditional and NoSQL databases
Proficiency in Python, Scala, or Java for pipeline development and AI integrations
Experience with synthetic data generation, vector databases, or GenAI-powered data products
Hands-on experience integrating ML models into production data pipelines using frameworks like PyTorch or TensorFlow and MLOps tools such as Airflow, MLflow, SageMaker, or Kubeflow
Data Engineer
Irvine, CA
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it.
If you have read my posts in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc., and I have been recruiting technical talent for more than 23 years and been in the tech space since the 1990s. Because of this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very, very important. But we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago: if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, with the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding & configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, freeing data scientists to focus on experimentation while giving the business rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
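The automated data quality and schema-drift checks described above can be sketched in plain Python. This is a minimal illustration, not the client's implementation: the column names, expected types, and sample rows are all hypothetical, and a production check would run inside the pipeline orchestrator and feed alerting rather than print results.

```python
# Minimal sketch of an automated schema-drift check.
# EXPECTED_SCHEMA is a hypothetical data contract, not a real one.

EXPECTED_SCHEMA = {
    "transaction_id": int,
    "amount": float,
    "currency": str,
}

def check_schema_drift(rows, expected=EXPECTED_SCHEMA):
    """Return a list of human-readable anomalies found in `rows`."""
    anomalies = []
    for i, row in enumerate(rows):
        # Columns that appeared or disappeared relative to the contract.
        extra = set(row) - set(expected)
        missing = set(expected) - set(row)
        if extra:
            anomalies.append(f"row {i}: unexpected columns {sorted(extra)}")
        if missing:
            anomalies.append(f"row {i}: missing columns {sorted(missing)}")
        # Values whose type drifted from the contract.
        for col, typ in expected.items():
            if col in row and not isinstance(row[col], typ):
                anomalies.append(
                    f"row {i}: {col} is {type(row[col]).__name__}, "
                    f"expected {typ.__name__}"
                )
    return anomalies

good = {"transaction_id": 1, "amount": 9.99, "currency": "USD"}
drifted = {"transaction_id": "2", "amount": 5.0}  # wrong type, missing column
print(check_schema_drift([good, drifted]))
```

In a real pipeline, anomalies like these would be exported as Prometheus metrics or routed to the incident-response process rather than printed.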
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience with T-SQL and with Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
A willingness to mentor engineers on best practices in containerized data engineering and MLOps.
Data Platform Engineer / AI Workloads
Santa Rosa, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid top of the line health care benefits
Full ownership, no micro management
Strong equity package
401K
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Data Platform Engineer / AI Workloads
San Mateo, CA jobs
Data Platform Engineer / AI Workloads
Sunnyvale, CA jobs
Data Platform Engineer / AI Workloads
Fremont, CA jobs
Sr. Forward Deployed Engineer (Palantir)
Dallas, TX jobs
At Trinity Industries, we don't just build railcars and deliver logistics - we shape the future of industrial transportation and infrastructure. As a Senior Forward Deployed Engineer, you'll be on the front lines deploying Palantir Foundry solutions directly into our operations, partnering with business leaders and frontline teams to transform complex requirements into intuitive, scalable solutions. Your work will streamline manufacturing, optimize supply chains, and enhance safety across our enterprise. This is more than a coding role - it's an opportunity to embed yourself in the heart of Trinity's mission, solving real‑world challenges that keep goods and people moving across North America.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
End-to-End Solution Delivery:
Autonomously lead the design, development, and deployment of scalable data pipelines, full applications, and workflows in Palantir Foundry, integrating with cloud platforms (e.g., AWS, Azure, GCP) and external sources (e.g., Snowflake, Oracle, REST APIs). Ensure solutions are reliable, secure, and compliant with industry standards (e.g., GDPR, SOX), while handling ambiguity and delivering on-time results in high-stakes environments. Demonstrate deep expertise in Foundry's ecosystem to independently navigate and optimize complex builds
Full Application and Workflow Development:
Build comprehensive, end-to-end applications and automated workflows using Foundry modules such as Workshop, Slate, Quiver, Contour, and Pipeline Builder. Focus on creating intuitive, interactive user experiences that integrate front-end interfaces with robust back-end logic, enabling seamless operational tools like real-time supply chain monitoring systems or AI-driven decision workflows, going beyond data models to deliver fully functional, scalable solutions
Data Modeling and Transformation for Advanced Analytics:
Architect robust data models and ontologies in Foundry to standardize and integrate complex datasets from manufacturing and logistics sources. Develop reusable transformation logic using PySpark, SQL, and Foundry tools (e.g., Pipeline Builder, Code Repositories) to cleanse, enrich, and prepare data for advanced analytics, enabling predictive modeling, AI-driven insights, and operational optimizations like cost reductions or efficiency gains. Focus on creating semantic integrity across domains to support proactive problem-solving and "game-changing" outcomes
Dashboard Development and Visualization:
Build interactive dashboards and applications using Foundry modules (e.g., Workshop, Slate, Quiver, Contour) to provide real-time KPIs, trends, and visualizations for business stakeholders. Leverage these tools to transform raw data into actionable insights, such as supply chain monitoring or performance analytics, enhancing decision-making and user adoption
AI Integration and Impact:
Elevate business transformation by designing and implementing AIP pipelines and integrations that harness AI/ML for high-impact applications, such as predictive analytics in leasing & logistics, anomaly detection in manufacturing, or automated decision-making in supply chains. Drive transformative innovations through AIP's capabilities, integrating Large Language Models (LLMs), TensorFlow, PyTorch, or external APIs to deliver bottom-line results
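The anomaly-detection use case named above runs through Foundry's AIP in practice; as a generic illustration of the statistical idea only, here is a stdlib-only z-score detector. The sensor readings and threshold are hypothetical, and this is not Palantir's API.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices of values more than `threshold` population standard
    deviations from the mean -- a simple statistical anomaly detector."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical manufacturing sensor readings with one obvious spike.
readings = [20.1, 19.8, 20.3, 20.0, 19.9, 45.0, 20.2, 20.1]
print(zscore_anomalies(readings, threshold=2.0))
```

A production version would compute the baseline on a rolling window and emit alerts rather than indices, but the thresholding logic is the same.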
Leadership and Collaboration:
Serve as a lead FDE on the team, collaborating with team members through hands-on guidance, code reviews, workshops, and troubleshooting. Lead by example in fostering a culture of efficient Foundry building and knowledge-sharing to scale team capabilities
Business Domain Strategy and Innovation:
Deeply understand Trinity's industrial domains (e.g., leasing financials, manufacturing processes, supply chain logistics) to identify stakeholder needs better than they do themselves. Propose and implement disruptive solutions that drive long-term productivity, retention, and business transformation, incorporating interoperable cloud IDEs such as Databricks for complementary data processing and analytics workflows
Collaboration and Stakeholder Engagement:
Work cross-functionally with senior leadership and teams to gather requirements, validate solutions, and ensure trustworthiness in high-stakes projects
What you'll bring:
Bachelor's degree in Computer Science, Engineering, Data Science, Financial Engineering, Econometrics, or a related field required (Master's preferred)
8 plus years of hands-on experience in data engineering, with at least 4 years specializing in Palantir Foundry (e.g., Ontology, Pipelines, AIP, Workshop, Slate), demonstrating deep, autonomous proficiency in building full applications and workflows
Proven expertise in Python, PySpark, SQL, and building scalable ETL workflows, with experience integrating with interoperable cloud IDEs such as Databricks
Demonstrated ability to deliver end-to-end solutions independently, with strong evidence of quantifiable impacts (e.g., "Built pipeline reducing cloud services expenditures by 30%")
Strong business acumen in industrial domains like manufacturing, commercial leasing, supply chain, or logistics, with examples of proactive innovations
Experience collaborating with team members and leadership in technical environments
Excellent problem-solving skills, with a track record of handling ambiguity and driving results in fast-paced settings
Preferred Qualifications
Certifications in Palantir Foundry (e.g., Foundry Data Engineer, Application Developer)
Experience with AI/ML integrations (e.g., TensorFlow, PyTorch, LLMs) within Foundry AIP for predictive analytics
Familiarity with CI/CD tools and cloud services (e.g., AWS, Azure, Google Cloud).
Strongly Desired: Hands-on experience with enterprise visualization platforms such as Qlik, Tableau, or PowerBI to enhance dashboard development and analytics delivery (not required but a significant plus for integrating with Foundry tools).
AWS Data Engineer
Seattle, WA jobs
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data.
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
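The ETL responsibilities above follow the standard extract-transform-load shape. The stdlib-only sketch below illustrates it with in-memory data; the field names and cleaning rules are hypothetical, and a real AWS pipeline would extract from sources like S3 or Kinesis and load into Redshift rather than a Python list.

```python
import csv
import io

# Hypothetical raw input; in practice this would be read from S3.
RAW_CSV = """order_id,amount,region
1001, 19.99 ,us-east
1002,,us-west
1003, 5.00 ,US-EAST
"""

def extract(text):
    """Parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Clean rows: drop records with no amount, normalize types and case."""
    out = []
    for row in rows:
        amount = row["amount"].strip()
        if not amount:  # data-quality rule: amount is required
            continue
        out.append({
            "order_id": int(row["order_id"]),
            "amount": float(amount),
            "region": row["region"].strip().lower(),
        })
    return out

def load(rows, sink):
    """'Load' into an in-memory sink; a real job would write to Redshift."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW_CSV)), warehouse)
print(loaded, warehouse)
```

The same three-stage shape maps onto AWS Glue jobs, with the transform step expressed in Spark instead of plain Python.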
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Sr. Data Engineer
Dallas, TX jobs
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's Degree in Computer Science, Information Management, or related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture, bronze to gold
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IoT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional Certification (2023 or newer)
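The medallion (bronze-to-gold) pattern asked for above can be illustrated without Databricks. This stdlib-only sketch uses hypothetical IoT-style readings: bronze holds raw ingested records, silver holds validated and typed rows, and gold holds a business-level aggregate.

```python
from collections import defaultdict

# Bronze layer: raw records as ingested (hypothetical railcar sensor data).
bronze = [
    {"railcar": "TR-101", "temp_f": "72.5"},
    {"railcar": "TR-101", "temp_f": "bad"},   # malformed reading
    {"railcar": "TR-202", "temp_f": "68.0"},
    {"railcar": "TR-101", "temp_f": "73.5"},
]

# Silver layer: validated, typed records; malformed rows are dropped.
silver = []
for rec in bronze:
    try:
        silver.append({"railcar": rec["railcar"], "temp_f": float(rec["temp_f"])})
    except ValueError:
        pass  # a real pipeline would route this to a quarantine table

# Gold layer: business aggregate (average temperature per railcar).
totals = defaultdict(list)
for rec in silver:
    totals[rec["railcar"]].append(rec["temp_f"])
gold = {car: sum(vals) / len(vals) for car, vals in totals.items()}
print(gold)
```

In Databricks the same layering is implemented as Delta tables with Spark (often structured streaming) jobs promoting data between layers; only the shape of the flow is shown here.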
Mobile Application Developer
Atlanta, GA jobs
As a Mobile Developer on our team, you'll be comfortable working across both iOS and Android, with the flexibility to step into an agile environment supported by peer reviews, an in-house design team, and a QA team. You'll collaborate closely with designers, developers, and product managers to deliver high-quality features while keeping stability front and center. In this role, you'll balance fixing defects, addressing technical debt, and delivering smaller features that keep the team moving. Your strong product intuition and a commitment to building reliable, functional solutions will help drive our mission of building software that meaningfully connects with people.
Responsibilities:
Triage, reproduce, and resolve high-priority defects across iOS and Android.
Identify and deliver technical-debt reductions that improve stability, performance, and maintainability.
Build and ship features in partnership with Product and Design in support of the team.
Partner with QA to strengthen automated testing across unit, integration, and UI coverage.
Monitor crash rate, startup time, and key screen performance using standard mobile observability tools.
Participate in incident response and post-incident reviews, document fixes and prevention steps.
Mentor junior mobile engineers; perform code reviews; keep docs current.
Consistently meet development culture guidelines and engineering standards.
Take on other tasks and duties as assigned.
Qualifications:
5+ years of professional software experience with significant native mobile time.
Depth in at least one platform (iOS Swift or Android Kotlin/Java) and working proficiency in the other.
Strong knowledge of Android and iOS SDKs, different versions, and how to deal with different devices and screen sizes.
Familiarity with connecting mobile applications to APIs.
Strong knowledge of UI design principles, patterns, and best practices.
Clear written and verbal communication about risks, tradeoffs, and timelines.
Experience working on a team in a regulated environment, with shared code managed in multiple source control repositories.
Must be eligible to work legally in the U.S. without sponsorship.
Optional / Recommended Experience:
Understanding of DevOps and deployment of Android applications.
High-level exposure to tools such as Postman, Bitrise, GitHub, and Azure DevOps.
Understanding of Android devices and memory management in relation to coding decisions.
Accessibility awareness and experience in regulated or financial domains.
A high level of comfort with ambiguity and openness to learning whatever it takes to solve new challenges.
Caring about people and how the software you make can help them.
Brightwell is an equal opportunity employer (EOE) committed to employing a diverse workforce and sustaining an inclusive culture.
Trade Flow Support - Trading Application Support
New York jobs
Manage global trading platform readiness: pre-trading & post-trading
Provide monitoring and support for end-to-end transaction flow across the trading platform
Quickly identify, analyze, and correct alerts and issues, or escalate them, to minimize trade-flow impact and preempt outages
Incident management: create and send reports following established process
Intensive interaction with trading community, back office operations, brokers, exchanges, development and infrastructure teams
Support scheduled changes: application deployments, network implementations and server maintenance
Oversee automated processes, scheduled jobs and regular reports
Manage users' requests: track requests, review approvals, updates, resolution, closure, etc.
Identify inefficiencies such as repetitive activities, manual processes, and suboptimal tools/workflows to drive continuous improvement via automation and collaboration with internal and external teams
Required Qualifications:
Hands on trade flow support experience a must
Proficient in Linux OS
Proficiency with Bash and a scripting language such as Python or Perl required
Detailed practical knowledge of FIX. Ideally, familiarity with native exchange protocols
Experience in SQL (PostgreSQL, MySQL, MS SQL)
Familiarity with Kdb+ is a plus
Working knowledge with TCP/IP, multicast, tcpdump, clocking (PTP, NTP) - a plus
Bachelor's Degree in Computer Science, Engineering or related subject
Highly organized, detailed, control oriented, enjoys solving problems and brings in a "take charge" attitude
Strong verbal/written communication skills required to support daily interaction with brokers, exchanges, and internal teams
Self-motivated individual who takes initiative and has an ownership & accountability mindset
Must show the ability to make quick decisions and establish priorities in a multi-tasking & fast-paced environment
Experience with the Atlassian stack (JIRA, Service Desk, Confluence, Bitbucket, Bamboo), Elastic Stack and Nagios
Knowledge of ITIL is an advantage
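On the FIX requirement above: FIX messages are tag=value pairs separated by the SOH (0x01) delimiter, and trade-flow support often means pulling specific fields out of a raw message quickly. Here is a minimal parser sketch; the sample message is illustrative rather than from any real venue, though the tag numbers used (8, 35, 54, 55) are standard FIX tags.

```python
SOH = "\x01"

# A few standard FIX tag numbers and their names.
TAG_NAMES = {"8": "BeginString", "35": "MsgType", "54": "Side", "55": "Symbol"}

def parse_fix(raw, sep=SOH):
    """Split a raw FIX message into a {tag: value} dict."""
    fields = {}
    for pair in raw.strip(sep).split(sep):
        tag, _, value = pair.partition("=")
        fields[tag] = value
    return fields

# Illustrative new-order-single fragment (35=D), SOH-delimited.
msg = SOH.join(["8=FIX.4.2", "35=D", "55=IBM", "54=1"]) + SOH
parsed = parse_fix(msg)

# Map numeric tags to readable names for eyeballing during an incident.
named = {TAG_NAMES.get(t, t): v for t, v in parsed.items()}
print(named["MsgType"], named["Symbol"])  # → D IBM
```

In practice this kind of one-liner lives in a support toolbox alongside grep/tcpdump filters; real messages also carry checksums and sequence numbers that dedicated FIX engines validate.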
The minimum base salary for this role is $100,000 if located in New York. This expectation is based on available information at the time of posting. This role may be eligible for discretionary bonuses, which could constitute a significant portion of total compensation. This role may also be eligible for benefits, such as health, dental, and other wellness plans, as well as 401(k) contributions. Successful candidates' compensation and benefits will be determined in consideration of various factors.