Principal Software Engineer
South San Francisco, CA
This is a full-time role with a client of Dinan & Associates. This role is with an established company and includes excellent health care and other benefits.
Role: Principal / Senior Principal Software Engineer
Industry: Biotechnology / Pharmaceutical R&D
Location: San Francisco Bay Area (Hybrid)
The Organization We are a leading global biotechnology company driven to innovate and ensure access to healthcare for generations to come. Our goal is to create a healthier future and more time for patients with their loved ones.
The Position Advances in AI, data, and computational sciences are transforming drug discovery and development. Our Research and Early Development organizations have demonstrated how these technologies accelerate R&D, leveraging data and novel computational models to drive impact.
Our Computational Sciences group is a strategic, unified team dedicated to harnessing the transformative power of data and Artificial Intelligence (AI) to assist scientists in delivering innovative medicines for patients worldwide. Within this group, the Data and Digital Solutions team leads the modernization of our computational and data ecosystems by integrating digital technologies to empower stakeholders, advance data-driven science, and accelerate decision-making.
The Role The Solutions team develops modernized and interconnected computational and data ecosystems. These are foundational to building solutions that accelerate the work done by Computational and Bench Scientists and enable ML/AI tool creation and adoption. Our team specializes in building Data Pipelines and Applications for data acquisition, collection, storage, transformation, linkage, and sharing.
As a Software Engineer in the Solutions Engineering capability, you will work closely with Data Engineers, Product Leaders, and Tech/ML Ops, as well as directly with key partners including Computational Scientists and Research Scientists. You will build robust and scalable systems that unlock the potential of diverse scientific data, accelerating the discovery and development of life-changing treatments.
Key Responsibilities
Technical Leadership: Provide strategic and tactical technical leadership for ongoing initiatives. Identify new opportunities with an eye for consolidation, deprecation, and building common solutions.
System Design: Responsible for technical excellence, ensuring solutions are innovative, best-in-class, and integrated by delivering data flows and pipelines across key domains like Research Biology, Drug Discovery, and Translational Medicine.
Architecture: Learn, deeply understand, and improve Data Workflows, Application Architecture, and Data Ecosystems by leveraging standard patterns (layered architecture, microservices, event-driven, multi-tenancy).
Collaboration: Understand and influence technical decisions around data workflows and application development while working collaboratively with key partners.
AI/ML Integration: Integrate diverse sets of data to power AI/ML and Natural Language Search, enabling downstream teams working on Workflows, Visualization, and Analytics. Facilitate the implementation of AI models.
Who You Are
Education: Bachelor's or Master's degree in Computer Science or similar technical field, or equivalent experience.
Experience:
7+ years of experience in software engineering (Principal Software Engineer level).
12+ years of experience (Sr. Principal Software Engineer level).
Full Stack Expertise: Deep experience in full-stack development is required. Strong skills in building Front Ends using JavaScript, React (or similar libraries) as well as Backends using high-level languages like Python or Java.
Data & Cloud: Extensive experience with Databases, Data Analytics (SQL/NoSQL, ETL, ELT), and APIs (REST, GraphQL). Extensive experience working on cloud-native architectures in public clouds (ideally AWS) is preferred.
Engineering Best Practices: Experience building data applications that are highly reliable, scalable, performant, secure, and robust. You adopt and champion Open Source, Cloud First, API First, and AI First approaches.
Communication: Outstanding communication skills, capable of articulating technical concepts clearly to diverse audiences, including executives and globally distributed technical teams.
Mentorship: Ability to provide technical mentorship to junior developers and foster professional growth.
Domain Knowledge (Preferred): Ideally, you are a full-stack engineer with domain knowledge in biology, chemistry, drug discovery, translational medicine, or a related scientific discipline.
Compensation & Benefits
Competitive salary range commensurate with experience (Principal and Senior Principal levels available).
Discretionary annual bonus based on individual and company performance.
Comprehensive benefits package.
Relocation benefits are available.
Work Arrangement
Onsite presence on the San Francisco Bay Area campus is expected at least 3 days a week.
Senior Embedded Software Engineer
Palo Alto, CA
Source One is a consulting services company and we're currently looking for the following individual to work as a consultant with our direct client, an autonomous mobility solutions company in Palo Alto, CA.
No Third-Party, No Corp to Corp, No Sponsorship
Title: Vehicle Software Platform Engineer
Location: Palo Alto, CA
Onsite: Mon-Fri, 40 hours
Contract Duration: 6 months with likely extension
Pay Rate: $120 - $140 hourly (W-2)
Job description
Our partner is helping our client find an experienced Vehicle Software Platform Engineer to join its team developing a scalable, data-driven approach to autonomous and assisted driving.
In this role, you will focus on developing robust, sophisticated software platforms and tooling that underpin the functionality of modern vehicles.
We're looking for a candidate with a strong software development background in embedded, robotics, or automotive systems and the ability to work hands-on in a fast-paced, collaborative, and intercultural environment.
As a Vehicle Software Platform Engineer, you'll:
Work with the team to design, implement, test, and integrate features into the AD/ADAS vehicle platform.
Set up or adapt build flows and other relevant tooling.
Be excited about working hands-on in a fast-paced environment on software closely connected to operating systems, compute hardware, sensors, and vehicles.
Be ready to dive in and learn across the technology stack and leverage experience to develop solutions with sound design principles, extensibility, and safety in mind.
Ideal candidate profile
Excellent understanding of embedded software and systems (automotive, aerospace, robotics, etc.) and related interfaces (Ethernet, CAN, etc.).
Experience with system software development (e.g. drivers, filesystems, sockets) on Linux and/or QNX.
Required skills
Bachelor's or Master's degree in Computer Science, Engineering, or a related field highly preferred
3-5+ years of relevant work experience
Proven track record of shipping software to production in our domain or a nearby one (e.g., automotive, aerospace, defense, robotics)
Strong C++ and Python programming skills
Strong debugging and troubleshooting skills
Generalist attitude with proven ability to dive deep fast and willingness to learn continuously
Lead Developer
Fort Worth, TX
How will this role impact First Command?
The Lead Developer is a leader across the development organization, working closely with members of other teams and with business owners and leaders. In conjunction with development responsibilities, the Lead Developer establishes and champions best practices and processes for development and First Command's SDLC. They work alongside Architect and Lead roles to define, lead, and assist in the vision and design of business solutions, and they drive improvement of our processes and technical practices. A Lead Developer needs little to no supervision or oversight: a self-motivated individual with strong time management skills that make room for special-interest initiatives.
What will the employee do in this role?
Works alongside the architect and consultant roles to define the vision and solutions
Leads team members in solution designs and work breakdown
Participates in all phases of the software development lifecycle
Prepares and executes test plans (unit, integration, and functional)
Leads effort to create and document deployment and release plans
Establishes and champions First Command coding and design standards/best practices
Communicates and works alongside members of their team in support of their day-to-day work items
Works with business partners to ensure alignment between the ask and the output
Works across teams to drive technical and procedural improvements
Performs peer reviews and sign-off for other team members' work to ensure adherence to defined development standards
Key player and leader in an Agile environment, participating in daily huddles, sprint planning, retrospectives, etc.
Continues education in First Command business processes by engaging business partners
Mentors junior team members in best practices and standards
Serves as an escalation point for other team members on technical issues
Works with architect and consultant roles to evaluate new technologies that will define parts of the development technology roadmap
Leads Community of Practices or other internal training opportunities within the development group
Leads troubleshooting and root cause analysis processes
Performs business and technical knowledge transfer with peers
Pursues continued education in additional technologies, agile processes, programming languages, industry best practices, and tools needed within First Command
What skills/qualifications do you need?
Education
Required - Bachelor's degree
Work Experience
6 to 12 years' experience
Required Knowledge, Skills and Abilities
Required - Expert knowledge of programming languages across all three tiers (UI, API, DBMS)
Required - Expert knowledge of SQL or comparable data querying language
Required - Expert knowledge of Git and Development IDE
Required - Extensive experience with web and cloud platforms (Azure Preferred)
Required - Solid Knowledge of DevOps tools and mindset
Required - Solid Knowledge of HTML5/CSS3
Required - Solid Experience with REST and SOAP services
Required - Application of SOLID design principles within solutions
Required - Test Driven Development Experience
Required - Proficiency with Visio or a comparable diagramming tool
Required - Ability to work alongside others as a team player
Required - Ability to lead a team's efforts and direction
Required - Up to date with the latest programming trends, techniques, and technologies
Preferred - Knowledge of infrastructure and networking concepts
Preferred - Infrastructure as Code experience
Senior SDET - Architect
Chicago, IL
Northern Trust is proud to provide innovative financial services and guidance to the world's most successful individuals, families, and institutions by remaining true to our enduring principles of service, expertise, and integrity. With more than 130 years of financial experience and over 22,000 partners, we serve the world's most sophisticated clients using leading technology and exceptional service.
We are seeking a dynamic and innovative Test Architect to lead our Azure Infrastructure as Code (IaC) quality assurance initiatives and contribute to application development projects leveraging Python, Spring Boot, and React. This role bridges the domains of cloud infrastructure, automated testing, and modern application engineering, making it ideal for a technical leader passionate about DevOps, platform reliability, and developer productivity.
Role Overview
As a Test Architect, you will be responsible for designing and implementing robust quality frameworks for Azure-based IaC solutions, driving the adoption of automation best practices, and ensuring infrastructure consistency and compliance across multiple environments. You will also play a key role in developing and integrating supporting applications, ranging from automation scripts to dashboards, using Python for backend logic and Spring Boot/React for full-stack web development.
Key Responsibilities
Architect and evolve comprehensive automated test strategies for Azure IaC, focusing on Terraform, ARM/Bicep, and policy compliance.
Lead the development of Python-based automation tools and scripts for test execution, resource provisioning, configuration validation, and infrastructure reporting.
Design, build, and maintain user-facing dashboards, reporting tools, and workflow automation platforms using Spring Boot and React, enabling data-driven insights into IaC test coverage, drift management, and compliance posture.
Integrate IaC quality gates into CI/CD systems such as GitHub Actions, ensuring all code deployments pass automated test suites and drift detection scans before promotion.
Establish and refine frameworks for drift detection, root cause analysis, and remediation, leveraging both native Azure services and custom-developed solutions.
Collaborate with architects, security specialists, and application developers to align infrastructure test practices with organizational goals, compliance requirements, and evolving cloud technologies.
Promote best practices through code reviews, technical workshops, and documentation.
Design, implement, and maintain comprehensive automated testing suites for Azure IaC using Python.
Develop, execute, and refine test cases to validate infrastructure modules, deployments, and policies in Azure environments.
Develop, enhance, and maintain supporting applications and tools using Python, Spring Boot, and React, facilitating automation, reporting, and dashboarding for IaC quality and drift management.
Monitor, report, and remediate infrastructure drift, using tools and frameworks for continuous compliance and configuration management.
Maintain detailed documentation on test coverage, drift findings, and corrective actions taken to ensure auditability and traceability.
Conduct root cause analysis for infrastructure failures and propose solutions to improve test coverage and resilience.
Stay up-to-date with Azure platform enhancements, testing tools, and industry trends in cloud IaC quality, governance, and full-stack development.
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
14+ years of experience in cloud infrastructure engineering and automated testing, with a minimum of 3 years focused on Azure public cloud.
Expertise in infrastructure automation using Terraform, ARM templates, and Bicep within Azure environments.
Advanced proficiency in Python for developing test automation, orchestration logic, and data processing pipelines.
Strong background in full-stack application development, including building RESTful APIs and web applications with Spring Boot (Java) and React.
Hands-on experience with configuration management, monitoring, and compliance tools native to Azure, as well as industry-standard frameworks (e.g., Terratest, Pester).
Track record of integrating infrastructure quality assurance into modern CI/CD pipelines.
Excellent analytical, problem-solving, and communication skills, with an emphasis on technical documentation and cross-functional collaboration.
Proficient in supporting, maintaining, and enhancing Spring Boot applications, ensuring seamless integration with backend services, optimized performance, and robust security for enterprise-scale cloud environments.
Extensive hands-on experience with Azure-native monitoring tools such as Azure Monitor, Log Analytics, and Application Insights, enabling proactive detection and resolution of infrastructure issues.
Proficiency in integrating monitoring frameworks with automated test suites and reporting dashboards, ensuring visibility into resource health, compliance drift, and system performance.
Preferred Skills
Experience with multi-cloud environments (AWS, GCP) and hybrid IaC strategies.
Familiarity with containerization (Docker, Kubernetes/AKS) and microservices architectures.
Background in building secure, compliant platforms within regulated industries.
Expertise in workflow automation, event-driven architectures, and data visualization using Python, Spring Boot, and React.
Experience with TypeScript, Next.js, or other modern JavaScript frameworks.
Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
Understanding of security best practices (OWASP, JWT, OAuth2, SSO).
Background in performance optimization, caching strategies, and application monitoring.
Exposure to automated testing tools (Jest, Mocha, Selenium, JUnit).
Strong analytical, troubleshooting, and debugging skills.
Relevant Azure and DevOps certifications (AZ-104, AZ-305, DevOps Engineer Expert) are strongly preferred.
Founding Senior Backend Engineer / DeFi
Fremont, CA
We are actively searching for a Founding Backend Engineer to join our team on a permanent basis. In this position you will lead the design and development of the backend infrastructure that powers our protocol (think everything off-chain). If you are someone who is impressed with what Hyperliquid has accomplished, then this role is for you. We are on a mission to build next-generation lending and debt protocols. We are open to both Senior-level and Architect-level candidates for this role.
Your Rhythm:
Drive the architecture, technical design, and implementation of our backend infrastructure
Build and maintain low latency indexing infrastructure
Build and maintain our offline and online data analytic pipelines that power our trading and risk engines
Lead code reviews, providing constructive feedback and ensuring adherence to established coding standards and best practices
Your Vibe:
6+ years of professional software engineering experience
1+ years of experience working on backends interacting with blockchains in production environments
5+ years of experience working with modern backend languages (Go, Rust, Python, etc.) in distributed architectures
Strong knowledge around DeFi products
Open to collaborating onsite a few days a week at our downtown SF office
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401(k)
Unlimited vacation
An actual work/life balance; we aren't trying to run you into the ground. We have families and enjoy life too!
Sr. Solutions Delivery Engineer, Palantir
Dallas, TX
At Trinity Industries, we don't just build railcars and deliver logistics; we shape the future of industrial transportation and infrastructure. As a Senior Forward Deployed Engineer, you'll be on the front lines deploying Palantir Foundry solutions directly into our operations, partnering with business leaders and frontline teams to transform complex requirements into intuitive, scalable solutions. Your work will streamline manufacturing, optimize supply chains, and enhance safety across our enterprise. This is more than a coding role; it's an opportunity to embed yourself in the heart of Trinity's mission, solving real-world challenges that keep goods and people moving across North America.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
End-to-End Solution Delivery:
Autonomously lead the design, development, and deployment of scalable data pipelines, full applications, and workflows in Palantir Foundry, integrating with cloud platforms (e.g., AWS, Azure, GCP) and external sources (e.g., Snowflake, Oracle, REST APIs). Ensure solutions are reliable, secure, and compliant with industry standards (e.g., GDPR, SOX), while handling ambiguity and delivering on-time results in high-stakes environments. Demonstrate deep expertise in Foundry's ecosystem to independently navigate and optimize complex builds
Full Application and Workflow Development:
Build comprehensive, end-to-end applications and automated workflows using Foundry modules such as Workshop, Slate, Quiver, Contour, and Pipeline Builder. Focus on creating intuitive, interactive user experiences that integrate front-end interfaces with robust back-end logic, enabling seamless operational tools like real-time supply chain monitoring systems or AI-driven decision workflows, going beyond data models to deliver fully functional, scalable solutions
Data Modeling and Transformation for Advanced Analytics:
Architect robust data models and ontologies in Foundry to standardize and integrate complex datasets from manufacturing and logistics sources. Develop reusable transformation logic using PySpark, SQL, and Foundry tools (e.g., Pipeline Builder, Code Repositories) to cleanse, enrich, and prepare data for advanced analytics, enabling predictive modeling, AI-driven insights, and operational optimizations like cost reductions or efficiency gains. Focus on creating semantic integrity across domains to support proactive problem-solving and "game-changing" outcomes
Dashboard Development and Visualization:
Build interactive dashboards and applications using Foundry modules (e.g., Workshop, Slate, Quiver, Contour) to provide real-time KPIs, trends, and visualizations for business stakeholders. Leverage these tools to transform raw data into actionable insights, such as supply chain monitoring or performance analytics, enhancing decision-making and user adoption
AI Integration and Impact:
Elevate business transformation by designing and implementing AIP pipelines and integrations that harness AI/ML for high-impact applications, such as predictive analytics in leasing & logistics, anomaly detection in manufacturing, or automated decision-making in supply chains. Drive transformative innovations through AIP's capabilities, integrating Large Language Models (LLMs), TensorFlow, PyTorch, or external APIs to deliver bottom-line results
Leadership and Collaboration:
Serve as a lead FDE on the team, collaborating with team members through hands-on guidance, code reviews, workshops, and troubleshooting. Lead by example in fostering a culture of efficient Foundry building and knowledge-sharing to scale team capabilities
Business Domain Strategy and Innovation:
Deeply understand Trinity's industrial domains (e.g., leasing financials, manufacturing processes, supply chain logistics) to identify stakeholder needs better than they do themselves. Propose and implement disruptive solutions that drive long-term productivity, retention, and business transformation, incorporating interoperable cloud IDEs such as Databricks for complementary data processing and analytics workflows
Collaboration and Stakeholder Engagement:
Work cross-functionally with senior leadership and teams to gather requirements, validate solutions, and ensure trustworthiness in high-stakes projects
What you'll bring:
Bachelor's degree in Computer Science, Engineering, Data Science, Financial Engineering, Econometrics, or a related field required (Master's preferred)
8 plus years of hands-on experience in data engineering, with at least 4 years specializing in Palantir Foundry (e.g., Ontology, Pipelines, AIP, Workshop, Slate), demonstrating deep, autonomous proficiency in building full applications and workflows
Proven expertise in Python, PySpark, SQL, and building scalable ETL workflows, with experience integrating with interoperable cloud IDEs such as Databricks
Demonstrated ability to deliver end-to-end solutions independently, with strong evidence of quantifiable impacts (e.g., "Built pipeline reducing cloud services expenditures by 30%")
Strong business acumen in industrial domains like manufacturing, commercial leasing, supply chain, or logistics, with examples of proactive innovations
Experience collaborating with team members and leadership in technical environments
Excellent problem-solving skills, with a track record of handling ambiguity and driving results in fast-paced settings
Preferred Qualifications
Certifications in Palantir Foundry (e.g., Foundry Data Engineer, Application Developer)
Experience with AI/ML integrations (e.g., TensorFlow, PyTorch, LLMs) within Foundry AIP for predictive analytics
Familiarity with CI/CD tools and cloud services (e.g., AWS, Azure, Google Cloud).
Strongly Desired: Hands-on experience with enterprise visualization platforms such as Qlik, Tableau, or PowerBI to enhance dashboard development and analytics delivery (not required but a significant plus for integrating with Foundry tools).
Sr. Lead, Azure Security - Identity & Authentication
Chicago, IL
Job Description
We are seeking a highly skilled tech lead with deep expertise in security products, authentication, authorization, access management, and governance. As a key member of the Workforce Authentication and Authorization team, you will lead a team that plays a vital role in ensuring the secure implementation of hybrid and cloud solutions.
Requirements/Responsibilities
Lead the identity-centric Workforce Security solutions team to develop authentication and access management solutions
Drive the development of identity solutions, access patterns, and modern security protocols, applying Zero Trust, least-privilege, and defense-in-depth principles
Review and provide feedback on identity and access management security solutions proposed by stakeholders, and provide consultation to partners and IT management
In-depth knowledge of and experience with Entra ID, EPM, Sentinel, Azure, and AWS security
Knowledge of Okta, PingFederate, and entitlement management solutions
Strong knowledge of identity management on Azure AD with OAuth, OIDC, SAML, SSO, MFA, Conditional Access policies, Kerberos, LDAP, identity federation, etc.
Experience providing security solutions for Java-based microservices, React-based frontends, and Android/iOS mobile applications on Azure
Hands-on experience with JWT, session handling, code signing, certificate authentication, TLS/SSL, API security, application registration, application integration scenarios, etc.
Awareness of API Management, Firewalls, DLP, VPNs, DNS, Azure Defender, MCAS, Sentinel, WAFs, Application Gateways, NSGs, App Proxy, Radius clusters, CDN etc.
Good understanding of Cloud Infrastructure Entitlement Management solution (CIEM) to ensure smooth remediation of toxic combinations, high risk entitlements etc.
Understanding and application of threat modeling concepts and methodologies
Understanding of application security, OWASP standards, security best practices, and browser compatibility/storage/cookies
Acts as a workforce cybersecurity expert in solutions spanning end-user computing, proxy solutions, MFA, SSO, conditional access, passwordless authentication, YubiKey and biometric solutions, identity and governance scenarios, secrets management, automation, role-based access control, privileged identity management, just-in-time access, etc.
Participates in solutions supporting token handling, OIDC/OAuth flows, authorization patterns, identity federation, cloud architectures, cryptography, cloud-native services, cloud security, etc.
Deep understanding of cloud security areas such as policies, RBAC, activities, identities, and privileged access management
Ability to support operations in troubleshooting complex identity scenarios, with hands-on experience with Sentinel, KQL, audit logs, etc.
Good understanding of Docker security and container orchestration (Kubernetes)
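Much of the token-handling work above centers on reading and checking JWT claims. As an illustration only (not this team's implementation), here is a minimal Python sketch that parses a JWT's header and payload and checks its time-based claims; the `make_token` helper is a hypothetical demo aid, and production code must additionally verify the signature against the issuer's published keys (e.g. a JWKS endpoint) before trusting any claim.

```python
import base64
import json
import time

def _b64url(data: bytes) -> str:
    # base64url-encode and strip padding, as JWT segments do.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict) -> str:
    """Hypothetical demo helper: build an *unsigned* JWT-shaped token."""
    header = _b64url(json.dumps({"alg": "RS256"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    return f"{header}.{payload}.demo-signature"

def _b64url_decode(segment: str) -> bytes:
    # Restore the padding that base64url encoding strips.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def inspect_jwt(token: str) -> dict:
    """Parse a JWT's header/payload and check basic time claims.

    Inspection only: real systems must also verify the signature
    before trusting any claim in the payload.
    """
    header_b64, payload_b64, _sig = token.split(".")
    header = json.loads(_b64url_decode(header_b64))
    payload = json.loads(_b64url_decode(payload_b64))
    now = time.time()
    if "exp" in payload and now >= payload["exp"]:
        raise ValueError("token expired")
    if "nbf" in payload and now < payload["nbf"]:
        raise ValueError("token not yet valid")
    return {"alg": header.get("alg"), "claims": payload}
```

The same claim checks (`exp`, `nbf`) underpin session handling and conditional access decisions in the platforms listed above.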
Qualifications
Bachelor's degree in computer science or a related discipline and experience in information security, or an equivalent combination of education and work experience.
Deep knowledge of application or infrastructure systems architecture, usually having experience with multiple system technologies.
Excellent consultative and communication skills, and the ability to work effectively with client, partner, and IT management and staff.
Ten years of experience in information security roles, including five years as a tech lead
CISSP, CCSP, or other cloud security certification preferred
Strong collaboration skills and analytical ability
Azure and AWS security certifications preferred
Sr. GitLab Engineer
Dallas, TX jobs
Job Title: GitLab Engineer
Duration: 6+ Months
Required Skills and Experience:
8+ years of experience in DevOps or Platform Engineering roles
3+ years of hands-on experience with GitLab CI/CD, GitLab Runners, and GitLab administration
Proficiency with scripting and automation (e.g., Bash, Python, or Go)
Experience with infrastructure-as-code tools (Terraform, Ansible, etc.)
Solid understanding of containerization (Docker) and orchestration (Kubernetes)
Familiarity with cloud platforms (AWS, GCP, Azure) and cloud-native tooling
Strong communication skills and a track record of cross-team collaboration
Knowledge of JFrog Artifactory, Bitbucket/Git, SVN, and other SCM tools
Knowledge of desired state configuration, automated deployment, continuous integration, and release engineering tools such as Puppet, Chef, Jenkins, Bamboo, Maven, Ant, etc.
Plan and execute the end-to-end migration from Jenkins and Automic to GitLab CI/CD
Configure and manage GitLab Runners, Groups, Projects, and Permissions at scale
Harden GitLab for enterprise usage (SAML/SSO, LDAP, RBAC, backup/restore)
Design, implement, and optimize complex GitLab CI/CD pipelines using YAML best practices
Implement multi-stage, parallel, and conditional workflows for build, test, security scan, and deploy
Integrate static code analysis, security scanning (SAST/DAST), and container scanning into pipelines
Analyze existing Jenkins freestyle and scripted pipelines; translate them to GitLab CI/CD syntax
Migrate Automic workflows/jobs into GitLab, orchestrating dependencies and scheduling
Reduce pipeline execution time through caching, artifact reuse, and pipeline templating
Lead the migration of container build and deployment processes from Docker to Podman/Buildah
Author GitLab CI templates for Podman-based image builds and registries
Ensure the security and compliance of container images (signing, vulnerability scanning)
Leverage Terraform, Ansible, or similar to provision and manage self-hosted GitLab and runners
Implement GitOps practices to manage infrastructure and environment configurations
Automate operational tasks and incident remediation via pipelines and scripts
Partner with application teams to onboard them onto GitLab workflows and best practices
Develop and maintain clear runbooks, wiki pages, and pipeline templates
Conduct workshops, brown-bags, and training sessions to evangelize GitLab CI/CD
Integrate monitoring (Prometheus/Grafana, ELK) for GitLab health and pipeline performance
Implement policies and guardrails to ensure code quality, compliance, and security posture
Troubleshoot and resolve CI/CD or migration-related incidents in a timely manner
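Migrating Automic workflows (above) largely amounts to reconstructing job dependencies so GitLab can express them with `stage:` or `needs:`. As a rough sketch under hypothetical job names, the core step is grouping jobs into parallelizable waves via topological sort:

```python
def stage_order(jobs: dict[str, set[str]]) -> list[set[str]]:
    """Group jobs into ordered waves: every job in wave N depends only
    on jobs in earlier waves, so members of a wave can run in parallel
    (GitLab expresses the same structure with `stage:` or `needs:`)."""
    indegree = {job: len(deps) for job, deps in jobs.items()}
    dependents = {job: set() for job in jobs}
    for job, deps in jobs.items():
        for dep in deps:
            dependents[dep].add(job)
    # Jobs with no dependencies form the first wave.
    ready = {job for job, n in indegree.items() if n == 0}
    waves = []
    while ready:
        waves.append(ready)
        next_wave = set()
        for job in ready:
            for child in dependents[job]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    next_wave.add(child)
        ready = next_wave
    if sum(len(w) for w in waves) != len(jobs):
        raise ValueError("dependency cycle detected")
    return waves
```

For example, a build job feeding unit tests and a SAST scan, which both gate a deploy, yields three waves; the middle wave's two jobs can run as parallel GitLab jobs in the same stage.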
Preferred:
A BS in Computer Science or equivalent work experience with good scripting/programming skills
GitLab Certified CI/CD Specialist or GitLab Administrator Certification
Contributions to the GitLab open-source project or similar tooling
Prior software experience with build management, configuration management and/or quality testing
Experience with SCM practices including Agile, continuous integration (CI) and continuous deployment (CD)
Data Engineer
Houston, TX jobs
We are looking for a talented and motivated Python Data Engineer to help expand our data assets in support of our analytical capabilities in a full-time role. You will have the opportunity to interface directly with our traders, analysts, researchers, and data scientists to drive out requirements and deliver on a wide range of data-related needs.
What you will do:
- Translate business requirements into technical deliverables; drive out requirements for data ingestion and access
- Maintain the cleanliness of our Python codebase, while adhering to existing designs and coding conventions as much as possible
- Contribute to our developer tools and Python ETL toolkit, including standardization and consolidation of core functionality
- Efficiently coordinate with the rest of our team in different locations
Qualifications
- 6+ years of enterprise-level coding experience with Python
- Computer Science, MIS or related degree
- Familiarity with Pandas and NumPy packages
- Experience with Data Engineering and building data pipelines
- Experience scraping websites with Requests, Beautiful Soup, Selenium, etc.
- Strong understanding of object-oriented design, design patterns, and SOA architectures
- Proficient understanding of peer review, code versioning, and bug/issue tracking tools
- Strong communication skills
- Familiarity with containerization solutions like Docker and Kubernetes is a plus
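For the scraping work mentioned above, here is a self-contained sketch using only the standard library's `html.parser` (in practice you would fetch pages with Requests and parse with Beautiful Soup or Selenium, as listed; the HTML snippet is hypothetical):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs from anchor tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the anchor currently open, if any
        self._text = []     # text fragments inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

parser = LinkExtractor()
parser.feed('<ul><li><a href="/a">First</a></li>'
            '<li><a href="/b">Second</a></li></ul>')
```

The same extraction logic carries over directly to Beautiful Soup's `find_all("a")`; the stdlib version just keeps the sketch dependency-free.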
Data Engineer
Irvine, CA jobs
Thank you for stopping by to take a look at the Data Integration Engineer role I posted here on LinkedIn, I appreciate it.
If you have read my posts in the past, you will recognize how I write job descriptions. If you are new, allow me to introduce myself. My name is Tom Welke. I am Partner & VP at RSM Solutions, Inc and I have been recruiting technical talent for more than 23 years, and have been in the tech space since the 1990s. Due to this, I actually write JDs myself...no AI, no 'bots', just a real live human. I realized a while back that looking for work is about as fun as a root canal with no anesthesia...especially now. So, rather than saying 'must work well with others' and 'team mindset', I do away with that kind of nonsense and just tell it like it is.
So, as with every role I work on, social fit is almost as important as technical fit. For this one, technical fit is very very important. But, we also have some social fit characteristics that are important. This is the kind of place that requires people to dive in and learn. The hiring manager for this one is actually a very dear friend of mine. He said something interesting to me not all that long ago. He mentioned, if you aren't spending at least an hour a day learning something new, you really are doing yourself a disservice. This is that classic environment where no one says 'this is not my job'. So that ability to jump in and help is needed for success in this role.
This role is being done onsite in Irvine, California. I prefer working with candidates that are already local to the area. If you need to relocate, that is fine, but there are no relocation dollars available.
I can only work with US Citizens or Green Card Holders for this role. I cannot work with H1, OPT, EAD, F1, H4, or anyone that is not already a US Citizen or Green Card Holder for this role.
The Data Engineer role is similar to the Data Integration role I posted. However, this one is more Ops-focused, involving the orchestration of deployment and MLflow, including orchestrating and using data on the clusters and managing how the models are performing. This role focuses on coding and configuring on the ML side of the house.
You will be designing, automating, and observing end-to-end data pipelines that feed this client's Kubeflow-driven machine learning platform, ensuring models are trained, deployed, and monitored on trustworthy, well-governed data. You will build batch/stream workflows, wire them into Azure DevOps CI/CD, and surface real-time health metrics in Prometheus + Grafana dashboards to guarantee data availability. The role bridges Data Engineering and MLOps, allowing data scientists to focus on experimentation while the business gains rapid, reliable predictive insight.
Here are some of the main responsibilities:
Design and implement batch and streaming pipelines in Apache Spark running on Kubernetes and Kubeflow Pipelines to hydrate feature stores and training datasets.
Build high-throughput ETL/ELT jobs with SSIS, SSAS, and T-SQL against MS SQL Server, applying Data Vault-style modeling patterns for auditability.
Integrate source control, build, and release automation using GitHub Actions and Azure DevOps for every pipeline component.
Instrument pipelines with Prometheus exporters and visualize SLA, latency, and error budget metrics to enable proactive alerting.
Create automated data quality and schema drift checks; surface anomalies to support a rapid incident response process.
Use MLflow Tracking and Model Registry to version artifacts, parameters, and metrics for reproducible experiments and safe rollbacks.
Work with data scientists to automate model retraining and deployment triggers within Kubeflow based on data freshness or concept drift signals.
Develop PowerShell and .NET utilities to orchestrate job dependencies, manage secrets, and publish telemetry to Azure Monitor.
Optimize Spark and SQL workloads through indexing, partitioning, and cluster sizing strategies, benchmarking performance in CI pipelines.
Document lineage, ownership, and retention policies; ensure pipelines conform to PCI/SOX and internal data governance standards.
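One of the responsibilities above is automated schema drift checks. A minimal, library-free sketch of the idea, comparing an expected column contract against an incoming batch (the column names and types are hypothetical):

```python
def check_schema_drift(expected: dict[str, str],
                       actual: dict[str, str]) -> dict[str, list]:
    """Compare an expected column->type contract against the schema of
    an incoming batch and report missing, new, and retyped columns."""
    missing = sorted(set(expected) - set(actual))
    added = sorted(set(actual) - set(expected))
    retyped = sorted(
        col for col in set(expected) & set(actual)
        if expected[col] != actual[col]
    )
    return {"missing": missing, "added": added, "retyped": retyped}
```

In a real pipeline the `actual` dict would come from the Spark DataFrame's schema, and a non-empty result would raise an alert through the Prometheus exporters described above.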
Here is what we are seeking:
At least 6 years of experience building data pipelines in Spark or equivalent.
At least 2 years deploying workloads on Kubernetes/Kubeflow.
At least 2 years of experience with MLflow or similar experiment‑tracking tools.
At least 6 years of experience in T‑SQL, Python/Scala for Spark.
At least 6 years of PowerShell/.NET scripting.
At least 6 years of experience with GitHub, Azure DevOps, Prometheus, Grafana, and SSIS/SSAS.
Kubernetes CKA/CKAD, Azure Data Engineer (DP‑203), or MLOps‑focused certifications (e.g., Kubeflow or MLflow) would be great to see.
The ability to mentor engineers on best practices in containerized data engineering and MLOps.
Data Platform Engineer / AI Workloads
Sunnyvale, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role, you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
DevOps Engineer
Chicago, IL jobs
Role*
DevOps Engineer
Mandatory Technical Skills*
6+ years of AWS DevOps and cloud services experience
Storage - S3
Networking and content delivery - API Gateway, VPC endpoints, IAM roles/users, VPC, subnets, routes, Route 53, load balancers, target groups, listener rules
Compute - EC2, serverless (Lambda)
Security and identity - IAM, security groups, target groups, security at rest and in transit, SSM, KMS
Application integration - SNS/SQS, EventBridge
Containers - Docker, AWS EKS, Terraform, GitOps, Argo CD
Scripting - Bash, Shell, Python, PowerShell, Helm charts
Monitoring - Dynatrace, CloudWatch, Kubernetes
CI/CD pipelines with Harness Automation Tool
Additional Technical Skills*
• Experience in automation, providing CI/CD pipelines with Harness and AWS CodePipeline; addressing AWS EKS (GitOps approach with Helm charts and Argo CD) and Lambda deployment issues, with working knowledge of both services using .NET, Java, and Python
• Experience in GitOps approach with ArgoCd and Helm charts
• Experience in AWS Cloud infrastructure - Lambda, DynamoDB, S3, RDS, EBS, ECS, EKS, ECR, EFS, EC2, Route 53, ELB, API Gateway, AppSync, Auto Scaling, Step Functions, CloudWatch, CodeArtifact, IAM
• Experience in Configuration management - Ansible / Ansible Tower
• Code quality and Security Control - SonarQube. Experience in SAST and DAST Scanning mainly with Veracode.
• Hands-on experience in migration from on-premises to cloud
• Continuous Integrations - Harness, TeamCity, Jenkins
• Scripting - Bash, Shell, Python, PowerShell
• Monitoring - ELK, Dynatrace, CloudWatch
• Source code management - GIT, BitBucket, GitHub, GitHub Actions
• Containerization + Orchestration - Docker, Docker-Compose, AWS EKS, Kubernetes
Good to have skills*
Harness, OIDC setup, Disaster Recovery, .Net and Java (Moderate level)
Key responsibilities*
• Understanding customer requirements and implementing various development, testing, and automation tools and IT infrastructure
• Responsible for setting up the end-to-end code pipeline for application deployment and its maintenance
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operation.
• Proficient with Lambda and EKS; working experience with .NET and Java programs on AWS EKS and AWS Lambda
• Troubleshooting and fixing bugs as required
• Monitoring the processes during the entire lifecycle for its adherence and updating or creating new processes for improvement
• Encouraging and building automated processes wherever possible
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Strive for continuous improvement and build continuous integration and continuous deployment pipelines (CI/CD)
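As a rough illustration of the serverless side of this stack (an assumption about the workload, not this team's code), here is a minimal SQS-triggered Lambda handler that returns a partial batch response, so only the messages that failed are retried:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for an SQS-triggered function.

    Parses each record's JSON body and reports per-message failures in
    the `batchItemFailures` shape so SQS retries only those messages.
    """
    failures = []
    processed = 0
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            # ... business logic on `body` would go here ...
            processed += 1
        except (json.JSONDecodeError, KeyError):
            failures.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failures, "processed": processed}
```

Enabling partial batch responses on the event source mapping keeps a single malformed message from forcing the whole batch back onto the queue.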
Data Platform Engineer / AI Workloads
Sonoma, CA jobs
We are actively searching for a Data Infrastructure Engineer to join our team on a permanent basis. In this founding engineer role, you will focus on building next-generation data infrastructure for our AI platform. If you have a passion for distributed systems, unified storage, orchestration, and retrieval for AI workloads, we would love to speak with you.
Your Rhythm:
Design, build, and maintain data infrastructure systems such as distributed compute, data orchestration, distributed storage, streaming infrastructure, and machine learning infrastructure, while ensuring scalability, reliability, and security
Ensure our data platform can scale by orders of magnitude while remaining reliable and efficient
Tackle complex challenges in distributed systems, databases, and AI infrastructure
Collaborate with technical leadership to define and refine the product roadmap
Write high-quality, well-tested, and maintainable code
Contribute to the open-source community and engage with developers in the space
Your Vibe:
5+ years of experience designing and building distributed database systems
Expertise in building and operating scalable, reliable and secure database infrastructure systems
Strong knowledge around distributed compute, data orchestration, distributed storage, streaming infrastructure
Strong knowledge of SQL and NoSQL databases, such as MySQL, Postgres, and MongoDB.
Programming skills in Python
Passion for building developer tools and scalable infrastructure
Our Vibe:
Relaxed work environment
100% paid, top-of-the-line health care benefits
Full ownership, no micromanagement
Strong equity package
401K
Unlimited vacation
An actual work/life balance, we aren't trying to run you into the ground. We have families and enjoy life too!
Sr. Data Engineer
Dallas, TX jobs
Trinity Industries is searching for a Sr. Data Engineer to join our Data Analytics team in Dallas, TX! The successful candidate will work with the Trinity Rail teams to develop and maintain data pipelines in Azure utilizing Databricks, Python and SQL.
Join our team today and be a part of Delivering Goods for the Good of All!
What you'll do:
Facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
Work with leadership to prioritize business and information needs
Engage with product and app development teams to gather requirements, and create technical requirements
Utilize and implement data engineering best practices and coding strategies
Be responsible for data ingress into storage
What you'll need:
Bachelor's degree in Computer Science, Information Management, or a related field required; Master's preferred
8+ years in data engineering including prior experience in data transformation
Databricks experience building data pipelines using the medallion architecture (bronze to gold)
Advanced skills in Spark and structured streaming, SQL, Python
Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
Experience with data conversion, interface and report development
Experience working with IOT and/or geospatial data in a cloud environment (Azure)
Adept at queries, report writing and presenting findings
Prior experience coding utilizing repositories and multiple coding environments
Must possess effective communication skills, both verbal and written
Strong organizational, time management and multi-tasking skills
Process improvement and automation a plus
Nice to have:
Databricks Data Engineering Associate or Professional certification (2023 or later)
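The medallion architecture called out above moves data through bronze (raw), silver (validated and typed), and gold (aggregated) layers. A toy, pure-Python stand-in for what would be Databricks/Spark jobs, using hypothetical sensor records:

```python
# Bronze: raw records exactly as ingested (hypothetical sensor feed).
bronze = [
    {"device": "A", "temp": "21.5", "ts": "2024-01-01"},
    {"device": "A", "temp": "22.5", "ts": "2024-01-02"},
    {"device": "B", "temp": "bad",  "ts": "2024-01-01"},
    {"device": "B", "temp": "19.0", "ts": "2024-01-02"},
]

def to_silver(rows):
    """Silver layer: validated, typed records; unparseable rows dropped."""
    out = []
    for r in rows:
        try:
            out.append({"device": r["device"],
                        "temp": float(r["temp"]),
                        "ts": r["ts"]})
        except (ValueError, KeyError):
            continue  # a real pipeline would quarantine, not discard
    return out

def to_gold(rows):
    """Gold layer: per-device average temperature, ready for analytics."""
    agg = {}
    for r in rows:
        agg.setdefault(r["device"], []).append(r["temp"])
    return {device: sum(vals) / len(vals) for device, vals in agg.items()}

silver = to_silver(bronze)
gold = to_gold(silver)
```

In Databricks each layer would be a Delta table, with the silver and gold steps written as Spark (or structured streaming) transformations, but the layered shape of the flow is the same.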
IT Software Development Intern
Austin, TX jobs
Who we are:
Farm Credit Bank of Texas is a $40.9 billion wholesale bank that has been financing agriculture and rural America for over 100 years. Headquartered in Austin, Texas, we provide funding and services to rural lending associations in five states, and we are active in the nation's capital markets.
While you may not be familiar with our name, Farm Credit Bank of Texas plays a critical role in supporting the businesses that make it possible for America to maintain access to an affordable and safe food supply, an industry which is one of the most innovative and evolving of our time. And while you help us deliver on our mission, we deliver on our commitment to you as a valued employee by providing competitive compensation, generous health and wellness benefits packages and an attractive hybrid workplace located along the bluffs of the Colorado River just minutes west of downtown Austin.
We seek out top talent in their fields, whether it be technology, finance, accounting, credit, human resources, or other administrative functions, and welcome you to join us in our mission to feed the world.
Your Future in Tech Starts Here:
Are you a problem-solver, a code enthusiast, or someone who loves exploring the latest in AI and automation? Do you want to work on real projects that make an impact instead of just shadowing someone else? If so, this IT Software Development Internship is built for you.
We're looking for driven, tech-savvy students ready to bridge the gap between classroom knowledge and hands-on experience. Here, you'll be part of a dynamic team, developing software, automating processes, and exploring cutting-edge technologies, all while being mentored by industry pros. This position is generally a 3-month paid assignment (May-August), and may be shorter or longer based on business needs.
What You'll Get to Do:
Code Like a Pro - Develop applications in .NET, Python, Java, and React while learning best practices in clean, efficient coding.
Automate Everything - Design, test, and implement automation scripts that improve workflow and efficiency.
Dive into AI & Cybersecurity - Explore artificial intelligence, machine learning, and application security to gain future-proof skills.
Solve Real-World Problems - Work on live projects that contribute to business success, not just hypothetical case studies.
Be Mentored by Experts - Learn from experienced developers who are ready to help you grow and sharpen your skills.
Collaborate & Innovate - Work with a team to enhance automation, AI solutions, and technical infrastructure.
Who We're Looking For:
Currently pursuing a bachelor's degree in computer science, business, or a related field.
Must be an upperclassman (junior or senior) enrolled in a college or university program.
You have a passion for technology and problem-solving.
You have some experience with programming (C#, Python, Java, or React preferred).
You're eager to learn about automation, AI, and cybersecurity.
You're a team player with strong communication skills.
You're ready to apply what you've learned in a fast-paced, real-world environment.
Why This Internship:
Hands-On Experience - No busy work here; you'll be writing code, troubleshooting, and contributing to meaningful projects.
Skill Development - Gain in-demand skills that will make you stand out in today's competitive job market.
Flexibility - Work around your academic schedule while getting valuable industry experience.
Career Growth - Impress future employers with real-world projects on your resume.
This isn't just an internship; it's a launchpad for your future career in technology. If you're ready to turn knowledge into experience, apply today and let's build something great together!
Our culture:
In a world filled with unpredictable challenges, we invest in our people and ensure they have dependable careers with ample growth opportunities. As part of the larger Farm Credit System, we focus on building our culture around personal relationships and the ability to be connected to leadership through in-person conversations, regular town halls and employee engagement events. We are deeply committed to attracting and fostering a diverse workforce, development and career advancement and recognizing the hard work of individuals who contribute to our success.
Important note: We care about your hiring process and take it seriously. A real person will review your applications, meaning response timelines may vary. The interviewing process at Farm Credit Bank of Texas may include phone calls and emails, on-site interviews, and requests for portfolios or demonstrations of work. We cannot personally follow-up with each applicant, and we will do our best to create a professional, respectful, and thorough process for candidates with whom we identify as a potential fit.
A/EOE/M/F/D/V
Software Developer Intern
Chicago, IL jobs
Pay Range: $45 - $50 per hour
The Opportunity
Group One Trading, a dynamic options trading firm, is actively seeking highly-motivated people who are interested in learning and getting hands-on experience with electronic trading systems. Our system is built on the .NET platform using C#. The system has enough variety to challenge and interest anyone: an information-dense user interface, a high-throughput, low latency parallel computational core, service-oriented architecture, and interfaces to exchanges' market data and execution systems.
Who We Are
We are committed to creating a diverse environment and are proud to be an equal opportunity employer. At Group One, we value transparency and collaboration coming from unique perspectives and backgrounds. We strive to create a workplace in which all employees have an opportunity to participate and contribute to the success of the business.
What We Do
Group One is a proprietary trading firm specializing in market making and liquidity providing strategies in options markets. Our traders provide competitive liquidity across a broad range of securities by managing portfolios of several hundred issues and simultaneously streaming quotes across multiple exchanges. Our systems and network infrastructure are vital in ensuring success.
The Skill Set
During this internship, you will be mentored by senior developers and gain exposure to the business of equity options trading. You will gain real-world experience with a team that develops proprietary software in an ever-changing, fast-paced, and competitive environment.
A broad, multi-platform skill set will help you be successful in this role, but there will be plenty of learning opportunities no matter how many of these skills you bring in with you. We also encourage applicants with non-traditional backgrounds to apply.
* Completing or working towards a BS, MS, or Ph.D. in Computer Science, Computer Engineering, or a related technical field.
* Familiarity with a C-style object-oriented language: C#, Java, C++, or Objective-C.
* Solid Computer Science and software development fundamentals.
* Desire to learn and grow as a software developer.
* Ability to effectively communicate and collaborate with colleagues across different functional units and locations.
Within 1 Month, You'll
* Complete your initial orientation, become familiar with our environment, our infrastructure, and our business.
* Learn the roles and skillsets of your Dev teammates to discern and utilize the best advice and information for yourself and others.
Within 3 Months, You'll
* Engage and contribute directly to projects with real implications for our traders and company.
The Benefits
We offer a competitive compensation package which includes company-paid housing or housing stipend, individualized mentorship, fresh fruit and snacks daily, casual dress environment, social events, and more! In-person interview expenses for travel and childcare will be reimbursed by Group One.