Systems Engineer, Client Engineering
Austin, TX
Our team owns the Client Management Systems, Application infrastructure, and Collaboration tools globally for Amazon. We build complex multi-platform solutions that help our customers get more done while improving their user experience. We are looking for a Systems Engineer with a macOS-focused infrastructure and development background. We are using AWS products to evolve traditional enterprise tools and services at a large scale. If you are passionate about AWS and have experience automating large-scale macOS infrastructure deployments, keep reading.
In this role, you and your team will directly influence macOS endpoint management at Amazon. You will use native AWS services, external third-party services, and DevOps patterns to manage a growing fleet of macOS client devices. We are looking for a motivated team member to deliver results for our customers. This is a hands-on position where your daily activities will range from new system development to supporting the customers that depend on your services. You will use AWS products and DevOps patterns to build the next generation of macOS-based management tools.
Key job responsibilities
· Contribute to automation efforts for a team advancing a service ownership culture, with a primary focus on macOS clients
· Embrace the AWS ecosystem and drive innovation on top of it
· Instill a culture that drives DevOps, holds a high quality bar with code reviews, and drives automation efforts that empower your team and remove barriers
· Translate complex functional and technical requirements into detailed architectures and designs
· Work with others on the engineering team in day-to-day development activities, contributing to architecture decisions, designs, design reviews, and implementation
· Build services that can scale across hundreds of thousands of clients used by Amazonians worldwide
· Deliver quality features on time and on budget, executing against project plans and delivery commitments
· Maintain current technical knowledge of rapidly changing technology, stay on the lookout for new technologies, and work with management and development teams to evolve current processes
A day in the life
Review change requests and prioritize OS configuration tasks. Attend team meetings to discuss ongoing projects and challenges. Test new configurations in sandbox environments, documenting issues and collaborating with the security team. Prepare and execute deployment plans, starting with non-critical systems. Monitor deployments, troubleshoot issues, and make necessary adjustments. Respond to urgent configuration problems as they arise. Stay updated on OS patches and maintain accurate documentation. Prepare reports on deployment progress and plan future rollouts. Balance proactive improvements with reactive problem-solving to ensure optimal system performance. Continuously update knowledge and skills to keep pace with evolving OS technologies.
BASIC QUALIFICATIONS
- 4+ years of site reliability engineering (SRE), systems engineering, systems administration, DevOps, security administration, or network administration experience
- Bachelor's degree in Systems Engineering, Computer Science, or related field or relevant work experience
- Experience in site reliability engineering (SRE), systems engineering, systems administration, DevOps, security administration, or network administration
- Experience in systems engineering
- Experience in any of the following: Python, Java, Perl, PHP, Ruby, Bash, Shell or equivalent
- 5+ years of engineering work managing large-scale services experience
PREFERRED QUALIFICATIONS
- Knowledge of TCP/IP and networking protocols such as HTTP and DNS
- Experience designing and developing scripts to automate operational burdens and reviewing scripting changes to ensure they meet the standards for maintainability, scalability and security
- Experience working in a 24/7 production environment
- Knowledge of configuration management systems, such as Puppet, Chef, Ansible, or related systems
- 7+ years of macOS administration at scale
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $94,000/year in our lowest geographic market up to $207,900/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Rust Engineer
Palo Alto, CA
Mission: To build control and configuration software for residential energy IoT products such as Powerwall, Solar Inverter, or Wall Connector.
Day to Day:
The Energy Residential Device Software team is looking for a Software Engineer to build control and configuration software for residential energy products such as Powerwall, Solar Inverter, or Wall Connector.
These products resemble other, more familiar IoT products in many ways: They need to be installed, configured, and diagnosed.
Once operational, they need to run autonomously, respond to commands, and provide customers with useful telemetry. All of this requires control and configuration tools that are reliable and resilient, which is where you come in.
With your help, we can make installing more renewable energy technology easier, faster, and cheaper.
Our software stack is as diverse as our products.
It includes embedded and Linux-based systems, web and native apps, cloud services and local IoT protocols.
Pragmatism, willingness to dive into new codebases, eagerness to work with stakeholders, and engineering leadership are key strengths we expect you to bring to the table.
You will also be responsible for the following:
Collaborate with Product Managers and Engineers from other disciplines to develop designs and specifications
Work with other engineering teams to develop APIs
Contribute to overall system architecture
Develop modern applications for installation, configuration, and diagnosis
Provide technical leadership and innovation to improve developer productivity, product reliability, and overall system resiliency
Must Haves:
1. Strong experience with Rust
2. Real-world experience with Internet and IoT protocols (e.g., HTTP, REST, websockets, gRPC, Matter)
3. Hands-on experience building devices (IoT or embedded Linux devices)
4. Experience with C and C++ specifically in an embedded environment
Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays, as applicable.
Plumbing Engineer
Houston, TX
Plumbing Engineer-in-Training (EIT I) - Design the Systems That Keep Buildings Flowing
Are you passionate about designing essential systems that bring buildings to life? We're seeking a motivated Plumbing Engineer-in-Training (0-2 years of experience) to join a collaborative MEP (Mechanical, Electrical, and Plumbing) engineering team.
In this role, you'll contribute to designing innovative and sustainable plumbing systems for educational, commercial, and institutional projects. You'll gain hands-on experience working with senior engineers, learning the full design process, from concept to construction, while building a strong foundation for your professional growth.
Key Responsibilities
Assist in the design and documentation of plumbing systems, including domestic water, sanitary waste, storm drainage, and fire protection.
Perform engineering calculations and support the production of AutoCAD and Revit drawings.
Review shop drawings, RFIs, and submittals related to plumbing system design.
Conduct field observations to verify installations and ensure alignment with design intent.
Collaborate with cross-discipline teams to deliver high-quality, coordinated construction documents.
Requirements
Bachelor's degree in Mechanical Engineering or a related field.
Successful completion of the Fundamentals of Engineering (F.E.) exam.
Certified or eligible for certification as an Engineer-in-Training (EIT).
Excellent attention to detail, organization, and problem-solving skills.
Ability to manage multiple projects while collaborating effectively in a team environment.
Preferred Qualifications
Familiarity with AutoCAD, Revit, Bluebeam, and Microsoft Office Suite.
Working knowledge of plumbing and mechanical codes (Uniform Plumbing Code, NFPA standards).
Prior internship or experience in an MEP consulting environment.
Understanding of water distribution, drainage, gas, and fire protection systems.
What We Offer
Competitive salary commensurate with experience.
Comprehensive health insurance, 401(k), and paid holidays/PTO.
Mentorship and professional development opportunities.
Exposure to diverse and meaningful building projects.
A collaborative, supportive team environment that values innovation and growth.
Backend Engineer - Python / API (Onsite)
Beverly Hills, CA
CGS Business Solutions is committed to helping you, as an esteemed IT Professional, find the next right step in your career. We match professionals like you to rewarding consulting or full-time opportunities in your area of expertise. We are currently seeking Technical Professionals who are searching for challenging and rewarding jobs for the following opportunity:
CGS is hiring on behalf of one of our Risk & Protection Services clients in the West LA area for a full-time role. We're looking for a strategic Backend Engineer to join a high-growth team building next-generation technology. In this role, you'll play a critical part in architecting and delivering scalable backend services that power an AI-native agent workspace. You'll translate complex business needs into secure, high-performance, and maintainable systems. This opportunity is ideal for a hands-on engineer who excels at designing cloud-native architectures and thrives in a fast-paced, highly collaborative startup environment.
What You'll Do
• Partner closely with engineering, product, and operations to define high-impact problems and craft the right technical solutions.
• Design and deliver scalable backend systems using modern architectures and best practices.
• Build Python APIs and complex backend logic on top of AWS serverless infrastructure (see the sketch after this list).
• Contribute to the architecture and evolution of core system components.
• Elevate engineering standards, tooling, and backend development processes across the team.
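For illustration, here is a minimal sketch of the Python API work described in the list above, assuming Flask behind an AWS API Gateway/Lambda deployment via a WSGI adapter; the `/agents` resource, the in-memory store, and all names are hypothetical.

```python
# Minimal Flask API sketch; a real service would back this with DynamoDB or RDS
# and run behind API Gateway + Lambda (e.g. via a WSGI adapter) or a container.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store used only for illustration.
AGENTS: dict[int, dict] = {}

@app.post("/agents")
def create_agent():
    payload = request.get_json(force=True)
    agent_id = len(AGENTS) + 1
    AGENTS[agent_id] = {"id": agent_id, "name": payload.get("name", "unnamed")}
    return jsonify(AGENTS[agent_id]), 201

@app.get("/agents/<int:agent_id>")
def get_agent(agent_id: int):
    agent = AGENTS.get(agent_id)
    if agent is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(agent)

if __name__ == "__main__":
    app.run(debug=True)
```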
Who You Are
• 6+ years of software engineering experience, with deep expertise in building end-to-end systems and a strong backend focus.
• Expert-level proficiency in Python and API development with Flask
• Strong understanding of AWS and cloud-native architecture.
• Experience with distributed systems, APIs, and data modeling.
• Proven ability to architect and optimize systems for performance and reliability.
• Excellent technical judgment and ability to drive clarity and execution in ambiguous environments.
• Experience in insurance or enterprise SaaS is a strong plus.
About CGS Business Solutions: CGS specializes in IT business solutions, staffing, and consulting services, with a strong focus on IT Applications, Network Infrastructure, Information Security, and Engineering. CGS is an INC 5000 company and is honored to be selected as one of the Best IT Recruitment Firms in California. After five consecutive Fastest Growing Company titles, CGS continues to break into new markets across the USA. Companies are counting on CGS to attract and help retain these resource pools in order to gain a competitive advantage in rapidly changing business environments.
ServiceNow ITSM/Platform Engineer
Plano, TX
📌 Job Title: ServiceNow ITSM / Platform Admin & Engineer
💼 Employment Type: Contract (W2 Only)
About the Role
We are seeking a highly experienced ServiceNow ITSM / Platform Admin & Engineer to design, implement, and manage complex ServiceNow environments. This role requires deep hands-on expertise in ServiceNow ITSM modules, platform architecture, integrations, and the latest innovations such as Virtual Agent, Now Assist, and custom GenAI workflows. The ideal candidate will have a strong understanding of ITIL processes, technical architecture, and the ability to lead end-to-end ServiceNow initiatives that enhance service delivery and user experience.
Required Skills & Qualifications
12+ years of overall IT experience, including 5-8+ years in ServiceNow administration, development, or architecture.
Strong hands-on experience with ServiceNow ITSM modules or platform administration.
Proficiency in JavaScript, HTML, CSS, Glide API, and ServiceNow scripting.
Deep understanding of ServiceNow architecture, MID Server, high availability, and disaster recovery.
Experience in designing and deploying complex ServiceNow solutions, including Virtual Agent, Now Assist, and GenAI workflows.
Solid understanding of ITIL processes and frameworks.
Experience with ServiceNow integrations and third-party tool connectivity.
Excellent problem-solving, analytical, and stakeholder management skills.
Experience working in Agile/Scrum teams.
Nice to Have
ServiceNow Certified System Administrator (CSA) and/or Certified Implementation Specialist - ITSM (CIS-ITSM).
Experience with additional ServiceNow modules such as ITOM, ITAM, CSM, or HRSD.
Familiarity with Flow Designer, IntegrationHub, Performance Analytics, and other advanced ServiceNow capabilities.
ServiceNow ITSM Engineer (W2 Contract Only) - Independent Candidates Only - Plano, TX / Richmond, VA
Plano, TX
Role: ServiceNow Developer
Keywords: ServiceNow Workflows, Service Catalog, REST/SOAP APIs, IntegrationHub, Flow Designer
● Experience supporting technical implementation of various ServiceNow modules such as Change Management, Incident Management, Problem Management, and Service Catalog.
● Configure and enhance the Service Catalog, including request forms, catalog items, and fulfillment processes.
● Utilize IntegrationHub for building and managing complex integrations.
● Develop and maintain solutions using Flow Designer for process automation.
● Proven experience with ServiceNow Workflows development and customization.
User Interface Engineer
Austin, TX
We are looking for a talented UI engineer to explore new initiatives with the AiDP Frameworks team. On the team you will investigate new component library frameworks and enable new ways of building UI with AI agents. The Frameworks team builds and maintains the Bricks component library. Bricks is a multi-framework component library built with Web Components and the Stencil framework. We are looking to understand potential alternatives to Stencil for managing a highly used component library that works across the most popular UI frameworks.
Responsibilities:
Investigate cross-platform component library technologies and build proof-of-concept applications to evaluate their features and limitations.
Implement and enhance a component library-based MCP (Model Context Protocol) server.
Architect and develop multi-step AI agents that integrate with UI component libraries and frameworks.
Ensure AI agents are reliable, scalable, and cost-optimized for high-volume environments.
Produce comprehensive technical analyses comparing frameworks and provide actionable recommendations.
Work independently to deliver high-quality, well-tested code.
Qualifications:
Web Components: At least 1 year
AI Agents
MCP (Model Context Protocol)
Responsive Design: 2-5 years
TypeScript: 2-5 years
HTML/CSS/JavaScript: 2-5 years
Component Libraries: At least 1 year
Nice to Have:
ReactJS: At least 1 year
Vue.js: At least 1 year
AngularJS: At least 1 year
Sass/Scss: At least 1 year
Accessibility: At least 1 year
Stencil.js: At least 1 year
Snowflake Engineer
Frisco, TX
Hi
Job Title: Snowflake Data Engineer with AWS, Python, and PySpark
Duration: 12 months
Required Skills & Experience:
• 10+ years of experience in data engineering and data integration roles.
• Expert in working with the Snowflake ecosystem integrated with AWS services and PySpark.
• 8+ years of core data engineering skills, with hands-on experience in the Snowflake ecosystem plus AWS, core SQL, Snowflake, and Python programming.
• 5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, and the ability to explore new ingestion frameworks (see the sketch after this list).
• Hands-on with Snowflake architecture, Virtual Warehouses, Storage and Caching, Snowpipe, Streams, Tasks, and Stages.
• Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake.
• Snowflake SQL and Stored Procedures (JavaScript or Python-based).
• Proficient in Python for data ingestion, transformation, and automation.
• Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema).
• Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar).
• Proficiency in SQL and performance tuning.
• Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices.
• Strong communication skills and ability to collaborate in agile teams.
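As referenced in the pipeline bullet above, here is a minimal sketch of a Python-driven Snowflake load step, assuming the snowflake-connector-python package; the stage, table, warehouse, and credential values are placeholders, and Snowpipe, Streams, or Tasks could automate the same flow.

```python
# Illustrative Snowflake ingestion step driven from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load staged JSON files into a raw table with a single VARIANT column `v`.
    cur.execute("""
        COPY INTO RAW_EVENTS
        FROM @EVENTS_STAGE
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'CONTINUE'
    """)
    # Simple transformation into a curated table.
    cur.execute("""
        INSERT INTO ANALYTICS.CURATED.EVENTS
        SELECT v:id::NUMBER, v:type::STRING, v:ts::TIMESTAMP_NTZ
        FROM RAW_EVENTS
    """)
finally:
    conn.close()
```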
If interested, please share the details below along with your updated resume:
Full Name:
Phone:
E-mail:
Rate:
Location:
Visa Status:
Availability:
SSN (Last 4 digit):
Date of Birth:
LinkedIn Profile:
Availability for the interview:
Availability for the project:
Gen AI Engineer
Dallas, TX
Role: Gen AI Engineer with GCP
Contract
Must have: Gen AI, LLM, RAG, MLOps, Vertex AI, and GCP experience
Design and build end-to-end AI/ML systems and applications, from experimentation and data preprocessing to production deployment.
Implement and optimize Generative AI models (text, image, multimodal) and integrate capabilities like Retrieval-Augmented Generation (RAG) and prompt engineering strategies to enhance LLMs with external knowledge sources (see the sketch below).
Leverage a wide range of GCP services, including Vertex AI, BigQuery, Cloud Run, GKE (Google Kubernetes Engine), Dataflow, and Pub/Sub, to build, train, and deploy custom AI models and solutions.
Manage the entire model lifecycle, including training, evaluation, fine-tuning, versioning, deployment, and monitoring performance in production environments.
Optimize models and systems for improved performance, scalability, efficiency, and cost, implementing techniques like model quantization and GPU memory optimization.
Build and maintain scalable and reliable ML pipelines using MLOps practices, employing tools like Docker and Kubernetes for containerization and CI/CD pipelines for automated deployment.
Document technical designs, processes, and best practices, and potentially mentor junior team members
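The RAG responsibility above could look roughly like the following sketch, assuming the Vertex AI Python SDK (google-cloud-aiplatform) and a hypothetical `retrieve_documents` helper standing in for a real vector-search backend; the project ID and model name are assumptions.

```python
# Hedged RAG sketch on Vertex AI: retrieve context, then ground the prompt.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholders

def retrieve_documents(query: str) -> list[str]:
    """Hypothetical retriever; in practice this would query a vector store
    such as Vertex AI Vector Search or BigQuery embeddings."""
    return ["doc snippet 1 ...", "doc snippet 2 ..."]

def answer_with_rag(question: str) -> str:
    context = "\n".join(retrieve_documents(question))
    model = GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return model.generate_content(prompt).text

if __name__ == "__main__":
    print(answer_with_rag("What does the onboarding workflow require?"))
```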
DevSecOps Engineer
Erie, PA
Role: DevSecOps Engineer
Primary Skills: TFS, GitHub
About the job:
As a Senior DevOps Engineer (Azure experience/knowledge is a plus), you'll design and manage high-performing cloud environments. You will be responsible for migrating legacy source code repositories and CI/CD pipelines from TFS to GitHub while modernizing development workflows, introducing automation, and strengthening engineering standards.
Know your team:
At ValueMomentum's Technology Solution Centers, we are a team of passionate engineers who thrive on tackling complex business challenges with innovative solutions while transforming the P&C insurance value chain. We achieve this through a strong engineering foundation and by continuously refining our processes, methodologies, tools, agile delivery teams, and core engineering archetypes. Our core expertise lies in six key areas: Platforms, Infra/Cloud, Application, Data, Core, and Quality Assurance.
Join our team that invests in your growth. Our Infinity Program empowers you to build your career with role-specific skill development leveraging immersive learning platforms. You'll have the opportunity to work with some of the best minds serving insurance customers in the US, UK, and Canadian markets.
Key Responsibilities:
Lead end-to-end migration of .NET and digital applications from TFS to GitHub.
Analyze existing TFS repository structures, branching models, TFVC/Git configurations, and pipelines.
Design and implement migration strategy (lift-and-shift and modernized structure).
Leverage and customize accelerators built using custom PowerShell automation.
Convert legacy TFS pipelines to GitHub Actions workflows.
Create reusable pipeline templates.
Implement GitHub Actions best practices: environments, secrets, self-hosted runners.
Lead the remediation of secrets stored in application code by migrating them to Azure Key Vault or its equivalent.
Establish enterprise-wide secure DevOps standards.
Build automation scripts/tools for repeatable migration activities (see the sketch after this list).
Set up and maintain robust GitHub Actions workflows.
Ensure compliance and security by enforcing security best practices, including identity and access management.
Troubleshoot complex issues and implement preventive measures.
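A hedged sketch of the repeatable migration automation mentioned above: mirroring a Git repository out of TFS/Azure DevOps into GitHub with plain git commands driven from Python. The URLs are placeholders, and TFVC collections would first need conversion to Git (for example with git-tfs), which this sketch does not cover.

```python
# Mirror-migrate one Git repo from TFS/Azure DevOps to GitHub.
import subprocess
import tempfile

def run(cmd: list[str], cwd: str | None = None) -> None:
    """Run a command and fail loudly so migration errors are not silently skipped."""
    subprocess.run(cmd, cwd=cwd, check=True)

def mirror_repo(source_url: str, target_url: str) -> None:
    with tempfile.TemporaryDirectory() as workdir:
        # A bare mirror clone preserves all branches and tags.
        run(["git", "clone", "--mirror", source_url, workdir])
        run(["git", "remote", "add", "github", target_url], cwd=workdir)
        run(["git", "push", "--mirror", "github"], cwd=workdir)

if __name__ == "__main__":
    mirror_repo(
        "https://tfs.example.com/DefaultCollection/_git/legacy-app",  # placeholder
        "https://github.com/example-org/legacy-app.git",              # placeholder
    )
```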
Required Skills & Experience:
Strong expertise in GitHub, GitHub Actions, GitHub Advanced Security.
Hands-on experience migrating large-scale applications from TFS to GitHub.
CI/CD experience with TFS and GitHub
Strong scripting abilities in PowerShell or Python.
Knowledge of .NET build pipelines (MSBuild, NuGet, unit testing frameworks).
Strong understanding of DevOps concepts: branching strategies, versioning, security scanning, artifact management.
Understanding of networking, IAM, and cloud security.
Ability to lead discussions with customer stakeholders.
Strong documentation, communication, and presentation skills.
Ability to work independently and handle complex problem-solving.
Comfortable owning end-to-end deliverables.
DevSecOps Engineer
Erie, PA
Job Title: Senior DevSecOps Engineer
Primary skills: DevSecOps, Guidewire, Property & Casualty Insurance
Responsibilities
Following are the day-to-day work activities:
CI/CD Pipeline Management: Design, implement, and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines for Guidewire applications using tools like TeamCity, GitLab CI, and others.
Infrastructure Automation: Automate infrastructure provisioning and configuration management using tools such as Terraform, Ansible, or CloudFormation.
Monitoring and Logging: Implement and manage monitoring and logging solutions to ensure system reliability, performance, and security.
Collaboration: Work closely with development, QA, and operations teams to streamline processes and improve efficiency.
Security: Enhance the security of the IT infrastructure and ensure compliance with industry standards and best practices.
Troubleshooting: Identify and resolve infrastructure and application issues, ensuring minimal downtime and optimal performance.
Documentation: Maintain comprehensive documentation of infrastructure configurations, processes, and procedures.
Requirements
Candidates must have the following mandatory skills for their profiles to be assessed for eligibility. The must-have requirements are:
Educational Background: Bachelor's degree in computer science, Information Technology, or a related field.
Experience:
6-10 years of experience in a DevOps or systems engineering role.
Hands-on experience with cloud platforms (AWS, Azure, GCP).
Technical Skills:
Proficiency in scripting languages (e.g., Python, PowerShell). (2-3 years)
Experience with CI/CD tools (e.g., Jenkins, GitLab CI). (3-5 years)
Knowledge of containerization technologies (e.g., Docker, Kubernetes) - good to have.
Strong understanding of networking, security, and system administration. (3-5 years)
Familiarity with monitoring tools such as Dynatrace, Datadog, or Splunk.
Familiarity with Agile development methodologies.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and teamwork abilities.
Ability to work independently
Google ADK AI Engineer
Cupertino, CA
Infosys is seeking a Google ADK AI Engineer. In this role, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including Requirements Elicitation, Application Architecture definition, and Design. You will play an important role in creating the high-level design artifacts. You will also deliver high-quality code deliverables for a module, lead validation for all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
Candidate must be located within commuting distance of Sunnyvale/ Cupertino, CA or be willing to relocate to the area. This position may require travel within the US
Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time.
At least 4 years of Information Technology experience.
Strong programming skills in Python and applied experience with a range of LLMs.
Expert in applying and extending AI agent frameworks (Google ADK, LangChain, etc.); a framework-agnostic sketch of the basic agent loop follows this list.
Hands-on experience developing, deploying, and operating AI agents in production.
Experience deploying AI systems in large-scale, hybrid environments with a focus on performance and reliability.
Experience in writing SQL
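As noted in the qualifications above, here is a framework-agnostic sketch of the basic agent loop (tool selection plus execution); `call_llm`, the tool names, and the routing format are stubs and assumptions, not Google ADK's or LangChain's actual contract.

```python
# Generic tool-using agent loop with stubbed LLM and tools.
from typing import Callable

def get_order_status(order_id: str) -> str:
    return f"Order {order_id} is in transit."           # stub tool

def search_kb(query: str) -> str:
    return f"Top KB article for '{query}'."             # stub tool

TOOLS: dict[str, Callable[[str], str]] = {
    "get_order_status": get_order_status,
    "search_kb": search_kb,
}

def call_llm(prompt: str) -> str:
    """Stub: a real agent would call an LLM here and parse its tool choice."""
    return "TOOL:search_kb:how to reset device"

def run_agent(user_message: str, max_steps: int = 3) -> str:
    context = user_message
    for _ in range(max_steps):
        decision = call_llm(context)
        if not decision.startswith("TOOL:"):
            return decision                               # final answer
        _, tool_name, tool_arg = decision.split(":", 2)
        observation = TOOLS[tool_name](tool_arg)
        context += f"\nObservation: {observation}"
    return "Stopped after max steps."

print(run_agent("My device won't reset, what should I do?"))
```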
Preferred Qualifications:
A portfolio of deployed agentic systems that have delivered significant, measurable impact.
Experience in evaluation frameworks for AI agents.
Proven years of software development or machine learning experience.
Proven expertise in optimizing AI agents for latency, scalability, and cost.
Experience in Big data would be an added advantage
A track record of thought leadership or contributions to the field of agentic AI.
M.S. or PhD in computer science, machine learning, or a related field or equivalent practical experience
Good understanding of Agile software development frameworks
Strong communication and analytical skills
Ability to work in teams in a diverse, multi-stakeholder environment comprising Business and Technology teams
Experience and desire to work in a global delivery environment
The job entails sitting as well as working at a computer for extended periods of time. Should be able to communicate by telephone, email or face to face. Travel may be required as per the job requirements.
EEO/About Us:
About Us
Infosys is a global leader in next-generation digital services and consulting. We enable clients in more than 50 countries to navigate their digital transformation. With over four decades of experience in managing the systems and workings of global enterprises, we expertly steer our clients through their digital journey. We do it by enabling the enterprise with an AI-powered core that helps prioritize the execution of change. We also empower the business with agile digital at scale to deliver unprecedented levels of performance and customer delight. Our always-on learning agenda drives their continuous improvement through building and transferring digital skills, expertise, and ideas from our innovation ecosystem.
Infosys provides equal employment opportunities to applicants and employees without regard to race; color; sex; gender identity; sexual orientation; religious practices and observances; national origin; pregnancy, childbirth, or related medical conditions; status as a protected veteran or spouse/family member of a protected veteran; or disability.
Autodesk Vault Engineer - CDC5695559
Plano, TX
Autodesk Vault Upgrade:
Lead planning, execution, and validation of the Autodesk Vault upgrade.
Collaborate with engineering, CAD, and IT teams to support the Vault upgrade roadmap.
Ensure robust data migration, backup, and recovery strategies.
Conduct required validation after the upgrade.
Document upgrade procedures and train end users as needed.
Coordinate with Autodesk support for unresolved upgrade-related issues.
SCCM Package Deployment:
Validate that SCCM packages work as expected.
Investigate and resolve any installation failures after package installation.
Monitor deployment success rates and troubleshoot issues.
Track and resolve user-reported bugs or regressions introduced during the upgrade.
Support rollback or contingency plans if critical issues arise.
Manage Vault user roles, permissions, and access controls.
Support CAD teams with Vault-related workflows.
Manually install/update SCCM packages for some applications.
MLOps Engineer
Philadelphia, PA
Role: MLOps Lead
Duration: Long Term
Skills:
4 - 7 years of experience in DevOps, MLOps, platform engineering, or cloud infrastructure.
Strong skills in containerization (Docker, Kubernetes), API hosting, and cloud-native services.
Experience with vector DBs (e.g., FAISS, Pinecone, Weaviate) and model hosting stacks.
Familiarity with logging frameworks, APM tools, tracing layers, and prompt/versioning logs.
Bonus: exposure to LangChain, LangGraph, LLM APIs, and retrieval-based architectures.
Responsibilities:
Set up and manage runtime environments for LLMs, vector DBs, and orchestration flows (e.g., LangGraph).
Support deployments in cloud, hybrid, and client-hosted environments.
Containerize systems for deployment (Docker, Kubernetes, etc.) and manage inference scaling.
Integrate observability tooling: prompt tracing, version logs, eval hooks, error pipelines.
Collaborate on RAG stack deployments (retriever, ranker, vector DB, toolchains); see the sketch after this list.
Support CI/CD, secrets management, error triage, and environment configuration.
Contribute to platform-level IP, including reusable scaffolding and infrastructure accelerators.
Ensure systems are compliant with governance expectations and auditable (especially in insurance contexts).
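A minimal sketch of the vector-DB piece of the RAG stack mentioned above, using FAISS with random vectors as stand-ins for real embeddings; a production deployment would persist the index or use a hosted store such as Pinecone or Weaviate.

```python
# FAISS similarity search over placeholder embeddings.
import numpy as np
import faiss

dim = 384                       # embedding dimension (assumption)
index = faiss.IndexFlatL2(dim)  # exact L2 search; fine for small corpora

# Pretend these are document embeddings produced by an embedding model.
doc_vectors = np.random.rand(1000, dim).astype("float32")
index.add(doc_vectors)

# Embed a question (placeholder vector) and fetch the 5 nearest documents.
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```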
Preferred Attributes :
Systems thinker with strong debugging skills.
Able to work across cloud, on-prem, and hybrid client environments.
Comfortable partnering with architects and engineers to ensure smooth delivery.
Proactive about observability, compliance, and runtime reliability.
About ValueMomentum
ValueMomentum is a leading solutions provider for the global property & casualty insurance industry, supported by deep domain and technology capabilities. We offer a comprehensive suite of advisory, development, implementation, and maintenance services across the entire P&C insurance value chain. This includes Underwriting, Claims, Distribution, and more, empowering insurers to stay ahead with sustained growth, high performance, and enhanced stakeholder value. Trusted by over 75 insurers, ValueMomentum is one of the largest standalone insurance-focused solutions providers to the US insurance industry.
Our culture - Our fuel
At ValueMomentum, we believe in making employees win by nurturing them from within, collaborating and looking out for each other.
People first - We make employees win.
Nurture leaders - We nurture from within.
Enjoy wins - Celebrating wins and creating leaders.
Collaboration - A culture of collaboration and people-centricity.
Diversity - Committed to diversity, equity, and inclusion.
Fun - Help people have fun at work.
Sr Data Platform Engineer
Elk Grove, CA
Hybrid role, 3x a week in office in Elk Grove, CA; no remote option.
This is a direct hire opportunity.
We're seeking a seasoned Senior Data Platform Engineer to design, build, and optimize scalable data solutions that power analytics, reporting, and AI/ML initiatives. This full‑time role is hands‑on, working with architects, analysts, and business stakeholders to ensure data systems are reliable, secure, and high‑performing.
Responsibilities:
Build and maintain robust data pipelines (structured, semi‑structured, unstructured).
Implement ETL workflows with Spark, Delta Lake, and cloud-native tools (see the sketch after this list).
Support big data platforms (Databricks, Snowflake, GCP) in production.
Troubleshoot and optimize SQL queries, Spark jobs, and workloads.
Ensure governance, security, and compliance across data systems.
Integrate workflows into CI/CD pipelines with Git, Jenkins, Terraform.
Collaborate cross‑functionally to translate business needs into technical solutions.
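A hedged sketch of the Spark and Delta Lake ETL work referenced above, assuming a Databricks-style environment (or delta-spark configured locally); the paths, columns, and table layout are illustrative.

```python
# Small PySpark + Delta Lake ETL step: ingest raw JSON, clean it, write Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")            # illustrative path
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

(
    cleaned.write.format("delta")
           .mode("overwrite")
           .save("/mnt/curated/orders")              # illustrative path
)
```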
Qualifications:
7+ years in data engineering with production pipeline experience.
Expertise in Spark ecosystem, Databricks, Snowflake, GCP.
Strong skills in PySpark, Python, SQL.
Experience with RAG systems, semantic search, and LLM integration.
Familiarity with Kafka, Pub/Sub, vector databases.
Proven ability to optimize ETL jobs and troubleshoot production issues.
Agile team experience and excellent communication skills.
Certifications in Databricks, Snowflake, GCP, or Azure.
Exposure to Airflow, BI tools (Power BI, Looker Studio).
Data Engineer (AWS Redshift, BI, Python, ETL)
Manhattan Beach, CA
We are seeking a skilled Data Engineer with strong experience in business intelligence (BI) and data warehouse development to join our team. In this role, you will design, build, and optimize data pipelines and warehouse architectures that support analytics, reporting, and data-driven decision-making. You will work closely with analysts, data scientists, and business stakeholders to ensure reliable, scalable, and high-quality data solutions.
Responsibilities:
Develop and maintain ETL/ELT pipelines for ingesting, transforming, and delivering data.
Design and enhance data warehouse models (star/snowflake schemas) and BI datasets (see the sketch after this list).
Optimize data workflows for performance, scalability, and reliability.
Collaborate with BI teams to support dashboards, reporting, and analytics needs.
Ensure data quality, governance, and documentation across all solutions.
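As a small illustration of the dimensional modeling called out above, the following sketch splits a flat orders extract into a customer dimension and an orders fact with pandas; the columns and surrogate-key approach are assumptions.

```python
# Toy star-schema split: one flat extract -> dim_customer + fact_orders.
import pandas as pd

flat = pd.DataFrame({
    "order_id": [1, 2, 3],
    "customer_email": ["a@x.com", "b@y.com", "a@x.com"],
    "customer_name": ["Ann", "Bob", "Ann"],
    "amount": [120.0, 75.5, 42.0],
})

# Dimension: one row per customer with a surrogate key.
dim_customer = (
    flat[["customer_email", "customer_name"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: measures plus a foreign key into the dimension.
fact_orders = flat.merge(dim_customer, on=["customer_email", "customer_name"])[
    ["order_id", "customer_key", "amount"]
]

print(dim_customer)
print(fact_orders)
```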
Qualifications:
Proven experience with data engineering tools (SQL, Python, ETL frameworks).
Strong understanding of BI concepts, reporting tools, and dimensional modeling.
Hands-on experience with cloud data platforms (e.g., AWS, Azure, GCP) is a plus.
Excellent problem-solving skills and ability to work in a cross-functional environment.
Sr. Cloud Data Engineer
Malvern, PA
Job Title: Sr. Cloud Data Engineer
Duration: 12 months+ Contract
Contract Description:
Responsibilities:
Maintain and optimize AWS-based data pipelines to ensure timely and reliable data delivery.
Develop and troubleshoot workflows using AWS Glue, PySpark, Step Functions, and DynamoDB (see the sketch after this list).
Collaborate on code management and CI/CD processes using Bitbucket, GitHub, and Bamboo.
Participate in code reviews and repository management to uphold coding standards.
Provide technical guidance and mentorship to junior engineers and assist in team coordination.
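A minimal sketch of an AWS Glue (PySpark) job of the kind described above; the catalog database, table, and S3 bucket names are placeholders, and the Step Functions and DynamoDB orchestration around it is omitted.

```python
# Glue job skeleton: read from the Data Catalog, clean, write Parquet to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="finance_raw", table_name="transactions"   # placeholder names
)
cleaned = source.toDF().dropna(subset=["transaction_id"])

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/transactions/"},
    format="parquet",
)
job.commit()
```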
Qualifications:
9-10 years of experience in data engineering with strong hands-on AWS expertise.
Proficient in AWS Glue, PySpark, Step Functions, and DynamoDB.
Skilled in managing code repositories and CI/CD pipelines (Bitbucket, GitHub, Bamboo).
Experience in team coordination or mentoring roles.
Familiarity with Wealth Asset Management, especially personal portfolio performance, is a plus
Azure Data Engineer Sr
Irving, TX
Minimum 7 years of relevant work experience in data engineering, with at least 2 years in data modeling.
Strong technical foundation in Python and SQL, and experience with cloud platforms (Azure).
Deep understanding of data engineering fundamentals, including database architecture and design, extract, transform, and load (ETL) processes, data lakes, data warehousing, and both batch and streaming technologies.
Experience with data orchestration tools (e.g., Airflow), data processing frameworks (e.g., Spark, Databricks), and data visualization tools (e.g., Tableau, Power BI); see the sketch after this list.
Proven ability to lead a team of engineers, fostering a collaborative and high-performing environment.
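As referenced above, a small illustrative Airflow DAG for this kind of orchestration; the task bodies are stubs, and the DAG name, schedule, and Airflow 2.x-style `schedule` argument are assumptions.

```python
# Minimal Airflow 2.x DAG: extract -> transform -> load, daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling increment from the source system")   # stub

def transform():
    print("applying business rules and data quality checks")  # stub

def load():
    print("loading into the warehouse")                 # stub

with DAG(
    dag_id="daily_sales_pipeline",       # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # Airflow >= 2.4 argument name
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```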
Senior Snowflake Data Engineer
Santa Clara, CA
About the job
Why Zensar?
We're a bunch of hardworking, fun-loving, people-oriented technology enthusiasts. We love what we do, and we're passionate about helping our clients thrive in an increasingly complex digital world. Zensar is an organization focused on building relationships, with our clients and with each other-and happiness is at the core of everything we do. In fact, we're so into happiness that we've created a Global Happiness Council, and we send out a Happiness Survey to our employees each year. We've learned that employee happiness requires more than a competitive paycheck, and our employee value proposition-grow, own, achieve, learn (GOAL)-lays out the core opportunities we seek to foster for every employee. Teamwork and collaboration are critical to Zensar's mission and success, and our teams work on a diverse and challenging mix of technologies across a broad industry spectrum. These industries include banking and financial services, high-tech and manufacturing, healthcare, insurance, retail, and consumer services. Our employees enjoy flexible work arrangements and a competitive benefits package, including medical, dental, vision, 401(k), among other benefits. If you are looking for a place to have an immediate impact, to grow and contribute, where we work hard, play hard, and support each other, consider joining team Zensar!
Zensar is seeking a Senior Snowflake Data Engineer in Santa Clara, CA (work from the office all 5 days). This role is open as a full-time position with excellent benefits and growth opportunities, as well as a contract role.
Job Description:
Key Requirements:
Strong hands-on experience in data engineering using Snowflake and Databricks, with proven ability to build and optimize large-scale data pipelines.
Deep understanding of data architecture principles, including ingestion, transformation, storage, and access control.
Solid experience in system design and solution architecture, focusing on scalability, reliability, and maintainability.
Expertise in ETL/ELT pipeline design, including data extraction, transformation, validation, and load processes.
In-depth knowledge of data modeling techniques (dimensional modeling, star, and snowflake schemas).
Skilled in optimizing compute and storage costs across Snowflake and Databricks environments.
Strong proficiency in administration, including database design, schema management, user roles, permissions, and access control policies.
Hands-on experience implementing data lineage, quality, and monitoring frameworks.
Advanced proficiency in SQL for data processing, transformation, and automation.
Experience with reporting and visualization tools such as Power BI and Sigma Computing.
Excellent communication and collaboration skills, with the ability to work independently and drive technical initiatives.
Zensar believes that diversity of backgrounds, thought, experience, and expertise fosters the robust exchange of ideas that enables the highest quality collaboration and work product. Zensar is an equal opportunity employer. All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Zensar is committed to providing veteran employment opportunities to our service men and women. Zensar is committed to providing equal employment opportunities for persons with disabilities or religious observances, including reasonable accommodation when needed. Accommodations made to facilitate the recruiting process are not a guarantee of future or continued accommodations once hired.
Zensar does not facilitate/sponsor any work authorization for this position.
Candidates who are currently employed by a client or vendor of Zensar may be ineligible for consideration.
Zensar values your privacy. We'll use your data in accordance with our privacy statement located at: *********************************
Python Data Engineer - THADC5693417
Houston, TX
Must Haves:
Strong proficiency in Python; 5+ years' experience.
Expertise in FastAPI and microservices architecture and development
Linking Python-based apps with SQL and NoSQL databases
Deployments on Docker and Kubernetes, plus monitoring tools
Experience with automated testing and test-driven development
Git source control, GitHub Actions, CI/CD, VS Code, and Copilot
Expertise in both on-prem SQL databases (Oracle, SQL Server, PostgreSQL, DB2) and NoSQL databases
Working knowledge of data warehousing and ETL
Able to explain the business functionality of the projects/applications they have worked on
Ability to multitask and work on multiple projects simultaneously.
No cloud - all systems are on premises.
Day to Day:
Insight Global is looking for a Python Data Engineer for one of our largest oil and gas clients in Downtown Houston, TX. This person will be responsible for building Python-based integrations with back-end SQL and NoSQL databases, architecting and coding FastAPI microservices, and performing testing on back-office applications. The ideal candidate will have experience developing applications using Python and microservices and implementing complex business functionality in Python.
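A hedged sketch of the FastAPI-plus-on-prem-database pattern described above, using SQLAlchemy against a placeholder PostgreSQL connection string; the `wells` table, route, and credentials are hypothetical, and a real service would add authentication, pooling configuration, and automated tests.

```python
# Minimal FastAPI endpoint backed by an on-prem relational database.
from fastapi import FastAPI, HTTPException
from sqlalchemy import create_engine, text

# On-prem connection string placeholder (no cloud services involved).
engine = create_engine("postgresql+psycopg2://etl:***@db.internal:5432/wells")

app = FastAPI(title="Well Data Service")

@app.get("/wells/{well_id}")
def get_well(well_id: int) -> dict:
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, name, status FROM wells WHERE id = :id"),
            {"id": well_id},
        ).mappings().first()
    if row is None:
        raise HTTPException(status_code=404, detail="well not found")
    return dict(row)
```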