Satellite GNC Engineer, Amazon Leo
Data engineer job in Redmond, WA
Amazon Leo is Amazon's low Earth orbit satellite network. Our mission is to deliver fast, reliable internet connectivity to customers beyond the reach of existing networks. From individual households to schools, hospitals, businesses, and government agencies, Amazon Leo will serve people and organizations operating in locations without reliable connectivity.
Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
This position is part of the Satellite Attitude Determination and Control team. You will design and analyze the control system and algorithms, support development of our flight hardware and software, help integrate the satellite in our labs, participate in flight operations, and see a constellation of satellites flow through the production line in the building next door.
Key job responsibilities
- Design and analyze algorithms for estimation, flight control, and precise pointing using linear methods and simulation (see the sketch after this list).
- Develop and apply models and simulations, with various levels of fidelity, of the satellite and our constellation.
- Perform component-level environmental testing, functional and performance checkout, subsystem integration, satellite integration, and in-space operations.
- Manage the spacecraft constellation as it grows and evolves.
- Continuously improve our ability to serve customers by maximizing payload operations time.
- Develop autonomy for Fault Detection and Isolation on board the spacecraft.
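For flavor, here is a minimal sketch of the kind of linear estimation algorithm the first responsibility describes: a one-dimensional Kalman filter recovering a constant gyro bias from noisy rate measurements. The state model and noise values are hypothetical illustrations, not Amazon Leo flight software.

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a constant gyro bias from noisy
# angular-rate measurements. All parameters are hypothetical.
def kalman_bias_estimate(measurements, meas_var=0.04, process_var=1e-6):
    x = 0.0       # state estimate: gyro bias (rad/s)
    p = 1.0       # estimate covariance
    for z in measurements:
        p += process_var                # predict: bias modeled as near-constant
        k = p / (p + meas_var)          # Kalman gain
        x += k * (z - x)                # correct with the innovation
        p *= (1.0 - k)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_bias = 0.01                    # rad/s
    z = true_bias + rng.normal(0.0, 0.2, size=500)
    print(f"estimated bias: {kalman_bias_estimate(z):.4f} rad/s (true: {true_bias})")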
A day in the life
This is an opportunity to play a significant role in the design of an entirely new satellite system with challenging performance requirements. The large, integrated constellation brings opportunities for advanced capabilities that need investigation and development. The constellation's size also puts a premium on engineering excellence: our tools and methods, from conceptualization through manufacturing and all phases of test, will be state of the art, as will the satellite and the supporting infrastructure on the ground.
You will find that Amazon Leo's mission is compelling, so our program is staffed with some of the top engineers in the industry. Our daily collaboration with other teams on the program brings constant opportunity for discovery, learning, and growth.
About the team
Our team has lots of experience with various satellite systems and many other flight vehicles. We have bench strength in both our mission and core GNC disciplines. We design, prototype, test, iterate and learn together. Because GNC is central to safe flight, we tend to drive Concepts of Operation and many system level analyses.
BASIC QUALIFICATIONS
- PhD, or Master's degree and 4+ years of quantitative field research experience
- Experience investigating the feasibility of applying scientific principles and concepts to business problems and products
- Experience analyzing both experimental and observational data sets
PREFERRED QUALIFICATIONS
- Knowledge of R, MATLAB, Python or similar scripting language
- Experience with agile development
- Experience building web based dashboards using common frameworks
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $136,000/year in our lowest geographic market up to $212,800/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Sr Packaging TPM, Amazon Leo Antenna Development
Data engineer job in Redmond, WA
Amazon Leo is Amazon's low Earth orbit satellite network. Our mission is to deliver fast, reliable internet connectivity to customers beyond the reach of existing networks. From individual households to schools, hospitals, businesses, and government agencies, Amazon Leo will serve people and organizations operating in locations without reliable connectivity.
A Packaging Technical Program Manager is responsible for managing the complete packaging lifecycle of Leo customer terminal programs, driving execution of packaging to support product launches and sustaining packaging programs. This role involves managing organizational packaging roadmaps, including OP1/OP2 narrative contributions, and owning organizational goals as a senior technology program owner accountable for overall strategy and cross-functional team coordination. Key responsibilities include defining the packaging vision and tenets; setting objectives and analyzing data to drive quantified improvements; influencing resource allocation; and creating comprehensive packaging plans with global workback schedules for structural development, testing, and builds (pack-outs). The position requires a deep understanding of packaging systems, their limitations, scaling factors, and architectural decisions. It also involves coordinating schedule creation with interdisciplinary teams to ensure packaging structure and artwork development and approval are completed on time, providing Bill of Material (BOM) support, and managing Print On Demand (POD) labeling design and production.
To ensure business and technical stakeholder needs are aligned, you drive mindful discussions that lead to crisp decisions while providing context (past, current, and future) for design direction and long-term perspective. You partner with Industrial Design, customers, and engineering teams to determine a cohesive design language for packaging across the portfolio. You use your technical judgment to question proposals and test assumptions, including whether solutions need to be built at all. You make strategic trade-offs between competing priorities such as time, effort, and feature scope. You create plans with clear, measurable success criteria and clearly communicate progress and outcomes.
Export Control Requirement:
Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
Key job responsibilities
You identify and bridge gaps between teams, processes, and system architectures. You help teams reduce exposure to classic failure modes such as insufficient requirements documentation, ineffective cross-team collaboration, and long-term impacts from third-party technology dependencies. You solve problems and proactively identify and mitigate risks before they become roadblocks. You demonstrate good judgment in how and when to escalate without damaging relationships. You are data-driven, regularly reviewing metrics and proactively seeking new and improved data mechanisms for visibility. You ensure your program stays aligned with organizational objectives.
You understand technical program management and engineering best practices, using this knowledge to assess development processes, test plans, and operations/maintenance requirements. You work with teams to improve concurrent project delivery while streamlining or eliminating excess process. You influence teams to decouple from dependencies and eliminate architecture problems that stifle innovation. You are an excellent communicator who writes effective narratives (Amazon 6-pager) and presents them effectively to Directors and VPs. You deliver the right outcomes with limited guidance. When confronted with discordant views, you find the best path forward and build consensus to influence others. You actively recruit and develop others through mentoring and providing constructive feedback.
A day in the life
Every day brings new challenges to Leo. In the morning, you may be working on the technology roadmap for the next five years, and in the afternoon, diving deep on a technical issue delaying a build. The ability to set priorities with sound judgment is a must in this role, as you will have the independence to focus on what you deem critical to the organization.
About the team
Amazon Leo is focused on providing broadband access to underserved customers across the world. We are passionate about accomplishing this mission as quickly as possible without sacrificing quality or performance. We are constantly solving new problems and embrace people who thrive in a startup-like environment and have an entrepreneurial mindset.
BASIC QUALIFICATIONS
- 5+ years of technical product or program management experience
- Experience managing programs across cross-functional teams, building processes and coordinating release schedules
- 7+ years of relevant work experience in the packaging and/or program management field
- BA/BS degree in Packaging, Engineering, UX Program Management, or related studies
PREFERRED QUALIFICATIONS
- 5+ years of experience in project management disciplines including scope, schedule, budget, and quality, along with risk and critical-path management
- Experience managing projects across cross-functional teams, building sustainable processes and coordinating release schedules
- Experience defining KPIs/SLAs used to drive multi-million dollar businesses and reporting to senior leadership
- Experience communicating technical details verbally and in writing
- Packaging Engineering, UX Program Management, or direct packaging development experience
- 8+ years of experience in Aviation, Mobility, or Consumer Electronics program management
- Knowledge or experience in global packaging development and manufacturing.
- Experience with label design, label design programs, implementing, utilizing and/or sustaining Print-On-Demand (POD) labeling processes.
- Strong interpersonal skills; ability to work closely with people at all levels of the organization to facilitate the implementation of packaging programs including structure, artwork, factory build readiness and escalations.
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $133,900/year in our lowest geographic market up to $231,400/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
Palantir Data Engineer
Data engineer job in Seattle, WA
Responsibilities
Provide face-to-face support and mentoring for site users on Palantir Applications including Quiver, Contour, Workshop, Object Explorer, Pipeline Builder, Automate, AIP Logic, AIP Agent Studio, etc.
Ad-hoc mentoring sessions with users
Regular Office Hours for drop-in sessions (weekly)
Regular Lunch n Learn (or similar) sessions (monthly)
Coordinate with central teams on onsite training and hackathons
Build and run campaigns on targeted Palantir Applications to develop deep expertise at the site (e.g., Workshop, AIP)
Issue management: responsible for tracking and following up on site-specific Palantir support tickets through to resolution.
Site testing: participate in Beta/Early Access testing for centrally developed Palantir Applications (or site applications promoted to centrally managed).
Data quality management and governance: responsible for tracking and following up on Palantir Ontology/Dataset related issues raised at site but not addressed by regular Palantir support ticket. Liaise with central team for data editing permissions and data quality metrics/KPIs.
Develop a 'site Palantir skills development plan': a 12-month roadmap of departments/users for a targeted skills development campaign. Includes understanding current platform utilization and agreeing on targets with the site leadership team (LT).
R&D - develop documented and reusable examples of how specific Palantir Applications could be useful for the site with actual data, which site users can follow to build their own applications (a minimal sketch follows below).
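By way of illustration, here is a minimal sketch of such a reusable example: a Palantir Foundry Python transform that derives a cleaned dataset from a raw one. The dataset paths and column names are hypothetical placeholders, and the exact decorator surface depends on your Foundry transforms version.

```python
# Minimal Foundry transform sketch. Dataset paths and column names are
# hypothetical placeholders for site users to adapt.
from transforms.api import transform_df, Input, Output
import pyspark.sql.functions as F

@transform_df(
    Output("/Site/examples/clean_readings"),   # hypothetical output dataset
    raw=Input("/Site/examples/raw_readings"),  # hypothetical input dataset
)
def clean_readings(raw):
    # Drop null measurements and standardize the timestamp column
    return (
        raw.filter(F.col("value").isNotNull())
           .withColumn("event_ts", F.to_timestamp("event_time"))
    )
```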
Experience
Expertise in key Palantir Applications including: Quiver, Contour, Workshop, Object Explorer, Pipeline Builder, Automate, AIP Logic and AIP Agent Studio.
Expertise in the Palantir ecosystem: understanding role of Ontology, Datasets, code repositories and how permissions are managed in the platform.
Knowledgeable in best practices for building and scaling applications developed in Palantir Foundry, including code repositories and branching.
Useful: programming (Python and/or TypeScript, SQL)
Useful: knowledge of the refining/downstream business
Excellent soft skills - working with end-users and teaching small groups on technology
Staff Data Engineer
Data engineer job in Bellevue, WA
*Immigration sponsorship is not available in this role*
We are looking for an experienced Data Engineer (8+ years of experience) with deep expertise in Flink SQL to join our engineering team. This role is ideal for someone who thrives on building robust real-time data processing pipelines and has hands-on experience designing and optimizing Flink SQL jobs in a production environment.
You'll work closely with data engineers, platform teams, and product stakeholders to create scalable, low-latency data solutions that power intelligent applications and dashboards.
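For orientation, here is a minimal PyFlink sketch of the kind of job in scope: an event-time tumbling-window aggregation with a watermark, expressed in Flink SQL. The table, fields, and datagen connector settings are hypothetical placeholders.

```python
# Minimal PyFlink sketch: event-time tumbling-window count with a watermark.
# Table/field names and the datagen source are illustrative placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id STRING,
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH ('connector' = 'datagen', 'rows-per-second' = '5')
""")

# Tumbling 1-minute window keyed on user_id, closed by the watermark
result = t_env.sql_query("""
    SELECT user_id,
           TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS w_start,
           COUNT(*) AS clicks
    FROM clicks
    GROUP BY user_id, TUMBLE(event_time, INTERVAL '1' MINUTE)
""")

result.execute().print()  # streams window results to stdout
```

The watermark is what lets the one-minute windows close correctly under out-of-order events, which is the event-time vs processing-time distinction called out in the qualifications below.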
⸻
Key Responsibilities:
• Design, develop, and maintain real-time streaming data pipelines using Apache Flink SQL.
• Collaborate with platform engineers to scale and optimize Flink jobs for performance and reliability.
• Build reusable data transformation logic and deploy to production-grade Flink clusters.
• Ensure high availability and correctness of real-time data pipelines.
• Work with product and analytics teams to understand requirements and translate them into Flink SQL jobs.
• Monitor and troubleshoot job failures, backpressure, and latency issues.
• Contribute to internal tooling and libraries that improve Flink developer productivity.
Required Qualifications:
• Deep hands-on experience with Flink SQL and the Apache Flink ecosystem.
• Strong understanding of event time vs processing time semantics, watermarks, and state management.
• 3+ years of experience in data engineering, with a strong focus on real-time/streaming data.
• Experience writing complex Flink SQL queries, UDFs, and windowing operations.
• Proficiency in working with streaming data formats such as Avro, Protobuf, or JSON.
• Experience with messaging systems like Apache Kafka or Pulsar.
• Familiarity with containerized deployments (Docker, Kubernetes) and CI/CD pipelines.
• Solid understanding of distributed system design and performance optimization.
Nice to Have:
• Experience with other stream processing frameworks (e.g., Spark Structured Streaming, Kafka Streams).
• Familiarity with cloud-native data stacks (AWS Kinesis, GCP Pub/Sub, Azure Event Hub).
• Experience in building internal tooling for observability or schema evolution.
• Prior contributions to the Apache Flink community or similar open-source projects.
Why Join Us:
• Work on cutting-edge real-time data infrastructure that powers critical business use cases.
• Be part of a high-caliber engineering team with a culture of autonomy and excellence.
• Flexible working arrangements with competitive compensation.
AWS Data Engineer
Data engineer job in Seattle, WA
Must Have Technical/Functional Skills:
We are seeking an experienced AWS Data Engineer to join our data team and play a crucial role in designing, implementing, and maintaining scalable data infrastructure on Amazon Web Services (AWS). The ideal candidate has a strong background in data engineering, with a focus on cloud-based solutions, and is proficient in leveraging AWS services to build and optimize data pipelines, data lakes, and ETL processes. You will work closely with data scientists, analysts, and stakeholders to ensure data availability, reliability, and security for our data-driven applications.
Roles & Responsibilities:
Key Responsibilities:
• Design and Development: Design, develop, and implement data pipelines using AWS services such as AWS Glue, Lambda, S3, Kinesis, and Redshift to process large-scale data (see the sketch after this list).
• ETL Processes: Build and maintain robust ETL processes for efficient data extraction, transformation, and loading, ensuring data quality and integrity across systems.
• Data Warehousing: Design and manage data warehousing solutions on AWS, particularly with Redshift, for optimized storage, querying, and analysis of structured and semi-structured data.
• Data Lake Management: Implement and manage scalable data lake solutions using AWS S3, Glue, and related services to support structured, unstructured, and streaming data.
• Data Security: Implement data security best practices on AWS, including access control, encryption, and compliance with data privacy regulations.
• Optimization and Monitoring: Optimize data workflows and storage solutions for cost and performance. Set up monitoring, logging, and alerting for data pipelines and infrastructure health.
• Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver data solutions aligned with business goals.
• Documentation: Create and maintain documentation for data infrastructure, data pipelines, and ETL processes to support internal knowledge sharing and compliance.
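As a hedged illustration of the pipeline work in the first bullet, here is a minimal AWS Glue PySpark job: read from the Data Catalog, filter, and write Parquet to S3. The database, table, and bucket names are hypothetical placeholders.

```python
# Minimal AWS Glue job sketch (PySpark): catalog source -> transform -> S3 sink.
# The database, table, and bucket names are hypothetical placeholders.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")

# Simple transform: drop rows with a null order_id
orders_clean = orders.filter(lambda row: row["order_id"] is not None)

# Write results to S3 as Parquet
glue_context.write_dynamic_frame.from_options(
    frame=orders_clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/clean/orders/"},
    format="parquet")

job.commit()
```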
Base Salary Range: $100,000 - $130,000 per annum
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Time Off, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing.
Data Scientist
Data engineer job in Shoreline, WA
# Job Description: AI Task Evaluation & Statistical Analysis Specialist
## Role Overview
We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).

## Key Responsibilities
- **Statistical Failure Analysis**: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags)
- **Root Cause Analysis**: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- **Dimension Analysis**: Analyze performance variations across finance sub-domains, file types, and task categories
- **Reporting & Visualization**: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- **Quality Framework**: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- **Stakeholder Communication**: Present insights to data labeling experts and technical teams

## Required Qualifications
- **Statistical Expertise**: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition
- **Programming**: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- **Data Analysis**: Experience with exploratory data analysis and creating actionable insights from complex datasets
- **AI/ML Familiarity**: Understanding of LLM evaluation methods and quality metrics
- **Tools**: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL

## Preferred Qualifications
- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
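To make the failure-analysis workflow concrete, here is a minimal Python sketch (pandas + scipy): compute failure rates along one dimension and test whether failures are independent of it. The column names and counts are hypothetical.

```python
# Minimal failure-analysis sketch: is failure rate independent of file type?
# Column names and counts are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

results = pd.DataFrame({
    "file_type": ["xlsx"] * 60 + ["pdf"] * 60 + ["csv"] * 60,
    "failed":    [1] * 21 + [0] * 39 + [1] * 33 + [0] * 27 + [1] * 12 + [0] * 48,
})

# Failure rate along one dimension
print(results.groupby("file_type")["failed"].mean())

# Chi-square test of independence between file type and failure
table = pd.crosstab(results["file_type"], results["failed"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

The same crosstab-and-test pattern extends to rubric, template, and tag dimensions.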
Business Intelligence Engineer
Data engineer job in Seattle, WA
Pay rate range - $55/hr. to $60/hr. on W2
Onsite role
Must Have -
Expert Python and SQL
Visualization and development
Required Skills
- 5-7 years of experience working with large-scale complex datasets
- Strong analytical mindset, ability to decompose business requirements into an analytical plan, and execute the plan to answer those business questions
- Strong working knowledge of SQL
- Background (academic or professional) in statistics, programming, and marketing
- SAS experience a plus
- Graduate degree in math/statistics, computer science or related field, or marketing is highly desirable.
- Excellent communication skills, equally adept at working with engineers as well as business leaders
Daily Schedule
- Evaluation of the performance of program features and marketing content along measures of customer response, use, conversion, and retention
- Statistical testing of A/B and multivariate experiments (see the sketch after this list)
- Design, build and maintain metrics and reports on program health
- Respond to ad hoc requests from business leaders to investigate critical aspects of customer behavior, e.g. how many customers use a given feature or fit a given profile, deep dive into unusual patterns, and exploratory data analysis
- Employ data mining, model building, segmentation, and other analytical techniques to capture important trends in the customer base
- Participate in strategic and tactical planning discussions
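As a minimal illustration of the experiment-testing work above, here is a two-proportion z-test on hypothetical conversion counts (assuming statsmodels is available):

```python
# Minimal A/B experiment sketch: two-proportion z-test on conversion counts.
# The counts below are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 475]        # control, treatment successes
exposures = [10_000, 10_000]    # users per arm

z, p = proportions_ztest(count=conversions, nobs=exposures)
print(f"z={z:.2f}, p={p:.4f}")  # a small p-value suggests a real lift
```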
About the role
Understanding customer behavior is paramount to our success in providing customers with convenient, fast, free shipping in the US and international markets.
As a Senior Business Intelligence Engineer, you will work with our world-class marketing and technology teams to ensure that we continue to delight our customers.
You will meet with business owners to formulate key questions, leverage the client's vast Data Warehouse to extract and analyze relevant data, and present your findings and recommendations to management in a way that is actionable.
Synthetic Data Engineer (Observability & DevOps)
Data engineer job in Seattle, WA
About the Role: We're building a large-scale synthetic data generation engine to produce realistic observability datasets - metrics, logs, and traces - to support AI/ML training and benchmarking. You will design, implement, and scale pipelines that simulate complex production environments and emit controllable, parameterized telemetry data.
🧠 What You'll Do
- Design and implement generators for metrics (CPU, latency, throughput) and logs (structured/unstructured).
- Build configurable pipelines to control data rate, shape, and anomaly injection (see the sketch after this list).
- Develop reproducible workload simulations and system behaviors (microservices, failures, recoveries).
- Integrate synthetic data storage with Prometheus, ClickHouse, or Elasticsearch.
- Collaborate with ML researchers to evaluate realism and coverage of generated datasets.
- Optimize for scale and reproducibility using Docker containers.
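Here is a minimal sketch of a parameterized generator of the sort described above: a CPU-utilization series with a diurnal cycle, Gaussian noise, and occasional injected spikes. All rates and amplitudes are illustrative assumptions.

```python
# Minimal synthetic-metrics sketch: a CPU-utilization series with a daily
# cycle, noise, and injected anomaly spikes. All parameters are illustrative.
import math
import random

def cpu_series(n_points, period=1440, base=0.35, amplitude=0.2,
               noise=0.03, anomaly_rate=0.002, seed=42):
    rng = random.Random(seed)
    for t in range(n_points):
        # Diurnal cycle + Gaussian noise
        value = base + amplitude * math.sin(2 * math.pi * t / period)
        value += rng.gauss(0.0, noise)
        # Occasionally inject a saturation-style anomaly
        if rng.random() < anomaly_rate:
            value = min(1.0, value + rng.uniform(0.4, 0.6))
        yield t, max(0.0, min(1.0, value))

if __name__ == "__main__":
    for t, v in cpu_series(10):
        print(f"minute={t} cpu={v:.3f}")
```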
✅ Who You Are
- Strong programming skills in Python.
- Familiarity with observability tools (Grafana, Prometheus, ELK, OpenTelemetry).
- Solid understanding of distributed systems metrics and log structures.
- Experience building data pipelines or synthetic data generators.
- (Bonus) Knowledge of anomaly detection, time-series analysis, or generative ML models.
💸 Pay
$50-75/hr depending on experience. Remote, flexible hours. Project timeline: 5-6 weeks.
Principal Software Engineer
Data engineer job in Seattle, WA
About the job
The Wissen team continues to expand its footprint in the USA, Canada, UK, Australia, and India. More openings to come as we continue to grow the team!
Please read below for a brilliant career opportunity.
Role: Principal Software Engineer
Title: AVP/VP
Location: Seattle, WA (Day 1 Onsite/Hybrid) - Full-time
Mode of Work: 3 days/week onsite required
Required Experience: 10+ years
Primary Responsibilities
Scope, lead, and build the backend portion of new user-facing features in an evolving product with a growing internal user base.
Work closely in a cross-discipline team to build full stack user facing features. This will include doing API development, data engineering, and cloud infrastructure development types of projects.
Help continue to mature the cloud-based platform that is critical to the day-to-day of the company. Make it increasingly low-touch and robust. Work with the support team to handle production issues and keep the app up and available at all times.
Work with engineers in other project teams to properly integrate with their services and applications in the execution of ETL style workflows.
Develop and execute against both short- and long-term roadmaps. Make effective tradeoffs that consider business priorities, user experience, and a sustainable technical foundation. Maintaining quality is important.
Provide technical leadership to other Software Engineers; mentor new or less senior developers; conduct code reviews; foster an engaging, collaborative environment; share experience, knowledge, and ideas to improve processes and productivity. As a senior member of the team, you will be looked upon for guidance in helping to grow our technical knowledge base.
Required Experience:
Master's degree in Computer Science or related area of study or experience
10+ years of experience shipping high-quality user-facing products and engineering large systems.
10+ years of hands-on development experience managing all aspects of technical projects with a proven track record of delivering well architected and well written software solutions
7+ years of hands-on Python development experience.
Extensive experience designing, developing, and maintaining scalable services and APIs to support diverse business needs; cloud native development experience (AWS preferred)
Familiarity with functional programming is a bonus.
Proven expertise architecting and delivering scalable enterprise solutions leveraging AWS cloud native technologies, micro-services, and a relentless focus on performance and resilience.
We use Terraform along with technologies like AWS DynamoDB, AWS OpenSearch, AWS Neptune, AWS Lambda, AWS ECS, GitLab, and Sentry, so exposure to these is a plus.
We are open to skilled engineers with experience in other languages and equivalent tech.
You are comfortable working on a new product under fluid conditions, seamlessly balancing tactical and strategic considerations.
You measure your success in terms of business impact, not lines of code.
Strong leadership skills including vision and strategy, influencing and consensus building, communication, total quality commitment, ownership and accountability, and project management. You are often cited as the inspiration for engineers that join your team.
Strong aptitude for highly efficient data structures and algorithms; proven track record of becoming a subject matter expert in areas related to current assignments.
Ensures that software solutions remain integrated, efficient, and appropriate for a highly regulated industry; passionate, forward thinking and creative individual with high ethical standards and integrity.
Enjoy working with a diverse group of people with different areas of expertise. Engineering works closely with a variety of teams: Client Relations, Investment Operations, Portfolio Management, Sales. Our goal is to help make work flow between these different functional groups.
Benefits:
Healthcare insurance for you and your family (medical, dental, vision).
Short / Long term disability insurance.
Life Insurance.
Accidental death & disability Insurance.
401K.
3 weeks of Paid Time Off.
Support and fee coverage for immigration needs.
Remote office set up stipend.
Support for industry certifications.
Additional cash incentives.
Re-skilling opportunities to transition between technologies.
Schedule: Monday to Friday
Work Mode: Hybrid
Job Type: Full-time
We are: A high end technical consulting firm built and run by highly qualified technologists. Our workforce consists of 5000+ highly skilled professionals, with leadership from Wharton, MIT, IITs, IIMs, and NITs and decades of experience at Goldman Sachs, Morgan Stanley, MSCI, Deutsche Bank, Credit Suisse, Verizon, British Telecom, ISRO etc. Without any external funding or investments, Wissen Technology has grown its revenues by 100% every other year since it started as a subsidiary of Wissen Group in 2015. We have a global presence with offices in the US, India, UK, Australia, Mexico, and Canada.
You are: A true tech or domain ninja. Or both. Comfortable working in a quickly growing profitable startup, have a “can do” attitude and are willing to take on any task thrown your way.
You will:
Develop and promote the company's culture of engineering excellence.
Define, develop and deliver solutions at a top tier investment bank or another esteemed client.
Perform other duties as needed
Your Education and Experience:
We value candidates who can execute on our vision and help us build an industry-leading organization.
Graduate-level degree in computer science, engineering, or related technical field
Wissen embraces diversity and is an equal opportunity employer. We are committed to building a team that represents a variety of backgrounds, skills, and abilities. We believe that the more inclusive our team is, the better our work will be. All qualified applicants, including but not limited to LGBTQ+, Minorities, Females, the Disabled, and Veterans, are encouraged to apply.
About Wissen Technology:
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for diverse industries, including Banking, E-commerce, Telecom, Healthcare, Manufacturing, and Energy. We help clients build world-class products. We have offices in the US, India (Bangalore, Hyderabad, Chennai, Gurugram, Mumbai, Pune), UK, Australia, Mexico, Vietnam, and Canada.
We empower businesses with a dynamic portfolio of services and accelerators tailored to today's digital demands and based on a future-ready technology stack. Our services include Industry Leading Custom Software Development, AI-Driven Software Engineering, Generative AI & Machine Learning, Real-Time Data Analytics & Insights, Interactive Data Visualization & Decision Intelligence, Intelligent Process Automation, Multi-Cloud & Hybrid Cloud Strategies, Cross-Platform Mobile Experiences, CI/CD-Powered Agile DevOps, Automated Quality Engineering, and cutting-edge integrations.
Certified as a Great Place to Work for five consecutive years (2020-2025) and recognized as a Top 20 AI/ML vendor by CIO Insider, Wissen Group has delivered multimillion-dollar projects for over 20 Fortune 500 companies. Wissen Technology delivers exceptional value on mission-critical projects through thought leadership, ownership, and reliable, high-quality, on-time delivery.
Our industry-leading technical expertise stems from the talented professionals we attract. Committed to fostering their growth and providing top-tier career opportunities, Wissen ensures an outstanding experience and value for our clients and employees.
We Value:
Perfection: Pursuit of excellence through continuous improvement.
Curiosity: Fostering continuous learning and exploration.
Respect: Valuing diversity and mutual respect.
Integrity: Commitment to ethical conduct and transparency.
Transparency: Open communication and trust.
Website: **************
Glassdoor Reviews: *************************************************************
Wissen Thought leadership: https://**************/articles/
Latest in Wissen in CIO Insider:
**********************************************************************************************************************
Employee Speak:
***************************************************************
LinkedIn: **************************************************
About Wissen Interview Process:
https://**************/blog/we-work-on-highly-complex-technology-projects-here-is-how-it-changes-whom-we-hire/
Wissen: A Great Place to Work
https://**************/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-r-institute-india
https://**************/blog/here-is-what-ownership-and-commitment-mean-to-wissenites/
Wissen | Driving Digital Transformation
A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.
Senior Staff Software Engineer
Data engineer job in Seattle, WA
We are seeking a highly experienced Senior Staff Software Engineer to lead and deliver complex technical projects from inception to deployment. This role requires a strong background in software architecture, hands-on development, and technical leadership across the full software development lifecycle.
This role is with a fast-growing technology company pioneering AI-driven solutions for real-world infrastructure. Backed by significant recent funding and valued at over $5 billion, the company is scaling rapidly across multiple verticals, including mobility, retail, and hospitality. Its platform leverages computer vision and cloud technologies to create frictionless, intelligent experiences, positioning it as a leader in the emerging Recognition Economy, a paradigm where physical environments adapt in real time to user presence and context.
Required Qualifications
10+ years of professional software engineering experience.
Proven track record of leading and delivering technical projects end-to-end.
Strong proficiency in Java or Scala.
Solid understanding of cloud technologies (AWS, GCP, or Azure).
Experience with distributed systems, microservices, and high-performance applications.
Preferred / Bonus Skills
Advanced expertise in Scala.
Prior experience mentoring engineers and building high-performing teams.
Background spanning FAANG companies or high-growth startups.
Exposure to AI/ML or general AI technologies.
C++ Software Engineer w/ radio frequency and signal processing
Data engineer job in Everett, WA
NO SPONSORSHIP
Sr. C++ Software Systems Engineer - radio frequency and signal processing
SALARY: $165k - $205k plus 20% bonus
LOCATION: EVERETT, WA 98204 - Must live within a one-hour drive to come into the office a couple of times a month
You will bring a strong radio frequency and signal processing background, with extensive C/C++ development experience, a deep digital signal processing (DSP) and math background, and experience with radio frequency (RF) systems, Windows networking and socket programming, and embedded software.
Our solutions ensure the efficient use of frequencies and support long-distance communications, monitoring, security, and communications intelligence applications; we improve communications and protect military forces and infrastructure around the world.
This person will apply their strong radio frequency and signal processing background and software development skills to the signal detection, identification, processing, geolocation, and analysis challenges facing spectrum regulators, intelligence organizations, and defense agencies around the globe.
Perform QA testing and analysis of new hardware and software performance up to the system level. Develop automated QA test software and systems.
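To ground the signal-detection side of the role, here is a minimal Python sketch of FFT-based energy detection (production code here would be C++; the sample rate, tone frequency, and threshold are hypothetical):

```python
# Minimal DSP sketch: detect a tone in noise via FFT energy thresholding.
# Sample rate, tone frequency, and threshold are hypothetical placeholders.
import numpy as np

fs = 48_000                      # sample rate (Hz)
n = 4096
t = np.arange(n) / fs
rng = np.random.default_rng(1)
x = 0.5 * np.sin(2 * np.pi * 5_000 * t) + rng.normal(0, 1.0, n)

# Windowing reduces spectral leakage before thresholding
spectrum = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)

threshold = 20 * np.median(spectrum)   # crude noise-floor estimate
detections = freqs[spectrum > threshold]
print("detected energy near (Hz):", detections.round(0))
```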
Required Experience
US Person or Permanent Resident
Extensive experience in design, implementation and testing of complex realtime multithreaded software applications
Extensive C/C++ software development experience (6+ years)
Extensive Digital Signal Processing (DSP) and math background
Radio Frequency (RF) theory and practice (propagation, antennas, receivers, signals, systems, etc.)
RF Signals expertise, including signal modulation, demodulation, decoding and signal analysis techniques and tools
Programming for Windows operating systems
Networking and socket level programming
Databases and database programming
Ability to quickly learn and support a large existing C++ code base
System QA testing, including developing and executing test plans and writing automated QA test programs
Excellent communications skills
Ability to write technical product documentation
Preferred Knowledge, Skills, and Abilities
SIGINT/COMINT/EW experience
RF Direction Finding and Geolocation concepts, including AOA and TDOA
Mapping concepts, standards, and programming
Audio signal processing including analog and digital demodulation
Drone signals and protocols (uplink and downlink including video)
Experience operating commercial drones
Full Motion Video (FMV) systems, including STANAG 4609, KLV Metadata, MPEG-2 Transport Stream, H.264/265 encoding
Programming expertise:
Highly proficient in C/C++
Multithreaded realtime processing
Programming with Qt
Programming in Python
Embedded programming
Realtime hardware control and data acquisition
High performance graphics
GUI design and programming
Networking and socket level programming
Databases and database programming (incl. SQL)
XML and XML programming
JSON and JSON programming
API programming (developing and using)
Software licensing
AI concepts and programming
Tools:
RF Measurement equipment (VSA/spectrum analyzers, signal generators, and other electronic test equipment)
Windows OS, including desktop, server and embedded variants
Microsoft Visual Studio and TFS
Qt
Python
Intel IPP
InstallShield
Postgres and Microsoft database packages
Experience with Visual Basic, MFC, C#, WPF/XAML and other Windows development tools/API's
Linux OS
6+ years relevant work experience
MSEE (or BSEE with extended relevant work experience) with emphasis on RF communication systems, Digital Signal Processing, and software
Senior Software Engineer (Azure Databricks, DLT Pipelines, Terraform Dev, CD/CI, Data Platform) Contract at Bellevue, WA
Data engineer job in Bellevue, WA
Must Have Experience:
Hands-on experience with Azure Databricks/DLT Pipelines (Delta Live Tables)
Good programming skills - C#, Java or Python
CI/CD experience
Data platform/Data integration experience
The Role / Responsibilities
The Senior Software Engineer is a hands-on engineer who works from design through implementation of large-scale, data-centric systems for the MA Platform. This is a thought-leadership role in the Data Domain across all of the client's Analytics, with the expectation that the candidate will demonstrate and propagate best practices and processes in software development. The candidate is expected to drive work independently with minimal supervision.
• Design, code, test, and develop features to support large-scale data processing pipelines for our multi-cloud SaaS platform, with good quality, maintainability, and end-to-end ownership (a minimal DLT sketch follows below).
• Define and leverage data models to understand cost drivers and create concrete action plans that address platform data concerns.
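As referenced above, here is a minimal Delta Live Tables (DLT) sketch in Python. It only runs inside a Databricks DLT pipeline, where the `dlt` module and `spark` session are provided; the storage path, table names, and expectation rule are hypothetical.

```python
# Minimal Delta Live Tables sketch (runs inside a Databricks DLT pipeline,
# where `spark` is provided). Paths, names, and the expectation are hypothetical.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested from cloud storage via Auto Loader")
def raw_events():
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/events/"))   # hypothetical landing path

@dlt.table(comment="Validated events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # drop bad rows
def clean_events():
    return dlt.read_stream("raw_events").select(
        col("event_id"), col("event_type"), col("event_ts"))
```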
Qualifications
• 5+ years of experience in building and shipping production grade software systems or services, with one or more of the following: Distributed Systems, large-scale data processing, data storage, Information Retrieval and/or Data Mining, Machine Learning fundamentals.
• BS/MS/ in Computer Science or equivalent industry experience.
• Experience building and operating online services and fault-tolerant distributed systems at internet scale.
• Demonstrable experience shipping software, internet scale services using GraphQL/REST API(s) on Microsoft Azure and/or Amazon Web Services(AWS) cloud.
• Experience writing code in C++/C#/Java using agile and test-driven development (TDD).
• 3+ years in cloud service development - Azure or AWS services.
Preferred Qualifications
• Excellent verbal and written communications skills (to engage with both technical and non-technical stakeholders at all levels).
• Familiarity with Extract Transform Load (ETL) Pipelines, Data Modelling, Data Engineering and past ML experience is a plus.
• Experience with Databricks and/or Microsoft Fabric is an added plus.
• Hands-on experience using distributed computing platforms like Apache Spark, Apache Flink, Apache Kafka, or Azure Event Hubs.
Software Engineer
Data engineer job in Redmond, WA
Programmers.io is currently looking for a Software Engineer
Onsite Role in Redmond, Washington, United States
Full-Time Role (Open to US Citizens or Green Card Holders). NO C2C.
About the Role
We are looking for a talented Software Engineer with strong expertise in .NET, React, and Microsoft Azure to join our growing development team. You will be responsible for designing, developing, and deploying scalable, high-performance web applications and cloud-based solutions. The ideal candidate is passionate about building robust software and thrives in an agile, fast-paced environment.
Key Responsibilities
Design, develop, and maintain modern web applications using .NET (Core/6/7) and React.js.
Build and consume RESTful APIs and integrate with external services.
Develop and deploy Azure-based cloud applications using services like App Service, Functions, Storage, Service Bus, and Azure SQL.
Collaborate with cross-functional teams (Product, QA, DevOps) to deliver high-quality software solutions.
Implement best practices in coding, testing, CI/CD, and cloud deployment.
Participate in code reviews, sprint planning, and architectural discussions.
NO C2C
If you are interested, please apply or feel free to share your updated resume at ************************
DevOps Engineer
Data engineer job in Issaquah, WA
Title: GCP DevOps Engineer - AI Contact Center & UDP
W2 Contract
We are seeking a dedicated GCP DevOps Engineer to drive the automation, resilience, and secure infrastructure deployment for our AI Contact Center and Unified Data Platform (UDP). This role will focus on creating robust CI/CD pipelines, automating Infrastructure-as-Code (IaC), and ensuring a scalable, immutable, and fully auditable environment for all GCP resources.
Required: direct experience automating the deployment and configuration management of Google Contact Center AI (CCAI) or Customer Engagement Suite (CES) components (e.g., Dialogflow CX).
Key Responsibilities
Infrastructure Automation & CI/CD
Infrastructure-as-Code (IaC): Develop, maintain, and secure all GCP infrastructure (VPC, IAM, BigQuery datasets, Pub/Sub topics, Cloud Functions, GKE/Cloud Run services) using Terraform or similar IaC tools.
CI/CD Pipeline Development: Design and implement robust, fully automated CI/CD pipelines (e.g., using Cloud Build, Jenkins, or GitLab) for deploying Google CES configurations (Dialogflow code, bot logic), custom integration services, and UDP data pipelines.
Deployment Strategy: Implement blue/green or canary deployment strategies to minimize downtime and ensure reliable rollouts of new contact center features and AI model updates.
Data Platform (UDP) Automation & Reliability
Data Pipeline Automation: Collaborate with Data Engineers to automate the provisioning and deployment of high-volume, real-time data ingestion pipelines (using Pub/Sub, Dataflow, and BigQuery) that power the UDP.
Configuration Management: Automate the configuration and maintenance of cloud resources that support call routing, call setup, and channel integration logic to ensure consistency across environments.
Scalability Management: Implement auto-scaling solutions for core contact center backend services and data processing components to handle peak call volumes and sudden spikes in data ingestion.
Security & Governance
Policy-as-Code: Implement and enforce security policies and best practices (e.g., least-privilege IAM, network restrictions) using tools like Forseti or native GCP Policy Constraints.
Artifact Management: Establish and manage artifact repositories (e.g., Artifact Registry) for all code, container images, and deployment artifacts.
Monitoring & Observability: Integrate pipeline deployments with monitoring and observability tools, ensuring automated logging and alerting for failure points within Verint WFM integration and Salesforce channel integration components.
Google CES & CCAI - Deployment & Configuration
Deploy and configure Google CES and CCAI components (Dialogflow CX, Agent Assist, CCAI Insights).
Build and manage conversational flows, intents, entities, and virtual agents.
Set up integrations with telephony systems and contact center platforms.
Google CES & CCAI - System Integration
Integrate CCAI with CRMs (Salesforce, ServiceNow, Zendesk).
Configure data pipelines using Pub/Sub, Cloud Functions, BigQuery, and Cloud Storage (see the sketch after this list).
Support telephony routing, IVR flows, and cloud contact center integrations.
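As referenced above, here is a minimal sketch of one such pipeline hop: a Python Cloud Function that decodes a Pub/Sub CloudEvent and appends a row to BigQuery. The project, dataset, and table names are hypothetical.

```python
# Minimal Cloud Function sketch: decode a Pub/Sub event and append a row
# to BigQuery. Project, dataset, and table names are hypothetical.
import base64
import json

import functions_framework
from google.cloud import bigquery

TABLE_ID = "my-project.udp.call_events"  # hypothetical table
bq = bigquery.Client()

@functions_framework.cloud_event
def ingest_call_event(cloud_event):
    payload = base64.b64decode(cloud_event.data["message"]["data"])
    row = json.loads(payload)
    errors = bq.insert_rows_json(TABLE_ID, [row])  # streaming insert
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```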
Google CES & CCAI - Operations & Management
Monitor system performance, availability, and error logs.
Manage production workloads, releases, and platform updates.
Troubleshoot CCAI/CES issues and ensure stable operations.
Required Skills and Experience
12+ years of experience in a DevOps or SRE role, with a strong focus on Google Cloud Platform (GCP).
Expert-level proficiency with Terraform or similar IaC tools for managing complex GCP environments.
Deep practical experience designing and implementing CI/CD pipelines (e.g., Cloud Build, Jenkins).
Solid experience containerizing applications and managing orchestration platforms (e.g., Cloud Run or GKE).
Strong scripting skills in Python or Go for automation tasks and custom integration services.
Familiarity with the challenges and requirements of deploying software in a highly-regulated environment like a Contact Center, including real-time systems like call routing.
Software Engineer
Data engineer job in Redmond, WA
Are you an experienced Software Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Software Engineer to work at their company in Redmond, WA.
The main function of a Lab/Test Engineer at this level is to apply configuration skills at an intermediate to high level. The Test Engineer will analyze, design, and develop test plans and should be familiar with at least one programming language. We're on the lookout for a contract Engineer with extensive experience in configuring and testing hardware devices across Windows Server and Ubuntu Server platforms. The ideal candidate will not only be technically adept but also possess strong analytical skills, capable of producing comprehensive and detailed reports. Proficiency in scripting languages is essential. The role involves deploying and managing test machines, refining test plans, executing test cases, performing hardware diagnostics, troubleshooting issues, and collaborating closely with the development team to advance the functionality of hardware systems. Experience with CI/CD pipelines and with C++ and Rust development will be considered a significant asset.
Primary Responsibilities/Accountabilities:
Perform repeatable testing procedures and processes.
Verify triggers, stored procedures, referential integrity, hardware product or system specifications.
Interpret and modify code as required which may include C/C++, C#, batch files, make files, Perl scripts, queries, stored procedures and/or triggers.
Identifies and defines project team quality and risk metrics.
Provides assistance to other testers.
Designs and develops robust automated test harnesses with a focus on Application/System/Inter-System level issues.
Perform job functions within the scope of application/system performance, threading issues, bottleneck identification, writing small footprint and less intrusive code for critical code testing, tackling system/application intermittent failures, etc.
Purpose of the Team: The purpose of this team is to focus on security hardware and intellectual property. Their work is primarily open source, with some potential for internal code review.
Key projects: This role will contribute to supporting development and testing for technologies deployed in the Azure fleet.
Typical task breakdown and operating rhythm: The role will consist of 10% meetings, 10% reporting, and 80% heads down (developing and testing).
Qualifications:
Years of Experience Required: 8-10+ overall years of experience in the field.
Degrees or certifications required: N/A
Best vs. Average: The ideal resume would contain Rust experience and experience with open-source projects.
Performance Indicators: Performance will be assessed based on quality of work, meeting deadlines, and flexibility.
Minimum 8+ years of test experience with data center/server hardware.
Minimum 8+ years of development experience with C++ (and Python).
Minimum 2+ years of experience with CI/CD and ADO pipelines.
Software testing experience in Azure Cloud/Windows/Linux server environments required.
Ability to read and write at least one programming language such as C#, C/C++, or SQL; Rust is a plus!
Knowledge of software quality assurance practices, with strong testing aptitude.
Knowledge of personal computer hardware is required as is knowledge of deploying and managing hosts and virtual test machines
Knowledge of internet protocols and networking fundamentals preferred.
Must have a solid understanding of the software development cycle.
Demonstrated project management ability required.
Experience with CI/CD pipelines
Bachelor's degree in Computer Science required and some business/functional knowledge and/or industry experience preferred.
Preferred:
Database programming experience, i.e. SQL Server, Sybase, Oracle, Informix and/or DB2 may be required.
Software testing experience in a Web-based or Windows client/server environment required.
Development and/or database administration experience with one of these products is required.
Beyond Trust Engineer - PAM
Data engineer job in Seattle, WA
Title: Beyond Trust Engineer - PAM
Job Type: Temporary Assignment
Work Type: Hybrid
Duration: 6 Months
Pay rate: $70.00/hr.
TekWissen is a global workforce management provider headquartered in Ann Arbor, Michigan, that offers strategic talent solutions to clients worldwide. The job opportunity below is with one of our clients, a fashion specialty retailer founded on a simple idea: offer each customer the best possible service, quality, value, and selection. We are looking for an individual to provide specialized Information Technology support for our strategic business partners within the client's Corporate Center.
Job Description:
As a PAM Platform Engineer on Client's Identity & Access Management team, you'll be a key technical specialist responsible for designing, implementing, and maintaining our enterprise-wide Privileged Access Management infrastructure using BeyondTrust.
You'll lead the rollout of BeyondTrust and support ongoing management of our privileged access solutions, including password management, endpoint privilege management, and session management capabilities across our retail technology ecosystem.
A day in the life:
PAM Platform Leadership: Serve as the primary technical expert for privileged access management solutions, including architecture, deployment, configuration, and optimization of password vaults and endpoint privilege management systems
Enterprise PAM Implementation: Design and execute large-scale PAM deployments across Windows, macOS, and Linux environments, ensuring seamless integration with existing infrastructure
Policy Development & Management: Create and maintain privilege elevation policies, credential rotation schedules, access request workflows, and governance rules aligned with security and compliance requirements
Integration & Automation: Integrate PAM solutions with ITSM platforms, SIEM tools, vulnerability scanners, directory services, and other security infrastructure to create comprehensive privileged access workflows
Troubleshooting & Support: Provide expert-level technical support for PAM platform issues, performance optimization, privileged account onboarding, and user access requests
Security & Compliance: Ensure PAM implementations meet PCI DSS and other compliance requirements through proper audit trails, session recording and monitoring, and privileged account governance
Documentation & Training: Develop technical documentation, procedures, and training materials for internal teams and end users
Continuous Improvement: Monitor platform performance, evaluate new features, and implement best practices to enhance security posture and operational efficiency
Skills:
4-6+ years of hands-on experience implementing and managing enterprise PAM platforms such as CyberArk, BeyondTrust, Delinea (Thycotic) in large-scale environments
Vendor certifications in one or more major PAM platforms (CyberArk Certified Delivery Engineer, BeyondTrust Certified Implementation Engineer, Delinea certified professional, etc.) preferred
Deep expertise in privileged account discovery, credential management, password rotation, session management, and access request workflows using enterprise PAM solutions
Strong understanding of Windows Server administration, Active Directory, Group Policy, and PowerShell scripting
Experience with Linux/Unix system administration and shell scripting for cross-platform PAM deployments
Knowledge of networking fundamentals including protocols, ports, certificates, load balancing, and security hardening
Experience with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes)
Understanding of identity and access protocols (SAML, OIDC, OAuth, SCIM, LDAP) and their integration with PAM solutions
Technical Skills:
PAM Platforms: Experience with major vendors (CyberArk Privileged Access Security, BeyondTrust Password Safe/EPM, Delinea Secret Server/Privilege Manager, Ping Identity PingOne Protect)
Operating Systems: Windows Server (2016/2019/2022), Windows 10/11, macOS, RHEL, Ubuntu, SUSE
Databases: SQL Server, MySQL, PostgreSQL, Oracle for PAM backend configuration
Virtualization: VMware vSphere, Hyper-V, cloud-based virtual machines
Scripting: PowerShell, Bash, Python for automation and integration tasks (a short Python sketch follows this list)
Security Tools: Integration experience with vulnerability scanners, endpoint detection tools, and identity governance platforms
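The scripting expectation above usually cashes out as small glue jobs against the PAM platform's REST API. Below is a minimal Python sketch of one such job, requesting an on-demand credential rotation; the base URL, the /managed-accounts/{id}/rotate route, and the account name are hypothetical placeholders, not actual BeyondTrust or CyberArk endpoints, so consult the vendor's API reference before adapting it.

```python
import os
import requests

# Hypothetical PAM REST endpoint; a placeholder, not a real vendor API.
PAM_BASE_URL = os.environ.get("PAM_BASE_URL", "https://pam.example.com/api/v1")
API_TOKEN = os.environ["PAM_API_TOKEN"]


def rotate_credential(account_id: str) -> dict:
    """Request an immediate password rotation for a managed account."""
    resp = requests.post(
        f"{PAM_BASE_URL}/managed-accounts/{account_id}/rotate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # "svc-retail-db-01" is an illustrative service-account name.
    print(rotate_credential("svc-retail-db-01"))
```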
Preferred Qualifications:
Experience with multiple PAM vendors and platform migration/integration projects
Knowledge of DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, Ansible)
Familiarity with ITSM integration (ServiceNow, Jira) for ticket-driven privileged access workflows
Experience with SIEM integration and security monitoring platforms (Splunk, QRadar, etc.)
Understanding of zero trust architecture and least privilege access principles
Experience with secrets management platforms (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault); a minimal Vault example follows this list
Previous experience in retail technology environments or large-scale enterprise deployments
Industry certifications such as CISSP, CISM, or relevant cloud security certifications.
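For the secrets-management item above, the usual pattern is to authenticate once and pull credentials at run time rather than baking them into scripts. Here is a minimal sketch using the real hvac client library for HashiCorp Vault's KV v2 engine; the server address, token source, and retail-app/db path are illustrative assumptions.

```python
import os
import hvac  # HashiCorp Vault client: pip install hvac

# Address, token, and secret path are illustrative assumptions.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ["VAULT_TOKEN"],
)
assert client.is_authenticated()

# Read the latest version of a KV v2 secret (default mount point "secret").
secret = client.secrets.kv.v2.read_secret_version(path="retail-app/db")
db_password = secret["data"]["data"]["password"]
```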
Education:
Bachelor's degree or relevant experience.
TekWissen Group is an equal opportunity employer supporting workforce diversity.
Firmware Software Engineer IV - AOSP
Data engineer job in Redmond, WA
Minimum qualifications
3+ years of work experience with C/C++/C#
3+ years of work experience in AOSP development
2+ years of experience with Java (or Kotlin)
Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
Preferred qualifications
2+ years experience developing software for games, autonomous vehicles, robotics or other high performance real-time environments
Experience with high-bandwidth communication
Experience with camera integration
Experience with low level firmware and RTOS
Senior Adobe Creative Automation & Generative AI Developer
Data engineer job in Redmond, WA
HCLTech is looking for a highly talented and self-motivated Senior Adobe Creative Automation & Generative AI Developer to join it in advancing the technological world through innovation and creativity.
Job Title: Senior Adobe Creative Automation & Generative AI Developer
Job ID: 1634313BR
Position Type: Full-time
Location: Redmond, WA
Role/Responsibilities
Architect and develop end-to-end creative automation workflows that unify Photoshop, Firefly, Gen AI Studio, Adobe Express, and AEM Assets for seamless creative production and delivery.
Build programmatic image editing solutions using Adobe Photoshop APIs for resizing, cropping, color correction, smart object manipulation, and automated brand asset application.
Utilize Adobe Firefly and Gen AI Studio for generative image creation, branded template generation, and automated content variation aligned with campaign and brand standards.
Develop and fine-tune custom Firefly models for domain-specific creative needs - including brand-aligned generative imagery, typography styles, and text-to-image workflows trained on internal creative datasets.
Integrate custom AI and computer vision models (TensorFlow, PyTorch, OpenCV) with Firefly and Photoshop workflows for background removal, intelligent tagging, object detection, and brand compliance validation.
Build high-volume batch automation pipelines for image transformation, compression, metadata enrichment, and version control - combining Adobe APIs with external tools like ImageMagick and Azure Functions.
Enable seamless AEM integration to manage asset ingestion, versioning, tagging, delivery, and AI-enriched metadata synchronization.
Implement AI-driven validation processes to ensure creative assets adhere to brand guidelines (color, font, layout, imagery); a minimal color-compliance sketch follows this list.
Develop dynamic creative personalization workflows powered by Firefly and Adobe Target to support adaptive marketing campaigns.
Collaborate with designers, marketers, and content operations teams to translate creative needs into scalable technical automation modules.
Stay at the forefront of Adobe AI innovation, exploring new capabilities in Firefly, Gen AI Studio, and Sensei, and recommend ways to extend them with custom model development.
Mentor team members on Adobe APIs, automation patterns, and responsible AI best practices in creative workflows.
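One slice of the brand-compliance validation above can be approximated with plain computer vision before any generative model is involved: measure how much of an asset's pixel area sits near the brand palette. A minimal OpenCV/NumPy sketch follows; the palette values, tolerance, 60% threshold, and banner.png file name are all assumptions for illustration.

```python
import cv2  # pip install opencv-python
import numpy as np

# Hypothetical two-color brand palette in BGR order (an assumption).
BRAND_PALETTE_BGR = np.array([[34, 31, 228], [255, 255, 255]], dtype=np.int32)


def palette_coverage(image_path: str, tolerance: int = 40) -> float:
    """Fraction of pixels within `tolerance` (per channel) of any brand color."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    pixels = img.reshape(-1, 3).astype(np.int32)
    # Chebyshev distance from every pixel to every palette color.
    dists = np.abs(pixels[:, None, :] - BRAND_PALETTE_BGR[None, :, :]).max(axis=2)
    return float((dists.min(axis=1) <= tolerance).mean())


# Flag assets whose palette coverage falls below an assumed 60% threshold.
if palette_coverage("banner.png") < 0.6:
    print("Asset likely off-brand: review before publishing")
```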
Qualifications & Experience
Minimum Requirements
Proven experience with Adobe Photoshop APIs, Firefly APIs, Gen AI Studio, and Adobe Express APIs.
Strong hands-on expertise in Adobe AEM Assets for asset management, metadata automation, and content delivery integration.
Experience in custom Firefly model development and fine-tuning, including data preparation, prompt optimization, and integration into creative pipelines.
Proficiency in Python, Node.js, or JavaScript for building Adobe API integrations and automation workflows (a short integration sketch follows this list).
Deep understanding of computer vision and machine learning frameworks (TensorFlow, PyTorch) for creative asset tagging and generation.
Familiarity with Adobe Sensei and AI-powered automation features (auto-tagging, smart cropping, personalization).
Experience designing scalable image processing pipelines that enforce brand compliance and creative quality at volume.
Strong analytical and problem-solving skills with the ability to bridge technical and creative disciplines.
Excellent communication and collaboration skills to work with cross-functional creative, marketing, and engineering teams.
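Adobe's imaging services are generally asynchronous: you submit a job referencing cloud-hosted input and output, then poll a status URL. The sketch below shows that generic submit-and-poll pattern in Python; the base URL, route, header values, and payload fields are hypothetical stand-ins, not Adobe's actual Photoshop API schema, so check the official docs before adapting it.

```python
import time
import requests

# All URLs, routes, and payload fields below are hypothetical stand-ins;
# consult Adobe's Photoshop/Firefly Services docs for the real schema.
API_BASE = "https://image-api.example.com/v1"
HEADERS = {"Authorization": "Bearer <access-token>", "x-api-key": "<client-id>"}


def submit_resize_job(src_url: str, dst_url: str, width: int) -> str:
    """Submit an asynchronous edit job and return its status URL."""
    resp = requests.post(
        f"{API_BASE}/edits/resize",
        headers=HEADERS,
        json={"input": src_url, "output": dst_url, "width": width},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["statusUrl"]


def wait_for_job(status_url: str, poll_seconds: float = 2.0) -> dict:
    """Poll until the job leaves the running state (the typical async pattern)."""
    while True:
        status = requests.get(status_url, headers=HEADERS, timeout=30).json()
        if status.get("state") != "running":
            return status
        time.sleep(poll_seconds)
```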
Desired Qualifications
Prior experience implementing custom generative AI solutions within the Adobe ecosystem.
Knowledge of responsible AI development practices, including data curation, risk mitigation, and brand safety in generative workflows.
Familiarity with Adobe Target and Adobe Analytics for creative optimization and performance measurement.
Experience managing large-scale creative libraries in AEM with AI-enriched metadata for search and retrieval.
Background in digital marketing automation, creative operations, or AI-driven personalization.
Pay and Benefits
Pay Range Minimum: $50.00 per hour
Pay Range Maximum: $52.00 per hour
HCLTech is an equal opportunity employer, committed to providing equal employment opportunities to all applicants and employees regardless of race, religion, sex, color, age, national origin, pregnancy, sexual orientation, physical disability or genetic information, military or veteran status, or any other protected classification, in accordance with federal, state, and/or local law. Should any applicant have concerns about discrimination in the hiring process, they should provide a detailed report of those concerns to ****************** for investigation.
A candidate's pay within the range will depend on their skills, experience, education, and other factors permitted by law. This role may also be eligible for performance-based bonuses subject to company policies. In addition, this role is eligible for the following benefits subject to company policies: medical, dental, vision, pharmacy, life, accidental death & dismemberment, and disability insurance; employee assistance program; 401(k) retirement plan; 10 days of paid time off per year (some positions are eligible for need-based leave with no designated number of leave days per year); and 10 paid holidays per year
How You'll Grow
At HCLTech, we offer continuous opportunities for you to find your spark and grow with us. We want you to be happy and satisfied with your role and to really learn what type of work sparks your brilliance the best. Throughout your time with us, we offer transparent communication with senior-level employees, learning and career development programs at every level, and opportunities to experiment in different roles or even pivot industries. We believe that you should be in control of your career with unlimited opportunities to find the role that fits you best.
Data Scientist
Data engineer job in Olympia, WA
Job Description: AI Task Evaluation & Statistical Analysis Specialist
Role Overview:
We're seeking a data-driven analyst to conduct comprehensive failure analysis on AI agent performance across finance-sector tasks. You'll identify patterns, root causes, and systemic issues in our evaluation framework by analyzing task performance across multiple dimensions (task types, file types, criteria, etc.).
Key Responsibilities:
- Statistical Failure Analysis: Identify patterns in AI agent failures across task components (prompts, rubrics, templates, file types, tags)
- Root Cause Analysis: Determine whether failures stem from task design, rubric clarity, file complexity, or agent limitations
- Dimension Analysis: Analyze performance variations across finance sub-domains, file types, and task categories
- Reporting & Visualization: Create dashboards and reports highlighting failure clusters, edge cases, and improvement opportunities
- Quality Framework: Recommend improvements to task design, rubric structure, and evaluation criteria based on statistical findings
- Stakeholder Communication: Present insights to data labeling experts and technical teams
Required Qualifications:
- Statistical Expertise: Strong foundation in statistical analysis, hypothesis testing, and pattern recognition (a minimal pandas/scipy sketch follows this posting)
- Programming: Proficiency in Python (pandas, scipy, matplotlib/seaborn) or R for data analysis
- Data Analysis: Experience with exploratory data analysis and creating actionable insights from complex datasets
- AI/ML Familiarity: Understanding of LLM evaluation methods and quality metrics
- Tools: Comfortable working with Excel, data visualization tools (Tableau/Looker), and SQL
Preferred Qualifications:
- Experience with AI/ML model evaluation or quality assurance
- Background in finance or willingness to learn finance domain concepts
- Experience with multi-dimensional failure analysis
- Familiarity with benchmark datasets and evaluation frameworks
- 2-4 years of relevant experience
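As context for the statistical work above, failure analysis typically starts with failure rates sliced by one dimension at a time, plus an independence test on the contingency table. A minimal pandas/scipy sketch, assuming a hypothetical eval_results.csv log with task_type, file_type, and passed columns (not the team's actual schema):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical evaluation log: one row per task attempt.
df = pd.read_csv("eval_results.csv")  # columns: task_type, file_type, passed

# Failure rate sliced by one dimension at a time.
failure_by_type = 1 - df.groupby("task_type")["passed"].mean()
print(failure_by_type.sort_values(ascending=False))

# Is failure independent of file type? Chi-square test on the contingency table.
table = pd.crosstab(df["file_type"], df["passed"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
if p < 0.05:
    print("Failure rate varies significantly across file types")
```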
Databricks Engineer
Data engineer job in Seattle, WA
5+ years of experience in data engineering or similar roles.
Strong expertise in Databricks, Apache Spark, and PySpark.
Proficiency in SQL, Python, and data modeling concepts.
Experience with cloud platforms (Azure preferred; AWS/GCP is a plus).
Knowledge of Delta Lake, Lakehouse architecture, and partitioning strategies.
Familiarity with data governance, security best practices, and performance tuning.
Hands-on experience with version control (Git) and CI/CD pipelines.
Roles & Responsibilities:
Design and develop ETL/ELT pipelines using Azure Databricks and Apache Spark (a minimal PySpark sketch follows this list).
Integrate data from multiple sources into the data lake and data warehouse environments.
Optimize data workflows for performance and cost efficiency in cloud environments (Azure/AWS/GCP).
Implement data quality checks, monitoring, and alerting for pipelines.
Collaborate with data scientists and analysts to provide clean, curated datasets.
Ensure compliance with data governance, security, and privacy standards.
Automate workflows using CI/CD pipelines and orchestration tools (e.g., Airflow, Azure Data Factory).
Troubleshoot and resolve issues in data pipelines and platform components.
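To make the pipeline work above concrete, here is a minimal PySpark sketch of the pattern described: land raw JSON, apply a simple quality gate, and append to a partitioned Delta table. The storage paths, column names, and reject handling are illustrative assumptions; on Databricks, the spark session already exists.

```python
from pyspark.sql import SparkSession, functions as F

# Building the session keeps the sketch self-contained outside Databricks.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical landing-zone path.
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/orders/")

# Basic quality gate: drop rows missing keys, report the rejects for monitoring.
clean = raw.filter(F.col("order_id").isNotNull() & F.col("amount").isNotNull())
rejected = raw.count() - clean.count()
if rejected > 0:
    print(f"quality check: {rejected} rows rejected")

# Curated Delta table, partitioned by ingest date for partition pruning.
(clean.withColumn("ingest_date", F.current_date())
      .write.format("delta")
      .mode("append")
      .partitionBy("ingest_date")
      .save("abfss://curated@account.dfs.core.windows.net/orders/"))
```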
TCS Employee Benefits Summary:
Discretionary Annual Incentive.
Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
Family Support: Maternal & Parental Leaves.
Insurance Options: Auto & Home Insurance, Identity Theft Protection.
Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
Time Off: Vacation, Sick Leave & Holidays.
Legal & Financial Assistance: Legal Assistance, 401(k) Plan, Performance Bonus, College Fund, Student Loan Refinancing.
#LI-RJ2
Salary Range - $100,000-$140,000 a year