Big Data Developer
Data engineer job at People Tech Group
People Tech Group Inc. is a privately held company that does business across the globe. The company was re-formed in 2005 and is headquartered in Bellevue, Washington, with offices in Chicago and Pittsburgh in the USA and multiple offices in India. People Tech Group was created from the owner's vision to deliver cost-effective, full-cycle services and solutions to clients.
Our vision is to continually refine key capabilities based on market insight, client needs, industry trends and the strategic direction of enterprise solutions. This approach has led us to provide an integrated set of services that combines industry-based consulting, top-tier managed services, and advanced technology services.
Job Description
Hands-on experience with Spark, Phoenix, Kafka, and HBase on a large-scale production project (data volumes of at least several hundred TB); see the brief sketch below.
At least 3 years of experience with Big Data in a production environment on a large-scale Big Data project.
Should be familiar with Azure HDInsight and how to deploy and execute Big Data jobs on Azure.
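Purely as an illustration of the Spark-plus-Kafka stack named above (not part of the posting itself), here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and appends it to Parquet. The broker address, topic name, and paths are placeholders, and the spark-sql-kafka connector is assumed to be available on the cluster (e.g., HDInsight).

```python
# Minimal PySpark Structured Streaming sketch: read from Kafka, write Parquet.
# Broker address, topic name, and output paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-to-parquet-sketch")
         .getOrCreate())

# Read the raw Kafka stream; requires the spark-sql-kafka connector on the cluster.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1:9092")   # placeholder broker
       .option("subscribe", "events")                        # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Kafka keys/values arrive as bytes; cast to string for downstream parsing.
events = raw.select(col("key").cast("string"),
                    col("value").cast("string"),
                    col("timestamp"))

# Continuously append micro-batches to Parquet, with checkpointing for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/events/parquet")             # placeholder path
         .option("checkpointLocation", "/data/events/_chk")  # placeholder path
         .outputMode("append")
         .start())

query.awaitTermination()
```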
Qualifications
Bachelor's degree required
Additional Information
All your information will be kept confidential according to EEO guidelines.
System Development Engineer, STRADA - Service Transformation and Deployment Automation
Seattle, WA jobs
AWS Infrastructure Services owns the design, planning, delivery, and operation of all AWS global infrastructure. In other words, we're the people who keep the cloud running. We support all AWS data centers and all of the servers, storage, networking, power, and cooling equipment that ensure our customers have continual access to the innovation they rely on. We work on the most challenging problems, with thousands of variables impacting the supply chain - and we're looking for talented people who want to help.
You'll join a diverse team of software, hardware, and network engineers, supply chain specialists, security experts, operations managers, and other vital roles. You'll collaborate with people across AWS to help us deliver the highest standards for safety and security while providing seemingly infinite capacity at the lowest possible cost for our customers. And you'll experience an inclusive culture that welcomes bold ideas and empowers you to own them to completion.
Do you like helping U.S. Intelligence Community agencies implement innovative cloud computing solutions and solve technical problems? Would you like to do this using the latest cloud computing technologies? Do you have a knack for helping these groups understand application architectures and integration approaches, and the consultative and leadership skills to launch a project on a trajectory to success?
Amazon is seeking a Linux/Unix Systems Administrator/Engineer with the ability to automate day-to-day tasks and develop/build software and/or services from the ground up. A good candidate must have strong Linux/Unix systems administration knowledge, including shell scripting, and proficiency in at least one development language. They must be able to think at “Amazon scale” to solve problems in permanent, sustainable, and scalable ways, with an eye toward the best solution over the quickest solution.
At Amazon scale, Network Engineers rely on an ever-increasing number of tools to manage thousands of network devices that support AWS services. Our Systems Team manages the hundreds of tools/services and components that the Network Engineers rely on to keep the network operational. This includes systems that track loss and incident correlation, scaling and building of new and existing network devices, and a full suite of monitoring tools. Many of these tools provide integrations with one another to give both a broad and an in-depth view of the status of the network and aid in troubleshooting.
This position requires that the candidate selected be a US Citizen and must currently possess and maintain an active TS/SCI security clearance with polygraph.
Key job responsibilities
The Systems Development Engineers on the team are responsible for maintaining the network tools/systems described above within US GovCloud and other US Government air gapped regions. This includes troubleshooting problems with systems and services, regular deployment of new versions of the systems and their subcomponents, deployment/system validation and testing, service monitoring, standing up new services/tools, etc. The team works with many different internal Software Development teams to drive improvement of the systems/services within the team's scope. It is important to be able to work collaboratively and independently to investigate and document issues and create solutions to solve them at scale.
Do you:
Calmly and quickly diagnose and fix critical systems failures in high pressure situations?
Manage and grow innovative, production-quality tools to solve real operational problems, in Python, Perl, Ruby, Shell, Java, etc.?
Investigate complicated technical issues scientifically and thoroughly, and assist in fixing them so they don't come back?
Understand how a modern, cloud-hosted application stack works from top to bottom?
Know how to provide technical solutions to real business problems in a global organization?
If you're a customer-focused System/DevOps Engineer who would like to contribute to a critical success story, we would love to hear from you! Over a million customers rely on the Amazon Web Services (AWS) network for building and running their applications and businesses. Our customers' success depends on our world-class network infrastructure, and our network depends on our systems team!
Physical requirements:
- Must be able to work in a 24x7 team on-call rotation, with the ability to drive into the workplace for critical events/needs.
- The ability to sit in front of a computer during scheduled work hours with appropriate breaks while maintaining a high level of alertness and attention to detail.
- Travel to data center/systems sites and Amazon/customer offices as needed.
- Experience dealing with customers during problem resolution and operating efficiently under pressure.
About the team
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness.
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
BASIC QUALIFICATIONS
- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language
- Current, active US Government Security Clearance of TS/SCI with Polygraph
PREFERRED QUALIFICATIONS
- 3+ years of experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent
Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit ********************************************************* for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $129,300/year in our lowest geographic market up to $223,600/year in our highest geographic market. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience. Amazon is a total compensation company. Dependent on the position offered, equity, sign-on payments, and other forms of compensation may be provided as part of a total compensation package, in addition to a full range of medical, financial, and/or other benefits. For more information, please visit ******************************************************** This position will remain posted until filled. Applicants should apply via our internal or external career site.
DevOps Engineer
Seattle, WA jobs
A globally leading consumer device company based in Seattle, WA is looking for a DevOps Engineer, Cloud Infrastructure to join their dynamic team!
Job Responsibilities:
• Manage Kubernetes clusters by performing improvements and regular maintenance. Perform cloud infrastructure operational tasks.
• Perform Database administration tasks including migration, instrumenting of telemetry, performance monitoring, cost monitoring and consolidation.
• Deliver CI/CD-focused projects, including implementing features for GitOps and software lifecycle tooling and processes.
Required Skills:
5 years of relevant experience
3-5 years of experience with Kubernetes (configuration, operations, deployment) and related technologies: Helm, ArgoCD, GitOps
3-5 years of experience with AWS and database administration: Postgres, RDS (e.g., Aurora)
3 years of experience with Python and Bash scripting (see the brief sketch after this list)
Operational experience with cloud-based service infrastructure (DNS, load balancing, ingress, telemetry, and logging)
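As a hedged illustration of the Kubernetes-plus-Python skills listed above, the sketch below uses the official kubernetes Python client to flag Deployments whose available replicas fall short of the desired count. A local kubeconfig and the namespace name are assumptions, not details from the posting.

```python
# Sketch: list Deployments in a namespace and report any with unavailable replicas.
# Assumes a local kubeconfig and `pip install kubernetes`; namespace is a placeholder.
from kubernetes import client, config


def report_unhealthy_deployments(namespace: str = "default") -> list[str]:
    config.load_kube_config()   # use config.load_incluster_config() inside a cluster
    apps = client.AppsV1Api()
    unhealthy = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        if available < desired:
            unhealthy.append(f"{dep.metadata.name}: {available}/{desired} available")
    return unhealthy


if __name__ == "__main__":
    for line in report_unhealthy_deployments("default"):
        print(line)
```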
Type: Contract
Duration: 9 months with extension
Work Location: Seattle, WA (Remote)
Pay range: $74.00 - $89.00 (DOE)
BeyondTrust Engineer
Seattle, WA jobs
PAM Platform Leadership: Serve as the primary technical expert for privileged access management solutions, including architecture, deployment, configuration, and optimization of password vaults and endpoint privilege management systems
Enterprise PAM Implementation: Design and execute large-scale PAM deployments across Windows, macOS, and Linux environments, ensuring seamless integration with existing infrastructure
Policy Development & Management: Create and maintain privilege elevation policies, credential rotation schedules, access request workflows, and governance rules aligned with security and compliance requirements
Integration & Automation: Integrate PAM solutions with ITSM platforms, SIEM tools, vulnerability scanners, directory services, and other security infrastructure to create comprehensive privileged access workflows
Troubleshooting & Support: Provide expert-level technical support for PAM platform issues, performance optimization, privileged account onboarding, and user access requests
Security & Compliance: Ensure PAM implementations meet PCI DSS and other compliance requirements through proper audit trails, session recording and monitoring, and privileged account governance
Documentation & Training: Develop technical documentation, procedures, and training materials for internal teams and end users
Continuous Improvement: Monitor platform performance, evaluate new features, and implement best practices to enhance security posture and operational efficiency
Required Experience:
4-6+ years of hands-on experience implementing and managing enterprise PAM platforms such as CyberArk, BeyondTrust, Delinea (Thycotic) in large-scale environments
Vendor certifications in one or more major PAM platforms (CyberArk Certified Delivery Engineer, BeyondTrust Certified Implementation Engineer, Delinea certified professional, etc.) preferred
Deep expertise in privileged account discovery, credential management, password rotation, session management, and access request workflows using enterprise PAM solutions
Strong understanding of Windows Server administration, Active Directory, Group Policy, and PowerShell scripting
Experience with Linux/Unix system administration and shell scripting for cross-platform PAM deployments
Knowledge of networking fundamentals including protocols, ports, certificates, load balancing, and security hardening
Experience with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes)
Understanding of identity and access protocols (SAML, OIDC, OAuth, SCIM, LDAP) and their integration with PAM solutions
Technical Skills:
PAM Platforms: Experience with major vendors (CyberArk Privileged Access Security, BeyondTrust Password Safe/EPM, Delinea Secret Server/Privilege Manager, Ping Identity PingOne Protect)
Operating Systems: Windows Server (2016/2019/2022), Windows 10/11, macOS, RHEL, Ubuntu, SUSE
Databases: SQL Server, MySQL, PostgreSQL, Oracle for PAM backend configuration
Virtualization: VMware vSphere, Hyper-V, cloud-based virtual machines
Scripting: PowerShell, Bash, Python for automation and integration tasks
Security Tools: Integration experience with vulnerability scanners, endpoint detection tools, and identity governance platforms
Preferred Qualifications:
Experience with multiple PAM vendors and platform migration/integration projects
Knowledge of DevOps practices, CI/CD pipelines, and Infrastructure as Code (Terraform, Ansible)
Familiarity with ITSM integration (ServiceNow, Jira) for ticket-driven privileged access workflows
Experience with SIEM integration and security monitoring platforms (Splunk, QRadar, etc.)
Understanding of zero trust architecture and least privilege access principles
Experience with secrets management platforms (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault); a brief illustrative sketch follows this list
Previous experience in retail technology environments or large-scale enterprise deployments
Industry certifications such as CISSP, CISM, or relevant cloud security certifications
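To make the secrets-management item above concrete, here is a minimal boto3 sketch against AWS Secrets Manager (one of the platforms listed). The secret name is a placeholder, and an attached rotation Lambda is assumed to already be configured.

```python
# Sketch: fetch a credential from AWS Secrets Manager and trigger its rotation.
# Secret name is a placeholder; rotate_secret assumes a rotation Lambda is attached.
import json
import boto3

secrets = boto3.client("secretsmanager")


def get_credential(secret_id: str) -> dict:
    """Return the secret payload as a dict (SecretString stored as JSON by convention)."""
    resp = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])


def rotate_now(secret_id: str) -> str:
    """Kick off an immediate rotation and return the new rotation's VersionId."""
    resp = secrets.rotate_secret(SecretId=secret_id)
    return resp["VersionId"]


if __name__ == "__main__":
    cred = get_credential("prod/app/db-admin")   # placeholder secret name
    print("username:", cred.get("username"))
    print("rotation version:", rotate_now("prod/app/db-admin"))
```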
Firmware Software Engineer IV
Redmond, WA jobs
Immediate need for a talented Firmware Software Engineer IV. This is a 12-month opportunity with long-term potential and is located in Redmond, WA (Onsite). Please review the job description below and contact me ASAP if you are interested.
Job Diva ID: 25-94264
Pay Range: $85- $90 /hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Key skills: "MIPI", "Firmware", "C", "Camera"
Develop firmware to integrate custom image sensors with an MCU
8 years' experience in firmware or embedded software development in C/C++.
Familiarity with MIPI C-PHY and image sensors.
Experience with Zephyr OS, Embedded Linux or other RTOS.
Our client is a leading company in the Meta industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
By applying to our jobs, you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Software Engineer IV
Redmond, WA jobs
Immediate need for a talented Software Engineer IV. This is a 6+ month contract opportunity with long-term potential and is located in Redmond, WA (Onsite). Please review the job description below and contact me ASAP if you are interested.
Job ID:25-92637
Pay Range: $88 - $90/hour. Employee benefits include, but are not limited to, health insurance (medical, dental, vision), 401(k) plan, and paid sick leave (depending on work location).
Key Requirements and Technology Experience:
Skills: Research AND C++/C# AND GAME OR UNITY OR UNREAL
BS in Computer Science or a related technical field
1+ years of experience developing user interfaces / interactive experiences
2+ years of experience developing with C++/C# in a game engine (Unity, Unreal)
2-3+ years of experience with Python for data analysis (numpy, scipy); see the brief sketch after this list
Strong communication and interpersonal skills
Familiarity with user research, behavioral experiments
Familiarity with sensing technologies (IMU, EMG, cameras, microphones)
Advanced degree (MS, PhD) in a related field
Experience developing for AR/VR/wearables
Experience using ML libraries and/or LLMs
Experience w/ C++
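As a small, hedged illustration of the "Python for data analysis (numpy, scipy)" and IMU-sensing items above, the sketch below low-pass filters a synthetic accelerometer trace. The sample rate, cutoff frequency, and signal are arbitrary placeholders.

```python
# Sketch: low-pass filter a synthetic IMU (accelerometer) trace with numpy/scipy.
# Sample rate, cutoff frequency, and the synthetic signal are illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                      # placeholder sample rate (Hz)
t = np.arange(0, 5, 1 / fs)

# Synthetic accelerometer signal: slow motion plus high-frequency vibration noise.
accel = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)

# 4th-order Butterworth low-pass at 5 Hz, applied forward and backward (zero phase).
b, a = butter(N=4, Wn=5.0, btype="low", fs=fs)
accel_smooth = filtfilt(b, a, accel)

print("raw std: %.3f, filtered std: %.3f" % (accel.std(), accel_smooth.std()))
```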
Our client is a leading company in the Meta industry, and we are currently interviewing to fill this and other similar contract positions. If you are interested in this position, please apply online for immediate consideration.
Pyramid Consulting, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
By applying to our jobs you agree to receive calls, AI-generated calls, text messages, or emails from Pyramid Consulting, Inc. and its affiliates, and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here.
Software Dev Engineer
Redmond, WA jobs
Title: Software Dev Engineer
Required Skills & Qualifications
4-10 years of experience in software development.
Strong proficiency in Python and backend development (APIs, business logic, integrations).
Experience with AWS Lambda, DynamoDB, and serverless architecture.
Hands-on experience with React for frontend development.
Proficient in scripting (Python, Bash, or similar).
Experience working with databases:
Preferred: DynamoDB
Also accepted: SQL-based DBs or MongoDB
Solid understanding of REST APIs, microservices, and cloud-based application design.
Nice-to-Have Skills
Experience with CI/CD pipelines (CodePipeline, GitHub Actions, Jenkins, etc.)
Knowledge of infrastructure-as-code tools such as CloudFormation, AWS CDK, or other IaC frameworks.
Familiarity with containerization (Docker) is a plus.
Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays annually (as applicable).
Software Engineer
Redmond, WA jobs
Are you an experienced Software Engineer with a desire to excel? If so, then Talent Software Services may have the job for you! Our client is seeking an experienced Software Engineer to work at their company in Redmond, WA.
The main function of a Lab/Test Engineer at this level is to apply configuration skills at an intermediate to high level. The Test Engineer will analyze, design, and develop test plans and should be familiar with at least one programming language. We're on the lookout for a contract Engineer with extensive experience in configuring and testing hardware devices across Windows Server and Ubuntu Server platforms. The ideal candidate will not only be technically adept but also possess strong analytical skills, capable of producing comprehensive and detailed reports. Proficiency in scripting languages is essential. The role involves deploying and managing test machines, refining test plans, executing test cases, performing hardware diagnostics, troubleshooting issues, and collaborating closely with the development team to advance the functionality of hardware systems. Experience with CI/CD pipelines and C++ and Rust development will be considered a significant asset.
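As a hedged sketch of the kind of scripted host check this role describes (not the client's actual tooling), the following standard-library Python script gathers basic facts from a Windows Server or Ubuntu test machine and emits them as JSON so results can be aggregated across lab machines.

```python
# Sketch: collect basic facts from a test host (works on Windows Server or Ubuntu).
# Standard-library only; the report format is illustrative, not the client's tooling.
import json
import platform
import shutil
import socket


def host_report() -> dict:
    total, used, free = shutil.disk_usage("/")
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),          # e.g. "Linux" or "Windows"
        "os_release": platform.release(),
        "machine": platform.machine(),    # e.g. "x86_64"
        "python": platform.python_version(),
        "disk_free_gb": round(free / 1e9, 1),
    }


if __name__ == "__main__":
    # Emit JSON so results can be collected and compared across many lab machines.
    print(json.dumps(host_report(), indent=2))
```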
Primary Responsibilities/Accountabilities:
Perform repeatable testing procedures and processes.
Verify triggers, stored procedures, referential integrity, hardware product or system specifications.
Interpret and modify code as required which may include C/C++, C#, batch files, make files, Perl scripts, queries, stored procedures and/or triggers.
Identify and define project team quality and risk metrics.
Provide assistance to other testers.
Design and develop robust automated test harnesses with a focus on Application/System/Inter-System level issues.
Perform job functions within the scope of application/system performance, threading issues, bottleneck identification, writing small footprint and less intrusive code for critical code testing, tackling system/application intermittent failures, etc.
Purpose of the Team: The purpose of this team is to focus on security hardware and intellectual property. Their work is primarily open source, with some potential for internal code review.
Key projects: This role will contribute to supporting development and testing for technologies deployed in the Azure fleet.
Typical task breakdown and operating rhythm: The role will consist of 10% meetings, 10% reporting, and 80% heads down (developing and testing).
Qualifications:
Years of Experience Required: 8-10+ overall years of experience in the field.
Degrees or certifications required: N/A
Best vs. Average: The ideal resume would contain Rust experience and experience with open-source projects.
Performance Indicators: Performance will be assessed based on quality of work, meeting deadlines, and flexibility.
Minimum 8+ years of testing experience with data center/server hardware.
Minimum 8+ years of development experience with C++ (and Python).
Minimum 2+ years of experience with and understanding of CI/CD and ADO pipelines.
Software testing experience in Azure Cloud/Windows/Linux server environments required.
Ability to read and write at least one programming language such as C#, C/C++, SQL, etc.; Rust is a plus!
Knowledge of software quality assurance practices, with strong testing aptitude.
Knowledge of personal computer hardware is required as is knowledge of deploying and managing hosts and virtual test machines
Knowledge of internet protocols and networking fundamentals preferred.
Must have a solid understanding of the software development cycle.
Demonstrated project management ability required.
Experience with CI/CD pipelines
Bachelor's degree in Computer Science required and some business/functional knowledge and/or industry experience preferred.
5-7 years' experience.
8-10 years' experience.
Preferred:
Database programming experience (e.g., SQL Server, Sybase, Oracle, Informix, and/or DB2) may be required.
Software testing experience in a Web-based or Windows client/server environment required.
Experience in development and/or database administration using a product is required.
Ability to read and write at least one programming language such as C#, C/C++, SQL, etc.
Knowledge of software quality assurance practices, with strong testing aptitude.
Knowledge of personal computer hardware may be required.
Knowledge of internet protocols and networking fundamentals preferred.
Must have a solid understanding of the software development cycle.
Demonstrated project management ability required.
Staff Data Engineer
Seattle, WA jobs
Impinj is a leading RAIN RFID provider and Internet of Things pioneer. We're inventing ways to connect every thing to the Internet - including retail apparel, retail general merchandise, healthcare items, automobile parts, airline baggage, food and much more. With more than 100 billion items connected to date, and multiple Fortune 500 enterprises around the world using our platform, we solve for a better understanding of our world. If it's a thing, we're working to connect it. Join Impinj and help us realize our vision of a boundless IoT - connecting trillions of everyday items to the Internet.
Team Overview:
We are seeking a Data Engineer with deep experience in managing and processing high-volume IoT data to enable the cloud-based training of machine learning models that power real-time inference on edge devices. In this role, you will architect and maintain cloud-based data infrastructure and pipelines that support ML workflows for training, validation, and deployment of machine learning models optimized for deployment in edge environments. This is a multi-functional role requiring close collaboration with ML engineers, systems engineers, cloud architects, and embedded systems teams to deliver high-quality, efficient, and scalable data solutions that power intelligent behavior on resource-constrained devices such as fixed and handheld RFID readers.
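To ground the kind of RFID time-series pipeline described above, here is a minimal, hedged pandas sketch that rolls raw tag reads up into per-tag, per-minute features and writes a date-partitioned Parquet dataset. The column names, paths, and feature choices are assumptions for illustration, not Impinj's actual schema.

```python
# Sketch: aggregate raw RFID tag reads into per-tag, per-minute features and write
# date-partitioned Parquet. Column names and paths are assumed, not Impinj's schema.
import pandas as pd


def build_features(reads: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: timestamp, epc (tag id), antenna, rssi."""
    reads = reads.assign(timestamp=pd.to_datetime(reads["timestamp"], utc=True))
    feats = (
        reads.groupby(["epc", pd.Grouper(key="timestamp", freq="1min")])
        .agg(read_count=("rssi", "count"),
             rssi_mean=("rssi", "mean"),
             antennas_seen=("antenna", "nunique"))
        .reset_index()
    )
    # Partition column for downstream training jobs that read by date.
    feats["date"] = feats["timestamp"].dt.date.astype(str)
    return feats


if __name__ == "__main__":
    raw = pd.read_parquet("/data/rfid/raw/")                 # placeholder input path
    features = build_features(raw)
    features.to_parquet("/data/rfid/features/",              # placeholder output path
                        partition_cols=["date"])
```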
What You Will Do:
* Design data workflows to support model training, evaluation, and retraining cycles for deployment on edge devices
* Work closely with ML engineers to align data formats, labeling standards, feature extraction for edge-compatible models, and feedback loops for model improvement
* Architect and maintain scalable data pipelines to ingest, process, store, and access large volumes of structured and semi-structured RFID time-series data from edge networks
* Develop automated systems for data versioning, labeling, augmentation, and quality assurance
* Establish and maintain data APIs and interfaces to query, consume, and update datasets
* Manage large datasets using distributed storage and compute frameworks (e.g., Apache Spark, Hadoop, or Dask)
* Ensure data security, compliance, and consistency across the full data lifecycle
* Drive improvements in data performance and reliability, especially for low-latency ML inference use cases
* Implement robust ETL/ELT workflows for preparing data for cloud-based ML model training and evaluation
* Collaborate and coordinate with large scale data collection projects
* Monitor and optimize data pipelines for performance, reliability, and cost across edge-to-cloud infrastructure
* Optimize data flow and compute for performance, cost, and latency in hybrid edge-cloud environments
What You Will Bring:
* Bachelor's degree in Data Engineering, Electrical Engineering or a related field and 8 years of related experience, or equivalent combination of education and experience
* 8+ years of experience in data engineering working with Machine Learning pipelines
* Deep understanding of data pipeline design, ETL/ELT processes, and automated workflow orchestration (e.g., Apache Airflow)
* Strong programming skills in Python (especially for data workflows), with experience building scalable, maintainable pipelines (e.g., Pandas, NumPy)
* Strong experience with structured and unstructured databases (SQL, MongoDB, DuckDB)
* Strong understanding of cloud infrastructure (AWS, Azure, or GCP), especially cloud storage, compute, and ML tools (e.g., SageMaker, Vertex AI, Azure ML)
* Experience with data lake/data warehouse technologies (e.g., S3 + Glue, BigQuery, Snowflake, Delta Lake)
* Knowledge of machine learning model lifecycles, including training, validation, and deployment
* Understanding of data versioning, feature engineering, and ML lifecycle management
* Understanding of machine learning data needs, including labeling, versioning, and model-ready dataset preparation
* Familiar with distributed data systems and big data tools (e.g., Spark, Kafka, Hadoop)
Compensation & Benefits:
The benefits listed below may vary depending on the nature of your employment with Impinj and the country where you work.
The typical base pay range for this role across the US is $129,000 - $200,000. Individual base pay depends on various factors such as complexity and responsibility of role, job duties, requirements, and relevant experience and skills. Both market wage data and the mid-point of the pay range are reviewed and used as the starting point for all new hire offers. Offers are made within the base pay range applicable at the time.
At Impinj certain roles are eligible for additional rewards, including merit increases, annual bonus and stock. These awards are allocated based on individual performance. In addition, certain roles also have the opportunity to earn sales incentives based on revenue or utilization, depending on the terms of the plan and the employee's role. US based employees have access to healthcare benefits; a 401(k) plan and company match among others.
For a more comprehensive list of US employment benefits, click here.
US Export Controls:
This position has access to technologies or data subject to U.S. export control regulations. Under these laws, the release or transfer of export-controlled items or information to individuals who are not classified as "U.S. persons" (as defined by Immigration & Nationality Act) may require prior authorization from the U.S. government. We may require additional documentation related to national identity to determine whether an export compliance license is required for any export-controlled items. This information is requested solely for the purpose of complying with U.S. export control laws and will not be used for other purposes. Learn more about export compliance here.
Why work at Impinj:
Know you're making a difference. Competitive benefits. Support for remote work or a desk with a view. Weekly Q&A sessions with our executive team. Impinj provides an environment that fosters openness and innovation and is developing technology that delivers a positive impact on the world. Collaboration and teamwork are highly valued, and accomplishments are duly celebrated. We have an open paid time-off policy paired with a respect for work/life balance. Our headquarters is located in Seattle with spectacular views of the Olympics, Lake Union, and Mt Baker, which can be enjoyed from our rooftop deck. Our Brazilian site is in Porto Alegre, Rio Grande do Sul state, at "Tecnopuc," a technology park that offers a very nice workplace for the development of groundbreaking technologies. Impinj is committed to creating a diverse and inclusive work environment and welcomes applicants from all backgrounds.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Data Engineer
Redmond, WA jobs
Vignesh
KRG Technologies Inc.
************ *405
vignesh.c at krgtech.com
Job Description
5-8 years' experience as a technical analyst or data analyst, in an engineering or technology operations environment
• Prior experience and knowledge of MS systems tools such as MS Sales, Billing Systems, Azure Insights, Commerce Platform and Commerce BI preferred
• Experience with Big data analysis, troubleshooting and root cause analysis
• Ad hoc data analysis for optimization opportunities
• SQL skills needed; Cosmos skills highly preferred
• Experience in C# is a strong plus
• Excel power user (PowerPivot, Power Query)
• PowerBI skills needed
• Good communication and investigative skills
• Working knowledge of statistics and machine learning techniques strong plus
• Ability to work with data scientists, software engineers, data architects
Additional Information
All your information will be kept confidential according to EEO guidelines.
Staff Data Engineer
Bothell, WA jobs
IonQ is developing the world's most powerful full-stack quantum computer based on trapped-ion technology. We are pushing past the limits of classical physics and current supercomputing technology to unlock a new era of computing. Quantum computing has the potential to impact every area of human society for the better. IonQ's computers will soon redefine industries like medicine, materials science, finance, artificial intelligence, machine learning, cryptography, and more. IonQ is at the forefront of this technological revolution.
Quantum computers generate huge amounts of data that we can use to make quantum computers even better. We are seeking an enthusiastic Staff Data Engineer who is passionate about data engineering and machine learning. You will help build high-quality, scalable, and resilient distributed systems that power our data pipelines. You'll be responsible for leading the development of key components of our data platform while collaborating with experimental and theoretical physics teams. Help us use data to build the world's best quantum computers!
The ideal candidate will have experience leading or contributing to multiple simultaneous product development efforts, projects, and initiatives. You'll be able to balance technical expertise and savvy with strong business judgment to make great technology choices. You'll strive for simplicity and demonstrate significant creativity and incisive judgment.
Responsibilities:
Design, architect, develop, test, deploy, and maintain our data pipelines and warehousing
Ensure the quality of our systems through design and code reviews
Collaborate with an experienced interdisciplinary staff
Identify and drive opportunities for us to continuously improve how we do things
Approach problems pragmatically
Assist in the career development of others, providing mentorship on advanced technical issues
You'd be a good fit with:
Bachelor's degree in Computer Science, Mathematics, Physics, Statistics, or equivalent practical experience -- non-traditional backgrounds are welcome here as well
8+ years of experience with software development in Python
8+ years of experience with SQL and NoSQL databases
A love of collaborating across teams in an interdisciplinary environment
Experience designing data models and data warehouses
Experience with data processing using traditional and distributed systems
Experience writing and maintaining ETLs which operate on a variety of structured and unstructured sources
Experience with Unix or Linux systems
Excellent verbal and written communication skills
You'd be a great fit with:
Master's degree or PhD in Computer Science, Mathematics, or Statistics, or related scientific field
Significant experience with technologies such as:
Kubernetes, Postgres, Airflow, Grafana, Apache Iceberg, Apache Superset, Trino/PrestoDB, DBT, Terraform, Segment, InfluxDB, and OpenTelemetry
Experience with Data Governance, specifically covering data catalogs and data quality
Experience with deep learning platforms like Keras, TensorFlow, and/or PyTorch
Location: This role will work onsite at our office located in Bothell, WA. We are open to hybrid and remote options for the right candidate.
Job ID: IONQ-342
The approximate base salary range for this position is $162,920 - $213,304.
Compensation will vary depending on multiple individualized factors, including market location, job-related knowledge, skills, and experience. Posted base salary figures are subject to change as new market data becomes available. Beyond base salary, total compensation includes a variable bonus and equity component and a range of benefit options found on our career site at ionq.co/jobs. Details of participation in these benefit plans will be provided when a candidate receives an offer of employment. Our US benefits include comprehensive medical, dental, and vision plans, matching 401K, unlimited PTO and paid holidays, parental/adoption leave, legal insurance, a home internet stipend, and pet insurance!
IonQ's HQ is located in College Park, Maryland, just outside of Washington DC. We are actively building out our recently opened manufacturing and production facility in Bothell, WA (near Seattle). Depending on the position, you may be required to be near one of our offices in College Park, Seattle, Toronto, Canada, and Basel, Switzerland. However, IonQ will expand into additional domestic and international geographies, so don't let this stop you from applying!
At IonQ, we believe in fair treatment, access, opportunity, and advancement for all while striving to identify and eliminate barriers. We empower employees to thrive by fostering a culture of autonomy, productivity, and respect. We are dedicated to creating an environment where individuals can feel welcomed, respected, supported, and valued.
We are committed to equity and justice. We welcome different voices and viewpoints and do not discriminate on the basis of race, religion, ancestry, physical and/or mental disability, medical condition, genetic information, marital status, sex, gender, gender identity, gender expression, transgender status, age, sexual orientation, military or veteran status, or any other basis protected by law. We are proud to be an Equal Employment Opportunity employer.
US Technical Jobs. The position you are applying for will require access to technology that is subject to U.S. export control and government contract restrictions. Employment with IonQ is contingent on either verifying “U.S. Person” (e.g., U.S. citizen, U.S. national, U.S. permanent resident, or lawfully admitted into the U.S. as a refugee or granted asylum) status for export controls and government contracts work, obtaining any necessary license, and/or confirming the availability of a license exception under U.S. export controls. Please note that in the absence of confirming you are a U.S. Person for export control and government contracts work purposes, IonQ may choose not to apply for a license or decline to use a license exception (if available) for you to access export-controlled technology that may require authorization, and similarly, you may not qualify for government contracts work that requires U.S. Persons, and IonQ may decline to proceed with your application on those bases alone. Accordingly, we will have some additional questions regarding your immigration status that will be used for export control and compliance purposes, and the answers will be reviewed by compliance personnel to ensure compliance with federal law.
US Non-Technical Jobs. Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum. Accordingly, we will have some additional questions regarding your immigration status that will be used for export control and compliance purposes, and the answers will be reviewed by compliance personnel to ensure compliance with federal law.
If you are interested in being a part of our team and mission, we encourage you to apply!
Flight Data Translation Engineer
Seattle, WA jobs
Company:
The Boeing Company
We are Boeing Global Services (BGS) Engineering team creating and implementing innovative technologies that make the impossible possible and enabling the future of aerospace. We provide engineering design and support, including aftermarket modifications, and are innovating to make product and services safety even stronger. Join us and put your passion, determination, and skill to work building the future! #TheFutureIsBuiltHere #ChangeTheWorld
Boeing Seattle is seeking a Flight Data Translation Engineer in Seattle, Washington, to automate flight data translations within Boeing Global Services, reporting to the Manager of Prognostics Development and working out of the Seattle, WA office. The Flight Data Translation Engineer will be responsible for the delivery of cutting-edge flight data analytics products to commercial aviation customers. As part of an integrated product team, the successful candidate will join a technical team supporting product feature development and life support.
In this role, the Flight Data Translation Engineer will work with a team of highly motivated aviation SMEs, software architects, developers, and data scientists. This role will work with portfolio leadership, development teams and stakeholders to establish and drive the implementation of flight data management for the success of each product and initiative.
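For context on the data-translation work described here (offered only as a generic, hedged illustration, not Boeing's implementation): raw ARINC 717-style recordings arrive as streams of 12-bit words grouped into subframes, and translation maps each parameter to a word position and bit field, then applies the scale and offset given in the aircraft's data-frame documentation. A minimal sketch with an invented parameter definition:

```python
# Generic sketch of ARINC 717-style parameter extraction (illustration only).
# The parameter definition below is invented; real layouts come from the
# aircraft's data-frame documentation.
from dataclasses import dataclass


@dataclass
class ParamSpec:
    word: int      # word position within the subframe (0-based here)
    msb: int       # most significant bit of the field (1..12, 12 = highest)
    lsb: int       # least significant bit of the field
    scale: float   # engineering-unit scale factor
    offset: float  # engineering-unit offset


def extract(subframe_words: list[int], spec: ParamSpec) -> float:
    """Pull a bit field out of one 12-bit word and convert to engineering units."""
    word = subframe_words[spec.word] & 0xFFF           # keep 12 bits
    width = spec.msb - spec.lsb + 1
    raw = (word >> (spec.lsb - 1)) & ((1 << width) - 1)
    return raw * spec.scale + spec.offset


# Hypothetical example: a parameter in word 42, bits 12..1, 0.25 units per count.
ALTITUDE_DEMO = ParamSpec(word=42, msb=12, lsb=1, scale=0.25, offset=0.0)

if __name__ == "__main__":
    demo_subframe = [0] * 64
    demo_subframe[42] = 0o1750          # placeholder raw value
    print(extract(demo_subframe, ALTITUDE_DEMO))
```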
Position Responsibilities:
Leverage the power of data and engagement insights to drive advancements in flight safety, fleet reliability and sustainability, while executing on the Boeing Flight Data Analytics value and vision
Collaborate closely with engineering, DevOps, product management, and airline customers in an agile environment
Ensure high quality aviation domain data is available for advanced aviation data insights that benefit both our internal teams and customers
Gain a deep understanding of our customers, their flight operations, and their analytics needs; you will play a pivotal role in finding innovative solutions to meet their evolving needs
Configure, test and validate aircraft safety, maintenance, sustainability and reliability data models for use in product development, and create high quality documentation required to support these tasks
Define and validate experiments using Machine Learning, as assigned by Lead Data Architect, or Product Management
Ensure data integrity across the entire data pipeline by leveraging expert-level knowledge of ARINC 717, 767 and other protocols, as applied to different aircraft types, including Boeing and Airbus
Proactively identify and troubleshoot data integrity issues to ensure products are providing an exceptional customer experience
Employ expertise in database and schema design, anticipating the data questions and data narratives that stakeholders may require
Perform hands-on technical work, including working with data, building utilities and tooling, and providing flight data and integrated dataset support to both the product team and customers
Work closely with the lead data architect and product managers to strategically shape the future direction of our analytics product
Undertake other tasks, projects, and initiatives as needed
This position must meet export control compliance requirements. To meet export control compliance requirements, a “U.S. Person” as defined by 22 C.F.R. §120.15 is required. “U.S. Person” includes U.S. Citizen, lawful permanent resident, refugee, or asylee.
Basic Qualifications (Required Skills/Experience):
Bachelor's degree in engineering, computer science, mathematics, physics, or chemistry
1+ years of progressive experience in software development and design
Experience working with aerospace ARINC standards (ARINC 717, 767)
5+ years of related work experience or an equivalent combination of technical education and experience
Preferred Qualifications (Education/Experience):
9+ years of related work experience or an equivalent combination of technical education and experience
Working experience with cloud solutions
Experience with programming (e.g., Java or Python) and SQL
Experience in data science and/or data analytics
Advanced experience in business intelligence tools, statistics and data modelling
Experience working in a global technology organization and managing many stakeholders
Experience with airline safety, airline maintenance engineering, avionics or aircraft flight data recording
Experience with Airbus and Boeing aircraft operations
Drug Free Workplace:
Boeing is a Drug Free Workplace where post offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.
Total Rewards and Pay Transparency:
At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities.
The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work.
The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements.
Pay is based upon candidate experience and qualifications, as well as market and business considerations.
Summary pay range:
Career $114,750 - $155,250
Expert $138,550 - $187,450
Language Requirements:
Not Applicable
Education:
Bachelor's Degree or Equivalent
Relocation:
This position offers relocation based on candidate eligibility.
Export Control Requirement:
This position must meet export control compliance requirements. To meet export control compliance requirements, a “U.S. Person” as defined by 22 C.F.R. §120.15 is required. “U.S. Person” includes U.S. Citizen, lawful permanent resident, refugee, or asylee.
Safety Sensitive:
This is not a Safety Sensitive Position.
Security Clearance:
This position does not require a Security Clearance.
Visa Sponsorship:
Employer will not sponsor applicants for employment visa status.
Contingent Upon Award Program
This position is not contingent upon program award
Shift:
Shift 1 (United States of America)
Stay safe from recruitment fraud! The only way to apply for a position at Boeing is via our Careers website. Learn how to protect yourself from recruitment fraud - Recruitment Fraud Warning
Boeing is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
EEO is the law
Boeing EEO Policy
Request an Accommodation
Applicant Privacy
Boeing Participates in E-Verify
E-Verify (English)
E-Verify (Spanish)
Right to Work Statement
Right to Work (English)
Right to Work (Spanish)
Data Engineer II
Seattle, WA jobs
We are looking for a Data Engineer to join the Infrastructure Automation team. Our success depends on our world-class network and hardware infrastructure; we're handling massive scale and rapid integration of emergent technologies. Our goal is to become “The Infrastructure Platform” for the world. The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure.
As a Senior Data Engineer, you will create solutions that integrate with multiple heterogeneous data sources, aggregate and retrieve data quickly and safely, and curate data that can be used in reporting, analysis, machine learning models, and ad hoc data requests. You should have excellent business and communication skills and be able to work with business owners, product teams, and tech leaders to gather infrastructure requirements, design data infrastructure, and build data pipelines and datasets to meet business needs. You will be responsible for designing, developing, and operating a data service platform using Python, Airflow, and SQL to build the various ETL, analytics, and data quality components. You'll automate deployments using AWS CodeDeploy, AWS CodePipeline, the AWS Cloud Development Kit (CDK), and AWS CloudFormation. You will work with AWS services like Redshift, Glue, S3, IAM, CloudWatch, and more.
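A minimal, hedged sketch of the Python-plus-Airflow pattern described above (the DAG name, schedule, and task body are placeholders, not the team's actual pipeline; Airflow 2.4+ is assumed for the `schedule` argument):

```python
# Sketch: a tiny Airflow DAG whose single task stands in for a daily extract/load.
# Connection IDs, table names, and paths are placeholders, not the real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # A real task would use Airflow hooks/connections (e.g., Redshift, S3);
    # here it only logs the logical date it would process.
    logical_date = context["ds"]
    print(f"extracting daily_capacity rows for {logical_date} ...")
    print("writing curated partition to the analytics bucket ...")


with DAG(
    dag_id="infra_capacity_daily",          # placeholder DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["sketch"],
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```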
Responsibilities
* Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using Python, SQL and AWS big data technologies.
* Explore and learn the latest AWS technologies to provide new capabilities and increase efficiencies of the team.
* Design and implement complex pipelines and other Data Engineering solutions.
* Work closely with business owners, developers, and Business Intelligence Engineers to explore new data sources and deliver the data.
* Create extensible designs and easy-to-maintain solutions with the long-term vision in mind
* Improve tools, processes, scale existing solutions, create new solutions as required based on team and stakeholder needs
* You will work with business customers in understanding the business requirements and implementing solutions to support analytical and reporting needs.
* A good candidate can partner with business owners directly to understand their requirements and provide data which can help them observe patterns and spot anomalies.
Required Skills & Experience
* 5+ years of related experience.
* Expertise with data modeling, warehousing and building ETL pipelines
* Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or Node.js
* Experience developing a variety of complex reports.
* A good candidate has strong analytical skills and enjoys working with large complex data sets.
* Expert knowledge of SQL
Preferred
- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses
Compensation: $75/hour
We look forward to reviewing your application. We encourage everyone to apply, even if you don't check every box for what is required.
PDSINC, LLC is an Equal Opportunity Employer.
Data Engineer IV
Seattle, WA jobs
Data Engineer IV
Job ID: 25-10029
Pay rate range: $85/hr. to $90/hr. on W2
Onsite
Must Have: AWS Services, Programming Language skills (Preferred Python or Scala), Data Technologies
Required Skills:
* Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
* Experience with Apache Spark / Elastic MapReduce (EMR) (see the brief sketch following this list)
* Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)
* Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
* Knowledge of professional software engineering & best practices for full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence
* Experience working on and delivering end-to-end projects independently
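As context for the Spark/EMR requirement above, the following is a minimal PySpark sketch of the kind of S3-to-S3 rollup an EMR step (or Glue job) might run; the bucket paths and column names are hypothetical.

# Minimal PySpark sketch: aggregate raw JSON events into a daily rollup.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-daily-rollup").getOrCreate()

# Read raw JSON events staged in S3 (hypothetical path and schema).
events = spark.read.json("s3://example-raw-events/2024/01/")

# Count events per device per day and write partitioned Parquet back to S3.
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "device_id")
    .agg(F.count("*").alias("event_count"))
)
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated/daily_device_counts/"
)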
Years of Experience: 3+
Degree or Certification: Bachelor's Degree
Job Description:
Client is looking for a Data Engineer to join the Infrastructure Automation team.
Client has over 70 million customers, and developers all over the world rely on our storage, compute, and virtualized services via Client Web Services.
Our success depends on our world-class network and hardware infrastructure; we're handling massive scale and rapid integration of emergent technologies.
Our goal is to become "The Infrastructure Platform" for the world. The Infrastructure Automation team is responsible for delivering the software that powers our infrastructure.
Responsibilities
* As a Data Engineer, you will be working in one of the world's largest and most complex data warehouse environments.
* You will be developing and supporting the analytic technologies that give our customers timely, flexible, and structured access to their data.
* You will be responsible for designing and implementing a platform using third-party and in-house reporting tools, modeling metadata, building reports and dashboards in Oracle BI Enterprise Edition (OBIEE).
* You will work with business customers to understand business requirements and implement solutions that support analytical and reporting needs.
Required Skills & Experience
* 7+ years of related experience.
* Very strong development experience with notable BI reporting tools, such as Oracle BI Enterprise Edition (OBIEE).
* Experience developing a variety of complex reports.
* A good candidate has strong analytical skills and enjoys working with large, complex data sets.
* Good knowledge of SQL
* A good candidate can partner with business owners directly to understand their requirements and provide data that can help them observe patterns and spot anomalies.
Preferred
* Strong OBIEE reporting experience
* SQL skills
Data Engineer with TS/SCI
Washington jobs
Job Opportunity: Our team is seeking a talented Top Secret cleared Data Engineer to work in our NW Washington DC location.
This role supports DHS's Office of Intelligence and Analysis (I&A). I&A is responsible for developing DHS-wide intelligence through managing the collection, analysis and fusion of intelligence throughout the entire Department. I&A disseminates intelligence throughout DHS, to the other members of the United States Intelligence Community, and to first responders at the state, local, and tribal level.
What You'll Get To Do as a Data Engineer:
Create and maintain data pipelines and transformation flows in a cloud environment
Data management/mapping among multiple distinct data sources
Cloud management and server administration of domain services
Big Data infrastructure services and cross domain data transfer
You'll Bring These Qualifications:
Active Top Secret clearance with SCI Eligibility
BS degree in a related scientific or engineering discipline from an accredited college or university and ten (10) to fourteen (14) years of progressive experience, or an MS degree in a related scientific or engineering discipline, and eight (8) to twelve (12) years of progressive experience, or a Ph.D. degree in a related scientific or engineering discipline and four (4) to seven (7) years of progressive experience.
Familiar with ETL technologies, MapReduce, JSON/XML transformations and schemas (see the brief sketch following this list)
Familiar with AngularJS
Familiar with Apache NiFi and Java (NAR) NiFi Archives
Knowledge of Amazon Web Services (AWS Cloud)
Programming languages - Java/JEE, JavaScript, Python, Groovy, Shell Script
HTTP via REST and SOAP
Datastores - HDFS, MongoDB, S3, Elastic, NoSQL, RDBMS
Build and Configuration Management Tools - Maven, Ansible, Puppet
Working knowledge of public keys and digital certificates
Linux/Unix server environments
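As a rough illustration of the ETL and JSON transformation work listed above, the sketch below maps records from one S3 bucket onto a common schema and writes them to another; the bucket names, keys, and field mappings are purely hypothetical, and a real cross-domain transfer would go through an approved guard rather than a direct copy.

# Hypothetical sketch: read JSON from a source bucket, remap fields, and
# write the transformed records to a destination bucket.
import json

import boto3

s3 = boto3.client("s3")


def transform_record(record: dict) -> dict:
    # Map source field names onto a common schema (illustrative mapping).
    return {
        "event_id": record.get("id"),
        "event_time": record.get("timestamp"),
        "source_system": record.get("system", "unknown"),
    }


def transfer(src_bucket: str, src_key: str, dst_bucket: str, dst_key: str) -> None:
    raw = s3.get_object(Bucket=src_bucket, Key=src_key)["Body"].read()
    records = [transform_record(r) for r in json.loads(raw)]
    s3.put_object(Bucket=dst_bucket, Key=dst_key,
                  Body=json.dumps(records).encode("utf-8"))


transfer("example-source-data", "incoming/events.json",
         "example-curated-data", "curated/events.json")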
These Qualifications Would be Nice to Have:
Master's level education
Familiarity with: jQuery, XPath, XQuery, Spark, Impala, Sqoop, Hive/Pig, Python, Gradle, Maven, PL/SQL, Unix Shell, C++/C, AngularJS, Spring, JSON, XML/XSLT/HTML, JPA/Hibernate, Accumulo, MapReduce, Storm/Kafka, HBase, Servlet/JSP, LDAP
About OneGlobe:
OneGlobe LLC was founded in 2005 to provide quality Information Technology solutions that exceed expectations. We focus on IT system modernization, using agile software development practices and DevSecOps to deliver intuitive and maintainable systems that help our customers improve their processes and capabilities.
We provide full-service IT solutions and have the skill to identify, plan, and perform cost-saving steps throughout the system lifecycle to enhance system efficiency while optimizing the value that we deliver to our customers. Our team has the drive and the right mindset to feel ownership of the projects they work on, and they partner with our customers to give the extra effort sometimes required to deliver success.
We provide a highly competitive benefits package, including extensive medical/dental/vision coverage, 7% of your annual salary toward a 401(k), Paid Time Off (PTO), $5K annually toward ongoing education and training, and more. We also have monthly social and tech events!
See additional positions at: **********************************
OneGlobe is a proud equal opportunity employer. We are a drug free, EEO employer committed to a diverse workforce. We will consider all qualified candidates regardless of race, color, national origin, sex, age, marital status, personal appearance, sexual orientation, gender identity, family responsibilities, disability, political affiliation or veteran status.
Data Engineer III
Redmond, WA jobs
Data Engineer III
Job ID: 25-12315
Pay rate range: $75/hr. to $80/hr. on W2
Onsite
Must Have: Strong knowledge of SQL for complex querying, optimization, and database design; Python for data engineering tasks; AWS
Key Technical Requirements:
SQL Expertise:
1. Strong knowledge of SQL for complex querying, optimization, and database design.
2. Experience with time-series databases, preferably Client Timestream.
Client Timestream:
1. Hands-on experience in designing, implementing, and managing time-series data in Client Timestream.
2. Proficiency in defining retention policies, querying, and optimizing time-series data workflows (see the brief query sketch following these requirements).
Python Development:
1. Proficiency in Python for data engineering tasks, including integration with AWS services.
2. Experience with Python libraries such as Pandas, SQLAlchemy, or PySpark for handling data processing and analysis.
AWS Cloud Services:
1. Deep knowledge of AWS services related to data engineering such as Client S3, Lambda, Redshift.
2. Experience with managing data pipelines, security, and automation on AWS.
Database Design & Management:
1. Experience in schema design, performance optimization, and maintenance for both relational and NoSQL databases.
2. Understanding of best practices for cloud-based database management, including backup, disaster recovery, and monitoring.
Problem Solving & Performance Optimization:
1. Strong troubleshooting skills for performance issues in both data pipelines and databases.
2. Experience with scaling databases and pipelines to accommodate increasing data volume and complexity.
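As a minimal illustration of the Timestream querying called out above, the sketch below runs a simple hourly aggregation through boto3; the region, database, table, and measure names are hypothetical.

# Minimal sketch: query a Timestream table for hourly averages via boto3.
import boto3

query_client = boto3.client("timestream-query", region_name="us-west-2")

sql = """
    SELECT device_id,
           bin(time, 1h) AS hour,
           avg(measure_value::double) AS avg_temp
    FROM "exampledb"."sensor_readings"
    WHERE time > ago(24h)
    GROUP BY device_id, bin(time, 1h)
    ORDER BY hour
"""

response = query_client.query(QueryString=sql)
for row in response["Rows"]:
    # Each row is a list of columns; scalar values arrive as strings.
    print([col.get("ScalarValue") for col in row["Data"]])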
Data Engineer II
Seattle, WA jobs
Data Engineer II
Job ID: 25-12057
Pay rate range: $60/hr. to $65/hr. on W2
100% Onsite
Must Have:
* Design and build data reporting solutions across different visualization platforms (e.g. Quicksight)
* Build data infrastructure and manage other AWS resources including EC2, RDS, Redshift, EMR, etc.
* Ability to query, program, and perform analysis on large datasets using SQL and Python
Job Description:
We are seeking an experienced Data Engineer to drive the team's data strategy and vision.
The data engineer will be responsible for the design and build of data infrastructure for automated and scalable platforms that support the reporting and forecasting requirements of finance teams.
The data engineer will execute projects that build scalable data and visualization frameworks from multiple operational and financial datasets.
The Finance team currently has a multi-year roadmap that aims to deliver automated reporting solutions, accelerate predictive forecasting, and provide interactive analytic tools for our finance analysts.
This role requires an individual with excellent data modeling, data architecture design, analytic visualization, and software development skills.
A successful candidate will have the flexibility to execute on business intelligence projects while considering optimal and efficient data architecture.
The candidate must have the ability to quickly build internal contacts and work across multiple finance, technology, product, and business teams to drive the overall strategy.
Key job responsibilities
* Architect and develop end-to-end scalable data applications and data pipelines
* Design and build data reporting solutions across different visualization platforms (e.g. Quicksight)
* Establish scalable, efficient, automated processes for reporting, dashboards, data analyses and model development
* Ability to query, program, and perform analysis on large datasets using SQL and Python
* Build data infrastructure and manage other AWS resources including EC2, RDS, Redshift, EMR, etc.
* Collaborate with other teams to extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies (see the brief sketch following this list)
* Participate in data strategy and roadmap exercises, and in data warehouse design and implementation
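As a small illustration of the extract-and-report work described in the responsibilities above, the sketch below pulls an aggregate from Redshift with the boto3 Redshift Data API; the cluster, database, user, table, and query are hypothetical.

# Minimal sketch: run a reporting query against Redshift via the Data API.
import time

import boto3

rsd = boto3.client("redshift-data", region_name="us-west-2")

resp = rsd.execute_statement(
    ClusterIdentifier="example-finance-cluster",   # hypothetical cluster
    Database="finance",
    DbUser="reporting",
    Sql="SELECT fiscal_month, SUM(spend) AS total_spend "
        "FROM infra_costs GROUP BY fiscal_month ORDER BY fiscal_month;",
)

# Poll until the statement completes, then fetch the result set.
status = rsd.describe_statement(Id=resp["Id"])["Status"]
while status not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)
    status = rsd.describe_statement(Id=resp["Id"])["Status"]

if status == "FINISHED":
    for record in rsd.get_statement_result(Id=resp["Id"])["Records"]:
        print([list(field.values())[0] for field in record])

A result set like this could then feed a QuickSight dataset or a forecasting model downstream.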
About Project
Build financial systems
Transform data feeds
Will be creating data feeds, transforming them, and validating them
Daily Schedule
10% meeting with the team and getting daily tasks
30% meeting with business or project teams on sprints
60% developing and implementing code
Azure Data Engineer
Bellevue, WA jobs
Infosys is seeking an Azure Data Engineer. This position's primary responsibility will be to provide technical expertise and coordinate day-to-day deliverables for the team. The chosen candidate will assist in the technical design of large business systems; build applications and interfaces between applications; and understand data security, retention, and recovery. The role holder should be able to research technologies independently to recommend appropriate solutions, and should contribute to technology-specific best practices and standards; contribute to success criteria from design through deployment, including reliability, cost-effectiveness, performance, data integrity, maintainability, and scalability; and contribute expertise on significant application components, programming languages, databases, operating systems, etc.
Required Qualifications:
* Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
* At least 4 years of Information Technology experience.
* Candidate must be located within commuting distance of Bellevue, WA or be willing to relocate to the area. This position may require travel to project locations.
* Candidates authorized to work for any employer in the United States without employer-based visa sponsorship are welcome to apply. Infosys is unable to provide immigration sponsorship for this role at this time.
* Proficiency in Azure Data Factory (ADF), Spark, SQL Azure, and Databricks/Synapse (see the brief sketch following these qualifications).
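As a rough sketch of the Spark/Databricks work implied by the qualification above, the following PySpark snippet reads raw Parquet from ADLS Gen2, de-duplicates it, and writes a curated Delta table; the storage account, containers, and column names are hypothetical, and the cluster is assumed to already have access to the storage account configured.

# Minimal PySpark sketch for a Databricks-style curation step over ADLS Gen2.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adls-curation-example").getOrCreate()

# Read raw files landed in the 'raw' container (hypothetical abfss path).
orders = spark.read.parquet(
    "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
)

# Simple curation: drop duplicate orders and stamp the load date.
curated = (
    orders.dropDuplicates(["order_id"])
          .withColumn("load_date", F.current_date())
)

# Write the curated data as a Delta table in the 'curated' container.
curated.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplestorage.dfs.core.windows.net/orders/"
)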
Preferred Qualifications:
* Ability to prepare technical design documents and perform coding following best practices.
* Experience in data analysis, ETL design, and pipeline optimization.
* Familiarity with Azure DevOps and Agile SDLC.
* Excellent analytical and communication skills.
* Experience and desire to work in a global delivery environment.
Along with competitive pay, as a full-time Infosys employee you are also eligible for the following benefits:
* Medical/Dental/Vision/Life Insurance
* Long-term/Short-term Disability
* Health and Dependent Care Reimbursement Accounts
* Insurance (Accident, Critical Illness, Hospital Indemnity, Legal)
* 401(k) plan and contributions dependent on salary level
* Paid holidays plus Paid Time Off
The job entails an extensive amount of travel. The job also entails sitting as well as working at a computer for extended periods of time. The candidate should be able to communicate by telephone, email, or face to face.
Cloud Engineer
Data engineer job at People Tech Group
People Tech Group Inc. is a privately held company that does business across the globe. The company was re-formed in 2005 and is headquartered in Bellevue, Washington, with offices in Chicago and Pittsburgh in the USA and multiple offices in India. People Tech Group was created from the owner's vision to deliver cost-effective, full-cycle services and solutions to clients.
Our vision is to continually refine key capabilities based on market insight, client needs, industry trends and the strategic direction of enterprise solutions. This approach has led us to provide an integrated set of services that combines industry-based consulting, top-tier managed services, and advanced technology services.
Job Description
This resource needs to help migrate applications to Azure using IaaS and have the ability to rewrite some components to use PaaS.
Qualifications
8+ years of experience
Bachelor's degree in Computer Science required.
Additional Information
All your information will be kept confidential according to EEO guidelines.