Requirements Engineer jobs at GTT Communications - 158 jobs
HW Engineer, TRX
Nokia Global 4.6
Remote
MN RAN: You will join the RAN Business Unit and enjoy our culture of innovation, openness, discipline, simplicity, accountability, and profit mindset. Your future teams develop Nokia 3GPP mainstream radio technology products and solutions, including cross-technology (Multi Radio Access Technology) responsibilities.
Required skills:
MS + 0 years in Electrical Engineering or Physics with coursework in RF or Microwave technology preferred. BSEE + 2 years of applicable internship experience is also acceptable.
Knowledge of RF Circuit Design and Signal processing techniques from applicable coursework or job experience is a major plus.
Good theoretical and practical understanding of basic RF Engineering and general principles, including digital modulation and demodulation.
Basic understanding of RF performance specifications and test methodology
Experience with common RF simulation and electromagnetic simulation tools is a big plus (HP ADS, CST, HFSS)
Experience with analytical simulation / analysis tools such as Matlab and Python would be very helpful.
Positive attitude, and experience working in a team environment
It would be nice if you also had:
Previous Nokia co-op experience
Excellent verbal and written communication skills
Technical leadership potential is a plus.
Should be a good planner and organizer.
Adept Problem Solver
Ability to travel internationally at short notice if needed (travel overall expected to be less than 10% of the time)
MN RAN R&D, responsible for 5G BTS development, is looking for a design engineer to participate in the development of a cellular base station. We are considering both experienced candidates and those willing to explore new areas. In this job, you will:
Design, simulate, test, and document RF circuits as part of a larger TRX design team. Design responsibilities could be in any of the following areas: PWB, power supplies, transmitters, receivers, power amplifiers, RF synthesizers, or antennas. There is also potential to work on signal processing algorithms and Digital Predistortion (DPD). Opportunities exist to grow into roles with more responsibility, such as unit integrator and chief engineer, as competencies and experience increase.
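For a flavor of the DPD work mentioned above, here is a toy numpy sketch (illustrative only, not Nokia's design flow): it models a power amplifier with a memoryless cubic compression term and applies a first-order polynomial predistorter that approximately cancels it.

```python
# Toy illustration of memoryless polynomial Digital Predistortion (DPD).
# The PA model, coefficient, and test signal are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2) * 0.3

def pa(v):
    # Memoryless PA model: unity gain with a compressive cubic term.
    return v - 0.2 * v * np.abs(v) ** 2

def dpd(v):
    # First-order polynomial inverse: pre-expand to cancel the compression.
    return v + 0.2 * v * np.abs(v) ** 2

def nmse_db(ref, out):
    # Normalized mean-square error of the output against the ideal signal.
    return 10 * np.log10(np.mean(np.abs(out - ref) ** 2) / np.mean(np.abs(ref) ** 2))

print(f"Without DPD: {nmse_db(x, pa(x)):.1f} dB NMSE")
print(f"With DPD:    {nmse_db(x, pa(dpd(x))):.1f} dB NMSE")
```

Running this shows the residual distortion dropping by tens of dB; production DPD additionally handles memory effects and adapts its coefficients from feedback.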
Candidates must have the ability to define hardware requirements based on system performance and reliability specifications.
Daily tasks could include circuit simulation, design, prototyping and testing.
Candidates must also be adept at technical writing and documentation tasks.
Candidates will also be required to support related project RF design activities by participating in design reviews and technical information exchange sessions.
Candidates may be required to work in close cooperation with subcontractors for related product development. Support other design engineers and specialists for design work.
$90k-124k yearly est. 58d ago
HW Engineer, Emech
Nokia Global 4.6
Remote
The team you will be a part of
Our Radio Development R&D Team is looking for motivated junior hardware engineers to be a part of a team involved in the development, testing, and design maintenance of 5G/6G cellular base stations.
We are seeking engineers with up-to-date knowledge of cutting-edge technologies, including Artificial Intelligence / Machine Learning (AI/ML), virtual design and simulation, automation, data analytics, 3D modeling and thermodynamics, Gallium Nitride (GaN) power amplifiers (PA), sustainable energy savings, and cloud computing. These technologies enable faster time to market and industry-leading product quality.
Required skills:
BS in Mechanical Engineering;
Excellent theoretical and practical understanding of mechanical design with emphasis on EMI containment, heat transfer, cooling techniques & environmental protection;
Experience with CAD tools preferably PTC Creo 5 or higher;
Experience with thermal modeling simulation tools -- preferably Flotherm XT;
Experience with FEA simulation tools preferably ANSYS or Creo Simulate
Proficient with Microsoft Office Suite - primarily Word, Excel, and PowerPoint
It would be nice if you also had:
Previous Nokia co-op experience
Excellent verbal and written communication skills
Good time management, planning, and organizational skills
Good mechanical aptitude with experience using power and hand tools found in a machine shop
Adept Problem Solver
Works well in a team environment
Able to travel internationally
What you will learn and contribute to
Participate in developing mechanical assemblies in one or more of the following areas:
Design and verification of metal castings, sheet metal, stampings, plastic injection moldings, elastomeric gaskets, connectors, etc.
Identify or develop mechanical parts and create the information needed to support the suppliers and manufacturing sites to produce the designs for both prototypes and mass production.
Have a working knowledge of functional dimensioning, geometric tolerancing as well as worst-case and statistical tolerance analysis techniques.
Possess a basic proficiency in finite element analysis, computational fluid dynamics simulation, and general engineering analysis.
$90k-124k yearly est. 12d ago
Backend Engineer
Lean On Me 3.7
Remote
Description
Company Overview: Lean Tech is a rapidly expanding organization situated in Medellín, Colombia. We pride ourselves on possessing one of the most influential networks within software development and IT services for the entertainment, financial, and logistics sectors. Our corporate projections offer a multitude of opportunities for professionals to elevate their careers and experience substantial growth. Joining our team means engaging with expansive engineering teams across Latin America and the United States, contributing to cutting-edge developments in multiple industries. Currently, we are seeking a Mid Backend Engineer to join our team. Here are the challenges that our next warrior will face and the requirements we look for:
Position Title: Mid Backend Engineer
Location: Remote
What you will be doing:
The primary purpose of this role is to lead backend development efforts for fintech solutions, specifically automating accounts payable and enhancing payment processes. You will be instrumental in designing, building, and testing new features, working closely with engineers, designers, and product managers to refine user experiences and deliver impactful features to a large customer base. The role involves leading the integration of third-party APIs using REST and Postman, and utilizing AWS services like EC2, EB, RDS, S3, and EventBridge for deploying robust backend systems. A significant focus will also be on resolving complex database performance issues in Postgres to ensure efficient data management. As part of our commitment to high code quality, you will conduct comprehensive code reviews and facilitate a smooth transition of the architecture into microservices environments. Through direct user engagement, you'll address and resolve bugs, ensuring high user satisfaction and reliable product operation within the broader organization.
Lead the design, development, and testing of new backend features using Django, ensuring high-quality and scalable solutions.
Integrate with third-party services via REST while leveraging tools like Postman to enhance the functionality of fintech products.
Collaborate with cross-functional teams, including engineers, designers, and product managers, to deliver an optimal user experience.
Utilize AWS services such as EC2, EB, RDS, S3, and EventBridge to deploy and maintain robust backend systems.
Analyze and resolve complex database performance issues within Postgres to ensure efficient data management and smooth transaction processing.
Conduct comprehensive code reviews and uphold high standards for unit testing and code quality.
Guide the evolution of the architecture by transitioning existing modules into microservices environments.
Address and resolve bugs through direct engagement with users, ensuring smooth product operation and enhanced user satisfaction.
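As an illustration of the third-party REST integration work above, here is a minimal Python sketch using requests with automatic retries; the endpoint and payload fields are hypothetical placeholders, not the client's actual API.

```python
# Hedged sketch of a resilient third-party REST integration.
# The base URL, endpoint, and payload fields are hypothetical placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    session = requests.Session()
    # Retry transient failures (throttling, gateway errors) with backoff.
    retry = Retry(total=3, backoff_factor=0.5,
                  status_forcelist=[429, 500, 502, 503, 504],
                  allowed_methods={"GET", "POST"})
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session

def create_payment(session: requests.Session, invoice_id: str, amount_cents: int) -> dict:
    resp = session.post(
        "https://api.example-payments.com/v1/payments",  # hypothetical endpoint
        json={"invoice_id": invoice_id, "amount_cents": amount_cents},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

The session-level retry policy keeps backoff behavior in one place instead of scattering retry loops through each integration.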
Required Skills & Experience:
5+ years of professional experience in web application development with a focus on backend systems using Python and Django.
Proficient in Python scripting and automation. Advanced expertise in deploying and maintaining backend systems using AWS services, including EC2, RDS, and S3.
Proven experience in integrating third-party APIs using REST and Postman.
Strong skills in identifying and resolving complex database performance issues, particularly in Postgres.
Demonstrable experience in building scalable, performant backend systems and familiarity with microservices architecture.
Familiarity with DevOps, including basic Docker operations and understanding CI/CD processes.
Proficient in understanding and interpreting application logs, metrics dashboards, and error tracking tools.
Excellent collaboration and communication skills, essential for working alongside cross-functional teams.
Nice to haves:
Experience in an early-stage startup environment, offering adaptability and innovative problem-solving skills.
Understanding of alerting basics and escalation procedures to enhance system monitoring.
Proficiency in scripting for basic automation tasks, such as using Bash.
Solid familiarity with microservices architecture, facilitating effective module transitions.
Comprehensive understanding of Docker, including creating and managing containers.
A sense of responsibility for the availability and reliability of deployed code.
Exceptional communication skills to articulate complex technical concepts to diverse stakeholders.
Soft Skills:
Exceptional communication skills to articulate complex technical concepts effectively to diverse team members and stakeholders, fostering a robust collaborative environment.
Strong critical thinking ability to approach challenges analytically and innovate at both the technical and user experience levels, ensuring high-quality outcomes.
Proficient in collaboration, adeptly working alongside engineers, designers, and product managers to enhance the user journey and deliver impactful features.
Adept at problem solving with a keen eagerness to identify and address core issues in product performance and functionality, contributing to seamless operations.
Resilient in challenging situations, capable of quickly diagnosing and fixing issues when they arise, maintaining product reliability and user satisfaction.
Why you will love Lean Tech:
Join a powerful tech workforce and help us change the world through technology
Professional development opportunities with international customers
Collaborative work environment
Career path and mentorship programs that will lead to new levels.
Join Lean Tech and contribute to shaping the data landscape within a dynamic and growing organization. Your skills will be honed, and your contributions will play a vital role in our continued success. Lean Tech is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
$78k-117k yearly est. 60d+ ago
CloudOps Engineer
LINQ 3.8
Remote
Who We Are:
We're a high-growth software company with a big mission: empowering K-12 district teams to do more with less.
At LINQ, we get K12. That's why we help districts transform K-12 school operations with best-in-class, cloud-based software solutions built to help districts return more resources to classrooms. Integrating finance, HR, nutrition management, and payments into a single, secure platform, LINQ reduces administrative burden. LINQ is trusted by 30% of school districts across the U.S. to help them operate more efficiently and serve over 1 billion meals to 17 million students annually, process payroll for 364,000 educators, administrators, and staff, and engage with millions of families through the free LINQ Connect app.
Our team? They're talented, committed, and fiercely loyal problem-solvers. At LINQ, you'll find challenging and meaningful work, a team that respects and uplifts one another, and a commitment to constant improvement. Our customers love us because we're attentive, patient, communicative, and solutions focused. They know they can count on us to not only anticipate their needs but to deliver the right answer every time.
LINQ's Values:
• Act with Integrity & Build Trust: Trust is the foundation of our company. We operate with the highest standards of integrity, both internally and externally. We believe in transparency, honesty, and accountability, building a culture where trust is earned and maintained.
• Deliver Excellence: We consistently exceed our clients' expectations. In every interaction, we strive to anticipate needs, provide swift solutions, and go the extra mile to relentlessly impress our customers. We communicate clearly, consistently, and in a timely way to cultivate lasting relationships.
• Embrace Challenges: We embrace a growth mindset. Challenges offer opportunities to learn, grow, and improve. Continuous learning keeps us relevant and effective to ensure our solutions remain on the leading edge of innovation.
• Collaborate & Act as One Team: Diverse skills, ideas, and perspectives are our strength. Through open communication, shared goals, and a spirit of unity and mutual respect, we collaborate to achieve excellence, drive innovation, and propel our company forward as a cohesive force.
About The Team:
LINQ's Research & Development department is dedicated to driving innovation and continuous improvement across the Payments, Nutrition, and Platform teams. Focused on exploring new technologies, optimizing systems, and developing forward-thinking solutions, this team plays a critical role in shaping the future of LINQ. Through collaboration, strategic problem-solving, and rigorous testing, they ensure LINQ's software remains cutting-edge, efficient, and aligned with the evolving needs of our customers.
About The Role:
We are seeking a CloudOps Engineer to support our Cloud Operations team. In this role, you will contribute to the strategic direction and execution of our cloud initiatives, ensuring the health and reliability of our production environments while driving transformational change.
Primary Objectives:
To ensure the reliability, performance, and security of cloud-based infrastructure and operations. This includes managing production and non-production environments, implementing cloud strategies, and leveraging Infrastructure as Code practices to drive operational excellence.
The role aims to enhance service delivery, support transformational change within the organization, and ensure compliance with service level agreements (SLAs) and data handling policies, contributing to improved operational efficiency.
What You Will Be Doing:
Plan, build, and manage cloud systems with an emphasis on Terraform and AWS.
Maintain core application infrastructure that supports customer-facing environments.
Rearchitect legacy systems to leverage cloud native tooling to optimize performance and deliver excellence.
Manage and own both production and non-production environments for application delivery, debugging, and testing.
Diagnose and resolve issues related to system health, reliability, and scalability to minimize customer impact.
Apply change management procedures to ensure security and operational continuity.
Author and execute operating procedures and guidelines for the cloud environment.
Ensure SLA compliance for customer systems and adherence to safe data handling procedures as defined by policy.
Utilize Infrastructure as Code tools (Terraform) for infrastructure deployment and management.
Integrate and manage monitoring and reporting tools (e.g., Prometheus.io, Grafana, Datadog, New Relic).
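To illustrate the monitoring integration listed above, here is a minimal exporter sketch using the prometheus_client library; the metric name and health check are placeholders, not LINQ's actual tooling.

```python
# Minimal Prometheus exporter sketch using the prometheus_client library.
# The metric name and the health probe are illustrative placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

env_healthy = Gauge("cloudops_env_healthy",
                    "1 if the environment health check passes, else 0")

def check_environment() -> bool:
    # Placeholder for a real probe (e.g., hitting an internal health endpoint).
    return random.random() > 0.05

if __name__ == "__main__":
    start_http_server(9100)  # exposes /metrics for Prometheus to scrape
    while True:
        env_healthy.set(1 if check_environment() else 0)
        time.sleep(15)
```

Grafana or Datadog would then alert on this gauge rather than on ad hoc log greps.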
What We Are Looking For:
Bachelor's degree in Computer Science, Information Technology, a related field, or equivalent experience.
2-4 years of experience in cloud engineering or a related field.
Strong knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and their services.
Strong knowledge of Infrastructure as Code concepts and automation tooling.
Experience with monitoring and reporting tools.
Experience with security best practices and data handling procedures.
Excellent problem-solving skills, attention to detail, and ability to work collaboratively in a team environment.
Effective communication skills, both verbal and written.
What will Make You Stand Out:
Deep experience with Terraform
Experience rearchitecting application infrastructure
AWS certifications
The Tech:
All 3 major cloud providers: AWS, Azure, and GCP
Azure DevOps
Terraform
Travel Requirements:
Less than 5%
Why You'll Love Working With Us
🌎Flex Your Workspace: Work remotely from one of our eligible states across the US, or, if you're near Austin, work in office three days a week!
💰Planning Your Future: Our 401(k) plan comes with a 4% employer match on total earnings (not just your base salary).
💸Performance Pays Off: Whether it's a company bonus or target sales commission, your hard work doesn't go unnoticed.
🌴Vacation Your Way: Our flexible Open Paid Time Off Plan lets you take the time you need, when you need it.
👶Paid Parental Leave: Take the time you need to welcome your new addition - We've got you covered!
🎉Ten Paid Corporate Holidays: Enjoy a little extra downtime to relax and recharge with the ten paid holidays each year.
❤️Giving Back: Feel good while doing good - 16 paid volunteer hours to support the causes that matter most to you. #LINQCares
🏥Benefits That Have Your Back (And Teeth, Too!): Rock-solid medical, dental and vision coverage. Pick your vibe: a low deductible PPO and pair with an FSA or a HDHP with a sweet HSA - with contributions from LINQ. Dental perks that even cover braces for the kiddos.
💪Wellness Perks: Employer-paid Short-Term Disability, Long Term Disability, Basic Life, and AD&D insurance. Gym reimbursements and tons of extra savings on travel assistance, employee assistance, and even pet insurance options.
📚Grow With Us: Invest in yourself with professional development opportunities to keep leveling up your skills.
🎁Rewards For Referrals: Got an amazing candidate in your network? Send them our way and earn a referral bonus when they join the team!
EOE Statement/Accommodation Notice:
As a federal contractor and equal employment opportunity employer, LINQ adheres to all applicable laws and regulations regarding employment practices, ensuring that all qualified individuals receive consideration for employment based on their qualifications. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, pregnancy, genetic information, disability, status as a protected veteran, or any other protected category under applicable federal, state, and local law.
To learn more about your rights and protections under federal law, please refer to the EEOC Equal Employment Opportunity Poster at the following link: ***************************
If you'd like to view a copy of the company's state or federally required affirmative action plans or policy statement, please email ***************. If you are an individual with a disability and would like to request a reasonable accommodation as part of the employment selection process, please contact ***************. This email address is reserved solely for job seekers with disabilities requesting accessibility assistance or an accommodation in the job application process. Please do not use this email to inquire about the status of your job application if you do not require accessibility assistance or an accommodation. Messages left for other purposes, such as solicitation, following up on an application or non-disability related technical issues, will not receive a response.
PUBLIC BURDEN STATEMENT: According to the Paperwork Reduction Act of 1995 no persons are required to respond to a collection of information unless such collection displays a valid OMB control number.
$34k-62k yearly est. 43d ago
Engineer, SRE GenAI
T-Mobile 4.5
Frisco, TX
At T-Mobile, we invest in YOU! Our Total Rewards Package ensures that employees get the same big love we give our customers. All team members receive a competitive base salary and compensation package - this is Total Rewards. Employees enjoy multiple wealth-building opportunities through our annual stock grant, employee stock purchase plan, 401(k), and access to free, year-round money coaches. That's how we're UNSTOPPABLE for our employees!
Job Overview
As an Engineer in Site Reliability Engineering (SRE) for AI Systems, you will help ensure the reliability, scalability, and performance of AI platforms. This role includes participating in on-call rotations, improving system observability, and supporting operations across cloud-native infrastructure.
This is a hands-on role ideal for someone with foundational SRE skills and a growth mindset to expand in GenAI and LLM infrastructure operations.
We pride ourselves on encouraging a culture of innovation, advocating for agile methodologies, and promoting transparency in all that we do. Join us in embodying the spirit of the 'Un-carrier' and make a tangible impact! Our team is dynamic, no day is the same, and we are diverse, inclusive, and passionate about growth and transformation. If you're up to the challenge, apply today!
Job Responsibilities:
* Participate in on-call rotations to support AI platforms and respond to production incidents with urgency and precision.
* Monitor system health and performance using tools like Grafana, Splunk, and PowerBI.
* Support cloud-native infrastructure deployments, with a focus on Azure (primary), and exposure to AWS or GCP.
* Implement runbooks and automate repetitive operational tasks to reduce toil.
* Support CI/CD pipelines and IaC deployments using GitLab pipelines and Databricks.
* Assist in the development and enforcement of Service Level Objectives (SLOs) and real-time alerts for AI APIs and services.
* Collaborate with senior engineers to improve platform reliability and scale LLM-based applications.
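To make the SLO responsibility above concrete: an SLO sets a target success rate, and the error budget is the allowed failure count that remains. A tiny Python sketch of the arithmetic, with illustrative numbers rather than T-Mobile's actual targets:

```python
# Error-budget arithmetic for a request-based SLO (illustrative numbers only).
slo_target = 0.999            # example SLO: 99.9% of requests succeed
total_requests = 10_000_000   # requests served this month (hypothetical)
failed_requests = 4_200       # observed failures (hypothetical)

budget = (1 - slo_target) * total_requests   # allowed failures: 10,000
remaining = budget - failed_requests         # failures still tolerable: 5,800

print(f"Error budget consumed: {failed_requests / budget:.0%}")
print(f"Remaining budget: {remaining:,.0f} failed requests")
```

Alerts are then tied to the rate at which the budget burns, not to individual failures.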
Education and Work Experience:
* Bachelor's Degree in Computer Science, Engineering, or a related field (Required)
* 2-4 years of experience in DevOps, SRE, or cloud platform engineering.
* Hands-on experience with monitoring/logging systems such as Prometheus, Grafana, Splunk, or OpenSearch.
* Familiarity with cloud environments (preferably Azure; AWS/GCP a plus).
* Experience in scripting or automation using Python, Bash, or PowerShell.
* Basic understanding of containerization (Docker, Kubernetes) and CI/CD concepts.
* Willingness to participate in an on-call schedule and incident resolution.
* Strong problem-solving and root cause analysis skills.
Preferred Qualifications
* Exposure to AI/ML infrastructure or LLM-based systems (e.g., OpenAI, ChatGPT, Azure OpenAI).
* Experience with infrastructure-as-code tools like Terraform or ARM templates.
* Familiarity with LLM observability or API token usage metrics.
* Passion for learning AI reliability practices and collaborating with cross-functional teams.
Knowledge, Skills and Abilities:
* Communication (Required)
* Customer Service (Required)
* Analytics (Required)
* Technical Writing (Required)
* At least 18 years of age
* Legally authorized to work in the United States
Travel:
Travel Required (Yes/No): Yes
DOT Regulated:
DOT Regulated Position (Yes/No): No
Safety Sensitive Position (Yes/No): No
Base Pay Range: $92,500 - $166,800
Corporate Bonus Target: 15%
The pay range above is the general base pay range for a successful candidate in the role. The successful candidate's actual pay will be based on various factors, such as work location, qualifications, and experience, so the actual starting pay will vary within this range.
At T-Mobile, employees in regular, non-temporary roles are eligible for an annual bonus or periodic sales incentive or bonus, based on their role. Most Corporate employees are eligible for a year-end bonus based on company and/or individual performance, which is set at a percentage of the employee's eligible earnings in the prior year. Certain positions in Customer Care are eligible for monthly bonuses based on individual and/or team performance. To find the pay range for this role based on hiring location, visit ******************************************** ID=REQ321838&paradox=1
At T-Mobile, our benefits exemplify the spirit of One Team, Together! A big part of how we care for one another is working to ensure our benefits evolve to meet the needs of our team members. Full and part-time employees have access to the same benefits when eligible. We cover all of the bases, offering medical, dental and vision insurance, a flexible spending account, 401(k), employee stock grants, employee stock purchase plan, paid time off and up to 12 paid holidays - which total about 4 weeks for new full-time employees and about 2.5 weeks for new part-time employees annually - paid parental and family leave, family building benefits, back-up care, enhanced family support, childcare subsidy, tuition assistance, college coaching, short- and long-term disability, voluntary AD&D coverage, voluntary accident coverage, voluntary life insurance, voluntary disability insurance, and voluntary long-term care insurance. We don't stop there - eligible employees can also receive mobile service & home internet discounts, pet insurance, and access to commuter and transit programs! To learn about T-Mobile's amazing benefits, check out *************************
Never stop growing!
As part of the T-Mobile team, you know the Un-carrier doesn't have a corporate ladder; it's more like a jungle gym of possibilities! We love helping our employees grow in their careers, because it's that shared drive to aim high that drives our business and our culture forward. By applying for this career opportunity, you're living our values while investing in your career growth, and we applaud it. You're unstoppable!
T-Mobile USA, Inc. is an Equal Opportunity Employer. All decisions concerning the employment relationship will be made without regard to age, race, ethnicity, color, religion, creed, sex, sexual orientation, gender identity or expression, national origin, religious affiliation, marital status, citizenship status, veteran status, the presence of any physical or mental disability, or any other status or characteristic protected by federal, state, or local law. Discrimination, retaliation or harassment based upon any of these factors is wholly inconsistent with how we do business and will not be tolerated.
Talent comes in all forms at the Un-carrier. If you are an individual with a disability and need reasonable accommodation at any point in the application or interview process, please let us know by emailing ApplicantAccommodation@t-mobile.com or calling **************. Please note, this contact channel is not a means to apply for or inquire about a position and we are unable to respond to non-accommodation related requests.
$92.5k-166.8k yearly 14d ago
HW Engineer, Emech
Nokia 4.6
Dallas, TX
The team you will be a part of
Our Radio Development R&D Team is looking for motivated junior hardware engineers to be a part of a team involved in the development, testing, and design maintenance of 5G/6G cellular base stations.
We are seeking engineers with up-to-date knowledge of cutting-edge technologies, including Artificial Intelligence / Machine Learning (AI/ML), virtual design and simulation, automation, data analytics, 3D modeling and thermodynamics, Gallium Nitride (GaN) power amplifiers (PA), sustainable energy savings, and cloud computing. These technologies enable faster time to market and industry-leading product quality.
Required skills:
BS in Mechanical Engineering;
Excellent theoretical and practical understanding of mechanical design with emphasis on EMI containment, heat transfer, cooling techniques & environmental protection;
Experience with CAD tools preferably PTC Creo 5 or higher;
Experience with thermal modeling simulation tools -- preferably Flotherm XT;
Experience with FEA simulation tools preferably ANSYS or Creo Simulate
Proficient with Microsoft Office Suite - primarily Word, Excel, and PowerPoint
It would be nice if you also had:
Previous Nokia co-op experience
Excellent verbal and written communication skills
Good time management, planning, and organizational skills
Good mechanical aptitude with experience using power and hand tools found in a machine shop
Adept Problem Solver
Works well in a team environment
Able to travel internationally
What you will learn and contribute to
Participate in developing mechanical assemblies in one or more of the following areas:
Design and verification of metal castings, sheet metal, stampings, plastic injection moldings, elastomeric gaskets, connectors, etc.
Identify or develop mechanical parts and create the information needed to support the suppliers and manufacturing sites to produce the designs for both prototypes and mass production.
Have a working knowledge of functional dimensioning, geometric tolerancing as well as worst-case and statistical tolerance analysis techniques.
Possess a basic proficiency in finite element analysis, computational fluid dynamics simulation, and general engineering analysis.
$80k-106k yearly est. 12d ago
Cybersecurity Engineer
RPI Group 4.6
Virginia
Cybersecurity Engineer
Location: Dam Neck, VA (100% On-Site)
Come work for a company that values innovation, collaboration, and making a difference! At RPI Group, Inc., you'll join a mission-driven team where your cybersecurity expertise directly supports national defense.
Position Summary:
RPI Group, Inc. is seeking a Cybersecurity Engineer to join our team in Dam Neck, VA. This position plays a key role in advancing cybersecurity operations by performing in-depth system hardening, vulnerability assessment, and security compliance activities in alignment with DoD requirements. The ideal candidate will have a solid foundation in cybersecurity practices and proven experience supporting both Linux and Windows environments across DoD networks.
Key Responsibilities:
Apply Security Technical Implementation Guides (STIGs) to system configurations.
Conduct vulnerability assessments using Assured Compliance Assessment Solution (ACAS).
Interpret and implement Security Management policy guidance and directives.
Remediate security vulnerabilities by applying vendor patches and configuration changes.
Maintain cybersecurity posture for assigned systems in alignment with DoD requirements.
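As a hedged illustration of the STIG and remediation work above, here is a small Python check of a Linux password-aging setting; the 60-day limit mirrors the published RHEL STIG rule, but the authoritative requirement is the applicable STIG itself, not this sketch.

```python
# Hedged sketch of an automated STIG-style check on Linux:
# verify that /etc/login.defs caps password lifetime at 60 days,
# mirroring the RHEL STIG password-aging rule. Confirm the exact
# requirement against the official STIG before relying on this.
import re

def check_pass_max_days(path: str = "/etc/login.defs", limit: int = 60) -> bool:
    with open(path) as f:
        for line in f:
            m = re.match(r"^\s*PASS_MAX_DAYS\s+(\d+)", line)
            if m:
                return int(m.group(1)) <= limit
    return False  # open finding: the value is not set at all

if __name__ == "__main__":
    print("PASS" if check_pass_max_days() else "OPEN FINDING")
```

In practice such checks are run at scale by ACAS/SCAP tooling; a script like this is only a spot check.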
Qualifications:
Education: Bachelor of Science in Information Systems, Information Technology, Computer Science, or Computer Engineering.
Certification: Must meet DoD 8570.01-M requirements IAW DFARS ************ Baseline Certification, minimum IAT Level II.
Experience: Minimum of five (5) years of full-time professional Cybersecurity experience, including:
STIG implementation
Vulnerability assessments using ACAS
Familiarity with DoD cybersecurity policy and directives
Patch remediation on Linux and Windows systems
Clearance: Active Secret Clearance required.
Skills:
Strong analytical and troubleshooting abilities
Proficiency in both Linux and Windows environments
Effective written and verbal communication
Ability to interpret technical documentation and security policies
RPI Group, Inc. is an Equal Employment Opportunity (EEO) Employer.
$69k-97k yearly est. 60d+ ago
Cybersecurity Engineer III
RPI Group 4.6
Virginia
RPI Group, Inc. is seeking a Cybersecurity Engineer III to support our Navy customer in Dam Neck, VA. The selected candidate will play a key role in the implementation, assessment, and sustainment of cybersecurity controls and configurations in accordance with DoD standards and program requirements.
Required Experience:
Ten (10) years of full-time professional cybersecurity experience performing system hardening with demonstrated experience in all of the following areas:
Computer security, military system specifications, security management policy guidance and directives, and DoD and cybersecurity policies
Risk Management Framework (RMF) and implementation of cybersecurity and IA boundary defense techniques, including the use of IA-enabled appliances and applications such as Firewalls, IDS, IPS, Switches/Routers, Cross Domain Solutions (CDS), Enterprise Mission Assurance Support Service (eMASS), and Endpoint Security Solution (ESS)
Performing Security Technical Implementation Guide (STIG) implementation
Conducting vulnerability assessments using the Assured Compliance Assessment Solution (ACAS) tool
Remediating vulnerability findings, including applying vendor patches on both Linux and Windows operating systems
Minimum Requirements:
Bachelor of Science in Information Systems, Information Technology, Computer Science, or Computer Engineering
Minimum IAT Level III certification
Active Secret security clearance required
RPI Group, Inc. is an Equal Employment Opportunity (EEO) Employer.
$69k-97k yearly est. 60d+ ago
SPOC Engineer
Inmarsat Plc 4.8
Reston, VA
Inmarsat Government is a wholly-owned subsidiary of Viasat, Inc., responsible for the organization's leadership position in the U.S. government market. It is headquartered in Reston, VA, with network assets and operations around the globe. Inmarsat Government provides its services and products in accordance with a system that is registered to ISO 9001:2015. Inmarsat Government, a Viasat company, continues to deliver the world's most advanced global, mobile satellite communication services to U.S. defense, intelligence, homeland security, public safety and civilian agencies, with highly reliable, secure and affordable connectivity.
Built with government users in mind, Inmarsat Government solutions provide resilient, flexible capabilities to complement government satellite resources, anytime, anywhere. Leveraging an industry-leading scalable, multi-band network infrastructure, Inmarsat Government offers a suite of managed network services and end-to-end communication solutions to support users on land, at sea and in the air, even in the world's most remote regions.
Primary purpose of the position:
The SPOC Engineer is responsible for the Secure Payload Operations Center and daily operations of the payloads and associated ground stations. The SPOC Engineer manages all SPOC activities performed during the operating shift. These activities may be under guidance or instruction from management/support personnel or conducted directly by the individual, and will include routine on-station operational tasks on the satellites and Ground Control System (GCS), special activities in response to non-nominal conditions, and off-line support functions, together with first-level troubleshooting and anomaly resolution of the satellites and GCS.
The Inmarsat Government SPOC is a highly automated environment, and the emphasis of the SPOC Payload Controller function is on supervising the execution of automated procedures rather than on their manual execution. However, comprehension and understanding of what each script is executing for both space and ground components is still critical. This position requires the staff member to perform regular satellite operations and to respond quickly at any time of the day or night to operational anomalies arising in the payload or ground control system. The SPOC Engineer is expected to exercise all operations in a professional and disciplined manner, executing routine procedures to the letter. The SPOC Engineer is expected to execute their day-to-day responsibilities from the Reston HQ, as well as provide rotating on-call support in addition to shift coverage.
Key responsibilities of the position:
System development responsibilities include:
- Establishing the real time control environment at the SPOC
- Validating procedures in a development environment that includes the payload simulator
- Executing the intersegment validation of the SPOC
- Participating in the intersegment validation events between the SPOC, Satellite Access Sites, London SCC, and external NOC(s).
- Develop and mature standard operating procedures and tools
- Maintain proficiency in payload and ground operating systems and tasks
- Identify, evaluate, and analyze various anomaly conditions and use logical reasoning to resolve
On a shift working basis, be responsible for ensuring that SPOC in-orbit payloads and associated Ground Control System are maintained in an operational status by:
- Monitoring the satellites and Ground Control System to verify operational status and to detect and report anomalous conditions that could adversely affect operations.
- Executing scheduled commands on spacecraft and GCS according to procedure, operational notes, and daily activity schedule.
- Taking effective actions, within given procedures, guidelines or specific instructions, to eliminate or minimize the effect of anomalous conditions on the satellites and GCS.
- Operating Satellite Control Center (SCC) equipment, I4S computer systems and Ground Control System (GCS) remote tools according to procedures, work practices, or user guides.
- Working in a highly automated environment, the emphasis of the SCC Spacecraft Controller function is placed on supervision of executing automated procedures rather than on their manual execution.
To assist spacecraft and flight dynamics analyst staff in investigations and data gathering in response to routine, anomalous or non-routine events on the satellites or ground equipment infrastructure. This includes being part of an “incident team” during contingency operations.
Participate in regular spacecraft training and simulation sessions.
Maintain an accurate record of all SPOC and GCS activities in electronic logbooks.
Provide support to both external and in-house hardware and software support groups as required to troubleshoot or test the ground system.
Provide support in writing and testing I4S automated procedures.
Provide the Spacecraft (S/C) Analyst Group with support in writing satellite health and performance reports.
To liaise with all remote TT&C Station personnel, supporting engineering groups, contractors/vendors and other relevant personnel in ensuring that real-time activities conducted by them, or with them, are in accordance with operational requirements and authorizations.
To coordinate with the Network Operations Center (NOC) on all matters affecting the payload of the satellites, together with areas of the GCS which are of mutual utilization.
To liaise with members of SPOC Operations management, Spacecraft Analysis, or Flight Dynamics in all relevant matters, including real-time and off-line activities and tasks.
Essential knowledge and skills:
Completed (or are pursuing) a BS in a related (electrical, systems, computer, aerospace, etc.) engineering discipline
5+ years' experience working in an operations environment
Required knowledge and skills:
Proficiency in Windows and Linux operating systems
Previous experience of satellite operations and satellite control systems.
Previous experience with satellite communications payloads and performance budgets, including familiarity with transponded link budgets, G/T budgets, EIRP budgets, gain/loss budgets, and complex channelization plans.
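To give a flavor of the budget work above, here is a short Python sketch of a downlink C/N0 calculation from EIRP, free-space path loss, and G/T; all values are illustrative, not Inmarsat parameters.

```python
# Illustrative satellite downlink budget:
#   C/N0 [dB-Hz] = EIRP - FSPL + G/T - 10*log10(k)
# All values below are invented for demonstration, not Inmarsat parameters.
import math

eirp_dbw = 45.0         # satellite EIRP toward the terminal, dBW
g_over_t = 20.0         # ground station figure of merit, dB/K
freq_hz = 1.6e9         # L-band example frequency
dist_m = 3.6e7          # approximate GEO slant range, meters
boltzmann_dbw = -228.6  # 10*log10(Boltzmann constant), dBW/K/Hz

# Free-space path loss: 20*log10(4*pi*d*f/c)
fspl_db = 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)
cn0_dbhz = eirp_dbw - fspl_db + g_over_t - boltzmann_dbw
print(f"FSPL = {fspl_db:.1f} dB, C/N0 = {cn0_dbhz:.1f} dB-Hz")
```

A real budget adds atmospheric, pointing, and implementation losses on top of this skeleton.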
Required Security Clearance:
TS Collateral Clearance required.
Willingness/ability to obtain TS/SCI once hired is required, if you don't already have one.
$67k-90k yearly est. 10d ago
Mid-Level Data & Analytics Engineer
Lean On Me 3.7
Remote
Description
Company Overview: Lean Tech is a rapidly expanding organization situated in Medellín, Colombia. We pride ourselves on possessing one of the most influential networks within software development and IT services for the entertainment, financial, and logistics sectors. Our corporate projections offer many opportunities for professionals to elevate their careers and experience substantial growth. Joining our team means engaging with expansive engineering teams across Latin America and the United States, contributing to cutting-edge developments in multiple industries.
We are seeking a Mid-Level Data & Analytics Engineer to join our client's team and play a key role in building and maintaining scalable data solutions. This position focuses on designing and managing data pipelines using Snowflake, DBT, Python, and GCP, while enabling business teams with reliable insights through Power BI.
Position Title: Mid-Level Data & Analytics Engineer
Location: Remote - LATAM
What you will be doing:
Design, build, and optimize ELT pipelines for data ingestion and transformation.
Develop and maintain Snowflake data warehouse models to enable reliable reporting.
Implement and manage transformations in DBT.
Create and support Python-based data integrations within GCP for shuttling raw data into Snowflake.
Collaborate closely with data leaders and stakeholders to ensure accuracy, performance, and scalability.
Ensure data quality through validation, monitoring, and troubleshooting of pipelines.
Document data models, transformations, and system integrations.
Support BI and reporting needs by enabling downstream data availability for Power BI dashboards.
Contribute to improving processes, proactively addressing gaps, and ensuring alignment with business expectations.
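As a sketch of the GCP-to-Snowflake ingestion pattern described above (assuming the snowflake-connector-python library; the account, stage, and table names are placeholders):

```python
# Hedged sketch: load files staged in GCS into Snowflake with COPY INTO.
# Credentials, warehouse, stage, and table names are hypothetical placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="INGEST_WH",
    database="RAW",
    schema="EVENTS",
)
try:
    cur = conn.cursor()
    # @gcs_stage would be an external stage pointing at a GCS bucket.
    cur.execute("""
        COPY INTO raw_events
        FROM @gcs_stage/events/
        FILE_FORMAT = (TYPE = 'JSON')
    """)
finally:
    conn.close()
```

Downstream, DBT models would then transform raw_events into the reporting layer Power BI reads from.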
Requirements & Qualifications:
To excel in this role, you should possess:
3+ years in data engineering or backend systems with a strong data focus.
Hands-on experience with:
Snowflake (data modeling and warehouse optimization).
DBT (data transformation layer).
Python (for building data pipelines and integrations).
GCP (data ingestion/storage workflows).
Strong SQL skills for querying, modeling, and performance tuning.
Experience working with ELT data patterns.
Good To Have:
Experience with Power BI (building dashboards, reports, or supporting stakeholders).
Exposure to iPaaS solutions (Digibee); this is secondary, not a primary responsibility, but a very big plus.
Familiarity with additional BI tools (Tableau, Looker, etc.).
Experience presenting technical outcomes to customers or external stakeholders.
Knowledge of other cloud platforms (AWS, Azure).
Familiarity with AI and ML applications.
Exposure to data lake, streaming, or real-time processing architectures.
Soft Skills:
Strong communication skills with the ability to explain technical concepts clearly.
Proactive, detail-oriented, and capable of independently managing tasks while aligning with team objectives.
Strong problem-solving mindset.
Sense of urgency in handling tasks and meeting deadlines.
Self-sufficient and able to work independently.
Reliable and dependable team member.
Passionate about data and data-driven decision-making.
Identifies and seeks out automation opportunities within existing processes.
Why you will love Lean Tech:
Join a powerful tech workforce and help us change the world through technology
Professional development opportunities with international customers
Collaborative work environment
Career path and mentorship programs that will lead to new levels.
Join Lean Tech and contribute to shaping the data landscape within a dynamic and growing organization. Your skills will be honed, and your contributions will be vital to our continued success. Lean Tech is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
$94k-137k yearly est. 60d+ ago
Mid Level Microsoft Fabric Data Engineer
Lean On Me 3.7
Remote
Description
Company Overview: Lean Tech is a rapidly expanding organization situated in Medellín, Colombia. We pride ourselves on possessing one of the most influential networks within software development and IT services for the entertainment, financial, and logistics sectors. Our corporate projections offer many opportunities for professionals to elevate their careers and experience substantial growth. Joining our team means engaging with expansive engineering teams across Latin America and the United States, contributing to cutting-edge developments in multiple industries.
Position Title: Mid Level Microsoft Fabric Data Engineer
Location: Remote (LATAM)
What you will be doing:
We are seeking an experienced Data Engineer responsible for designing, building, and maintaining end-to-end analytics pipelines in Microsoft Fabric and Azure. Key tasks include creating performant ETL/ELT workflows with PySpark and SQL, managing telemetry and time-series data in ADX with KQL, and implementing hot/cold storage strategies. The engineer will own production pipelines, ensure data quality, apply CI/CD and infrastructure-as-code, and establish monitoring. Collaboration with product owners, analysts, and BI engineers is essential, along with mentoring juniors and driving platform improvements. Candidates need 3-6+ years of data engineering experience, strong SQL and KQL/ADX expertise, and recent Fabric/Azure experience; Power BI knowledge is a plus but not the focus.
Design, implement, and maintain end-to-end ETL/ELT data pipelines using Microsoft Fabric components (Data Factory, Synapse Data Engineering), Fabric lakehouses/warehouses and OneLake to deliver analytics-ready datasets for downstream reporting and analysis.
Own data modeling and transformations across Spark (PySpark) and SQL workloads, producing performant, cost‑efficient schemas and semantic models optimized for large-scale analytics.
Develop and apply advanced SQL and Kusto Query Language (KQL) techniques to tune query performance, optimize aggregations and time‑series analyses, and ensure efficient access to multi‑GB/TB datasets.
Build and tune Azure Data Explorer (Kusto) solutions: define ingestion patterns (e.g., queued/batched ingestion), retention and hot/cold storage strategies, partitioning and materialized views for gold-layer analytics.
Implement storage and file-layout best practices (Parquet/Delta formats, effective partitioning and compaction strategies) in OneLake to support high-throughput time‑series data and reduce small‑file overhead.
Implement development best practices for data engineering artifacts: author IaC/CI‑CD pipelines, apply Git-based workflows, automate testing and automate deployments for repeatable, auditable releases.
Establish observability and monitoring for data workloads: configure metrics, alerts, capacity planning and cost‑monitoring to maintain pipeline reliability and resource efficiency (excluding run‑book/on‑call commitments where not in scope).
Ensure data quality, lineage, security and compliance by implementing validation checks, access controls, documentation and collaborating with data governance stakeholders on cataloging and classification.
Collaborate with product owners, analysts and BI developers to translate business requirements into data solutions and deliver data‑ready semantic/data models for reporting and analytics consumption.
Mentor and guide junior engineers, contribute to team best practices, and drive architectural decisions and technical improvements across Fabric and Azure data platforms.
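As a hedged sketch of the partitioning and small-file-compaction practices above; paths and column names are invented, and in Fabric the target would typically be a OneLake Delta table rather than the plain Parquet shown here:

```python
# Hedged PySpark sketch: compact small files and write partitioned Parquet.
# Paths and columns are invented; in Fabric the target would be a OneLake
# lakehouse path, typically in Delta format rather than plain Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("telemetry-compaction").getOrCreate()

raw = spark.read.json("/landing/telemetry/")  # hypothetical landing zone
daily = raw.withColumn("event_date", F.to_date("event_ts"))

# Repartition by the partition column so each date lands in a few large
# files instead of thousands of small ones (reducing small-file overhead).
(daily.repartition("event_date")
      .write.mode("append")
      .partitionBy("event_date")
      .parquet("/lake/silver/telemetry/"))
```

The same layout principle (partition on the query's time filter, compact within partitions) applies whether the sink is Parquet, Delta, or an ADX hot/cold tier.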
Requirements & Qualifications
To excel in this role, you should possess:
3-6 years of professional data engineering experience, including at least 2 years hands‑on with Microsoft Fabric Data Engineering (pipelines, notebooks, lakehouse/warehouse, OneLake)
Advanced, hands‑on experience with Microsoft Fabric for data engineering workloads and OneLake integration
Proven ability to design, build and maintain end‑to‑end ETL/ELT pipelines using Fabric, Azure Data Factory and/or Synapse Data Engineering
Strong Spark experience (PySpark and/or Scala) and practical familiarity with Delta/Parquet formats for large‑scale transformations and storage
Advanced SQL/T‑SQL development skills with demonstrated performance tuning and optimization for multi‑GB/TB datasets
Intermediate data modeling and transformation skills focused on delivering analytics‑ready datasets and optimized query performance for reporting workloads
Advanced experience with Azure Data Explorer (Kusto/ADX) and Kusto Query Language (KQL), including complex queries, aggregations, time‑series analysis and query performance tuning
Practical experience with ADX ingestion and telemetry patterns (e.g., queued/batched ingestion), retention strategies and use of materialized views for gold‑layer outputs
Knowledge of storage partitioning and file layout best practices for time‑series data (partitioning, compaction and small‑file handling) when using Parquet/Delta in OneLake
Familiarity with Synapse Data Engineering, Azure Data Factory, and related Fabric runtimes (lakehouses and data warehouses) as part of the platform architecture
Proven track record with large‑scale data movement and ETL/ELT pipeline reliability and performance
Practical experience integrating data outputs with BI tools and semantic models; Power BI integration experience is beneficial, though the primary focus is data engineering
Experience applying development best practices: Git, CI/CD for data pipelines, infrastructure‑as‑code patterns, automated testing and deployment for data artifacts
Skills in monitoring and operating production pipelines (observability, alerting, incident response, capacity planning, and cost optimization); an on-call rotation is not expected
Strong English communication skills, experience working in Agile teams, and the ability to collaborate with product owners, analysts and BI developers
Demonstrated ability to mentor junior engineers, contribute to team best practices, and influence architectural and technical improvements across Fabric/Azure data platforms
Working familiarity with adjacent Azure services (Azure Data Lake, Databricks, Logic Apps) and an understanding of data governance/catalog concepts (OneLake/Purview) is desirable
Nice to have:
Practical exposure to telemetry ingestion technologies such as Azure Event Hubs and the Data Collector API, and experience integrating these sources with ADX or Fabric ingestion patterns
Experience designing and executing performance benchmarking and load‑testing for large ETL/ELT workloads to validate throughput, latency and cost trade‑offs
Familiarity with data‑testing practices and frameworks for pipelines and notebooks (unit and integration testing) to strengthen CI/CD reliability
Proven ability to produce clear technical documentation and runbooks, and to present technical designs and trade‑offs to non‑technical stakeholders
Relevant Microsoft/Azure certifications or formal training in Fabric/Azure data engineering, Spark or ADX (desirable but not required)
Experience with performance and cost‑profiling methodologies for storage and computation to inform optimization and capacity planning decisions.
Manage end-to-end pipelines, ensure performance, cost optimization, and production readiness.
Partner with product owners, analysts, and BI teams to deliver analytics-ready datasets and models.
Apply structured thinking to data modeling, performance tuning, and large-scale data movement.
Implement monitoring, observability, alerts, and incident response for reliable pipelines.
Enforce validation, security, access controls, and compliance standards.
Balance performance, cost, and complexity when choosing storage, partitioning, and ingestion strategies.
Contribute within Agile teams, actively supporting planning, delivery, and production deployment.
Soft Skills:
Strong written and verbal English skills; able to document and present to technical and non-technical stakeholders.
Excellent problem-solving and analytical skills.
Demonstrates a willingness to learn and grow.
Highly motivated to build upon existing work and set new benchmarks.
Takes responsibility and drives projects forward with a strong sense of accountability.
Why you will love Lean Tech:
Join a powerful tech workforce and help us change the world through technology.
Professional development opportunities with international customers.
Collaborative work environment.
Career path and mentorship programs that will lead to new levels.
Join Lean Tech and contribute to shaping the data landscape within a dynamic and growing organization. Your skills will be honed, and your contributions will be vital to our continued success. Lean Tech is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
$94k-137k yearly est. 60d+ ago
Mid-Level Data Engineer
Lean On Me 3.7
Remote
Description
Company Overview
Lean Tech is a rapidly growing technology company based in Medellín, Colombia, with a strong presence across Latin America and the United States. Recognized for its influential network within the software development and IT services industries, Lean Tech delivers solutions for the entertainment, financial, and logistics sectors. The company is dedicated to fostering a culture that emphasizes professional growth, collaboration, and technical excellence, providing numerous career advancement opportunities for its employees.
Lean Tech is distinguished by its modern engineering teams and its commitment to leveraging advanced technologies, particularly within the Microsoft Azure ecosystem, to deliver innovative data and analytics solutions. The organization encourages participation in Agile methodologies and values expertise in areas such as ETL/ELT pipeline design, data integration, data quality assurance, and automation. With strengths in tools including Azure Fabric, Azure Data Factory, Azure Data Lake, SQL Server, and Python, Lean Tech empowers its teams to drive impactful projects that support advanced analytics and machine learning initiatives. As an equal opportunity employer, Lean Tech is committed to diversity and inclusion, creating an environment where every team member can thrive.
Position Overview
This role is instrumental in designing, building, and optimizing data pipelines within a Microsoft Azure environment to enable critical business analytics and machine learning initiatives. Responsibilities include development of ETL/ELT pipelines, data integration, and direct management of Azure Data Lake, Azure Fabric, Azure Data Factory, and SQL Server resources. The position plays a key part in supporting data scientists and BI teams by ensuring that data systems are structured for advanced analytics, forecasting, and machine learning, and by integrating APIs, particularly REST APIs, across the Microsoft ecosystem.
You will contribute to data pipeline orchestration and monitoring using tools such as Azure Data Factory (ADF), Azure Databricks, Azure Functions, and Logic Apps, while maintaining data quality through validation, monitoring, troubleshooting, and performance tuning. Familiarity with medallion architecture and modern data warehouse practices is expected, along with strong proficiency in SQL and Python, particularly using the Azure SDK for Python. Automation of data collection, processing, and reporting is required to drive efficiency and data integrity.
The role supports ongoing data platform re-architecture efforts, documentation of data models and pipelines, and effective management of data security. You will collaborate within Agile teams, participate in ceremonies such as sprint planning and daily stand-ups, and utilize Agile tools like Jira and Confluence.
This position offers the opportunity to make a significant impact on Lean Tech's data landscape while collaborating across diverse teams in a dynamic, growth-focused organization. Unique challenges include working at scale with Azure cloud services, supporting enterprise data systems modernization, and fostering continuous improvement in data engineering practices.
What You Will Be Doing
Design, build, and optimize advanced ETL/ELT pipelines and data integration workflows to enable robust, reliable, and scalable data ingestion, transformation, and delivery across the Microsoft Azure ecosystem.
Leverage Azure Data Factory, Azure Fabric, Azure Data Lake, and SQL Server to modernize, migrate, and maintain data systems in alignment with evolving business requirements.
Implement and maintain medallion architecture (bronze/silver/gold layers) to effectively structure, standardize, and organize data for downstream analytical and business use cases.
Establish data quality assurance processes by validating, monitoring, troubleshooting, and performance tuning data pipelines with Azure orchestration and monitoring tools such as ADF, Azure Databricks, Azure Functions, and Logic Apps.
Integrate and manage REST APIs and data flows within the Microsoft environment to support seamless data orchestration and automation initiatives.
Develop and utilize advanced SQL and Python (including Azure SDK for Python) to process, automate, and enhance data infrastructure and analytics workflows.
Document data models, pipeline architecture, and integrations to support transparency, knowledge sharing, and ongoing platform scalability.
Contribute to ongoing re-architecture initiatives to ensure the data platform's scalability, modernization, and alignment with business strategy.
Participate in Agile ceremonies and utilize Agile tools (Jira, Confluence) to collaborate effectively with Data Scientists, BI teams, and stakeholders.
Implement systems and best practices to monitor data quality, ensure data integrity, and manage data security across all pipelines and platforms.
Automate data collection, processing, and reporting tasks to improve operational efficiency and reduce manual intervention.
Stay current with evolving Azure data and analytics technologies, modern data warehouse practices, and industry trends to continuously improve systems and workflows.
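To make the medallion layering above concrete, here is a minimal illustrative sketch, in plain pandas rather than the Azure stack named above; the column names and cleaning rules are hypothetical:

```python
# Minimal medallion-architecture sketch: bronze (raw) -> silver (clean) -> gold (aggregated).
# Illustrative only; in an Azure stack these steps would typically run in
# Azure Databricks or Fabric against Delta tables in Azure Data Lake.
import pandas as pd

# Bronze: raw ingested records, kept as-is (duplicates, nulls, mixed types included).
bronze = pd.DataFrame({
    "order_id": [1, 1, 2, 3],
    "amount": ["10.5", "10.5", None, "7.25"],
    "region": ["us", "us", "eu", "EU"],
})

# Silver: deduplicate, drop unusable rows, enforce types, standardize values.
silver = (
    bronze.drop_duplicates(subset="order_id")
          .dropna(subset=["amount"])
          .assign(amount=lambda df: df["amount"].astype(float),
                  region=lambda df: df["region"].str.upper())
)

# Gold: business-level aggregate ready for BI or ML consumption.
gold = silver.groupby("region", as_index=False)["amount"].sum()

print(gold)
```

In production each layer would be a persisted table with schema enforcement and lineage, but the shape of the transformation is the same.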
Required Skills & Experience
Bachelor's degree in Computer Science, Engineering, or a related field
3+ years of experience in data engineering or backend systems with a strong data focus
Advanced expertise in designing, building, and optimizing ETL/ELT pipelines for data integration
Strong proficiency in SQL and Python for data processing and automation; experience with the Azure SDK for Python (see the upload sketch after this list)
Hands-on experience with the Microsoft Azure ecosystem, including Azure Fabric, Azure Data Factory, Azure Data Lake, and SQL Server
Practical experience with data pipeline orchestration, monitoring, and data quality assurance using tools such as Azure Data Factory, Azure Databricks, Azure Functions, and Logic Apps
Working familiarity with medallion architecture (bronze/silver/gold) and modern data warehouse practices
Applied knowledge of API integration, specifically with REST APIs, within the Microsoft ecosystem
Experience in documenting data models, pipelines, and integrations for transparency and knowledge sharing
Proficiency in version control (Git) and Agile tools (Jira, Confluence); experience participating in Agile ceremonies such as sprint planning and daily stand-ups
Strong analytical and problem-solving skills with the ability to optimize and improve large-scale data systems
Experience in implementing and maintaining data infrastructure and data sets, including data quality monitoring and ensuring data integrity and security
Automation of data collection, processing, and reporting tasks to enhance efficiency and reduce manual errors
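For the Azure SDK for Python item above, a minimal hedged sketch of landing a raw payload in Azure Data Lake Storage Gen2; the account URL, filesystem, and path are placeholders, and credentials are assumed to come from the ambient environment:

```python
# Hypothetical sketch: land a raw payload in Azure Data Lake Storage Gen2
# using the Azure SDK for Python. Account, container, and path are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

ACCOUNT_URL = "https://<storage-account>.dfs.core.windows.net"  # placeholder

service = DataLakeServiceClient(account_url=ACCOUNT_URL,
                                credential=DefaultAzureCredential())
filesystem = service.get_file_system_client("bronze")            # landing zone
file_client = filesystem.get_file_client("ingest/orders/2024-01-01.json")

payload = b'{"order_id": 1, "amount": 10.5}'
file_client.upload_data(payload, overwrite=True)  # create or replace the file
```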
Nice to Have
Familiarity with Azure Machine Learning (AzureML) or similar platforms for supporting advanced analytics and AI initiatives
Exposure to big data technologies such as Spark or Hadoop within the cloud ecosystem
Experience with additional cloud platforms (e.g., AWS, Google Cloud Platform) to complement Azure skills
Certifications such as Microsoft Certified: Azure Data Engineer Associate or equivalent
Competence with containerization or orchestration tools like Docker or Kubernetes
Knowledge of DevOps practices applied to data engineering workflows
Experience with data visualization or BI tools (e.g., Power BI, Tableau) for facilitating collaboration between teams
Strong communication skills for engaging with cross-functional stakeholders
Ability to adapt quickly to emerging technologies and industry best practices
Background in mentoring or supporting the professional development of team members
Soft Skills
Strong collaboration skills, demonstrated by effective teamwork with Data Scientists, BI teams, and stakeholders to align business objectives with technical solutions.
Clear and concise communication abilities, essential for documenting complex data models, pipelines, and integrations, and for participating in Agile ceremonies such as sprint planning and daily stand-ups.
Adaptability and willingness to embrace new tools and technologies within the Microsoft Azure ecosystem, with a continuous improvement mindset in a dynamic work environment.
Strong problem-solving approach, applied to troubleshooting, performance tuning, and ensuring data quality and integrity in large-scale data systems.
Proven organizational and time management skills, necessary for balancing multiple priorities and automating data collection, processing, and reporting tasks efficiently.
Commitment to accountability and knowledge sharing, fostering transparency within cross-functional engineering initiatives.
Why You Will Love Working with Lean Tech
Join a powerful tech workforce and help us change the world through technology
Professional development opportunities with international customers
Collaborative work environment
Career path and mentorship programs that lead to new levels of responsibility.
Join Lean Tech and contribute to shaping the data landscape within a dynamic and growing organization. Your skills will be honed, and your contributions will play a vital role in our continued success. Lean Tech is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
$94k-137k yearly est. Auto-Apply 60d+ ago
Data Engineer Business Intelligence
Fusion Connect 4.2
Remote
At Fusion Connect, we're not just offering jobs - we're building careers that empower, inspire, and grow with you.
If you're passionate about making an impact, collaborating with forward-thinking teams, and being part of a company that values your voice, you're in the right place.
We're thrilled you're exploring this opportunity with us - take a look, imagine the possibilities, and let's shape the future of connectivity together.
Summary:
We are seeking a skilled Data Engineer to join our Business Intelligence (BI) team. In this role, you will play a crucial part in building and optimizing the infrastructure that powers data-driven decision-making across the organization. You will work closely with BI analysts and business stakeholders to ensure reliable, timely, and actionable insights through scalable data pipelines and robust data models.
Job Description:
Key Responsibilities
Design, build, and maintain scalable ETL/ELT pipelines to support BI and analytics use cases.
Collaborate with BI analysts and business stakeholders to understand data needs and translate them into technical requirements.
Develop and manage data models and data marts that enable efficient reporting and analysis.
Ensure data quality, integrity, and governance across all BI datasets (a minimal validation sketch follows this list).
Optimize data workflows for performance, scalability, and cost-efficiency in cloud and/or on-prem environments.
Integrate data from multiple sources including internal systems (ERP, CRM, operational databases) and third-party APIs.
Monitor and troubleshoot data pipelines and workflows to ensure timely data availability.
Implement and maintain metadata management, data lineage, and documentation for transparency and compliance.
Support the BI team in developing dashboards, reports, and self-service analytics tools.
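As an illustration of the data-quality responsibility above, a minimal, framework-free sketch of a validation gate a BI pipeline might run before publishing a dataset; the column names and rules are hypothetical:

```python
# Hypothetical data-quality gate for a BI dataset: fail fast before publishing.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality violations."""
    errors = []
    if df["customer_id"].isna().any():
        errors.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        errors.append("customer_id is not unique")
    if (df["mrr"] < 0).any():
        errors.append("mrr contains negative values")
    return errors

df = pd.DataFrame({"customer_id": [1, 2, 2], "mrr": [99.0, 49.0, -1.0]})
problems = validate(df)
if problems:
    raise ValueError("; ".join(problems))  # block the load and surface the issue
```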
Required Qualifications
Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
3+ years of experience in data engineering, preferably supporting BI or analytics functions.
Proficiency in SQL and experience with other programming languages (e.g., Python).
Hands-on experience with ETL tools and data integration patterns.
Experience with data warehousing platforms.
Familiarity with BI tools such as Tableau and Power BI.
Strong understanding of data modeling techniques.
Experience working in cloud environments (AWS, GCP, or Azure).
Preferred Qualifications
Experience with CI/CD pipelines for data workflows.
Familiarity with data governance frameworks and privacy regulations (e.g., GDPR, CCPA).
Understanding of agile methodologies and experience working in cross-functional teams.
Strong problem-solving skills and the ability to communicate technical concepts to non-technical audiences.
$93k-135k yearly est. Auto-Apply 60d+ ago
Staff Data Engineer - Cloud Data Platform
Calix 4.8
Remote
Please note that all emails from Calix will come from a @calix.com email address. If you receive a communication that you think may not be from Calix, please report it to us at **************************. This is a remote position that could be based anywhere in the United States.
Calix is leading a service provider transformation to deliver a differentiated subscriber experience around the Smart Home and Business, while monetizing their network using Role based Cloud Services, Telemetry, Analytics, Automation, and the deployment of Software Driven Adaptive networks.
As part of a high-performing global team, the right candidate will play a significant role as a Calix Cloud Data Engineer, providing architecture design, implementation, and technical leadership across data ingestion, extraction, transformation, and analytics.
Responsibilities and Duties:
Work closely with Cloud product owners to understand and analyze product requirements and provide feedback.
Develop conceptual, logical, and physical models and metadata solutions.
Design and manage an array of data design deliverables, including data models, data diagrams, data flows, and the corresponding data dictionary documentation.
Determine database structural requirements by analyzing client operations, applications, and data from existing systems.
Provide technical leadership of software design in meeting requirements of service stability, reliability, scalability, and security.
Guide technical discussions within the engineering group and make technical recommendations; perform design and code reviews with peer engineers.
Guide testing architecture for large-scale data ingestion and transformations (a minimal ingestion sketch follows this list).
Serve in a customer-facing engineering role, debugging and resolving field issues.
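For the ingestion item above, a minimal sketch of the load-then-verify step an ingestion test harness might wrap; the project, dataset, table, and fields are hypothetical, and it assumes the google-cloud-bigquery package and ambient GCP credentials:

```python
# Hypothetical sketch: insert rows into BigQuery and verify the load,
# the kind of step a large-scale ingestion test would wrap with assertions.
from google.cloud import bigquery  # assumes google-cloud-bigquery is installed

client = bigquery.Client()  # uses ambient GCP credentials

table_id = "my-project.telemetry.device_events"  # placeholder dataset/table
rows = [{"device_id": "dev-1", "metric": "rx_power", "value": -21.5}]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
assert not errors, f"insert failed: {errors}"

query = f"SELECT COUNT(*) AS n FROM `{table_id}` WHERE device_id = 'dev-1'"
for row in client.query(query).result():
    print(f"rows for dev-1: {row.n}")
```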
Qualifications:
This role may be required to travel and attend face-to-face meetings and Calix sponsored events.
10+ years of development experience performing Data modeling, master data management and building ETL/data pipeline implementations.
Cloud Platforms: Proficiency in both Google Cloud Platform (GCP) services (BigQuery, Dataflow, Dataproc, PubSub/Kafka, Cloud Storage) and AWS.
Big Data Technologies: Knowledge of big data processing frameworks such as Apache Spark and Flink.
Programming Languages: Strong knowledge of SQL, at least one programming language (Python, Java, or Scala), and dbt.
Data Visualization: Experience with BI tools such as Google Data Studio, Looker, ThoughtSpot, and using BigQuery BI Engine for optimized reporting.
Problem Solving: Strong analytical and troubleshooting skills, particularly in complex data scenarios.
Collaboration: Ability to work effectively in a team environment and engage with cross-functional teams.
Communication: Proficient in conveying complex technical concepts to stakeholders.
Knowledge of data governance, security best practices, and compliance regulations in both GCP and AWS environments.
Bachelor's degree in Computer Science, Information Technology or a related field.
Location:
Remote-based position located in the United States.
#LI-Remote
The base pay range for this position varies based on the geographic location. More information about the pay range specific to candidate location and other factors will be shared during the recruitment process. Individual pay is determined based on location of residence and multiple factors, including job-related knowledge, skills and experience.
San Francisco Bay Area:
156,400 - 265,700 USD Annual
All Other US Locations:
136,000 - 231,000 USD Annual
As a part of the total compensation package, this role may be eligible for a bonus. For information on our benefits click here.
$93k-130k yearly est. Auto-Apply 16d ago
FPGA Engineer I
Adtran 4.5
Dallas, TX
Welcome!
Our Growth is Creating Great Opportunities! Our team is expanding, and we want to hire the most talented people we can. Continued success depends on it! Once you've had a chance to explore our current open positions, apply to the ones you feel suit you best and keep track of both your progress in the selection process, and new postings that might interest you!
Thanks for your interest in working on our team!
Job Description
The FPGA Engineer will be a member of a team responsible for designing digital devices from conception to production including but not limited to feasibility studies, device technology choices, component selection, device architecture development, design specifications, design and test development, high level verification methods, production testing, lab support, and defect resolution.
Duties and Responsibilities
Work independently to deliver correct engineering solutions with little guidance from more senior designers
Provide technical support to project team members
Adhere to department procedures regarding computer file structure, design file management, device specifications, coding guidelines, synthesis guidelines, and file security
Implement industry standards and proprietary product features
Follow and offer improvements to design, verification, and backend methodologies
Contribute to specification, design, and test reviews at the IP, module, or chip level
Maintain FPGA/IP development goals while working with project management guidance
Ensure ASIC/FPGA project specific design requirements will be met on schedule
Other projects and assignments may be assigned to accommodate the changing needs of the department and the Company
Qualifications
Basic Qualifications
Bachelor of Science (B.S.) in Electrical or Computer Engineering or related field is required
Legally authorized to work in the United States
Knowledge in high-speed digital/analog circuit design
Experience with building scalable simulation/verification environments
Knowledge of PLLs, clock and reset architectures
Knowledge of physical implementation methods and requirements of high-speed electronic systems
Knowledge of OS: Windows and Linux/CentOS based environments
Able to implement and validate 3rd party IP solutions
Experience with shell/Tcl scripting to aid in analysis, processes, and automation
Limited travel is expected in this position
Preferred Qualifications (Optional)
Able to support existing products that are released into production
Willingness to assist in team building at other sites
Proficient in HDL languages, Verilog, VHDL, and SystemVerilog
Polarion competency
$72k-90k yearly est. Auto-Apply 60d+ ago
Forward Deployed Engineer
Forerunner Technologies 4.0
Remote
Hi! We're Forerunner.
We believe that climate adaptation is a necessity, not a luxury, and communities deserve access to powerful software that helps them plan for the future. The challenge of climate change is complex - it implicates how municipalities plan, manage capital, and communicate to both residents and higher levels of government. Forerunner helps local communities do these things better by empowering them to access, understand, and mobilize local-level flood risk data at scale.
About the role
As the first Forward Deployed Engineer at Forerunner, you'll pioneer this role while bridging the gap between our platform and client systems, creating seamless integrations that help customers fully leverage our solutions. Beyond just integration work, you'll be establishing best practices and defining processes for this function within our ecosystem. You'll work directly with government agencies to understand their technical needs, build custom integrations, and automate data workflows. Your work will enable communities to better manage and understand a variety of environmental risks, such as fire and flood, and will provide permitting and grant management support that facilitates mitigation and prevention in the built environment against climate change.
What you'll do
Integration Development
Design and implement robust API integrations between Forerunner's platform and customer systems like permit management software (Accela, Laserfiche, Citizenserve); a minimal client sketch follows this list
Translate customer requirements into technical specifications and solutions that align with Forerunner's product roadmap
Create reusable, scalable integration frameworks that can be applied across multiple clients and systems
Build and maintain monitoring systems to ensure integrations function reliably
Develop and maintain automated tests for integration solutions to ensure reliability and stability
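As a sketch of the integration-reliability pattern above, written in Python for consistency with the other sketches in this document even though this role is JavaScript/TypeScript-centric; the endpoint, token, and field names are hypothetical:

```python
# Hypothetical permit-sync sketch: poll a customer permit API with retries
# and back-off, the reliability baseline a monitored integration needs.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=Retry(
    total=5,                      # retry up to five times
    backoff_factor=1.0,           # 1s, 2s, 4s, ... between attempts
    status_forcelist=[429, 500, 502, 503, 504],
)))

resp = session.get(
    "https://permits.example.gov/api/v1/permits",   # placeholder endpoint
    params={"updated_since": "2024-01-01"},
    headers={"Authorization": "Bearer <token>"},    # placeholder credential
    timeout=30,
)
resp.raise_for_status()
for permit in resp.json():
    print(permit["id"], permit["status"])           # hypothetical fields
```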
Data Management & Automation
Develop scripts and processes to automate the import and export of critical data for Customer Success and GIS teams
Design and implement data transformation pipelines to ensure consistency across systems
Create innovative solutions for complex data migration challenges
Build internal tools to improve efficiency for customer-facing teams
Cross-Functional Collaboration
Work closely with the Forerunner platform engineering team to integrate your solutions into the main application
Collaborate with Product to understand Forerunner's core functionality, roadmap, and future goals for customer integrations
Partner with Customer Success, Sales, and GIS teams to deliver comprehensive solutions to clients
Document integration requirements and processes for both internal and external stakeholders
Support Sales and Customer Success with technical guidance during the sales and implementation process
Customer Technical Partnership
Collaborate with customers to understand their technical infrastructure and data workflows
Provide technical guidance to customers with varying levels of IT sophistication
Establish best practices for customer technical onboarding and integration
Help translate complex technical concepts to non-technical stakeholders
What we're looking for
Required Skills
Experience with API design, implementation, and integration (REST services, different authentication protocols)
Strong JavaScript/TypeScript programming skills
Proficiency with Bash/shell scripting, Git/GitHub, and SQL (Postgres/MySQL)
Experience automating common tasks with reusable scripts
Ability to clearly document technical integrations and train users
Strong communication skills with both technical and non-technical audiences
Patient, customer-focused approach to problem-solving
Strong candidates may also have
Experience with batch job processing technologies (Airflow, AWS Batch)
Familiarity with AWS, Docker, GraphQL, Node.js, Express, SFTP, Postman
Background in geospatial technology (Mapbox, ESRI products, Leaflet)
Experience migrating ad-hoc scripts into standardized scripts within a larger platform
Exposure to architecture design for integration systems and batch job processing
Background in government software or permit management systems
Experience with test automation for integration solutions
Our team has a lot to offer.
Forerunner is an early-stage company with an interdisciplinary team. We are motivated by seemingly intractable problems and work hard to support one another.
We're mission-driven. Our mission orients our business strategy and pushes us to work with a sense of urgency. We're in the weeds -- our entire team spends time engaging with our partners in the field to better understand the real-world problems they face.
We're a small team. As a member of our growing team, you will quickly become an invaluable part of our organization and have a meaningful impact on our business and technical direction. We welcome unique perspectives and backgrounds. Your opinion matters and we hope you'll share it.
We're a remote-first organization with small offices in NYC, San Francisco, and Portland, ME. We offer schedule flexibility for all of our employees and ask for accountability in return.
We want you to be happy and healthy. Forerunner offers competitive compensation, health/vision/dental coverage, and commuter benefits. We also have a quarterly wellness reimbursement so that you can have flexibility in defining what "health" means to you.
The salary range for this position is $150,000-$180,000. This posting may span more than one career level. Compensation is determined based on a variety of factors, including an individual's skills, experience, and qualifications.
$150k-180k yearly Auto-Apply 56d ago
Staff AI Engineer
Nokia Global 4.6
Remote
The Nokia CNS AI R&D (AIRD) team's mission is to invent and deploy the next generation of intelligent, generative AI, and ML technologies that will define the future of Nokia. This team represents a major investment and consolidation of our AI efforts, created to move from siloed features to a unified and powerful product & platform portfolio. We are the future powerhouse for driving Nokia's business growth and building a suite of technologies that is recognized externally as best-in-class.
6+ years of relevant machine learning experience
MS or PhD in Computer Science or Engineering, Mathematics, or a related field.
Experience with machine learning, optimization algorithms, deep-learning techniques
Experience with machine learning frameworks (e.g., TensorFlow, PyTorch).
Experience with search engines and vector databases, along with their underlying algorithms (a minimal similarity-search sketch follows this list).
Experience with big data frameworks and technologies such as Spark, Kafka, Cassandra.
Design, develop, and deploy advanced AI/ML models and algorithms to analyze and interpret complex data.
Design and implement machine learning models to improve a wide range of applications including search, forecasting, text mining and more.
Develop and implement agentic-based systems for a wide range of applications including anomaly detection, root-cause analysis, and more.
Optimize existing machine learning models and pipelines for performance, scalability, and resource efficiency.
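As a minimal illustration of the vector-database requirement above, a dependency-light sketch of the core operation such systems index and accelerate: cosine-similarity search over embedding vectors (the vectors here are toy values, not real embeddings):

```python
# Toy nearest-neighbor search over embedding vectors, the primitive that
# vector databases index and accelerate at scale (e.g., with HNSW graphs).
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 2) -> np.ndarray:
    """Return indices of the k corpus vectors most cosine-similar to query."""
    corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = corpus_norm @ query_norm          # cosine similarities
    return np.argsort(scores)[::-1][:k]        # best first

corpus = np.array([[0.1, 0.9], [0.8, 0.2], [0.7, 0.3]])  # toy "documents"
query = np.array([0.9, 0.1])                              # toy "query"
print(top_k(query, corpus))                                # -> [1 2]
```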
$113k-148k yearly est. Auto-Apply 41d ago
Senior Data Engineer
Chorus 3.7
Austin, TX jobs
About Us
Chorus Innovations is a fast-growing healthcare technology company with an adaptive platform as a service (PaaS) that enables the rapid development of mobile, desktop, text messaging and interactive voice applications. Chorus' platform currently powers transformative healthcare, biomedical research and community health projects across institutions and public health agencies nationwide. We believe every community should have world-class health technology so they can thrive.
Our mission is to improve the lives of people and communities by democratizing technology development in health and research.
About the Role
We're seeking a Senior Data Engineer to help design and build the infrastructure that powers data-driven decision-making and workflow automation across our healthcare platform. You'll work closely with internal teams, partners, and stakeholders to define and deliver clean, reliable, and scalable data pipelines, with a strong focus on interoperability with EHR systems and healthcare APIs.
This is a high-impact role where you'll influence architecture, data modeling, integration strategy, and client engagement in a regulated and data-rich environment with a team of engineers dedicated to building the highest quality systems.
What You'll Be Doing
As a Senior Data Engineer at Chorus, you'll be flexing your skills across the areas below and providing real impact.
You'll be:
Owning and evolving internal data pipelines that support analytics, operations, and end applications
Enabling self-service analytics and operational insights via semantic layers and tools (e.g., Metabase, Looker, Tableau, Power BI)
Designing and maintaining integrations with external healthcare systems (EHRs, FHIR-based APIs, HL7 interfaces, Slack, Google, etc.); a minimal FHIR read sketch follows this list
Collaborating on the development and maintenance of our central Data Warehouse, ensuring clean schema, lineage, and query performance
Collaborating with product, engineering, and client delivery teams to understand and translate customer domain knowledge into structured data solutions
Building workflow automation to support care management, reporting, and operational tasks.
Championing data quality, observability, and governance, especially in regulated healthcare environments
Mentoring engineers and contributing to a culture of knowledge sharing, technical excellence, and continuous improvement
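For the FHIR integration item above, a minimal sketch of reading Patient resources over plain REST; it targets the public HAPI FHIR test server, whereas a real EHR integration would add SMART on FHIR authentication, paging, and HIPAA-grade PHI handling:

```python
# Minimal FHIR R4 read: fetch Patient resources over plain REST.
# Uses the public HAPI FHIR test server (no auth); a production EHR
# integration would authenticate via SMART on FHIR and protect PHI.
import requests

BASE = "https://hapi.fhir.org/baseR4"  # public test server

resp = requests.get(
    f"{BASE}/Patient",
    params={"name": "smith", "_count": 3},   # standard FHIR search params
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()                          # a FHIR searchset Bundle
for entry in bundle.get("entry", []):
    patient = entry["resource"]
    family = (patient.get("name") or [{}])[0].get("family")
    print(patient["id"], family)
```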
What We're Looking For
8+ years of experience in data engineering or similar roles
Strong experience with modern data pipeline tools (e.g., Airflow, dbt, dlt, Dagster)
Proficiency in SQL and Python (or equivalent data engineering languages)
Experience integrating with EHRs or healthcare APIs (e.g., FHIR, HL7, SMART on FHIR).
Experience working with HIPAA-compliant data practices, security models, and systems (e.g. encryption, access controls, audit logging, data segregation, data retention, breach detection)
Experience with data modeling and designing data warehouses (e.g., Snowflake, BigQuery, Redshift, or Postgres-based solutions)
Strong communication skills with the ability to work with both technical and non-technical stakeholders
Prior experience in client-facing or consultative roles is a plus
Nice to Have
Experience with healthcare data standards (LOINC, SNOMED, ICD-10, etc.)
Exposure to workflow engines, rule engines, or care coordination platforms
Experience structuring data architecture for MLOps use cases
Familiarity with infrastructure-as-code and cloud-native tools (e.g., Terraform, AWS/GCP/Azure)
Familiarity with container orchestration tools (e.g. Kubernetes, OpenShift)
Background in building data platforms in high-compliance environments
Why You'll Love Working with Us:
Engage with an exciting, fast-growing, and diverse group of coworkers dedicated to improving lives by breaking down technology barriers.
Work in a culturally rich environment that values diversity and inclusivity, ensuring a welcoming and supportive atmosphere for all employees
Full medical, dental, vision and life insurance benefits
401K matching
Company stock options
Unlimited paid vacation
Performance bonuses
Company support for career and skill development
Fun company events and outings
Chorus is an equal opportunity employer with a commitment to diversity and inclusion
Compensation Disclosure
The anticipated base salary for this position is $160,000 to $175,000 based on the Los Angeles market. The base salary offered may vary depending on several factors, including the applicant's qualifications, relevant experience, specific skills, level of education, certifications, and the geographic location where the role will be performed.
Please note that the listed compensation details apply to the base salary only and do not include additional elements of the total compensation package, such as performance bonuses, stock options, or benefits. The final compensation package will be determined by evaluating the candidate's qualifications in relation to market standards and internal pay equity.
Chorus Innovations values diversity as a core principle of the work we do and the communities we serve. We are committed to equal employment opportunity, and we do not discriminate on the basis of race, color, religious creed, citizenship, marital status, age, national origin, ancestry, physical or mental disability, medical condition, genetic information, sex, sexual orientation, military and veteran status, gender, gender identity, gender expression, or any other protected characteristic.
Disability Accommodation for Applicants to Chorus Innovations
At Chorus Innovations, we are committed to fostering an inclusive and accessible workplace environment for all individuals, including job applicants with disabilities. We recognize the value that individuals with diverse abilities bring to our organization and are dedicated to providing equal opportunities for all applicants.
If you require accommodations at any stage of the job application process due to a disability, please inform us of your needs. We will do our best to work with you to ensure that reasonable accommodations are made to facilitate your participation in the recruitment process.
Please note that accommodations will be provided in a manner that respects your privacy and confidentiality. Disclosing a disability or requesting accommodations will not affect your candidacy or opportunities for employment with Chorus Innovations.
To request accommodations or discuss your needs further, please contact **********************. We encourage you to reach out as early as possible to ensure that appropriate arrangements can be made.
Thank you for considering Chorus Innovations as a potential employer. We look forward to the opportunity to work together and to creating an inclusive workplace where all individuals can thrive.
$160k-175k yearly Auto-Apply 60d+ ago
Senior Data Engineer (Developer)
Bravotech 4.2
Dallas, TX
Salary: Up to $145K
Join Our Innovative MediaLab Team as a Senior Data Engineer
Are you passionate about designing cutting-edge data solutions and eager to contribute to a dynamic, collaborative environment? We are seeking a seasoned Senior Data Engineer to join our Shared Services MediaLab team. In this role, you'll be at the forefront of building scalable, robust data infrastructures that power our strategic initiatives, from real-time analytics to advanced AI applications.
As a Senior Data Engineer, you will architect, develop, and maintain sophisticated data pipelines and infrastructure across cloud-native platforms such as Azure and AWS. You will work closely with data scientists, business analysts, IT teams, and stakeholders to deliver solutions that enable data-driven decision-making and support emerging AI and ML projects. Your expertise will help ensure our data environment is secure, scalable, and optimized for future growth.
What You'll Do
Data Architecture & Infrastructure
Design and deploy enterprise-grade data architectures supporting structured, semi-structured, and unstructured data across multiple cloud platforms (Azure, AWS).
Implement scalable data lakes and data warehouses optimized for both batch and real-time workloads.
Develop and maintain data mesh architectures to facilitate self-service analytics while ensuring robust data governance and security.
Architect cloud-native solutions utilizing serverless computing, containerization, and microservices.
Data Pipeline Development
Build and orchestrate reliable, fault-tolerant data pipelines using modern ELT methodologies and tools like Apache Airflow, Azure Data Factory, and AWS Glue (a minimal Airflow sketch follows this list).
Develop real-time streaming solutions with Apache Kafka, Apache Pulsar, and cloud-native services to support live data processing needs.
Implement automated data quality frameworks with monitoring, alerting, and auto-remediation capabilities.
Create CI/CD pipelines for data workflows, incorporating automated testing, deployment, and rollback procedures.
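To illustrate the orchestration pattern above, a minimal hedged Airflow DAG sketch (Airflow 2.4+; the DAG id, schedule, and task callables are hypothetical placeholders):

```python
# Hypothetical Airflow DAG sketch: a daily extract -> transform -> load chain
# with retries, the fault-tolerance baseline for an orchestrated ELT pipeline.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): print("pull raw data from source")      # placeholder callables
def transform(): print("clean and conform the data")
def load(): print("publish to the warehouse")

with DAG(
    dag_id="daily_elt",                                 # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                                  # schedule_interval before 2.4
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3                                      # linear dependency chain
```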
AI & Advanced Analytics Integration
Embed machine learning workflows into data pipelines, enabling feature engineering, model training, and large-scale inference.
Support MLOps practices with model versioning, A/B testing, and automated retraining pipelines.
Build infrastructure to support generative AI initiatives, including vector databases and retrieval-augmented generation (RAG) systems.
Collaborate with data scientists and developers to produce scalable ML models and ensure efficient inference.
Data Governance & Security
Establish comprehensive data governance frameworks, including data lineage, metadata management, and cataloging.
Ensure compliance with privacy laws (GDPR, CCPA) by implementing data masking, encryption, and strict access controls.
Maintain audit trails for data processing activities and model predictions.
Performance & Monitoring
Optimize data processing performance via query tuning, indexing, and resource management.
Implement observability strategies, including metrics, logging, and distributed tracing for all data pipelines.
Conduct root cause analyses and resolve data quality or system performance issues swiftly.
Define and maintain SLAs for data freshness, accuracy, and system uptime.
Collaboration & Leadership
Work closely with cross-functional teams to gather requirements and deliver impactful solutions.
Provide technical mentorship to junior engineers and analysts.
Lead technical design reviews and contribute to strategic technology planning.
Document best practices, data architectures, and system workflows; lead knowledge-sharing initiatives.
Qualifications
7+ years of hands-on experience in data engineering, with a proven track record of building and maintaining large-scale production data systems.
Strong experience working directly with internal clients and stakeholders.
Extensive expertise in database development and management.
Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred).
Proven experience in the following technical areas:
Cloud platforms: Azure & AWS (data services, infrastructure management)
Data frameworks: Apache Spark (PySpark, Scala, Java), Hadoop ecosystem, Databricks
Real-time processing: Kafka, Pulsar, Kinesis, Event Hubs
Data storage: SQL, NoSQL, NewSQL (PostgreSQL, MongoDB, Cassandra, Snowflake)
Data modeling and warehousing: dimensional modeling, star/snowflake schemas, SCD
ETL/ELT tools: Azure Data Factory, Fabric Dataflow, AWS Glue, dbt
Infrastructure as Code: Terraform, Bicep, CloudFormation
Containerization & Orchestration: Docker, Kubernetes
Data lakehouse architectures: Delta Lake, Apache Iceberg, Apache Hudi
MLOps tools and workflows: MLflow, Kubeflow, SageMaker
Vector databases and embedding techniques for AI applications
Observability tools: DataDog, Splunk, Prometheus, Grafana
CI/CD pipelines: Azure DevOps, GitHub Actions, Jenkins
Security best practices for cloud and data environments
Strong programming skills in Python, C#, and/or Scala.
Deep understanding of distributed systems, fault tolerance, and high-availability architectures.
Excellent problem-solving, communication, and collaboration skills.
Preferred Certifications & Additional Qualifications
Microsoft Certified: Azure Data Engineer Associate (DP-203)
Microsoft Certified: Fabric Data and Analytics Engineer
AWS Certified Data Engineer - Associate
Databricks Certified Data Engineer
SnowPro Advanced Data Engineer & Architect
Experience with graph databases, knowledge graphs, and compliance frameworks in data privacy.
What We Offer
Exciting opportunities to work on innovative AI-driven data projects.
Collaboration with a talented, motivated team of professionals.
Opportunities for professional growth and skills development.
A dynamic environment that encourages innovation and continuous learning.
Ready to shape the future of data at our company? Apply now and become a key driver of our data-driven success!
Note: This position is based in-office. Candidates must be authorized to work locally.
$145k yearly 60d+ ago
Senior DevOps Engineer
Cellebrite DI Ltd. 4.0
Vienna, VA
About Cellebrite: Cellebrite's (Nasdaq: CLBT) mission is to enable its global customers to protect and save lives by enhancing digital investigations and intelligence gathering to accelerate justice in communities around the world. Cellebrite's AI-powered Digital Investigation Platform enables customers to lawfully access, collect, analyze and share digital evidence in legally sanctioned investigations while preserving data privacy. Thousands of public safety organizations, intelligence agencies and businesses rely on Cellebrite's digital forensic and investigative solutions-available via cloud, on-premises and hybrid deployments-to close cases faster and safeguard communities.
To learn more, visit us at ******************* ****************************************** and find us on social media @Cellebrite.
What is your mission?
As a Senior DevOps Engineer, you'll take charge of automating how we deploy infrastructure and applications. You'll also lead efforts to make our processes better, helping engineering and product teams work faster and smarter - without compromising service quality. Your work supports Cellebrite's mission to protect lives, accelerate justice and ensure data privacy.
Responsibilities:
* Design, develop, and modify automation processes to deploy large-scale, enterprise-grade applications (a minimal verification sketch follows this list).
* Serve as the go-to expert for CI/CD processes supporting Cellebrite's cloud products.
* Partner with R&D, Product Management and Architecture teams to deliver key operational metrics and feedback that support high product availability.
* Investigate reported and suspected outages and other issues.
* Oversee documentation of issues and resolutions. Maintain knowledge base of known issues.
* Produce, oversee production of, and maintain documentation, such as architecture diagrams, System Designs, and Runbooks.
* Work alongside fellow SREs, DevOps Engineers, and developers to solve problems and investigate anomalies.
* Participate in an on-call rotation to help maintain our 99.95% uptime SLO.
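As one hedged illustration of the deployment-automation responsibility above, a minimal post-deploy verification sketch using boto3; the cluster name, region, and expected version are placeholders, and credentials are assumed to come from the ambient AWS environment:

```python
# Hypothetical post-deploy verification sketch: confirm an EKS cluster is
# ACTIVE and running the expected Kubernetes version before promoting a
# release. Cluster name, region, and version are placeholders.
import boto3

EXPECTED_VERSION = "1.29"        # placeholder
CLUSTER = "prod-investigations"  # placeholder

eks = boto3.client("eks", region_name="us-east-1")
cluster = eks.describe_cluster(name=CLUSTER)["cluster"]

assert cluster["status"] == "ACTIVE", f"cluster status: {cluster['status']}"
assert cluster["version"] == EXPECTED_VERSION, (
    f"expected k8s {EXPECTED_VERSION}, found {cluster['version']}"
)
print(f"{CLUSTER} is ACTIVE on Kubernetes {cluster['version']}")
```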
Requirements:
* 7+ years of experience working with AWS is required, with a proven track record of building complex infrastructure.
* Strong experience with networking, containerization (Kubernetes) and Linux system administration.
* Deep understanding of Infrastructure as Code (Terraform, Ansible, CloudFormation, etc.).
* Strong experience with Kubernetes, preferably with AWS EKS.
* Experience with scaling and maintaining high-availability production systems.
* Experience with DevOps release, versioning, and build management, automation scripts, and CI tools such as GitHub Actions.
* Excellent verbal and written communication skills in English.
* Ability to serve as a guide to the team and share best practices as a DevOps tech lead.
* Must be a U.S. Citizen
Since you'll be part of a globally distributed team, communication skills and a can-do attitude are a must!
Personal Characteristics