Software Engineer
Rochester, NY
A leading innovator in advanced laser technology is seeking a Software Engineer to join their growing Rochester engineering hub. This is a rare opportunity to work hands-on with cutting-edge scientific systems while building software that directly impacts real-world research, industrial applications, and next-generation engineered products.
In this role, you'll collaborate closely with engineers, scientists, operations teams and business stakeholders to design, develop, and maintain a wide range of internal and product-level software tools. Your work will span intuitive user interfaces, automation solutions, data and KPI dashboards, and software that interacts with highly precise hardware systems.
What you'll work on:
• Building Python-based software for internal tools and product control systems
• Creating user-facing interfaces that make complex technology simple to operate
• Interfacing software with hardware, instruments, and electronic components (see the sketch after this list)
• Designing dashboards and analytics to support operational decision-making
• Contributing to automation, motion control, and system-integration projects
• Maintaining high code standards through documentation and version control
• Partnering with multi-disciplinary teams to define scope, requirements, and delivery timelines
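To give a concrete flavor of the Python and instrument-interfacing work described above, here is a minimal sketch of a serial instrument wrapper. The port, baud rate, pyserial dependency, and SCPI-style `*IDN?` query are all illustrative assumptions; the actual devices, commands, and protocols will vary.

```python
import serial  # third-party "pyserial" package; assumed available for this sketch


class Instrument:
    """Minimal wrapper around a serial-connected lab instrument (illustrative only)."""

    def __init__(self, port: str = "/dev/ttyUSB0", baudrate: int = 115200, timeout: float = 1.0):
        # Open the serial link; port and baud rate are placeholder assumptions.
        self.conn = serial.Serial(port=port, baudrate=baudrate, timeout=timeout)

    def query(self, command: str) -> str:
        """Send a text command and return the instrument's one-line reply."""
        self.conn.write((command + "\n").encode("ascii"))
        return self.conn.readline().decode("ascii").strip()

    def close(self) -> None:
        self.conn.close()


if __name__ == "__main__":
    inst = Instrument()
    # "*IDN?" is the conventional SCPI identification query; real commands depend on the device.
    print(inst.query("*IDN?"))
    inst.close()
```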
What makes this opportunity exciting:
• Work in a highly respected, innovation-driven environment shaping the future of photonics
• Engage with a diverse mix of engineering, R&D, and technology functions daily
• See your work directly influence scientific and commercial applications
• Join a collaborative team that values curiosity, creativity, and continuous learning
• Enjoy the benefits of an on-site role within one of Rochester's strongest high-tech communities
Ideal background:
• Solid software development experience with strong Python skills
• Experience building customer-facing or GUI-based software
• Familiarity with hardware or instrument interfacing, embedded systems, or automation is a plus
• Exposure to analytics, dashboards, or BI tools is helpful
• Bachelor's degree in a technical field; advanced degree welcome but not required
Compensation: Competitive salary up to $95,000, based on experience.
At Underdog, we make sports more fun.
Our thesis is simple: build the best products and we'll build the biggest company in the space, because there's so much more to be built for sports fans. We're just over five years in, and we're one of the fastest-growing sports companies ever, most recently valued at $1.3B. And it's still the early days.
We've built and scaled multiple games and products across fantasy sports, sports betting, and prediction markets, all united in one seamless, simple, easy-to-use, intuitive, and fun app.
Underdog isn't for everyone. One of our core values is give a sh*t. The people who win here are the ones who care, push, and perform. If that's you, come join us.
Winning as an Underdog is more fun.
About the role
Build the backbone of Underdog's data ecosystem: design and deploy internal APIs and workflows that power seamless communication across our services.
Create robust, scalable microservices to serve real-time, high-volume data needs for our internal teams, enabling faster decision-making and better fan experiences.
Own greenfield projects that shape the future of our business, architecting backend systems built for scale, resilience, and speed.
Partner cross-functionally with engineers, product owners, and data scientists to turn complex challenges into elegant, scalable solutions.
Safeguard system reliability by building smart monitoring, alerting, and logging tools that keep our infrastructure healthy and performant.
Drive impactful technical initiatives from start to finish in a fast-moving, high-stakes environment where your leadership makes a real difference.
Champion engineering excellence by leading code reviews, sharing best practices, and helping level up the team through mentorship and feedback.
Who you are
A Backend Engineer with at least 3 years of experience building microservices-heavy backend systems and APIs for internal applications in a cloud environment (e.g., AWS, GCP, Azure)
Advanced proficiency with Go
Experience building and scaling backend systems and designing enterprise scale APIs for external and internal applications
Highly focused on delivering results for internal and external stakeholders in a fast-paced, entrepreneurial environment
Excellent communication skills with ability to influence and collaborate with stakeholders
Familiarity with containerization and orchestration technologies such as Docker, Kubernetes, or ECS
Experience with DevOps practices such as CI/CD pipelines, and infrastructure-as-code tools (e.g. Terraform, CDK)
Even better if you have
Strong interest in sports
Prior experience in the sports betting industry
Our target starting base salary range for this position is between $135,000 and $165,000, plus pre-IPO equity. Our comp range reflects the full scale of expected compensation for this role. Offers are calibrated based on experience, skills, impact, and geographies. Most new hires land in the lower half of the band, with the opportunity to advance toward the upper end over time.
What we can offer you:
Unlimited PTO (we're extremely flexible with the exception of the first few weeks before & into the NFL season)
16 weeks of fully paid parental leave
Home office stipend
A connected virtual first culture with a highly engaged distributed workforce
5% 401k match, FSA, company paid health, dental, vision plan options for employees and dependents
#LI-REMOTE
This position may require sports betting licensure based on certain state regulations.
Underdog is an equal opportunity employer and doesn't discriminate on the basis of creed, race, sexual orientation, gender, age, disability status, or any other defining characteristic.
California Applicants: Review our CPRA Privacy Notice here.
Connectivity Engineer
Remote
The Connectivity Engineer provides technical support to customers, focusing on connectivity with healthcare information systems. The engineer delivers consistent, high-quality, and responsive support; documents case updates, investigation findings, actions taken, next steps, and resolutions in H1; and collaborates with team members and partners with customers to investigate and resolve problems while building product knowledge through training and on-the-job training (OJT).
At this level, you will be responsible for:
Solving technical problems efficiently
Providing exceptional customer service
Maintaining accurate and detailed documentation
Collaborating effectively with team members and customers
Continuously improving knowledge and contributing to the knowledge base
The working hours for this full-time remote position are 10 am - 7 pm ET / 11 am - 8 pm ET.
Summary of Duties and Responsibilities
• Provide technical support to external customers and internal colleagues, focusing on connectivity and interoperability with healthcare information systems.
• Perform investigations, troubleshooting, and root cause analysis using diagnostic tools.
• Deliver consistent, high-quality, and responsive support to customers.
• Take ownership of escalated cases, provide regular updates, and expedite resolutions.
• Escalate issues to appropriate expert resources and management as necessary.
• Document case updates, investigation findings, actions taken, next steps, and resolutions in the CRM system (Salesforce/H1).
• Follow established support processes to ensure compliance with quality and regulatory requirements.
• Collaborate with team members and partner with customers to investigate and resolve problems.
• Dispatch Field Service Engineers for work that cannot be done remotely, providing specific guidance on actions needed.
• Refer to the shared Knowledge Base for problem resolution and improved understanding.
• Maintain and improve product knowledge through training and on-the-job learning.
• Ensure quality performance and customer satisfaction for connectivity projects.
• Build in-depth knowledge of products, focusing on interoperability and networking.
• Communicate product reliability issues and provide improvement suggestions.
• Identify opportunities for continuous improvement and maintain proficiency in relevant technologies.
• Contribute content and assist in maintaining the shared knowledge base.
• Participate in technical conversations during customer meetings.
• Adhere to and support the Quality Policy as well as all Quality System procedures and guidelines.
• Work flexible schedules as required for staff coverage during customer and staff operating hours.
• Occasional travel may be necessary (10%).
• Perform other duties and projects as assigned, to meet company and department objectives.
Functional Competencies
• Troubleshooting
• Customer Focus
• Daily administrative workflow completion
• Teamwork
• Knowledge Building
• Relationship Building
Skills, Knowledge, Abilities
• Ability to work under minimal supervision from home office
• Organization and Time Management
• Detail Oriented
• Written and Verbal Communication skills
• Customer Service and Interpersonal Relations
• Intermediate Computer and Technology Literacy
• Analytical Assessment and Problem Solving
• Knowledge of network technologies (e.g., TCP/IP) and remote access tools (e.g., VPN, Remote Desktop)
• Understanding of Medical Imaging Technology, DICOM, HL7, RIS/LIS, and PACS systems
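As a small, hedged illustration of the HL7 knowledge listed in the last item above, this sketch splits a toy HL7 v2 message into segments and fields with plain Python. The sample message and field positions are simplified assumptions; real connectivity work relies on interface engines and validated tooling.

```python
# Minimal HL7 v2 parsing sketch (illustrative only).
SAMPLE_MSG = "\r".join([
    "MSH|^~\\&|SENDER|FACILITY|RECEIVER|FACILITY|202401011200||ORU^R01|12345|P|2.5",
    "PID|1||123456^^^HOSP^MR||DOE^JANE",
    "OBX|1|NM|GLU^Glucose||98|mg/dL",
])


def parse_hl7(message: str) -> dict:
    """Return a mapping of segment name -> list of field lists."""
    segments = {}
    for raw in filter(None, message.split("\r")):   # segments are CR-separated
        fields = raw.split("|")                      # fields are pipe-separated
        segments.setdefault(fields[0], []).append(fields)
    return segments


if __name__ == "__main__":
    parsed = parse_hl7(SAMPLE_MSG)
    # Indexing here is zero-based on the split, so index 5 of this PID segment is the name field.
    print("Patient name field:", parsed["PID"][0][5])
```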
Physical Demands
The physical requirements described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
• Sit; use hands to handle or feel objects, tools, or controls.
• Stand; walk; reach with hands and arms; and stoop, kneel, crouch, or crawl.
• Lifting/moving and carrying products weighing up to 40 pounds.
• View a computer monitor, use a keyboard/mouse, and assemble and plug in cables for computer peripherals.
Qualifications & Education
• A four-year degree in a related technical discipline is preferred.
• An Associate of Science degree in electronics or a related technical discipline, combined with an equivalent blend of education and experience, is also acceptable.
Note: Minimal job requirements of this position may be changed as Hologic products, technology or this function evolves.
Why join Hologic?
We are committed to making Hologic the destination for top talent. For you to succeed, we want to enable you with the tools and knowledge required and so we provide comprehensive training when you join as well as continued development and training throughout your career.
The annualized base salary range for this role is $65,300 - $102,200 and is bonus eligible. Final compensation packages will ultimately depend on factors including relevant experience, skillset, knowledge, geography, education, business needs and market demand. From a benefits perspective, you will have access to benefits such as medical and dental insurance, ESPP, 401(k) plan, vacation, sick leave and holidays, parental leave, wellness program and many more!
Agency and Third-Party Recruiter Notice:
Agencies that submit a resume to Hologic must have a current Hologic Agency Agreement executed by a member of the Human Resources Department. In addition, Agencies may only submit candidates to positions for which they have been invited to do so by a Hologic Recruiter. All resumes must be sent to the Hologic Recruiter under these terms or they will not be considered.
Hologic, Inc. is proud to be an Equal Opportunity Employer inclusive of disability and veterans.
#US-remote
#LI-MG3
What is PerfectServe?
PerfectServe offers Best in KLAS assets in three categories: clinical communications, scheduling, and patient engagement solutions. We have seen an 88% growth rate over the past three years and need strong team members to help us continue to grow!
PerfectServe's mission is to accelerate speed to care by optimizing provider schedules and dynamically routing messages to the right person at the right time in any care setting, advancing patient care and clinical workflows.
By joining PerfectServe, you will have the unique opportunity to come alongside us as we further our vision of putting all of these solutions together to provide optimal patient outcomes and faster patient care interventions. By improving speed to care and cross-continuum communication, we save lives, reduce length of stay, minimize re-admissions, and bring joy back to caregivers.
We have an incredible portfolio of customers, with new ones recognizing the value of our solutions and joining the PerfectServe family every day.
PerfectServe is seeking a skilled and forward-thinking Platform Engineer to help design, build, and optimize our cloud and AI-driven automation capabilities. The ideal candidate has deep experience with containerized infrastructure, especially Kubernetes, along with a strong grasp of emerging cloud and automation technologies. You'll play a key role in developing secure, scalable, and efficient systems that streamline operations and elevate the performance of our healthcare-focused platform.
Essential Functions
Technical Design & Architecture: Lead the design and planning of cloud-native systems, including automation-driven components, ensuring alignment with business goals, scalability needs, and security best practices.
Cloud Implementation: Own the deployment of cloud solutions with a strong emphasis on AWS services, Infrastructure as Code, and platform automation.
AI-Driven Automation: Implement and support AI-based automation tools and services, such as intelligent workflow automation, event-driven decisioning, anomaly detection, and auto-remediation systems (see the sketch after this section).
Optimization & Performance: Continuously tune cloud and automation workloads to improve reliability, performance, and cost efficiency for internal teams and end users.
Support, Observability & Recovery: Build and maintain robust monitoring, alerting, and recovery processes for cloud and automation pipelines.
Continuous Learning: Stay up to date on new automation frameworks, AI-assisted DevOps tools, cloud technologies, and platform best practices, bringing new efficiencies and innovations to the team.
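As a hedged sketch of the auto-remediation idea referenced above, the following Lambda-style handler stops an EC2 instance named in an alert event using boto3. The event shape, the choice of remediation, and the absence of guardrails are assumptions for illustration only, not PerfectServe's actual design.

```python
import boto3

ec2 = boto3.client("ec2")  # region/credentials come from the Lambda execution environment


def handler(event, context):
    """Toy auto-remediation: stop an EC2 instance named in an alert event (illustrative only).

    The event shape {"detail": {"instance_id": ...}} is a made-up convention for this sketch.
    """
    instance_id = event.get("detail", {}).get("instance_id")
    if not instance_id:
        return {"status": "ignored", "reason": "no instance_id in event"}

    # Stop the instance; a real workflow would add guardrails, approvals, and audit logging.
    ec2.stop_instances(InstanceIds=[instance_id])
    return {"status": "remediated", "instance_id": instance_id}
```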
Qualifications
Hands-on Kubernetes Administrator experience, including ArgoCD and Helm.
Strong proficiency with Terraform and Infrastructure as Code development.
Experience supporting complex, distributed SaaS platforms.
Proven expertise with AWS cloud services; AWS certifications are a strong plus.
Experience integrating AI-driven automation tools such as automated runbooks, AI-assisted monitoring, AIOps platforms, or automated incident response workflows.
Familiarity with event-driven architectures, rule engines, or cloud-native automation services (e.g., AWS Step Functions, Bedrock Agents, Lambda) is a plus.
Strong understanding of platform security, IAM, and best practices around securing automation pipelines.
Experience with CI/CD, DevOps, and Internal Developer Platforms (IDP).
Solution Architect Professional and/or Kubernetes certifications are highly desirable.
Why Join PerfectServe?
At PerfectServe, we are transforming healthcare communication and collaboration to help clinicians deliver better care. You'll work with a dedicated and mission-driven team in an environment that values growth, transparency, and innovation.
**Please do not use AI tools to generate your application materials. We value authentic, personal communication and want to understand your unique voice and perspective.**
We offer a salary range of $120,000-140,000 per year, with compensation tailored to your background, strengths, and potential to grow within the team.
The salary range listed for this role reflects our commitment to pay transparency and is based on market data, internal equity, and the scope of responsibilities. Final compensation will be determined by a combination of factors, including the candidate's experience, skills, and the specific team or product area they support.
We regularly review compensation across the company to ensure fairness and consistency. If you are a current employee and have questions about how your compensation aligns with our ranges, we encourage you to speak with your manager or People Operations.
Benefits:
Remote first work environment
Health, Dental, Vision, Life and Disability Insurance options available day one.
401K - with match and immediately vested.
17 company holidays, 2 floating holidays plus competitive paid time off policy
Internal Advancement Opportunities
PerfectServe offers unified healthcare communication solutions to help physicians, nurses, and care team members provide exceptional patient care. PerfectServe's cloud-based solutions enhance patient safety and reduce provider burnout by automating workflows, speeding time to treatment, optimizing shift schedules, empowering nurse mobility, and engaging patients in their own care.
Cybersecurity Engineer
Remote
At Veracyte, we offer exciting career opportunities for those interested in joining a pioneering team that is committed to transforming cancer care for patients across the globe. Working at Veracyte enables our employees not only to make a meaningful impact on the lives of patients, but also to learn and grow within a purpose-driven environment. This is what we call the Veracyte way: it's about how we work together, guided by our values, to give clinicians the insights they need to help patients make life-changing decisions.
Our Values:
We Seek A Better Way: We innovate boldly, learn from our setbacks, and are resilient in our pursuit to transform cancer care
We Make It Happen: We act with urgency, commit to quality, and bring fun to our hard work
We Are Stronger Together: We collaborate openly, seek to understand, and celebrate our wins
We Care Deeply: We embrace our differences, do the right thing, and encourage each other
The Position:
As a Cybersecurity Engineer at Veracyte, you will play a pivotal role in securing our digital infrastructure, monitoring for threats, and implementing cutting-edge cybersecurity solutions. You will work collaboratively with our IT and cybersecurity teams to protect our organization from cyber threats and ensure the confidentiality, integrity, and availability of our systems and data.
Responsibilities:
Cybersecurity Infrastructure Management:
- Design, implement, and maintain security infrastructure, including firewalls, intrusion detection/prevention systems, VPNs, and other security-related technologies.
- Conduct regular security assessments and vulnerability scans, identifying and mitigating potential vulnerabilities.
Threat Monitoring and Incident Response:
- Monitor network and system logs for suspicious activities and potential security incidents.
- Develop and execute incident response plans to effectively manage and mitigate cybersecurity incidents.
- Investigate security breaches and provide recommendations for improvements.
Policy and Compliance:
- Ensure compliance with industry-specific regulations and standards (e.g., GDPR, HIPAA, NIST CSF).
- Assist in the development and enforcement of cybersecurity policies, standards, and procedures.
- Conduct security awareness training for employees to promote a culture of security.
Security Tools and Technologies:
- Evaluate and implement advanced cybersecurity tools and technologies to enhance our security posture.
- Stay updated on emerging threats and trends in the cybersecurity field and recommend proactive measures.
Security Audits and Assessments:
- Collaborate with external auditors and assessors to conduct security audits and assessments.
- Work on remediation plans to address identified weaknesses.
Collaboration and Communication:
- Collaborate with cross-functional teams to integrate security into the development and deployment processes.
- Communicate cybersecurity risks and strategies to technical and non-technical stakeholders.
Who You Are:
Bachelor's degree in Computer Science, Information Security, or a related field.
Relevant industry certifications such as CEH, Pentest+, CySA+, SSCP, OSCP, or equivalent.
5 or more years of experience in cybersecurity engineering or a related cybersecurity role.
Proficiency in security tools and technologies, including firewalls, SIEM, IDS/IPS, and antivirus solutions.
Strong understanding of network protocols, architecture, and security best practices.
Knowledge of regulatory requirements and industry standards.
Excellent problem-solving and analytical skills.
Strong communication and teamwork skills.
#LI-Remote
The final salary offered to a successful candidate will be dependent on several factors that may include but are not limited to years of experience, skillset, geographic location, industry, education, etc. Base pay is one part of the Total Package that is provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives, and restricted stock units.
Pay range: $130,000 - $150,000 USD
What We Can Offer You
Veracyte is a growing company that offers significant career opportunities if you are curious, driven, patient-oriented and aspire to help us build a great company. We offer competitive compensation and benefits, and are committed to fostering an inclusive workforce, where diverse backgrounds are represented, engaged, and empowered to drive innovative ideas and decisions. We are thrilled to be recognized as a 2024 Certified™ Great Place to Work in both the US and Israel - a testament to our dynamic, inclusive, and inspiring workplace where passion meets purpose.
About Veracyte
Veracyte (Nasdaq: VCYT) is a global diagnostics company whose vision is to transform cancer care for patients all over the world. We empower clinicians with the high-value insights they need to guide and assure patients at pivotal moments in the race to diagnose and treat cancer. Our Veracyte Diagnostics Platform delivers high-performing cancer tests that are fueled by broad genomic and clinical data, deep bioinformatic and AI capabilities, and a powerful evidence-generation engine, which ultimately drives durable reimbursement and guideline inclusion for our tests, along with new insights to support continued innovation and pipeline development. For more information, please visit **************** or follow us on LinkedIn or X (Twitter).
Veracyte, Inc. is an Equal Opportunity Employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disability status. Veracyte participates in E-Verify in the United States. View our CCPA Disclosure Notice.
If you receive any suspicious alerts or communications through LinkedIn or other online job sites for any position at Veracyte, please exercise caution and promptly report any concerns to ********************
Must be legally authorized to work in the U.S. now and in the future without sponsorship
Annual Bonus Eligibility: $8,000
RTI Surgical is now Evergen!
This rebranding reflects our strategic evolution as a leading CDMO in regenerative medicine and comes at the end of a significant year for the business, including the successful acquisitions of Cook Biotech in Indiana and Collagen Solutions in Minnesota. Our new brand identity emphasizes our unique positioning as the only CDMO offering a comprehensive portfolio of allograft and xenograft biomaterials at scale.
Evergen is a global industry-leading contract development and manufacturing organization (CDMO) in regenerative medicine. As the only regenerative medicine company that offers a differentiated portfolio of allograft and xenograft biomaterials at scale, Evergen is headquartered in Alachua, FL, and has manufacturing facilities in West Lafayette, IN; Eden Prairie and Glencoe, MN; Neunkirchen, Germany; Glasgow, UK; and Marton, New Zealand.
Read more about this change and Evergen's commitment to advancing regenerative medicine here: ************************
We are seeking a highly skilled and motivated AI/ML Engineer 2 to lead the design, development, and deployment of end-to-end Artificial Intelligence and Machine Learning solutions. This role requires deep expertise in data cloud architectures, coupled with proficiency in cloud ML services (e.g., Microsoft Azure AI/ML, Snowflake Cortex) and open-source frameworks (e.g., PyTorch, TensorFlow, Hugging Face). The ideal candidate will be a technical leader capable of architecting scalable, high-performance, and cost-efficient AI applications directly integrated within our data ecosystem.
Join Evergen's Data Team as an AI/ML Engineer 2 and help build the AI backbone of our organization. You'll design and deploy scalable machine learning and Generative AI solutions using Snowflake, Azure, and modern open-source frameworks.
You'll work across teams to turn complex problems into automated, intelligent workflows, improving decision-making and driving efficiency in Donor Services, Commercial, QA, Operations, and beyond.
If you're excited about building production-ready ML pipelines, experimenting with LLMs, and shaping the future of AI at a growing life sciences company, this role puts you at the center of it.
In this role, you will play a central part in delivering innovative AI and Generative AI solutions that power Evergen's transformation. As part of Evergen's Data Team, you will work across a modern cloud and data ecosystem (Snowflake, Azure, and leading open-source frameworks) to design, build, and operationalize intelligent systems that scale with the business.
You'll collaborate with data engineers, architects, and cross-functional partners to unify and enrich data, automate decision-making, and develop production-ready ML and GenAI applications. We'll rely on you to bring strong technical leadership, hands-on development skills, and a passion for solving complex problems with elegant, scalable solutions.
You will spend your time on key responsibilities that include:
Researching, designing, and implementing scalable AI/ML and Generative AI systems that align with business needs.
Building and optimizing end-to-end ML pipelines using Snowpark, Azure ML, and modern MLOps frameworks (see the sketch after this list).
Enhancing data pipelines, feature engineering workflows, and storage patterns to ensure accuracy, performance, and reliability.
Monitoring, evaluating, and improving model performance, quality, and cost-efficiency.
Deploying AI solutions into production environments and ensuring they are maintainable, governed, and built for long-term scalability.
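For a rough sense of the pipeline responsibility flagged above, the snippet below trains, evaluates, and persists a simple classifier with scikit-learn and joblib on synthetic data. The Snowpark, Azure ML, registry, and monitoring wiring the role actually calls for is deliberately omitted.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features pulled from the warehouse (real data would come via Snowpark/Azure ML).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate before promoting; a real pipeline would log this to a model registry.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")

# Persist the artifact for deployment (illustrative path).
joblib.dump(model, "model.joblib")
```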
Required Skills & Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related technical field (or equivalent industry experience).
3+ years of hands-on software development experience with strong proficiency in Python and modern ML frameworks.
3+ years of experience building and deploying AI/ML models using cloud platforms such as Snowflake Snowpark/SnowML, Azure ML, or equivalent cloud-native ML environments.
Proven experience designing and operationalizing ML pipelines, including data preparation, feature engineering, training, deployment, and monitoring.
Strong understanding of Generative AI, LLMs, prompt engineering, embeddings, and vector-based search.
Experience with MLOps practices and tools (CI/CD, model lifecycle management, versioning, monitoring).
Solid knowledge of SQL and modern data engineering best practices; ability to work with large, complex datasets.
Familiarity with Snowflake Native Apps, UDFs, stored procedures, or similar data cloud extensibility frameworks.
Experience integrating AI/ML capabilities into production applications and enterprise environments.
Strong problem-solving skills, with the ability to translate business needs into scalable technical solutions.
Excellent communication skills and comfort working in a cross-functional, fast-moving environment.
Preferred Skills
Experience with Microsoft Azure's broader AI ecosystem (Cognitive Search, OpenAI Services, etc.).
Background in life sciences, healthcare, or regulated industries.
Experience working with Databricks, dbt, or orchestration tools like Dagster or Airflow.
Familiarity with vector databases, retrieval-augmented generation (RAG), or advanced GenAI architectures.
Leadership & Collaboration:
Provide technical mentorship and guidance to junior developers and data scientists.
Collaborate with Data Architects, Data Scientists, and Business Stakeholders to translate complex business requirements into technical AI solutions.
We're looking for someone who pairs strong technical expertise with the mindset of a top performer: innovative, adaptable, and driven by purpose. You bring a passion for creating meaningful impact through AI, and you thrive in environments where collaboration, clear communication, and forward thinking are essential. Your ability to lead through influence and work seamlessly across teams will set you apart.
More about Evergen:
Evergen provides customers across a diverse set of market segments with leading-edge expertise, scale, and flexibility across end-to-end services including design, development, regulatory support, verification and validation, manufacturing, and supply chain management.
Evergen is rooted in a steadfast commitment to quality, integrity, and patient safety with a focus on five key values:
Accountable: We own our actions and decisions.
Agile: We embrace change to stay ahead of the curve and evolve to drive innovation and growth.
Growth Mindset : We embrace challenges as opportunities for continuous learning.
Customer-Centric: We prioritize customers at every touch point.
Inclusive: We thrive on the richness of our diversity and ensure every voice is heard, respected, and celebrated.
At Evergen, we are committed to fostering an inclusive workplace where we embrace the richness of our diversity and ensure that every voice is heard, respected, and celebrated. We believe that by embracing diversity and promoting inclusivity, we not only uphold our values but also strengthen our position as the CDMO of Choice in regenerative medicine solutions. We recognize that cultivating a growth mindset is essential to our success, and we are dedicated to continuous learning and improvement in our diversity, equity, and inclusion efforts. Through accountability and action, we strive to create an environment where individuals can thrive, innovate, and contribute their unique perspectives to drive our collective success.
Montagu Private Equity (“Montagu”), a leading European private equity firm, acquired RTI in 2020 and has supported the transformation of the company to its next level of potential.
#LI-Remote
6-7 years of experience in AI/ML engineering, data science, or applied machine learning.
Strong programming skills in Python and familiarity with ML frameworks:
TensorFlow, PyTorch, Keras, JAX, Hugging Face
Experience with:
NLP, CV, predictive modeling, or generative AI
Model deployment and serving (TensorFlow Serving, TorchServe, FastAPI, Triton)
Data processing tools (Pandas, NumPy, Spark)
Strong understanding of:
Machine learning fundamentals, deep learning architectures, and statistics
Cloud platforms: AWS/GCP/Azure
CI/CD, MLOps, versioning tools (MLflow, DVC, Airflow)
Familiarity with distributed training, vector databases (FAISS, Pinecone, Chroma), and GPU acceleration is a plus.
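To illustrate the vector-database and embedding familiarity mentioned in the item above, here is a minimal cosine-similarity retrieval sketch in plain NumPy. Real systems would use learned embeddings and a store such as FAISS, Pinecone, or Chroma rather than random vectors.

```python
import numpy as np

# Toy "document" embeddings; in practice these come from an embedding model and live in a vector store.
rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(5, 16))
doc_texts = [f"document {i}" for i in range(5)]


def top_k(query_vec: np.ndarray, k: int = 2):
    """Return the k most similar documents by cosine similarity."""
    norms = np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vec)
    sims = doc_vectors @ query_vec / norms
    best = np.argsort(-sims)[:k]
    return [(doc_texts[i], float(sims[i])) for i in best]


query = rng.normal(size=16)
print(top_k(query))
```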
Nice-to-Have
Experience with prompt engineering, multi-agent systems, or RAG frameworks.
Knowledge of LLM fine-tuning (LoRA, QLoRA, PEFT).
Experience with real-time inference, model optimizers, quantization.
Certifications in AI/ML or cloud technologies.
Experience in domain-specific AI (healthcare, fintech, retail, etc.).
Compensation, Benefits and Duration
Minimum Compensation: USD 45,000
Maximum Compensation: USD 160,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
GCP Engineer - Springfield, MO
Remote
Key Responsibilities:
Infrastructure Automation & Management:
Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using GCP services.
Automate cloud infrastructure provisioning, scaling, and monitoring using Infrastructure as Code (IaC) tools such as Terraform or Google Cloud Deployment Manager.
Manage and optimize GCP resources such as Compute Engine, Kubernetes Engine, Cloud Functions, and BigQuery to support development teams.
CI/CD Pipeline Management:
Build, maintain, and enhance continuous integration and continuous deployment (CI/CD) pipelines to ensure seamless and automated code deployment to GCP environments.
Integrate CI/CD pipelines with GCP services like Cloud Build, Cloud Source Repositories, or third-party tools like Jenkins.
Ensure pipelines are optimized for faster build, test, and deployment cycles.
Monitoring & Incident Management:
Implement and manage cloud monitoring and logging solutions using Dynatrace and GCP-native tools like Stackdriver (Monitoring, Logging, and Trace).
Monitor cloud infrastructure health and resolve performance issues, ensuring minimal downtime and maximum uptime.
Set up incident management workflows, implement alerting mechanisms, and create runbooks for rapid issue resolution.
Security & Compliance:
Implement security best practices for cloud infrastructure, including identity and access management (IAM), encryption, and network security.
Ensure GCP environments comply with organizational security policies and industry standards such as GDPR/CCPA, or PCI-DSS.
Conduct vulnerability assessments and perform regular patching and system updates to mitigate security risks.
Collaboration & Support:
Collaborate with development teams to design cloud-native applications that are optimized for performance, security, and scalability on GCP.
Work closely with cloud architects to provide input on cloud design and best practices for continuous integration, testing, and deployment.
Provide day-to-day support for development, QA, and production environments, ensuring availability and stability.
Cost Optimization:
Monitor and optimize cloud costs by analyzing resource utilization and recommending cost-saving measures such as right-sizing instances, using preemptible VMs, or implementing auto-scaling.
Tooling & Scripting:
Develop and maintain scripts (using languages like Python, Bash, or PowerShell) to automate routine tasks and system operations (see the sketch below).
Use configuration management tools like Ansible, Chef, or Puppet to manage cloud resources and maintain system configurations.
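As a light illustration of the routine-task automation noted above, this sketch shells out to the gcloud CLI to list Compute Engine instances and report any that are not running. The project ID is a placeholder, and a production script would more likely use the Google Cloud client libraries plus proper error handling.

```python
import json
import subprocess

PROJECT = "my-gcp-project"  # placeholder project ID


def list_instances(project: str) -> list[dict]:
    """Return Compute Engine instances for a project via the gcloud CLI."""
    out = subprocess.run(
        ["gcloud", "compute", "instances", "list", "--format=json", f"--project={project}"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    for inst in list_instances(PROJECT):
        # "status" is RUNNING, TERMINATED, etc. in the Compute Engine API.
        if inst.get("status") != "RUNNING":
            print(f"instance {inst.get('name')} is {inst.get('status')}")
```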
Required Qualifications & Skills:
Experience:
3+ years of experience as a DevOps Engineer or Cloud Engineer, with hands-on experience in managing cloud infrastructure.
Proven experience working with Google Cloud Platform (GCP) services such as Compute Engine, Cloud Storage, Kubernetes Engine, Pub/Sub, Cloud SQL, and others.
Experience in automating cloud infrastructure with Infrastructure as Code (IaC) tools like Terraform, Cloud Deployment Manager, or Ansible.
Technical Skills:
Strong knowledge of CI/CD tools and processes (e.g., Jenkins, GitLab CI, CircleCI, or GCP Cloud Build).
Proficiency in scripting and automation using Python, Bash, or similar languages.
Strong understanding of containerization technologies (Docker) and container orchestration tools like Kubernetes.
Familiarity with GCP networking, security (IAM, VPC, Firewall rules), and monitoring tools (Stackdriver).
Cloud & DevOps Tools:
Experience with Git for version control and collaboration.
Familiarity with GCP-native DevOps tools like Cloud Build, Cloud Source Repositories, Artifact Registry, and Binary Authorization.
Understanding of DevOps practices and principles, including Continuous Integration, Continuous Delivery, Infrastructure as Code, and Monitoring/Alerting.
Security & Compliance:
Knowledge of security best practices for cloud environments, including IAM, network security, and data encryption.
Understanding of compliance and regulatory requirements related to cloud computing (e.g., GDPR/CCPA, HIPAA, or PCI).
Soft Skills:
Strong problem-solving skills with the ability to work in a fast-paced environment.
Excellent communication skills, with the ability to explain technical concepts to both technical and non-technical stakeholders.
Team-oriented mindset with the ability to work collaboratively with cross-functional teams.
Certifications (Preferred):
Google Professional Cloud DevOps Engineer certification (preferred).
Other GCP certifications such as Google Professional Cloud Architect or Associate Cloud Engineer are a plus.
DevOps certifications like Certified Kubernetes Administrator (CKA) or AWS/GCP DevOps certification are advantageous.
Compensation, Benefits and Duration
Minimum Compensation: USD 64,000
Maximum Compensation: USD 224,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
Platform Engineer - Las Vegas, NV
Remote
Platform Engineer (combination of Release Engineer and monitoring with Kibana, Dynatrace, and Apigee)
Cloud Engineer - Strong experience with core AWS services: VPC, EC2, S3, IAM, Route 53, Backups, EKS, SSO, MSK, Security Hub, GuardDuty, and related security/compliance tooling.
• Advanced networking knowledge, including hands-on experience with AWS Transit Gateway (TGW) and Direct Connect - setup, troubleshooting, and hybrid connectivity.
• Observability & Monitoring: Experience with Amazon CloudWatch, Grafana, OpenTelemetry (OTEL), and proactive alerting and telemetry configuration.
• Infrastructure as Code & Automation: Proficient in Terraform, GitLab CI/CD, and Shell scripting for end-to-end automation.
• Container Orchestration: Solid understanding of Kubernetes (EKS) and Istio service mesh for traffic management, security, and observability.
• Security & Patching: Experienced in patching and vulnerability remediation in cloud-native and containerized environments, aligned with best practices and compliance.
• Programming & Scripting: Hands-on Python development experience for scripting, automation, and infrastructure tooling.
• Problem-solving mindset with the ability to work efficiently in fast-paced, Agile/Scrum environments and collaborate across cross-functional teams.
Work in fast-paced environments
Compensation, Benefits and Duration
Minimum Compensation: USD 33,000
Maximum Compensation: USD 118,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
BI Engineer (Power BI) - Dallas, TX
Remote
As a BI Engineer specializing in Power BI, you will play a key role in the design, development, and deployment of modern reporting and analytical solutions on the Snowflake and Power BI platform. You will be instrumental in migrating existing reports and dashboards from OBIEE to Power BI, ensuring data accuracy, performance, and user satisfaction. Your expertise in data modeling, ETL/ELT processes, and Power BI development will be critical to the success of this strategic initiative.
Responsibilities:
Collaborate with business stakeholders, data analysts, and other team members to understand reporting requirements and translate them into robust Power BI solutions.
Design and develop scalable and efficient data models in Power BI, leveraging best practices for performance and usability.
Participate in the migration of existing reports and dashboards from OBIEE to Power BI, ensuring data integrity and functional parity.
Develop and maintain ETL/ELT processes to load and transform data from Snowflake and other relevant data sources into Power BI datasets.
Create interactive and visually compelling dashboards and reports in Power BI that empower end-users with self-service analytics capabilities.
Implement and maintain data quality checks and validation processes to ensure the accuracy and reliability of reporting data (see the sketch after this list).
Optimize Power BI report performance and troubleshoot any performance bottlenecks.
Contribute to the development and maintenance of technical documentation for Power BI solutions.
Stay up-to-date with the latest Power BI features and best practices and recommend innovative solutions to enhance our reporting capabilities.
Participate in testing, deployment, and ongoing support of Power BI applications.
Work closely with the Snowflake team to ensure seamless integration between the data warehouse and Power BI.
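As a small example of the data-quality checks flagged above, the pandas sketch below looks for null keys and duplicate rows before a dataset is published to Power BI. The column names and sample frame are placeholders; in practice such checks might live in the ETL/ELT layer or in Snowflake itself.

```python
import pandas as pd


def validate(df: pd.DataFrame, key_cols: list[str]) -> list[str]:
    """Return a list of human-readable data-quality issues (empty list means clean)."""
    issues = []
    for col in key_cols:
        nulls = int(df[col].isna().sum())
        if nulls:
            issues.append(f"{nulls} null value(s) in key column '{col}'")
    dupes = int(df.duplicated(subset=key_cols).sum())
    if dupes:
        issues.append(f"{dupes} duplicate row(s) on {key_cols}")
    return issues


# Placeholder frame standing in for a dataset loaded from Snowflake.
sample = pd.DataFrame({"account_id": [1, 2, 2, None], "balance": [100.0, 250.5, 250.5, 90.0]})
print(validate(sample, ["account_id"]))
```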
Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
Proven experience (6-10+ years) as a BI Engineer or in a similar role with a strong focus on Power BI development.
Hands-on experience in designing and developing data models within Power BI Desktop.
Proficiency in writing DAX queries for calculations and data manipulation within Power BI.
Experience with data extraction, transformation, and loading (ETL/ELT) processes.
Familiarity with data warehousing concepts and principles.
Strong SQL skills and experience working with relational databases (experience with Snowflake is a significant plus).
Experience migrating reporting solutions from legacy BI platforms (OBIEE experience is highly desirable).
Excellent analytical and problem-solving skills with a keen attention to detail.
Strong communication and collaboration skills with the ability to effectively interact with both technical and business stakeholders.
Experience with version control systems (e.g., Git).
Preferred Skills:
Experience with advanced Power BI features such as Power BI Service administration, dataflows, and Power BI Embedded.
Knowledge of scripting languages such as Python for data manipulation or automation.
Familiarity with cloud data platforms (Snowflake preferred).
Experience with agile development methodologies.
Understanding of financial services or investment management data.
Compensation, Benefits and Duration
Minimum Compensation: USD 38,000
Maximum Compensation: USD 134,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
Platform Engineer - US
Remote
We are seeking a skilled Platform Engineer with expertise in Terraform, Golang development, AWS, SDLC automation, and Kubernetes to join our dynamic team. As a Platform Engineer, you will play a crucial role in designing, building, and maintaining our infrastructure and automation solutions, ensuring reliability, scalability, and security across our cloud environment.
Key Responsibilities:
Design, implement, and maintain infrastructure as code (IaC) solutions using Terraform for AWS environments.
Develop and optimize automation scripts and tools in Golang to enhance our SDLC processes.
Manage and deploy containerized applications using Kubernetes, ensuring high availability and performance.
Collaborate with cross-functional teams to define infrastructure requirements and streamline deployment pipelines.
Implement monitoring, logging, and alerting systems to ensure proactive management of our cloud infrastructure.
Participate in troubleshooting and resolving infrastructure-related issues in production and non-production environments.
Stay updated with industry trends and best practices to continuously improve our infrastructure and deployment processes.
Skills and Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
5+ years of experience as a Golang developer, with a strong understanding of software development lifecycle (SDLC) practices.
Proven experience with Terraform in large-scale AWS environments, designing and maintaining infrastructure as code.
Deep knowledge of Kubernetes, including deployment, scaling, and management of containerized applications.
Proficiency in AWS services and solutions, including EC2, S3, IAM, VPC, and CloudFormation.
Experience in designing and implementing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or similar.
Excellent problem-solving and analytical skills, with a proactive approach to identifying and addressing challenges.
Strong communication skills and the ability to work effectively in a collaborative team environment.
Preferred Qualifications:
AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
Experience with other cloud providers (Azure, Google Cloud Platform).
Familiarity with monitoring tools such as Prometheus, Grafana, ELK Stack, etc.
Prior experience with Agile methodologies and working in Agile teams.
Backend/API Engineer - Node.js - Onshore
Remote
API & Backend Development
Design, develop, and maintain RESTful and/or GraphQL APIs using Node.js.
Build scalable, secure, and performant backend services and microservices.
Integrate third-party systems, internal services, and databases.
Use best practices for error handling, logging, caching, and performance optimization.
Architecture & System Design
Participate in architectural planning and technical design discussions.
Implement clean, maintainable, well-documented code following industry standards.
Contribute to system design decisions around scalability, reliability, and security.
Database & Data Layer
Work with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Redis).
Design efficient schemas, write optimized queries, and implement ORM/ODM solutions.
Collaboration & Deployment
Collaborate with frontend engineers, QA, product managers, and DevOps teams.
Participate in code reviews, sprint planning, and technical documentation.
Support CI/CD processes and deployment to cloud environments (AWS, GCP, Azure).
Monitoring & Maintenance
Implement monitoring, logging, and alerting using tools like Prometheus, Grafana, Datadog, or similar.
Diagnose and resolve performance bottlenecks, outages, and production issues.
Ensure reliability, security, and compliance standards across backend systems.
Qualifications
Required
3-5+ years of hands-on backend development experience.
Strong proficiency in Node.js, JavaScript/TypeScript, and modern backend frameworks (e.g., Express, NestJS, Fastify).
Experience building and maintaining APIs at scale.
Solid understanding of microservices, serverless architectures, and distributed systems.
Experience with cloud platforms (AWS, Azure, or GCP).
Strong knowledge of databases (SQL and NoSQL).
Experience with Git, CI/CD pipelines, and automated testing.
Familiarity with API documentation tools (Swagger/OpenAPI).
Preferred
Experience with event-driven architectures (Kafka, RabbitMQ, SNS/SQS).
Knowledge of containerization and orchestration (Docker, Kubernetes).
Experience with OAuth2, JWT, and security best practices.
Familiarity with performance tuning and application profiling.
Experience with infrastructure-as-code (Terraform, CloudFormation).
Compensation, Benefits and Duration
Minimum Compensation: USD 44,000
Maximum Compensation: USD 154,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
Java Spring Boot Engineer - Dallas, TX
Remote
About the Role:
We are seeking a highly skilled and passionate Senior Java Engineer to join our dynamic engineering team. You will play a critical role in designing, developing, and maintaining high-performance, scalable, and resilient applications within a challenging and rewarding environment.
Key Responsibilities:
Develop and maintain high-performance, scalable Java applications.
Implement threading and concurrency solutions to enhance application performance.
Design and develop microservices using Spring Boot framework.
Ensure system reliability and fault tolerance through Hystrix and circuit breaker patterns.
Implement observability practices, including SLI/SLO metrics.
Develop event-driven architecture solutions using Kafka and other messaging platforms.
Deploy and manage applications on Kubernetes and AWS cloud platform services.
Collaborate with cross-functional teams to define, design, and ship new features.
Provide mentorship and guidance to junior developers.
Communicate effectively with team members and stakeholders.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience as a Senior Java Developer, with expertise in threading and concurrency implementation.
Strong proficiency in Java, with experience in Java 22 being a plus.
Extensive experience in microservice API implementation using Spring Boot.
Knowledge of failover handling and fault tolerance mechanisms such as Hystrix and circuit breakers.
Experience in observability, including SLI/SLO metrics.
Hands-on experience with event-driven architecture implementation using Kafka, with familiarity with other messaging platforms being a plus.
Proficiency in Kubernetes and AWS cloud platform services.
Excellent communication and interpersonal skills.
Strong problem-solving and analytical abilities.
Ability to work independently and as part of a team.
Compensation, Benefits and Duration
Minimum Compensation: USD 40,000
Maximum Compensation: USD 142,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is not available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
Conversational Engineer - US
Remote
Experience working with NLP and machine learning
Expert experience working with GenAI and LLM tools (e.g., GPT, BERT)
Tangible experience working with ETL, relational databases, and NoSQL stores such as MongoDB
Experience with Vertex AI and strong experience with voice AI conversations
5+ years of Python experience
Experience with Nuance IVR/Microsoft
GCP Engineer - Springfield, MO
Remote
We are seeking a skilled GCP Engineer to design, implement, and manage scalable cloud solutions on the Google Cloud Platform. The ideal candidate will have hands-on experience with GCP services, strong DevOps or infrastructure-as-code skills, and a solid understanding of cloud-native application deployment and management.
Key Responsibilities:
Design and implement cloud infrastructure solutions using GCP services (e.g., Compute Engine, Cloud Run, GKE, Cloud Functions).
Develop automation scripts using Terraform, Deployment Manager, or Cloud Build for infrastructure provisioning and CI/CD.
Implement and manage cloud security, IAM policies, and VPC networking.
Monitor and optimize GCP workloads for performance, availability, and cost.
Support cloud-native application deployments using containers (e.g., Docker, Kubernetes).
Collaborate with software engineers, architects, and DevOps teams to deliver scalable and reliable systems.
Set up and manage observability tools (e.g., Cloud Logging, Cloud Monitoring, Stackdriver, Prometheus/Grafana).
Required Skills and Qualifications:
3+ years of experience with Google Cloud Platform.
Proficiency in GCP core services: Compute, Networking, IAM, Storage, BigQuery, and Pub/Sub.
Strong experience in Infrastructure as Code (IaC) with Terraform or GCP Deployment Manager.
Hands-on experience with Docker and Kubernetes (GKE preferred).
Understanding of CI/CD pipelines and tools such as Jenkins, GitLab CI, Cloud Build, etc.
Strong knowledge of cloud security best practices and compliance standards.
Preferred Qualifications:
Google Cloud Professional Certification (e.g., Cloud Engineer, DevOps Engineer, or Architect).
Experience with hybrid cloud or multi-cloud deployments.
Familiarity with serverless architectures (Cloud Functions, Cloud Run).
Experience with Cloud SQL, Firestore, or other GCP-managed databases.
Familiarity with data engineering or AI/ML services on GCP is a plus.
Education:
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
Compensation, Benefits and Duration
Minimum Compensation: USD 48,000
Maximum Compensation: USD 168,000
Compensation is based on actual experience and qualifications of the candidate. The above is a reasonable and a good faith estimate for the role.
Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full time employees.
This position is available for independent contractors
No applications will be considered if received more than 120 days after the date of this post
AWS Gen AI / ML Engineer - Plano, TX
Remote
We are seeking an AWS Gen AI / ML Engineer to design, deploy, and optimize cloud-native machine-learning systems that power our next-generation predictive-automation platform. You will blend deep ML expertise with hands-on AWS engineering, turning data into low-latency, high-impact insights. The ideal candidate commands statistics, coding, and DevOps, and thrives on shipping secure, cost-efficient solutions at scale.
Objectives of this role
Design and productionize cloud ML pipelines (SageMaker, Step Functions, EKS) that advance the predictive-automation roadmap
Integrate foundation models via Bedrock and Anthropic LLM APIs to unlock generative-AI capabilities
Optimize and extend existing ML libraries / frameworks for multi-region, multi-tenant workloads
Partner cross-functionally with data scientists, data engineers, architects, and security teams to deliver end-to-end value
Detect and mitigate data-distribution drift to preserve model accuracy in real-world traffic (see the sketch after this list)
Stay current on AWS, MLOps, and generative-AI innovations; drive continuous improvement
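As a hedged illustration of the drift-detection objective referenced above, the sketch below compares a training-time feature distribution against live traffic with a two-sample Kolmogorov-Smirnov test from SciPy. The data is synthetic and the 0.05 threshold is an arbitrary assumption, not a recommended policy.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # distribution seen at training time
live_feature = rng.normal(loc=0.3, scale=1.0, size=5000)    # slightly shifted "production" traffic

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}")

# Arbitrary illustrative policy: flag drift when the test rejects equality at the 5% level.
if p_value < 0.05:
    print("Drift detected: consider retraining or alerting via CloudWatch.")
```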
Responsibilities
Transform data-science prototypes into secure, highly available AWS services; choose and tune the appropriate algorithms, container images, and instance types
Run automated ML tests/experiments; document metrics, cost, and latency outcomes
Train, retrain, and monitor models with SageMaker Pipelines, Model Registry, and CloudWatch alarms
Build and maintain optimized data pipelines (Glue, Kinesis, Athena, Iceberg) feeding online/offline inference
Collaborate with product managers to refine ML objectives and success criteria; present results to executive stakeholders
Extend or contribute to internal ML libraries, SDKs, and infrastructure-as-code modules (CDK / Terraform)
Skills and qualifications
Primary technical skills
AWS SDK, SageMaker, Lambda, Step Functions
Machine-learning theory and practice (supervised / deep learning)
DevOps & CI/CD (Docker, GitHub Actions, Terraform/CDK)
Cloud security (IAM, KMS, VPC, GuardDuty)
Networking fundamentals
Java, Spring Boot, JavaScript/TypeScript & API design (REST, GraphQL)
Linux administration and scripting
Bedrock & Anthropic LLM integration
Secondary / tool skills
Advanced debugging and profiling
Hybrid-cloud management strategies
Large-scale data migration
Impeccable analytical and problem-solving ability; strong grasp of probability, statistics, and algorithms
Familiarity with modern ML frameworks (PyTorch, TensorFlow, Keras)
Solid understanding of data structures, modeling, and software architecture
Excellent time-management, organizational, and documentation skills
Growth mindset and passion for continuous learning
Preferred qualifications
10+ years of software engineering experience
3+ years in an ML-engineering or cloud-ML role (AWS focus)
Proficient in Python (core), with working knowledge of Java or R
Outstanding communication and collaboration skills; able to explain complex topics to non-technical peers
Proven record of shipping production ML systems or contributing to OSS ML projects
Bachelor's (or higher) in Computer Science, Data Engineering, Mathematics, or a related field
AWS Certified Machine Learning - Specialty and/or AWS Solutions Architect - Associate is a strong plus
Compensation, Benefits and Duration
Minimum Compensation: USD 48,000
Maximum Compensation: USD 168,000
Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable, good-faith estimate for the role.
Medical, vision, and dental benefits, a 401(k) retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees.
This position is available to independent contractors.
No applications will be considered if received more than 120 days after the date of this post.
Cloud Engineer
Fort Myers, FL jobs
Department: IS Information Technology Svcs
Work Type: Full Time
Shift: Shift 1, 8:00 AM to 4:30 PM
Minimum to Midpoint Pay Rate: $37.72 - $49.03 / hour
Lee Health is seeking an innovative Cloud Engineer to design, build, and maintain secure, scalable, and high-performing cloud systems that power our digital and AI transformation. In this role, you'll architect and manage cloud infrastructure across private, public, and hybrid environments, enabling advanced data management, automation, and application deployment. Collaborating with IT, AI, and data science teams, you'll play a pivotal role in building and optimizing cloud environments for artificial intelligence, machine learning, and cognitive computing initiatives.
If you're passionate about driving innovation through cloud engineering and emerging technologies, this is an exceptional opportunity to shape the next generation of healthcare technology.
What You'll Do
* Design & Build: Develop scalable, secure, and cost-effective cloud architectures using AWS, Azure, or GCP.
* Migrate & Modernize: Lead cloud migration projects for data, systems, and applications while ensuring minimal disruption.
* Automate & Optimize: Implement Infrastructure-as-Code (IaC), automate deployment pipelines, and streamline configuration management (a brief IaC sketch follows this list).
* Innovate with AI: Partner with data science and AI teams to develop infrastructure for AI/ML experimentation and production deployment.
* Ensure Compliance & Security: Maintain adherence to data privacy standards, IAM, encryption, and regulatory compliance (HIPAA, HITRUST).
* Monitor & Improve: Oversee cloud utilization, troubleshoot performance issues, and develop disaster recovery strategies.
* Collaborate & Lead: Act as a bridge between development and infrastructure teams, fostering a DevSecOps culture and continuous improvement mindset.
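As a brief, hedged sketch of the Infrastructure-as-Code work described in the "Automate & Optimize" bullet, the example below uses AWS CDK in Python only because AWS is one of the clouds named; the stack and bucket names are placeholders, not actual Lee Health resources.

from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
from constructs import Construct

class DataLandingStack(Stack):
    # Provisions an encrypted, versioned bucket suitable for raw data drops.
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawDataBucket",                                    # hypothetical resource name
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,                # keep data if the stack is deleted
        )

app = App()
DataLandingStack(app, "DataLandingStack")
app.synth()

Equivalent definitions in Azure Bicep or Terraform for GCP would express the same version-controlled, reviewable approach to provisioning.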
Qualifications
Education:
* Bachelor's degree in Computer Science, Engineering, or related field required.
* Master's degree in Computer Science, Machine Learning, or related discipline preferred.
Experience:
* 2+ years designing, building, and supporting private or hybrid cloud environments.
* Proven experience with DevOps and automation (IaC, CI/CD, scripting in PowerShell, Python, or Shell).
* Strong grasp of cloud security, IAM, networking, and monitoring tools.
* Working knowledge of machine learning frameworks (TensorFlow, PyTorch) and data engineering concepts.
* Experience supporting AI/ML model deployment, fine-tuning, or integration is a plus.
* Familiarity with healthcare data standards (HIPAA, HL7, FHIR) preferred.
Certifications (one or more required):
* Microsoft Certified Azure AI Fundamentals or AI Engineer Associate
* AWS Certified Machine Learning Engineer
* Microsoft Azure Security Certification
Preferred Certifications:
* HITRUST Practitioner, NIST AI RMF Architect, or Artificial Intelligence Engineer Certification
Why Join Lee Health
At Lee Health, innovation meets compassion. As we advance our journey toward smarter, data-driven healthcare, you'll have the opportunity to work with emerging technologies, contribute to transformative AI initiatives, and make a lasting impact on patient care. Join a health system recognized for excellence, collaboration, and continuous learning.
Engineer - Boiler
Albany, NY jobs
Department/Unit: Facilities Mgmt Adm
Work Shift: Day (United States of America)
Salary Range: $51,755.37 - $77,633.06 (annual); $21.78 - $34.85 (hourly)
Performs activities related to the operation, maintenance and repair of boilers and auxiliary plant equipment.
Essential Duties and Responsibilities
* Performs activities related to the operation, maintenance and repair of boilers and auxiliary plant equipment.
* Performs and assumes all responsibilities as the Stationary Engineer including, but not limited to, the following:
* Operates high-pressure boilers, auxiliary equipment, and controls in an efficient and safe manner. Continuously exercises plant safety in all areas of the power plant.
* Observes flow meters, gauges, dials and recorders. Starts, stops, adjusts, and regulates the equipment in response to observations and accepted engineering practice.
* Periodically records readings and abnormal occurrences in the engineering logbook.
* Checks equipment while in operation and makes adjustments and repairs as necessary to maintain plant operations.
* Performs chemical tests on water samples during the shift and records the results to maintain proper boiler water chemistry.
* Responsible for performing periodic preventive maintenance, inspections, housekeeping, preservations, repairs to auxiliary equipment, boiler room equipment, and the physical plant spaces.
* Subject to working alone as the boiler plant operator, isolated from other buildings. This working environment is subject to high temperatures and noise from operating equipment.
* Available for shifts other than those scheduled, to cover relief-shift absences, sickness, vacations, and boiler plant maintenance.
* Records all after-hours discrepancies in the engineering logbook.
* Maintains engineer radio and pager at all times.
* Required to contact the Foreman/Chief Engineer regarding any discrepancies related to power plant operations.
Qualifications
* High School Diploma/G.E.D. - required
* 4-6 years with oil-fired high-pressure boilers, chillers, air conditioning systems and auxiliary boiler plant equipment - required
* 1-3 years at AMC as an Engineer - preferred
* Requires the ability to make sound judgments and act independently and rationally when addressing maintenance problems.
Physical Demands
* Standing - Constantly
* Walking - Constantly
* Sitting -
* Lifting - Constantly
* Carrying - Constantly
* Pushing - Constantly
* Pulling - Constantly
* Climbing - Constantly
* Balancing - Constantly
* Stooping - Constantly
* Kneeling - Constantly
* Crouching - Constantly
* Crawling - Frequently
* Reaching - Constantly
* Handling - Constantly
* Grasping - Constantly
* Feeling - Constantly
* Talking - Frequently
* Hearing - Frequently
* Repetitive Motions - Constantly
* Eye/Hand/Foot Coordination - Constantly
Working Conditions
* Extreme cold - Occasionally
* Extreme heat - Occasionally
* Humidity - Occasionally
* Wet - Occasionally
* Noise - Frequently
* Hazards - Frequently
* Temperature Change - Occasionally
* Atmospheric Conditions - Occasionally
* Vibration - Frequently
Thank you for your interest in Albany Medical Center!
Albany Medical is an equal opportunity employer.
This role may require access to information considered sensitive to Albany Medical Center, its patients, affiliates, and partners, including but not limited to HIPAA Protected Health Information and other information regulated by Federal and New York State statutes. Workforce members are expected to ensure that:
Access to information is based on a "need to know" and is the minimum necessary to properly perform assigned duties. Use or disclosure shall not exceed the minimum amount of information needed to accomplish an intended purpose. Reasonable efforts, consistent with Albany Med Center policies and standards, shall be made to ensure that information is adequately protected from unauthorized access and modification.
Engineering Opportunities
Miami, FL jobs
Job Description
Gaumard Scientific Company, a global leader in healthcare education simulation, is seeking talented Electrical, Software, Mechanical, Biomedical, and Advanced Systems Engineers to join our innovative team at our headquarters in Miami, FL.
At Gaumard, we empower hospitals, universities, governments, NGOs, and emergency services around the world with cutting-edge simulation technology for healthcare education and training. Our advanced patient simulators, educational software, AI technology, and mixed reality solutions enable customers to accelerate safer patient care and improve healthcare outcomes. Gaumard products support training across virtually all clinical areas, including emergency medicine, nursing, obstetrics, pediatrics, pre-hospital care, and surgery.
Globally recognized for groundbreaking innovations, Gaumard has built on a distinguished legacy of over 75 years, including these recent milestones:
2004: Pioneered fully tetherless technology with the HAL simulator family.
2014: Launched VICTORIA, the world's most advanced labor and delivery simulator.
2017: Introduced Super TORY, the most sophisticated newborn simulator.
2018: Released Pediatric HAL, the first pediatric simulator featuring dynamic facial expressions.
2022: Developed HAL S5301, integrating robotics and artificial intelligence for realistic clinical scenarios and team-based training.
2024: Introduced PHOEBE, our most advanced trauma simulator yet, featuring state-of-the-art robotic motor movements and AI-enhanced speech.
Gaumard designs and manufactures products in-house at our Miami, FL facility and distributes them globally through direct representatives and a large network of authorized distributors across 70 countries.
Join the Gaumard Family and Enjoy:
Competitive salary
Comprehensive health benefits (medical, vision, dental) with family coverage options
Generous Paid Time Off (PTO) and paid holidays, with additional PTO earned based on tenure
Retirement plan with 100% employer match (up to 5% of employee contributions)
The satisfaction of contributing to meaningful healthcare education solutions at a respected, longstanding company.
Hours:
Standard hours are Monday through Friday, 8:00 AM to 4:30 PM. Availability for occasional Saturdays or extended hours may be required based on business needs.
Gaumard Scientific is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: Gaumard Scientific is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at Gaumard Scientific are based on business needs, job requirements and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. Gaumard Scientific will not tolerate discrimination or harassment based on any of these characteristics. Gaumard Scientific encourages applicants of all ages. Gaumard Scientific will provide reasonable accommodation for employees who have protected disabilities consistent with local law.
Engineer - Boiler
New Scotland, NY jobs
Department/Unit:
Facilities Mgmt Adm
Work Shift:
Day (United States of America)
Salary Range:
$51,755.37 - $77,633.06 (annual); $21.78 - $34.85 (hourly)
Performs activities related to the operation, maintenance and repair of boilers and auxiliary plant equipment.
Essential Duties and Responsibilities
Performs activities related to the operation, maintenance and repair of boilers and auxiliary plant equipment.
Performs and assumes all responsibilities as the Stationary Engineer including, but not limited to, the following:
Operates high-pressure boilers, auxiliary equipment, and controls in an efficient and safe manner. Continuously exercises plant safety in all areas of the power plant.
Observes flow meters, gauges, dials and recorders. Starts, stops, adjusts, and regulates the equipment in response to observations and accepted engineering practice.
Periodically records readings and abnormal occurrences in the engineering logbook.
Checks equipment while in operation and makes adjustments and repairs as necessary to maintain plant operations.
Performs chemical tests on water samples during the shift and records the results to maintain proper boiler water chemistry.
Responsible for performing periodic preventive maintenance, inspections, housekeeping, preservations, repairs to auxiliary equipment, boiler room equipment, and the physical plant spaces.
Subject to working alone as the boiler plant operator, isolated from other buildings. This working environment is subject to high temperatures and noise from operating equipment.
Available for shifts other than those scheduled, to cover relief-shift absences, sickness, vacations, and boiler plant maintenance.
Records all after-hours discrepancies in the engineering logbook.
Maintains engineer radio and pager at all times.
Required to contact the Foreman/Chief Engineer regarding any discrepancies related to power plant operations.
Qualifications
High School Diploma/G.E.D. - required
4-6 years with oil-fired high-pressure boilers, chillers, air conditioning systems and auxiliary boiler plant equipment - required
1-3 years at AMC as an Engineer - preferred
Requires the ability to make sound judgments and act independently and rationally when addressing maintenance problems.
Physical Demands
Standing - Constantly
Walking - Constantly
Sitting -
Lifting - Constantly
Carrying - Constantly
Pushing - Constantly
Pulling - Constantly
Climbing - Constantly
Balancing - Constantly
Stooping - Constantly
Kneeling - Constantly
Crouching - Constantly
Crawling - Frequently
Reaching - Constantly
Handling - Constantly
Grasping - Constantly
Feeling - Constantly
Talking - Frequently
Hearing - Frequently
Repetitive Motions - Constantly
Eye/Hand/Foot Coordination - Constantly
Working Conditions
Extreme cold - Occasionally
Extreme heat - Occasionally
Humidity - Occasionally
Wet - Occasionally
Noise - Frequently
Hazards - Frequently
Temperature Change - Occasionally
Atmospheric Conditions - Occasionally
Vibration - Frequently
Thank you for your interest in Albany Medical Center!
Albany Medical is an equal opportunity employer.
This role may require access to information considered sensitive to Albany Medical Center, its patients, affiliates, and partners, including but not limited to HIPAA Protected Health Information and other information regulated by Federal and New York State statutes. Workforce members are expected to ensure that:
Access to information is based on a “need to know” and is the minimum necessary to properly perform assigned duties. Use or disclosure shall not exceed the minimum amount of information needed to accomplish an intended purpose. Reasonable efforts, consistent with Albany Med Center policies and standards, shall be made to ensure that information is adequately protected from unauthorized access and modification.