Requirements Engineer jobs at RWJBarnabas Health - 137 jobs

  • AI Engineer II

    CareSource 4.9 company rating

    Remote

    The AI Engineer II is responsible for software development related to generative AI solutions. The role works closely with cross-functional teams to ensure the successful delivery of solutions within the defined scope, timeline, and quality standards.

Essential Functions: Define and oversee the architecture for generative AI platforms, including LLMs, vector databases, and inference pipelines. Design and develop complex software systems, ensuring scalability, reliability, and maintainability. Develop and maintain a strong understanding of modern generative AI tools and concepts, including OpenAI, Llama, Python, LangChain, vectorization, embeddings, semantic search, RAG, IaC, and Streamlit (a short illustrative sketch follows this listing). Rapidly develop proofs of concept to evaluate new technologies or value-adding ideas. Drive innovation and teamwork in a fast-paced, dynamic environment through a hands-on, imaginative approach and a self-motivated, curious mindset that explores the art of the possible. Embrace AI-assisted software development, including the use of GitHub Copilot and internally developed tools. Collaborate with leadership to systematically evaluate currently deployed services; develop and manage plans to optimize delivery and support mechanisms. Apply creative thinking in solving problems and identifying opportunities for improvement. Identify technical risks and propose effective mitigation strategies to ensure project success. Collaborate with product managers to prioritize and schedule project deliverables based on business objectives and resource availability. Provide accurate and timely progress updates to project stakeholders, highlighting achievements, challenges, and proposed solutions. Stay up to date with the latest industry trends, technologies, and frameworks, and evaluate their potential application in the organization. Perform any other job-related duties as requested.

Education and Experience: Bachelor's degree in Computer Science, Information Technology, or a related field required. Equivalent years of relevant work experience may be accepted in lieu of the required education. Three (3) years of experience working in a medium to large operating environment required. Experience with Agile methodologies required. Experience with cloud technologies, including containers and serverless, preferred.

Competencies, Knowledge and Skills: Strong analytical, evaluative, and problem-solving abilities. Knowledge of healthcare and managed care is preferred. Critical listening and thinking skills. Strong knowledge of best practices relative to application development and infrastructure standards.

Licensure and Certification: Cloud certification preferred.

Working Conditions: General office environment; may be required to sit or stand for extended periods of time. Travel is not typically required.

Compensation Range: $83,000.00 - $132,800.00. CareSource takes into consideration a combination of a candidate's education, training, and experience, as well as the position's scope and complexity, the discretion and latitude required for the role, and other external and internal data when establishing a salary level. In addition to base compensation, you may qualify for a bonus tied to company and individual performance. We are highly invested in every employee's total well-being and offer a substantial and comprehensive total rewards package.

Compensation Type (hourly/salary): Salary. Organization-Level Competencies: Fostering a Collaborative Workplace Culture; Cultivate Partnerships; Develop Self and Others; Drive Execution; Influence Others; Pursue Personal Excellence; Understand the Business. This list is not all-inclusive; CareSource reserves the right to amend this job description at any time. CareSource is an Equal Opportunity Employer. We are dedicated to fostering an environment of belonging that welcomes and supports individuals of all backgrounds. #LI-GM1
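For readers unfamiliar with the embeddings / semantic search / RAG stack this posting names, here is a minimal illustrative sketch in Python. It assumes the OpenAI Python SDK (`pip install openai numpy`) and an `OPENAI_API_KEY` in the environment; the documents, question, and model names are invented placeholders, and a real deployment would use a proper vector database rather than an in-memory array.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one, answer with context.
# Assumes: `pip install openai numpy` and OPENAI_API_KEY set in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [  # hypothetical knowledge base
    "Prior authorization requests are processed within 72 hours.",
    "Members can change their primary care provider once per quarter.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(documents)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity over the tiny in-memory "index" = semantic search.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = documents[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How fast are prior authorization requests handled?"))
```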
    $83k-132.8k yearly Auto-Apply 3d ago

  • Backend Engineer

    Underdog Pharmaceuticals 4.2 company rating

    Remote

    At Underdog, we make sports more fun. Our thesis is simple: build the best products and we'll build the biggest company in the space, because there's so much more to be built for sports fans. We're just over five years in, and we're one of the fastest-growing sports companies ever, most recently valued at $1.3B. And it's still the early days. We've built and scaled multiple games and products across fantasy sports, sports betting, and prediction markets, all united in one seamless, simple, easy-to-use, intuitive, and fun app. Underdog isn't for everyone. One of our core values is give a sh*t. The people who win here are the ones who care, push, and perform. If that's you, come join us. Winning as an Underdog is more fun.

About the role
Build the backbone of Underdog's data ecosystem: design and deploy internal APIs and workflows that power seamless communication across our services. Create robust, scalable microservices to serve real-time, high-volume data needs for our internal teams, enabling faster decision-making and better fan experiences. Own greenfield projects that shape the future of our business, architecting backend systems built for scale, resilience, and speed. Partner cross-functionally with engineers, product owners, and data scientists to turn complex challenges into elegant, scalable solutions. Safeguard system reliability by building smart monitoring, alerting, and logging tools that keep our infrastructure healthy and performant. Drive impactful technical initiatives from start to finish in a fast-moving, high-stakes environment where your leadership makes a real difference. Champion engineering excellence by leading code reviews, sharing best practices, and helping level up the team through mentorship and feedback.

Who you are
A Backend Engineer with at least 3 years of experience building microservices-heavy backend systems and APIs for internal applications in a cloud environment (e.g., AWS, GCP, Azure). Advanced proficiency with Go. Experience building and scaling backend systems and designing enterprise-scale APIs for external and internal applications. Highly focused on delivering results for internal and external stakeholders in a fast-paced, entrepreneurial environment. Excellent communication skills with the ability to influence and collaborate with stakeholders. Familiarity with containerization and orchestration technologies such as Docker, Kubernetes, or ECS. Experience with DevOps practices such as CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, CDK).

Even better if you have
A strong interest in sports. Prior experience in the sports betting industry.

Our target starting base salary range for this position is between $135,000 and $165,000, plus pre-IPO equity. Our comp range reflects the full scale of expected compensation for this role. Offers are calibrated based on experience, skills, impact, and geography. Most new hires land in the lower half of the band, with the opportunity to advance toward the upper end over time.

What we can offer you:
Unlimited PTO (we're extremely flexible, with the exception of the first few weeks before and into the NFL season). 16 weeks of fully paid parental leave. A home office stipend. A connected, virtual-first culture with a highly engaged distributed workforce. 5% 401k match, FSA, and company-paid health, dental, and vision plan options for employees and dependents. #LI-REMOTE

This position may require sports betting licensure based on certain state regulations.
Underdog is an equal opportunity employer and doesn't discriminate on the basis of creed, race, sexual orientation, gender, age, disability status, or any other defining characteristic. California Applicants: Review our CPRA Privacy Notice here.
    $135k-165k yearly Auto-Apply 58d ago
  • Connectivity Engineer

    Hologic 4.4 company rating

    Remote

    The Connectivity Engineer provides technical support to customers, focusing on connectivity with healthcare information systems; delivers consistent, high-quality, and responsive support; documents case updates, investigation findings, actions taken, next steps, and resolutions in H1; and collaborates with team members and partners with customers to investigate and resolve problems while building product knowledge through training and OJT. At this level, you will be responsible for: solving technical problems efficiently; providing exceptional customer service; maintaining accurate and detailed documentation; collaborating effectively with team members and customers; and continuously improving knowledge and contributing to the knowledge base. The working hours for this full-time remote position are 10 am - 7 pm ET / 11 am - 8 pm ET.

Summary of Duties and Responsibilities
• Provide technical support to external customers and internal colleagues, focusing on connectivity and interoperability with healthcare information systems.
• Perform investigations, troubleshooting, and root cause analysis using diagnostic tools.
• Deliver consistent, high-quality, and responsive support to customers.
• Take ownership of escalated cases, provide regular updates, and expedite resolutions.
• Escalate issues to appropriate expert resources and management as necessary.
• Document case updates, investigation findings, actions taken, next steps, and resolutions in the CRM system (Salesforce/H1).
• Follow established support processes to ensure compliance with quality and regulatory requirements.
• Collaborate with team members and partner with customers to investigate and resolve problems.
• Dispatch Field Service Engineers for work that cannot be done remotely, providing specific guidance on actions needed.
• Refer to the shared Knowledge Base for problem resolution and improved understanding.
• Maintain and improve product knowledge through training and on-the-job learning.
• Ensure quality performance and customer satisfaction for connectivity projects.
• Build in-depth knowledge of products, focusing on interoperability and networking.
• Communicate product reliability issues and provide improvement suggestions.
• Identify opportunities for continuous improvement and maintain proficiency in relevant technologies.
• Contribute content and assist in maintaining the shared knowledge base.
• Participate in technical conversations during customer meetings.
• Adhere to and support the Quality Policy as well as all Quality System procedures and guidelines.
• Be able to work flexible schedules as required for staff coverage during customer and staff operating hours.
• Occasional travel may be necessary (10%).
• Perform other duties and projects as assigned to meet company and department objectives.

Functional Competencies
• Troubleshooting • Customer Focus • Daily administrative workflow completion • Teamwork • Knowledge Building • Relationship Building

Skills, Knowledge, Abilities
• Ability to work under minimal supervision from a home office
• Organization and time management
• Detail oriented
• Written and verbal communication skills
• Customer service and interpersonal relations
• Intermediate computer and technology literacy
• Analytical assessment and problem solving
• Knowledge of network technologies (e.g., TCP/IP) and remote access tools (e.g., VPN, Remote Desktop)
• Understanding of medical imaging technology, DICOM, HL7, RIS/LIS, and PACS systems

Physical Demands
The physical requirements described here are representative of those that must be met by an employee to successfully perform the essential functions of this job.
• Sit; use hands to handle or feel objects, tools, or controls.
• Stand; walk; reach with hands and arms; and stoop, kneel, crouch, or crawl.
• Lift/move and carry products weighing up to 40 pounds.
• View a computer monitor; use a keyboard/mouse; assemble and plug in cables related to computer peripherals.

Qualifications & Education
• A four-year degree in a related technical discipline is preferred.
• An Associate of Science degree in electronics or a related technical discipline, in addition to an equivalent blend of education and experience, is acceptable.
Note: Minimal job requirements of this position may be changed as Hologic products, technology, or this function evolve.

Why join Hologic? We are committed to making Hologic the destination for top talent. To help you succeed, we provide comprehensive training when you join, as well as continued development and training throughout your career. The annualized base salary range for this role is $65,300 - $102,200, and the role is bonus eligible. Final compensation packages will ultimately depend on factors including relevant experience, skillset, knowledge, geography, education, business needs, and market demand. From a benefits perspective, you will have access to benefits such as medical and dental insurance, ESPP, a 401(k) plan, vacation, sick leave and holidays, parental leave, a wellness program, and many more!

Agency and Third-Party Recruiter Notice: Agencies that submit a resume to Hologic must have a current Hologic Agency Agreement executed by a member of the Human Resources Department. In addition, agencies may only submit candidates to positions for which they have been invited to do so by a Hologic Recruiter. All resumes must be sent to the Hologic Recruiter under these terms or they will not be considered. Hologic, Inc. is proud to be an Equal Opportunity Employer inclusive of disability and veterans. #US-remote #LI-MG3
    $65.3k-102.2k yearly Auto-Apply 45d ago
  • Cybersecurity Engineer

    Veracyte 4.6 company rating

    Remote

    At Veracyte, we offer exciting career opportunities for those interested in joining a pioneering team that is committed to transforming cancer care for patients across the globe. Working at Veracyte enables our employees not only to make a meaningful impact on the lives of patients, but also to learn and grow within a purpose-driven environment. This is what we call the Veracyte way: it's about how we work together, guided by our values, to give clinicians the insights they need to help patients make life-changing decisions.

Our Values:
We Seek A Better Way: We innovate boldly, learn from our setbacks, and are resilient in our pursuit to transform cancer care.
We Make It Happen: We act with urgency, commit to quality, and bring fun to our hard work.
We Are Stronger Together: We collaborate openly, seek to understand, and celebrate our wins.
We Care Deeply: We embrace our differences, do the right thing, and encourage each other.

The Position: As a Cybersecurity Engineer at Veracyte, you will play a pivotal role in securing our digital infrastructure, monitoring for threats, and implementing cutting-edge cybersecurity solutions. You will work collaboratively with our IT and cybersecurity teams to protect our organization from cyber threats and ensure the confidentiality, integrity, and availability of our systems and data.

Responsibilities:
Cybersecurity Infrastructure Management: Design, implement, and maintain security infrastructure, including firewalls, intrusion detection/prevention systems, VPNs, and other security-related technologies. Conduct regular security assessments and vulnerability scans, identifying and mitigating potential vulnerabilities.
Threat Monitoring and Incident Response: Monitor network and system logs for suspicious activities and potential security incidents. Develop and execute incident response plans to effectively manage and mitigate cybersecurity incidents. Investigate security breaches and provide recommendations for improvements.
Policy and Compliance: Ensure compliance with industry-specific regulations and standards (e.g., GDPR, HIPAA, NIST CSF). Assist in the development and enforcement of cybersecurity policies, standards, and procedures. Conduct security awareness training for employees to promote a culture of security.
Security Tools and Technologies: Evaluate and implement advanced cybersecurity tools and technologies to enhance our security posture. Stay updated on emerging threats and trends in the cybersecurity field and recommend proactive measures.
Security Audits and Assessments: Collaborate with external auditors and assessors to conduct security audits and assessments. Work on remediation plans to address identified weaknesses.
Collaboration and Communication: Collaborate with cross-functional teams to integrate security into the development and deployment processes. Communicate cybersecurity risks and strategies to technical and non-technical stakeholders.

Who You Are: Bachelor's degree in Computer Science, Information Security, or a related field. Relevant industry certifications such as CEH, Pentest+, CySA+, SSCP, OSCP, or equivalent. 5 or more years of experience in cybersecurity engineering or a related cybersecurity role. Proficiency in security tools and technologies, including firewalls, SIEM, IDS/IPS, and antivirus solutions. Strong understanding of network protocols, architecture, and security best practices. Knowledge of regulatory requirements and industry standards. Excellent problem-solving and analytical skills. Strong communication and teamwork skills. #LI-Remote

The final salary offered to a successful candidate will depend on several factors that may include, but are not limited to, years of experience, skillset, geographic location, industry, and education. Base pay is one part of the Total Package provided to compensate and recognize employees for their work, and this role may be eligible for additional discretionary bonuses/incentives and restricted stock units. Pay range: $130,000 - $150,000 USD.

What We Can Offer You: Veracyte is a growing company that offers significant career opportunities if you are curious, driven, patient-oriented, and aspire to help us build a great company. We offer competitive compensation and benefits, and are committed to fostering an inclusive workforce, where diverse backgrounds are represented, engaged, and empowered to drive innovative ideas and decisions. We are thrilled to be recognized as a 2024 Certified™ Great Place to Work in both the US and Israel, a testament to our dynamic, inclusive, and inspiring workplace where passion meets purpose.

About Veracyte: Veracyte (Nasdaq: VCYT) is a global diagnostics company whose vision is to transform cancer care for patients all over the world. We empower clinicians with the high-value insights they need to guide and assure patients at pivotal moments in the race to diagnose and treat cancer. Our Veracyte Diagnostics Platform delivers high-performing cancer tests that are fueled by broad genomic and clinical data, deep bioinformatic and AI capabilities, and a powerful evidence-generation engine, which ultimately drives durable reimbursement and guideline inclusion for our tests, along with new insights to support continued innovation and pipeline development. For more information, please visit **************** or follow us on LinkedIn or X (Twitter). Veracyte, Inc. is an Equal Opportunity Employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disability status. Veracyte participates in E-Verify in the United States. View our CCPA Disclosure Notice. If you receive any suspicious alerts or communications through LinkedIn or other online job sites for any position at Veracyte, please exercise caution and promptly report any concerns to ********************
    $130k-150k yearly Auto-Apply 10d ago
  • AWS Gen AI / ML Engineer - Plano, TX

    Photon Group 4.3 company rating

    Remote

    We are seeking an AWS Gen AI / ML Engineer to design, deploy, and optimize cloud-native machine-learning systems that power our next-generation predictive-automation platform. You will blend deep ML expertise with hands-on AWS engineering, turning data into low-latency, high-impact insights. The ideal candidate commands statistics, coding, and DevOps, and thrives on shipping secure, cost-efficient solutions at scale.

Objectives of this role
Design and productionize cloud ML pipelines (SageMaker, Step Functions, EKS) that advance the predictive-automation roadmap. Integrate foundation models via Bedrock and Anthropic LLM APIs to unlock generative-AI capabilities (see the sketch after this listing). Optimize and extend existing ML libraries/frameworks for multi-region, multi-tenant workloads. Partner cross-functionally with data scientists, data engineers, architects, and security teams to deliver end-to-end value. Detect and mitigate data-distribution drift to preserve model accuracy in real-world traffic. Stay current on AWS, MLOps, and generative-AI innovations; drive continuous improvement.

Responsibilities
Transform data-science prototypes into secure, highly available AWS services; choose and tune the appropriate algorithms, container images, and instance types. Run automated ML tests/experiments; document metrics, cost, and latency outcomes. Train, retrain, and monitor models with SageMaker Pipelines, Model Registry, and CloudWatch alarms. Build and maintain optimized data pipelines (Glue, Kinesis, Athena, Iceberg) feeding online/offline inference. Collaborate with product managers to refine ML objectives and success criteria; present results to executive stakeholders. Extend or contribute to internal ML libraries, SDKs, and infrastructure-as-code modules (CDK/Terraform).

Skills and qualifications
Primary technical skills: AWS SDK, SageMaker, Lambda, Step Functions. Machine-learning theory and practice (supervised/deep learning). DevOps & CI/CD (Docker, GitHub Actions, Terraform/CDK). Cloud security (IAM, KMS, VPC, GuardDuty). Networking fundamentals. Java, Spring Boot, JavaScript/TypeScript, and API design (REST, GraphQL). Linux administration and scripting. Bedrock & Anthropic LLM integration.
Secondary / tool skills: Advanced debugging and profiling. Hybrid-cloud management strategies. Large-scale data migration. Impeccable analytical and problem-solving ability; strong grasp of probability, statistics, and algorithms. Familiarity with modern ML frameworks (PyTorch, TensorFlow, Keras). Solid understanding of data structures, modeling, and software architecture. Excellent time-management, organizational, and documentation skills. Growth mindset and passion for continuous learning.

Preferred qualifications
10+ years of software experience. 3+ years in an ML-engineering or cloud-ML role (AWS focus). Proficient in Python (core), with working knowledge of Java or R. Outstanding communication and collaboration skills; able to explain complex topics to non-technical peers. Proven record of shipping production ML systems or contributing to OSS ML projects. Bachelor's (or higher) in Computer Science, Data Engineering, Mathematics, or a related field. AWS Certified Machine Learning - Specialty and/or AWS Certified Solutions Architect - Associate a strong plus.

Compensation, Benefits and Duration
Minimum Compensation: USD 48,000. Maximum Compensation: USD 168,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
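As a brief illustration of the Bedrock integration this posting mentions, here is a minimal sketch using boto3's bedrock-runtime client to invoke an Anthropic model. The model ID, region, and prompt are assumptions for illustration only; model access must be enabled in the AWS account, and request/response schemas vary by model family.

```python
# Minimal sketch: invoke an Anthropic Claude model through Amazon Bedrock.
# Assumes: `pip install boto3`, AWS credentials configured, and model access
# granted in the target region. Model ID and region below are illustrative.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",  # required field for Claude on Bedrock
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize what an ML pipeline does."}],
}

resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
    body=json.dumps(body),
)

# The response body is a stream of JSON; Claude returns a list of content blocks.
payload = json.loads(resp["body"].read())
print(payload["content"][0]["text"])
```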
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Hadoop Platform Engineer | Onsite | Dallas, TX

    Photon Group 4.3 company rating

    Remote

    Required Skills:

Platform Engineering:
Cluster Management: Expertise in designing, implementing, and maintaining high-volume Hadoop clusters, including components such as HDFS, YARN, and MapReduce. Collaborate with data engineers and data scientists to understand data requirements and optimize data pipelines.
Administration and Monitoring: Experience administering and monitoring Hadoop clusters to ensure high availability, reliability, and performance (a small scripted health-check sketch follows this listing). Experience troubleshooting and resolving issues related to Hadoop infrastructure, data ingestion, data processing, and data storage.
Security Implementation: Experience implementing and managing security measures within Hadoop clusters, including authentication, authorization, and encryption.
Backup and Disaster Recovery: Collaborate with cross-functional teams to define and implement backup and disaster recovery strategies for Hadoop clusters.
Performance Optimization: Experience optimizing Hadoop performance through fine-tuning configurations, capacity planning, and implementing performance monitoring and tuning techniques.
Automation and DevOps Collaboration: Work with DevOps teams to automate Hadoop infrastructure provisioning, deployment, and management processes.
Technology Adoption and Recommendations: Stay up to date with the latest developments in the Hadoop ecosystem. Recommend and implement new technologies and tools that enhance the platform.
Documentation: Experience documenting Hadoop infrastructure configurations, processes, and best practices.
Technical Support and Guidance: Provide technical guidance and support to other team members and stakeholders.

Admin:
User Interface Design: Relevant for designing interfaces for tools within the Hadoop ecosystem that provide self-service capabilities, such as Hadoop cluster management interfaces or job scheduling dashboards.
Role-Based Access Control (RBAC): Important for controlling access to Hadoop clusters, ensuring that users have appropriate permissions to perform self-service tasks.
Cluster Configuration Templates: Useful for maintaining consistent configurations across Hadoop clusters, ensuring that users follow best practices and guidelines.
Resource Management: Important for optimizing resource utilization within Hadoop clusters, allowing users to manage resources dynamically based on their needs.
Self-Service Provisioning: Pertinent for features that enable users to provision and manage nodes within Hadoop clusters independently.
Monitoring and Alerts: Essential for monitoring the health and performance of Hadoop clusters, providing users with insights into their cluster's status.
Automated Scaling: Relevant for automatically adjusting the size of Hadoop clusters based on workload demands.
Job Scheduling and Prioritization: Important for managing data processing jobs within Hadoop clusters efficiently.
Self-Service Data Ingestion: Applicable to features that facilitate users in ingesting data into Hadoop clusters independently.
Query Optimization and Tuning Assistance: Relevant for providing users with tools or guidance to optimize and tune their queries when interacting with Hadoop-based data.
Documentation and Training: Important for creating resources that help users understand how to use self-service features within the Hadoop ecosystem effectively.
Data Access Control: Pertinent for controlling access to data stored within Hadoop clusters, ensuring proper data governance.
Backup and Restore Functionality: Applicable to features that allow users to perform backup and restore operations for data stored within Hadoop clusters.
Containerization and Orchestration: Relevant for deploying and managing applications within Hadoop clusters using containerization and orchestration tools.
User Feedback Mechanism: Important for continuously improving self-service features based on user input and experience within the Hadoop ecosystem.
Cost Monitoring and Optimization: Applicable to tools or features that help users monitor and optimize costs associated with their usage of Hadoop clusters.
Compliance and Auditing: Relevant for ensuring compliance with organizational policies and auditing user activities within the Hadoop ecosystem.

Data Engineering:
ETL (Extract, Transform, Load) Processes: Proficiency in designing and implementing ETL processes for ingesting, transforming, and loading data into Hadoop clusters. Experience with tools like Apache NiFi.
Data Modeling and Database Design: Understanding of data modeling principles and database design concepts. Ability to design and implement effective data storage structures in Hadoop.
SQL and Query Optimization: Strong SQL skills for data extraction and analysis from Hadoop-based data stores. Experience optimizing SQL queries for efficient data retrieval.
Streaming Data Processing: Familiarity with real-time data processing and streaming technologies, such as Apache Kafka and Spark Streaming. Experience designing and implementing streaming data pipelines.
Data Quality and Governance: Knowledge of data quality assurance and governance practices. Implementing measures to ensure data accuracy, consistency, and integrity.
Workflow Orchestration: Experience with workflow orchestration tools (e.g., Apache Airflow) to manage and schedule data processing workflows. Automating and orchestrating data pipelines.
Data Warehousing Concepts: Understanding of data warehousing concepts and best practices. Integrating Hadoop-based solutions with traditional data warehousing systems.
Version Control: Proficiency in version control systems (e.g., Git) for managing and tracking changes in code and configurations.
Collaboration with Data Scientists: Collaborate effectively with data scientists to understand analytical requirements and support the deployment of machine learning models.
Data Security and Compliance: Implementing security measures within data pipelines to protect sensitive information. Ensuring compliance with data security and privacy regulations.
Data Catalog and Metadata Management: Implementing data catalog solutions to manage metadata and enhance data discovery. Enabling metadata-driven data governance.
Big Data Technologies Beyond Hadoop: Familiarity with other big data technologies beyond Hadoop, such as Apache Flink or Apache Beam.
Data Transformation and Serialization: Expertise in data serialization formats (e.g., Avro, Parquet) and transforming data between formats.
Data Storage Optimization: Optimizing data storage strategies for cost-effectiveness and performance.

Desired Skills:
Problem-Solving and Analytical Thinking: Strong analytical and problem-solving skills to troubleshoot complex issues in Hadoop clusters. Ability to analyze data requirements and optimize data processing workflows.
Collaboration and Teamwork: Collaborative mindset to work effectively with cross-functional teams, including data engineers, data scientists, and DevOps teams. Ability to provide technical guidance and support to team members.
Adaptability and Continuous Learning: Ability to adapt to changes in technology and industry trends within the Hadoop ecosystem, and willingness to continuously learn and upgrade skills to stay current.
Performance Monitoring and Tuning: Proactive approach to performance monitoring and tuning, ensuring optimal performance of Hadoop clusters. Ability to analyze and address performance bottlenecks.
Security Best Practices: Knowledge of security best practices within the Hadoop ecosystem.
Capacity Planning: Skill in capacity planning to anticipate and scale Hadoop clusters according to data processing needs.
Automation and Scripting: Strong scripting skills for automation (e.g., Python, Ansible) beyond shell scripting. Familiarity with configuration management tools for infrastructure automation.
Monitoring and Observability: Experience setting up comprehensive monitoring and observability tools for Hadoop clusters. Ability to proactively identify and address potential issues.
Networking Skills: Understanding of networking concepts relevant to Hadoop clusters.

Skills:
Technical Proficiency: Experience with Hadoop and big data technologies, including Cloudera CDH/CDP, Databricks, HDInsight, etc. Strong understanding of core Hadoop services such as HDFS, MapReduce, Kafka, Spark, Hive, Impala, HBase, Kudu, Sqoop, and Oozie. Proficiency in RHEL Linux operating systems, databases, and hardware administration.
Operations and Design: Operations, design, capacity planning, cluster setup, security, and performance tuning in large-scale enterprise Hadoop environments.
Scripting and Automation: Proficient in shell scripting (e.g., Bash, KSH) for automation.
Security Implementation: Experience setting up, configuring, and managing security for Hadoop clusters using Kerberos, with integration with LDAP/AD.
Problem Solving and Troubleshooting: Expertise in system administration and programming skills for storage capacity management, debugging, and performance tuning.
Collaboration and Communication: Collaborate with cross-functional teams, including data engineers, data scientists, and DevOps teams. Provide technical guidance and support to team members and stakeholders.
Additional skills: On-prem Hadoop configuration, performance, and tuning. Ability to manage very large clusters and understand scalability. Experience interfacing with multiple teams; many teams have self-service capabilities, so candidates should have experience managing this across large clusters. Hands-on experience with, and a strong understanding of, Hadoop architecture. Experience with Hadoop ecosystem components (HDFS, YARN, MapReduce), cluster management tools like Ambari or Cloudera Manager, and related technologies. Proficiency in scripting, Linux system administration, networking, and troubleshooting.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). Strong experience designing, implementing, and administering Hadoop clusters in a production environment. Proficiency in Hadoop ecosystem components such as HDFS, YARN, MapReduce, Hive, Spark, and HBase. Experience with cluster management tools like Apache Ambari or Cloudera Manager. Solid understanding of Linux/Unix systems and networking concepts. Strong scripting skills (e.g., Bash, Python) for automation and troubleshooting. Knowledge of database concepts and SQL. Experience with data ingestion tools like Apache Kafka or Apache NiFi. Familiarity with data warehouse concepts and technologies. Understanding of security principles and experience implementing security measures in Hadoop clusters. Strong problem-solving and troubleshooting skills, with the ability to analyze and resolve complex issues. Excellent communication and collaboration skills to work effectively with cross-functional teams. Relevant certifications such as Cloudera Certified Administrator for Apache Hadoop (CCAH) or Hortonworks Certified Administrator (HCA) are a plus.

Compensation, Benefits and Duration
Minimum Compensation: USD 50,000. Maximum Compensation: USD 175,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is not available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
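As a small illustration of the scripted cluster administration this posting emphasizes, here is a hedged Python sketch that spot-checks HDFS capacity via the standard `hdfs dfsadmin -report` command. It assumes the `hdfs` CLI is on PATH on an edge node and that the user may run dfsadmin (typically the HDFS superuser); the parsed field names reflect common report output and may differ slightly across distributions.

```python
# Minimal sketch: a cluster health spot-check an admin might script.
# Assumes the `hdfs` CLI is available and the caller may run `dfsadmin -report`.
import subprocess

def hdfs_usage_report() -> dict:
    """Run `hdfs dfsadmin -report` and pull out summary capacity fields."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    summary = {}
    for line in out.splitlines():
        # Summary lines typically look like "DFS Used%: 42.17%".
        for field in ("Configured Capacity", "DFS Used%", "Live datanodes"):
            if line.strip().startswith(field):
                summary[field] = line.split(":", 1)[1].strip()
    return summary

if __name__ == "__main__":
    for key, value in hdfs_usage_report().items():
        print(f"{key}: {value}")
```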
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Platform Engineer - Las Vegas, NV

    Photon Group 4.3 company rating

    Remote

    Platform Engineer (combination of release engineering and monitoring: Kibana, Dynatrace, Apigee)

Cloud Engineer requirements:
• Strong experience with core AWS services: VPC, EC2, S3, IAM, Route 53, Backups, EKS, SSO, MSK, Security Hub, GuardDuty, and related security/compliance tooling.
• Advanced networking knowledge, including hands-on experience with AWS Transit Gateway (TGW) and Direct Connect: setup, troubleshooting, and hybrid connectivity.
• Observability & Monitoring: experience with Amazon CloudWatch, Grafana, OpenTelemetry (OTEL), and proactive alerting and telemetry configuration.
• Infrastructure as Code & Automation: proficient in Terraform, GitLab CI/CD, and shell scripting for end-to-end automation.
• Container Orchestration: solid understanding of Kubernetes (EKS) and the Istio service mesh for traffic management, security, and observability.
• Security & Patching: experienced in patching and vulnerability remediation in cloud-native and containerized environments, aligned with best practices and compliance.
• Programming & Scripting: hands-on Python development experience for scripting, automation, and infrastructure tooling.
• Problem-solving mindset with the ability to work efficiently in fast-paced Agile/Scrum environments and collaborate across cross-functional teams.

Compensation, Benefits and Duration
Minimum Compensation: USD 33,000. Maximum Compensation: USD 118,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • AI Engineer - Dallas, TX

    Photon Group 4.3 company rating

    Remote

    We are seeking a highly skilled AI Engineer proficient in Python to lead the technical development of our Agentic AI platform. In this role, you will move beyond simple prompt engineering to build sophisticated autonomous systems. You will be responsible for designing the architecture that allows agents to plan multi-step tasks, access external databases via RAG, and interact with third-party software through function calling (see the sketch after this listing). The ideal candidate treats LLMs as components within a larger, robust software system, prioritizing reliability, scalability, and observability.

Key Responsibilities
Agent Orchestration: Build and maintain complex agentic workflows using frameworks like LangGraph, CrewAI, or AutoGen.
Tool & Skill Integration: Develop Python-based tools and "plugins" that agents can invoke to perform real-world actions (e.g., querying SQL databases, interacting with APIs, or executing code).
Advanced RAG Pipelines: Architect and optimize Retrieval-Augmented Generation (RAG) systems using vector databases to provide agents with long-term memory and domain-specific knowledge.
Reasoning & Planning Logic: Implement and fine-tune reasoning patterns such as ReAct (Reason + Act), Chain-of-Thought, and Plan-and-Solve to improve agent reliability.
System Evaluation (Evals): Build automated testing frameworks to measure agent performance, accuracy, and "drift" using tools like LangSmith or custom evaluation harnesses.
Performance Optimization: Optimize for latency and cost by managing token usage, implementing intelligent caching, and selecting the right model for the right task.

Technical Requirements
Expert Python Skills: Deep experience in asynchronous Python, Pydantic for data validation, and FastAPI for building robust service layers.
AI Frameworks: Hands-on experience with LangChain, LlamaIndex, or specialized orchestration libraries.
LLM Expertise: Deep understanding of LLM capabilities (OpenAI, Anthropic, Gemini) and local model deployment (Ollama, vLLM).
Data Infrastructure: Proficiency with vector databases (e.g., Pinecone, Weaviate, Milvus, or Chroma) and traditional relational databases (PostgreSQL).
Engineering Best Practices: Experience with CI/CD, Docker, and monitoring tools to ensure AI agents are production-ready, not just "demo-ready."

Preferred Qualifications
Experience with multi-agent systems where different agents have specialized roles and hand-off protocols. Contributions to open-source AI projects or a strong portfolio of Agentic AI experiments on GitHub. Knowledge of fine-tuning techniques (LoRA, QLoRA) for specific domain tasks.

Compensation, Benefits and Duration
Minimum Compensation: USD 47,000. Maximum Compensation: USD 166,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is not available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
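To make the function-calling responsibility concrete, here is a minimal illustrative sketch of exposing a Python function as a tool via the OpenAI chat API. The tool, its schema, and the model name are invented for illustration; frameworks like LangGraph or CrewAI wrap this same pattern in higher-level abstractions.

```python
# Minimal tool-calling sketch: the model requests a Python "tool",
# we execute it, and feed the result back for a final answer.
# Assumes: `pip install openai` and OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def get_order_status(order_id: str) -> str:
    """Hypothetical backend lookup the agent can invoke."""
    return json.dumps({"order_id": order_id, "status": "shipped"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 42?"}]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]  # assumes the model chose the tool

# Execute the requested tool and return its output to the model.
result = get_order_status(**json.loads(call.function.arguments))
messages += [first.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
print(final.choices[0].message.content)
```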
    $78k-118k yearly est. Auto-Apply 14d ago
  • Backend/API Engineer - Node.js - Onshore

    Photon Group 4.3 company rating

    Remote

    API & Backend Development: Design, develop, and maintain RESTful and/or GraphQL APIs using Node.js. Build scalable, secure, and performant backend services and microservices. Integrate third-party systems, internal services, and databases. Use best practices for error handling, logging, caching, and performance optimization.

Architecture & System Design: Participate in architectural planning and technical design discussions. Implement clean, maintainable, well-documented code following industry standards. Contribute to system design decisions around scalability, reliability, and security.

Database & Data Layer: Work with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Redis). Design efficient schemas, write optimized queries, and implement ORM/ODM solutions.

Collaboration & Deployment: Collaborate with frontend engineers, QA, product managers, and DevOps teams. Participate in code reviews, sprint planning, and technical documentation. Support CI/CD processes and deployment to cloud environments (AWS, GCP, Azure).

Monitoring & Maintenance: Implement monitoring, logging, and alerting using tools like Prometheus, Grafana, Datadog, or similar. Diagnose and resolve performance bottlenecks, outages, and production issues. Ensure reliability, security, and compliance standards across backend systems.

Qualifications
Required: 3-5+ years of hands-on backend development experience. Strong proficiency in Node.js, JavaScript/TypeScript, and modern backend frameworks (e.g., Express, NestJS, Fastify). Experience building and maintaining APIs at scale. Solid understanding of microservices, serverless architectures, and distributed systems. Experience with cloud platforms (AWS, Azure, or GCP). Strong knowledge of databases (SQL and NoSQL). Experience with Git, CI/CD pipelines, and automated testing. Familiarity with API documentation tools (Swagger/OpenAPI).
Preferred: Experience with event-driven architectures (Kafka, RabbitMQ, SNS/SQS). Knowledge of containerization and orchestration (Docker, Kubernetes). Experience with OAuth2, JWT, and security best practices. Familiarity with performance tuning and application profiling. Experience with infrastructure-as-code (Terraform, CloudFormation).

Compensation, Benefits and Duration
Minimum Compensation: USD 44,000. Maximum Compensation: USD 154,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
    $78k-118k yearly est. Auto-Apply 44d ago
  • Backend Engineer - Python - Dallas, TX

    Photon Group 4.3 company rating

    Remote

    We are looking for a Python Backend Engineer to join our team building Agentic AI products. You will be responsible for developing the robust, scalable infrastructure that allows autonomous agents to interact with the real world. Your work will involve creating secure execution environments, managing complex tool-integration layers, and optimizing the retrieval-augmented generation (RAG) pipelines that give our agents their "intelligence." You will bridge the gap between raw model outputs and reliable, production-ready software actions.

Key Responsibilities
Agent Logic & Tooling: Develop and maintain the backend "tools" (APIs, scrapers, database connectors) that AI agents use to perform tasks.
Orchestration Implementation: Use frameworks like LangChain, LangGraph, or CrewAI to implement complex reasoning chains and multi-agent coordination.
RAG Pipeline Engineering: Build and optimize data ingestion and retrieval systems using vector databases, ensuring the agent has the right context at the right time.
Asynchronous Task Management: Manage long-running AI reasoning cycles using asynchronous Python (FastAPI/Asyncio) and task queues like Celery or Redis (see the sketch below).
API Architecture: Design and implement secure, high-performance REST or GraphQL APIs that serve as the interface between the agentic backend and the frontend.
Safety & Guardrails: Implement backend-level validation and guardrails to ensure that autonomous agent actions remain within secure and ethical boundaries.

Technical Requirements
Python Expertise: 8+ years of professional experience with Python, specifically with FastAPI, Pydantic, and Asyncio.
AI Frameworks: Hands-on experience with LangChain or LlamaIndex.
Database Management: Proficiency in PostgreSQL and experience with vector databases.
Cloud & DevOps: Experience deploying containerized applications using Docker and Kubernetes on AWS, Azure, or GCP.
Scalability: Understanding of distributed systems and how to handle the high latency and compute requirements of LLM-based applications.
Version Control: Mastery of Git and CI/CD best practices.

Preferred Qualifications
Knowledge of prompt engineering from a programmatic perspective (dynamic prompt templating). Familiarity with observability tools for AI, such as LangSmith or Arize Phoenix.

Compensation, Benefits and Duration
Minimum Compensation: USD 38,000. Maximum Compensation: USD 135,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is not available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
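As a small illustration of the asynchronous Python stack this posting names (FastAPI, Pydantic, Asyncio), here is a hedged sketch of an endpoint that kicks off a long-running "agent" task without blocking the request. The task body and route names are invented placeholders; production code would use a real queue such as Celery rather than an in-memory dict.

```python
# Minimal sketch: FastAPI + asyncio for a long-running "agent" job.
# Assumes: `pip install fastapi uvicorn`; run with `uvicorn app:app`.
import asyncio
import uuid

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
jobs: dict[str, str] = {}  # job_id -> status (stand-in for Redis/Celery state)

class TaskRequest(BaseModel):
    goal: str

async def run_agent(job_id: str, goal: str) -> None:
    """Placeholder for a multi-step reasoning cycle."""
    jobs[job_id] = "running"
    await asyncio.sleep(2)  # simulate slow LLM / tool calls
    jobs[job_id] = f"done: {goal!r} completed"

@app.post("/tasks")
async def create_task(req: TaskRequest):
    job_id = uuid.uuid4().hex
    jobs[job_id] = "queued"
    asyncio.create_task(run_agent(job_id, req.goal))  # fire-and-forget
    return {"job_id": job_id}

@app.get("/tasks/{job_id}")
async def get_task(job_id: str):
    if job_id not in jobs:
        raise HTTPException(status_code=404, detail="unknown job")
    return {"job_id": job_id, "status": jobs[job_id]}
```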
    $78k-118k yearly est. Auto-Apply 14d ago
  • GCP Engineer - Springfield, MO

    Photon Group 4.3 company rating

    Remote

    We are seeking a skilled GCP Engineer to design, implement, and manage scalable cloud solutions on the Google Cloud Platform. The ideal candidate will have hands-on experience with GCP services, strong DevOps or infrastructure-as-code skills, and a solid understanding of cloud-native application deployment and management.

Key Responsibilities:
Design and implement cloud infrastructure solutions using GCP services (e.g., Compute Engine, Cloud Run, GKE, Cloud Functions). Develop automation scripts using Terraform, Deployment Manager, or Cloud Build for infrastructure provisioning and CI/CD. Implement and manage cloud security, IAM policies, and VPC networking. Monitor and optimize GCP workloads for performance, availability, and cost. Support cloud-native application deployments using containers (e.g., Docker, Kubernetes). Collaborate with software engineers, architects, and DevOps teams to deliver scalable and reliable systems. Set up and manage observability tools (e.g., Cloud Logging, Cloud Monitoring, Stackdriver, Prometheus/Grafana).

Required Skills and Qualifications:
3+ years of experience with Google Cloud Platform. Proficiency in GCP core services: Compute, Networking, IAM, Storage, BigQuery, and Pub/Sub (a small Pub/Sub sketch follows this listing). Strong experience in Infrastructure as Code (IaC) with Terraform or GCP Deployment Manager. Hands-on experience with Docker and Kubernetes (GKE preferred). Understanding of CI/CD pipelines and tools such as Jenkins, GitLab CI, Cloud Build, etc. Strong knowledge of cloud security best practices and compliance standards.

Preferred Qualifications:
Google Cloud Professional Certification (e.g., Cloud Engineer, DevOps Engineer, or Architect). Experience with hybrid cloud or multi-cloud deployments. Familiarity with serverless architectures (Cloud Functions, Cloud Run). Experience with Cloud SQL, Firestore, or other GCP-managed databases. Familiarity with data engineering or AI/ML services on GCP is a plus.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Compensation, Benefits and Duration
Minimum Compensation: USD 48,000. Maximum Compensation: USD 168,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
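As a brief illustration of one GCP core service this posting lists, here is a hedged Python sketch that publishes a message to Pub/Sub. It assumes the google-cloud-pubsub client library and application-default credentials; the project and topic IDs are placeholders, and the topic must already exist.

```python
# Minimal sketch: publish a message to Google Cloud Pub/Sub.
# Assumes: `pip install google-cloud-pubsub`, application-default credentials,
# and an existing topic; project and topic IDs below are placeholders.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "deploy-events")  # hypothetical

# publish() returns a future; result() blocks until the server acknowledges
# and returns the message ID. Extra keyword args become string attributes.
future = publisher.publish(topic_path, data=b"rollout finished", env="prod")
print(f"published message id: {future.result()}")
```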
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • BI Engineer (Power BI) - Dallas, TX

    Photon Group 4.3 company rating

    Remote

    As a BI Engineer specializing in Power BI, you will play a key role in the design, development, and deployment of modern reporting and analytical solutions on the Snowflake and Power BI platform. You will be instrumental in migrating existing reports and dashboards from OBIEE to Power BI, ensuring data accuracy, performance, and user satisfaction. Your expertise in data modeling, ETL/ELT processes, and Power BI development will be critical to the success of this strategic initiative.

Responsibilities:
Collaborate with business stakeholders, data analysts, and other team members to understand reporting requirements and translate them into robust Power BI solutions. Design and develop scalable and efficient data models in Power BI, leveraging best practices for performance and usability. Participate in the migration of existing reports and dashboards from OBIEE to Power BI, ensuring data integrity and functional parity. Develop and maintain ETL/ELT processes to load and transform data from Snowflake and other relevant data sources into Power BI datasets (a small illustrative sketch follows this listing). Create interactive and visually compelling dashboards and reports in Power BI that empower end users with self-service analytics capabilities. Implement and maintain data quality checks and validation processes to ensure the accuracy and reliability of reporting data. Optimize Power BI report performance and troubleshoot any performance bottlenecks. Contribute to the development and maintenance of technical documentation for Power BI solutions. Stay up to date with the latest Power BI features and best practices and recommend innovative solutions to enhance our reporting capabilities. Participate in testing, deployment, and ongoing support of Power BI applications. Work closely with the Snowflake team to ensure seamless integration between the data warehouse and Power BI.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. Proven experience (6-10+ years) as a BI Engineer or in a similar role with a strong focus on Power BI development. Hands-on experience designing and developing data models within Power BI Desktop. Proficiency in writing DAX queries for calculations and data manipulation within Power BI. Experience with data extraction, transformation, and loading (ETL/ELT) processes. Familiarity with data warehousing concepts and principles. Strong SQL skills and experience working with relational databases (experience with Snowflake is a significant plus). Experience migrating reporting solutions from legacy BI platforms (OBIEE experience is highly desirable). Excellent analytical and problem-solving skills with a keen attention to detail. Strong communication and collaboration skills, with the ability to effectively interact with both technical and business stakeholders. Experience with version control systems (e.g., Git).

Preferred Skills:
Experience with advanced Power BI features such as Power BI Service administration, dataflows, and Power BI Embedded. Knowledge of scripting languages such as Python for data manipulation or automation. Familiarity with cloud data platforms (Snowflake preferred). Experience with agile development methodologies. Understanding of financial services or investment management data.

Compensation, Benefits and Duration
Minimum Compensation: USD 38,000. Maximum Compensation: USD 134,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is not available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
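To ground the ETL/ELT and Python-scripting skills this posting mentions, here is a minimal hedged sketch of an extract-transform-load step in Python with pandas. It uses an in-memory sqlite3 database as a self-contained stand-in; in this role the source would be Snowflake and the shaped table would feed a Power BI dataset, and all table and column names are invented.

```python
# Minimal ETL sketch: extract rows, transform, and load a reporting table.
# Assumes: `pip install pandas`; sqlite3 ships with Python.
import sqlite3

import pandas as pd

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 120.0), ("west", 80.0), ("east", 40.0)])

# Extract + transform: aggregate to the grain a dashboard needs.
df = pd.read_sql_query(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region", con
)

# Load: write the shaped table back for downstream reporting.
df.to_sql("sales_by_region", con, index=False, if_exists="replace")
print(df)
```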
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Platform Engineer - US

    Photon Group 4.3 company rating

    Remote

    We are seeking a skilled Platform Engineer with expertise in Terraform, Golang development, AWS, SDLC automation, and Kubernetes to join our dynamic team. As a Platform Engineer, you will play a crucial role in designing, building, and maintaining our infrastructure and automation solutions, ensuring reliability, scalability, and security across our cloud environment.

Key Responsibilities:
Design, implement, and maintain infrastructure-as-code (IaC) solutions using Terraform for AWS environments. Develop and optimize automation scripts and tools in Golang to enhance our SDLC processes. Manage and deploy containerized applications using Kubernetes, ensuring high availability and performance. Collaborate with cross-functional teams to define infrastructure requirements and streamline deployment pipelines. Implement monitoring, logging, and alerting systems to ensure proactive management of our cloud infrastructure. Participate in troubleshooting and resolving infrastructure-related issues in production and non-production environments. Stay updated with industry trends and best practices to continuously improve our infrastructure and deployment processes.

Skills and Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience). 5+ years of experience as a Golang developer, with a strong understanding of software development lifecycle (SDLC) practices. Proven experience with Terraform in large-scale AWS environments, designing and maintaining infrastructure as code. Deep knowledge of Kubernetes, including deployment, scaling, and management of containerized applications. Proficiency in AWS services and solutions, including EC2, S3, IAM, VPC, and CloudFormation. Experience designing and implementing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or similar. Excellent problem-solving and analytical skills, with a proactive approach to identifying and addressing challenges. Strong communication skills and the ability to work effectively in a collaborative team environment.

Preferred Qualifications:
AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer). Experience with other cloud providers (Azure, Google Cloud Platform). Familiarity with monitoring tools such as Prometheus, Grafana, ELK Stack, etc. Prior experience with Agile methodologies and working in Agile teams.
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Front End Engineer - Angular - Dallas, TX

    Photon Group 4.3 company rating

    Remote

    We are seeking a highly skilled Angular Developer to build the next generation of AI-driven user interfaces. You will be responsible for creating a seamless, high-performance frontend that allows users to collaborate with autonomous agents. Your work will focus on visualizing complex agentic workflows, such as the agent's reasoning steps, tool usage, and long-running autonomous tasks, using modern Angular features like Signals and RxJS to ensure a responsive, "live" experience.

Key Responsibilities
Real-time Streaming Interfaces: Implement robust handling for Server-Sent Events (SSE) and WebSockets to display real-time "token streaming" and agent status updates as they happen.
Complex State Management: Utilize Angular Signals or NgRx to manage the highly dynamic states of an AI agent (e.g., Idle, Planning, Fetching Data, Executing Code, Awaiting Approval).
AI Observability UI: Build intuitive dashboards that visualize "Chain-of-Thought" reasoning, allowing users to see the references, citations, and logic used by the agent to reach a conclusion.
Human-in-the-Loop (HITL) Components: Develop specialized UI components that allow users to pause, edit, or approve an agent's proposed plan before it executes.
Performance Optimization: Ensure the UI remains performant even when handling large volumes of streaming data and complex visualizations.
Collaboration with AI Engineers: Work closely with backend and AI engineers to define JSON schemas and API contracts that support the unique needs of agentic interaction.

Technical Requirements
Angular Expertise: 8+ years of experience with Angular (v16/17+ preferred). Strong mastery of Standalone Components, Signals, and the provideRouter/provideHttpClient patterns.
RxJS Mastery: Deep understanding of reactive programming to handle complex asynchronous data streams and event orchestration.
Modern CSS/Styling: Proficiency in Tailwind CSS or SCSS to build "Generative UI" components that can adapt their layout based on the agent's output.
State Management: Proven experience with NgRx, Akita, or Signal-based state management in enterprise-scale applications.
API Integration: Experience working with RESTful APIs and real-time streaming protocols. Familiarity with OpenAI/Anthropic API structures is a plus.
Testing: Commitment to quality through unit testing and E2E testing (Cypress).

Preferred Qualifications
Experience with Canvas or SVG-based visualizations to show agent decision trees or multi-agent handoffs. Familiarity with Web Workers for handling heavy client-side processing without blocking the UI thread. A portfolio showcasing AI-integrated products or highly interactive real-time dashboards.

Compensation, Benefits and Duration
Minimum Compensation: USD 38,000. Maximum Compensation: USD 133,000. Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good-faith estimate for the role. Medical, vision, and dental benefits, a 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is not available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
    $78k-118k yearly est. Auto-Apply 14d ago
  • Gen AI Engineer - McKinney, TX

    Photon Group 4.3company rating

    Remote

    Responsibilities Conduct extensive research to identify and curate relevant data sources for prompt development. Analyze, design, develop, and refine diverse prompts tailored to specific AI tasks and applications. Ensure the prompts are intuitive for a productive conversation, eliciting desired responses and achieving outcomes. Optimize the AI prompt generation process to enhance overall system performance. Integrate prompt designs into natural language processing (NLP) models. Ensure seamless integration of prompt engineering strategies into existing AI systems. Develop and maintain RESTful APIs using Python and relevant frameworks (e.g., Django, Flask). Design, develop, and maintain user interfaces using React and related technologies (e.g., Redux, Next.js). Conduct thorough evaluations of prompt effectiveness through data analysis and user feedback. Iterate on prompt designs based on performance metrics, aiming for continual improvement in conversational capabilities. Develop and maintain quantitative key performance indicators to evaluate the effectiveness of AI prompts. Draft and distribute reports on prompt performance to identify areas for improvement. Work closely with UX/UI designers, product managers, and other stakeholders to align prompt design with user experience goals and overall project objectives. Stay ahead of advancements in AI, natural language processing, and machine learning, and apply them proactively to our business objectives. Document prompt design strategies, methodologies, and outcomes for internal reference and knowledge sharing. Communicate effectively with team members and stakeholders, presenting findings and recommendations related to prompt engineering. Collaborate with business stakeholders, the product team, and developers to understand use cases, project requirements, and objectives. Collaborate with content creators, product teams, and data scientists to ensure prompt accuracy and alignment with company goals and user needs. Requirements Bachelor's degree, preferably in computer science, engineering, or a related field; Master's degree preferred. 3+ years of experience in software development; proficiency in programming languages such as Python, Java, or similar is essential for implementing and integrating prompt engineering solutions into existing AI systems. 2+ years developing applications using AI, natural language processing, and speech recognition. Experience with Python and at least one Python web framework (e.g., Django, Flask). Experience with RESTful APIs and API integration. Skills in data analysis and statistical methods are valuable for assessing the performance of prompts, analyzing user interactions, and making informed adjustments to enhance the overall system. Comprehensive understanding of natural language processing techniques and tools, machine learning principles, and AI-generated content development. Conversational AI experience: any of the following would be good: Kore.ai, Google Dialogflow, Amelia, Yellow.ai. Familiarity with popular conversational AI platforms and frameworks (e.g., TensorFlow, PyTorch, or Hugging Face Transformers) is beneficial for leveraging pre-trained models and integrating prompt engineering strategies effectively. Proven work experience as a Prompt Engineer or in a similar role. High-level familiarity with the architecture and operation of large language models. Exceptional verbal and written communication skills for effective prompt design and collaboration with users, engineers, and product teams. Experience in creative writing or content creation is a plus. Compensation, Benefits and Duration Minimum Compensation: USD 42,000 Maximum Compensation: USD 147,000 Compensation is based on the actual experience and qualifications of the candidate. The above is a reasonable and good faith estimate for the role. Medical, vision, and dental benefits, 401k retirement plan, variable pay/incentives, paid time off, and paid holidays are available for full-time employees. This position is available for independent contractors. No applications will be considered if received more than 120 days after the date of this post.
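
    As a rough illustration of the combined prompt-engineering and REST API duties above, here is a minimal sketch assuming Flask; the template, route, and field names are hypothetical, and the endpoint returns the rendered prompt rather than calling a real model.

```python
# Hypothetical Flask endpoint that renders a prompt template (sketch only).
from flask import Flask, jsonify, request

app = Flask(__name__)

PROMPT_TEMPLATE = (
    "You are a support assistant for {product}.\n"
    "Answer the user's question concisely.\n"
    "Question: {question}"
)

@app.post("/prompts/render")
def render_prompt():
    payload = request.get_json(force=True)
    prompt = PROMPT_TEMPLATE.format(
        product=payload.get("product", "our product"),
        question=payload["question"],
    )
    # A production service would forward `prompt` to an LLM and log KPIs.
    return jsonify({"prompt": prompt})
```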
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Conversational Engineer - US

    Photon Group 4.3company rating

    Remote

    Experience working with NLP and Machine Learning. Expert experience working with GenAI and LLM tools (e.g., GPT, BERT). Tangible experience working with ETL, relational databases, and NoSQL stores such as MongoDB. Experience with Vertex AI and strong experience with voice-based conversational AI. 5+ years of Python. Nuance IVR/Microsoft experience.
    $78k-118k yearly est. Auto-Apply 60d+ ago
  • Backend Rails Engineer (Remote)

    Five To Thrive 3.7company rating

    Chicago, IL

    The Five to Nine platform is shaking up the future of work by helping companies scale their event programs through data. We're looking for a Backend Rails Engineer who's passionate about transforming workplace programs and fostering joy with our users. You'll be helping our customers get recognized for their meaningful impact on workplace culture as program managers. In just three years, we've raised from top VCs such as Kapor Capital and Morgan Stanley, increased our user base by over 90%, and been featured in Vice, Business Insider, and more. Our B2B SaaS platform helps customers like MasterClass, Upwork, and Yahoo manage and evaluate their company events and programs. We're changing the way company programs are run - from Hispanic Heritage Month to team socials - by passing the mic back to employees to voice their feedback and giving ERG leaders and HR managers visibility into their cultural initiatives. The employee engagement space is huge and especially relevant right now - a $74B market growing 12% year over year. While companies are spending a ton on scaling their employee engagement programs, less than 10% are actually measuring the ROI of their efforts. That's where Five to Nine comes in. Our vision is to be a one-stop shop for workplace leaders to design impactful programs, connect with a community of program leaders, and access resources to create a radically transparent culture that develops, connects, and upskills teams to be the best in show. Job Description WHO YOU ARE: You're a self-starter and enjoy taking ownership. You can effectively manage tasks and projects and problem-solve. You're a dynamic engineer who is an effective communicator and enjoys an environment of collaboration. You want to have a big impact. We're looking for someone who is proficient in Ruby; experience with Rails as well would be great but is not strictly necessary. YOUR DAY-TO-DAY: As a Backend Rails Engineer, you'll be developing our core product, reporting to the Head of Engineering and working alongside our engineering and product teams. Qualifications Technical Requirements: Mid-level to senior experience in professional full-stack web development. Solid understanding of the Ruby programming language and Rails framework. Solid understanding of JavaScript, including experience with React. Solid understanding of RESTful systems and the principles of good API design. Understanding of Postgres or equivalent. Previous experience maintaining production applications. Focus on writing clear, maintainable, tested code. Experience with Git, continuous integration, and regular deployments. Excellent written communication skills and a diligent ability to contribute to the team by performing code reviews. Additional Information If this sounds like you (and even if you don't think you meet 100% of the requirements), we'd love to hear from you. You'll be joining an exceptionally talented and motivated team, helping us drive demand for the Five to Nine platform. We are creating massive value for our customers and we're just getting started! This position is remote. All your information will be kept confidential according to EEO guidelines.
    $64k-88k yearly est. 3d ago
  • Platform Engineer - US

    Photon Group 4.3company rating

    New Jersey

    As a Lead Software Engineer at JPMorgan Chase within the Corporate and Investment Bank's Application Infrastructure Modernization team (Messaging & Streaming), you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. Job responsibilities Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems Develops secure high-quality production code, and reviews and debugs code written by others Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies Adds to team culture of diversity, equity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 5+ years applied experience Hands-on practical experience delivering system design, application development, testing, and operational stability Experience in designing, developing, and maintaining a high-performance messaging platform using Kafka, Flink, Amazon Kinesis, or similar technologies Experience designing, implementing, and maintaining infrastructure as code (IaC) solutions using Terraform for AWS environments Proficiency in Java Proficiency in automation and continuous delivery methods Proficient in all aspects of the Software Development Life Cycle Advanced understanding of agile methodologies such as CI/CD, Application Resiliency, and Security Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.) In-depth knowledge of the financial services industry and its IT systems Preferred qualifications, capabilities, and skills Familiarity with provisioning and public cloud native components Platforming solutions for general-purpose consumption within the enterprise is a plus
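
    For orientation, the messaging-platform work described above (Kafka, Flink, Kinesis) centers on reliable consume/produce loops. A minimal consumer sketch follows, using the confluent-kafka Python client for consistency with the other examples on this page (the posting's stack is Java); broker address, group id, and topic name are assumptions.

```python
# Illustrative Kafka consumer loop (confluent-kafka), not production code.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumption: local broker
    "group.id": "trade-events-demo",        # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["trade-events"])  # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue  # no message within the timeout
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: {msg.value()!r}")
finally:
    consumer.close()  # commit offsets and leave the group cleanly
```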
    $76k-109k yearly est. Auto-Apply 60d+ ago
  • MRO Engineer

    DHD Consulting 4.3company rating

    New Jersey

    The MRO Engineer is responsible for leading new equipment development, installation, and validation projects while supporting continuous improvement of existing production systems. This role serves as the technical point of contact for all engineering-related issues, ensuring optimal equipment performance, cost efficiency, and compliance with ISO 13485 standards. Key Responsibilities - Initiate and manage projects for new equipment development and existing equipment improvement. - Connect and coordinate cross-functional teams to ensure project success. - Prepare project reports, including financial evaluations such as ROI, efficiency, and cost-effectiveness. - Plan and lead equipment installation, setup, validation, and personnel training. - Develop and manage annual project plans and budgets. - Communicate effectively across all organizational levels to ensure project alignment. - Maintain an updated equipment list, project progress, and payment status; share updates with Finance and Quality Management teams. - Maintain equipment manuals, SOPs, and training documentation, and ensure training records are properly archived. - Prepare and update Standard Operating Procedures (SOPs) for both new and existing equipment. - Support the production team in resolving equipment-related issues during and outside of operating hours. - Take business trips as needed to support project execution or vendor coordination. Qualifications & Requirements - Bilingual in Korean and English preferred - 5 years of experience in a related field preferred but not required. - Intermediate understanding of ISO 13485 design and validation requirements. - Proficiency with Microsoft Office Suite - Proficiency with computer-aided drafting (CAD) software. - Proven leadership and cross-functional collaboration skills. - Strong verbal and written communication skills. - Excellent budgeting and cost analysis abilities. - Exceptional organizational and time management skills. - Strong analytical and problem-solving capabilities. Travel: As required for project execution
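
    The financial evaluations this role prepares (ROI, payback, cost-effectiveness) reduce to simple arithmetic; a toy example with assumed figures:

```python
# Toy ROI / payback calculation with hypothetical numbers (illustration only).
annual_savings = 42_000.0   # assumed labor + scrap savings per year
equipment_cost = 120_000.0  # assumed installed equipment cost

# ROI over a 5-year horizon: (total gain - cost) / cost.
roi_pct = (annual_savings * 5 - equipment_cost) / equipment_cost * 100
payback_years = equipment_cost / annual_savings

print(f"5-year ROI: {roi_pct:.1f}%")                 # -> 75.0%
print(f"Payback period: {payback_years:.1f} years")  # -> 2.9 years
```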
    $79k-113k yearly est. 60d+ ago
  • Backend Engineer

    Allhealth Network 3.8company rating

    Remote

    All.Health is hiring a Backend Engineer to architect and build scalable systems for ingesting, processing, and managing diverse health data sources - from continuous wearable signals to episodic clinical records. You will work on the backbone of a platform that combines real-time data, AI insights, and personalized care pathways to transform healthcare delivery. You should be comfortable working with regulated health data (PHI/PII), building privacy-aware and HIPAA-compliant systems, and have strong experience working with event-driven architectures, secure APIs, and health data standards (e.g., FHIR, HL7, BLE protocols). Full-stack experience, especially building internal tools or patient/provider-facing dashboards, is a strong plus. Core Responsibilities Design and implement scalable backend services that ingest, transform, and persist diverse health data (wearable sensor streams, EHRs, user inputs). Build APIs and data pipelines for real-time and batch processing, ensuring data integrity, security, and compliance. Integrate with external systems and protocols including FHIR APIs, BLE-based devices, Apple HealthKit, and Google Fit. Ensure backend services are designed for high availability, reliability, and low latency, especially for real-time clinical decision support. Collaborate with security and compliance teams to ensure HIPAA-compliant handling of PHI/PII data. Work cross-functionally with product, clinical, and AI teams to align backend capabilities with user-facing experiences and clinical workflows. Contribute to CI/CD pipelines, observability tooling, and data quality monitoring. Support and mentor teammates across engineering functions, especially around secure coding and data access best practices. Requirements Bachelor's or Master's degree in Computer Science or a related field. 5+ years of experience delivering products end-to-end, from ideation through planning and scoping to implementation. Full-stack experience, including building dashboards, admin tools, or data visualizations with React/Next.js, Tailwind, etc. Proven software architecture experience with patterns of large, high-scale applications. Experience integrating 3rd-party health platforms and APIs, including EHRs, remote monitoring tools, or insurance data exchanges. Background in working with data warehousing, analytics platforms, or data lakes for longitudinal health data. Exposure to clinical workflows, remote patient monitoring, or population health management systems. Familiarity with data governance, data provenance, and auditing frameworks for regulated environments. Passion for writing clean, maintainable, and testable code. Excellent programming and computer science fundamentals, and a deep love for technology. Ability to adapt and learn new skills, coupled with a resourceful, can-do attitude. Outstanding attention to detail. Proficient experience with Git, GitHub, Jira, or similar project enablement tools. Technical Requirements Strong Python backend engineer (FastAPI, Flask, Django) with an understanding of distributed systems, microservices, and asynchronous workflows (Celery). Solid database experience with PostgreSQL or MySQL; comfortable with Kafka or similar event-streaming systems and enforcing data contracts and idempotency. Competent in test architecture using PyTest (fixtures, contract tests), Hypothesis (property-based testing), and Locust (performance/load testing). Skilled in observability and reliability engineering - instrumenting metrics, traces, and logs to measure and improve real-world system behavior. Hands-on with Kubernetes, containerized CI/CD pipelines, and cloud environments (Azure, AWS, GCP). Familiar with OAuth2, TLS certificate management, and secure service communication. Understands security and compliance principles (HITRUST, HIPAA, FDA 510(k)), encryption, and evidence generation for audits. Experience with HL7 or FHIR interfaces - message parsing, composition, and secure data exchange in healthcare environments. Systems-oriented mindset: treats testing, automation, and monitoring as first-class engineering concerns; pragmatic, collaborative, and effective across the full stack. Experience with Java or other JVM-based languages preferred. Experience with Rust is a plus. What We're Looking For Builder mindset with a passion for delivering robust, scalable, and secure systems in healthcare. Comfortable working in a fast-paced, mission-driven environment where data accuracy and integrity are paramount. Willingness to get hands dirty across the stack when needed and collaborate across disciplines. Curious, thoughtful, and focused on long-term maintainability and resilience. Deeply motivated by improving health outcomes through technology.
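
    As a hedged sketch of the "data contracts and idempotency" requirement above: an ingestion endpoint can reject duplicate deliveries by keying on an idempotency header. The FastAPI example below is illustrative only (the route, model fields, and in-memory key store are assumptions; a real system would use a durable store and publish to Kafka rather than discard the event).

```python
# Illustrative idempotent ingestion endpoint (sketch, not All.Health's API).
from fastapi import FastAPI, Header
from pydantic import BaseModel

app = FastAPI()
_seen_keys: set[str] = set()  # in production: a durable store, not memory

class WearableReading(BaseModel):
    device_id: str
    metric: str       # e.g. "heart_rate" (hypothetical field names)
    value: float
    recorded_at: str  # ISO-8601 timestamp

@app.post("/ingest", status_code=202)
def ingest(reading: WearableReading, idempotency_key: str = Header(...)):
    if idempotency_key in _seen_keys:
        # Duplicate delivery: acknowledge without reprocessing.
        return {"status": "duplicate", "key": idempotency_key}
    _seen_keys.add(idempotency_key)
    # Real code would validate PHI handling and publish the event downstream.
    return {"status": "accepted", "key": idempotency_key}
```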
    $68k-86k yearly est. Auto-Apply 8d ago
