
DevOps Engineer jobs at TWO95 International

- 863 jobs
  • DevOps Engineer | Machine Learning Platforms

    eNGINE 4.8 company rating

    Pittsburgh, PA

    MLOps Engineer Remote | Pittsburgh, PA area On-site: 1 day/month eNGINE builds Technical Teams. We are a Solutions and Placement firm shaped by decades of interaction with Technical professionals. Our inspiration is continuous learning and engagement with the markets we serve, the talent we represent, and the teams we build. Our Consulting Workforce is encouraged to enjoy career fulfillment in the form of challenging projects, schedule flexibility, and paid training/certifications. Successful outcomes start and finish with eNGINE. Role Overview eNGINE is seeking a MLOps Engineer to manage and scale machine learning workflows from development to production. This role ensures that models are robust, maintainable, and performant in real-world environments, while collaborating closely with Data Science and Engineering teams to integrate ML solutions into digital products. Key Responsibilities Implement end-to-end ML deployment strategies to move models from development to production reliably Configure and manage scalable, cloud-based infrastructure for ML workloads Track and analyze model behavior and operational metrics to ensure consistent performance Establish automated processes for retraining, versioning, and releasing ML models Work closely with cross-functional teams to embed machine learning capabilities into applications and platforms Review and refine system architecture and pipelines to improve latency, throughput, and resource utilization Maintain documentation and operational standards for reproducible, production-ready ML systems Identify and apply new tools and technologies to streamline ML operations and reduce maintenance overhead Required Qualifications Bachelor's degree Experience deploying machine learning solutions in production environments Strong Python skills, including experience with numerical and ML libraries (NumPy, Pandas, scikit-learn) and at least one deep learning framework (PyTorch or TensorFlow) Experience with containerization and orchestration technologies such as Docker and Kubernetes Knowledge of cloud platforms (AWS, GCP, or Azure) and Infrastructure-as-Code tools Familiarity with ML workflow management or experiment tracking tools (MLflow, Kubeflow, or similar) Understanding of software engineering best practices, including version control, testing, and documentation Preferred Experience Prior involvement in building or supporting ML-driven digital products Experience optimizing ML pipelines for cost, performance, and scalability Collaborative experience with cross-functional engineering and data teams Practical exposure to monitoring, alerting, and incident response for ML systems Location & Work Environment Fully remote, with monthly on-site meetings in the Pittsburgh, PA area Next Steps For finer details on how eNGINE can enhance your career, apply today! No C2C, third-party candidates, relocation assistance, or sponsorship available for this role.
    $91k-122k yearly est. 1d ago
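The eNGINE posting above asks for automated retraining, versioning, and releasing of models with tools like MLflow. As an illustrative sketch only (MLflow 2.x and scikit-learn assumed; the experiment and model names are invented, not eNGINE's), a tracked training run that registers a model version might look like this:

```python
# Sketch: log a training run and register the resulting model with MLflow.
# Registration assumes a tracking server whose backend supports the model registry.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("mlops-demo")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("accuracy", accuracy)
    # A registered version is what an automated release/retraining job would promote.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```

A deployment pipeline would then promote or roll back specific registered versions rather than ad-hoc artifacts.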
  • Software Engineer III [80606]

    Onward Search 4.0 company rating

    New York, NY

    Onward Search is partnering with a leading tech client to hire a Software Engineer III to help build the next generation of developer infrastructure and tooling. If you're passionate about making developer workflows faster, smarter, and more scalable, this is the role for you! Location: 100% Remote (EST & CST Preferred) Contract Duration: 6 months What You'll Do: Own and maintain Bazel build systems and related tooling Scale monorepos to millions of lines of code Collaborate with infrastructure teams to define best-in-class developer workflows Develop and maintain tools for large-scale codebases Solve complex problems and improve developer productivity What You'll Need: Experience with Bazel build system and ecosystem (e.g., rules_jvm_external, IntelliJ Bazel plugin) Fluency in Java, Python, Starlark, and TypeScript Strong problem-solving and collaboration skills Passion for building highly productive developer environments Perks & Benefits: Medical, Dental, and Vision Insurance Life Insurance 401k Program Commuter Benefits eLearning & Education Reimbursement Ongoing Training & Development This is a fully remote, contract opportunity for a motivated engineer who loves working in a flow-focused environment and improving developer experiences at scale.
    $90k-128k yearly est. 2d ago
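The Onward Search role above centers on Bazel tooling for very large monorepos. Purely as a hypothetical sketch (the target label and workspace layout are invented), build tooling often wraps bazel query, for example to find everything affected by a library change:

```python
# Sketch: list reverse dependencies of a target via `bazel query`,
# the kind of helper a build-tooling team might fold into larger scripts.
# Assumes bazel is on PATH and the script runs from a workspace root.
import subprocess
import sys

def reverse_deps(label: str) -> list[str]:
    # rdeps(universe, target) returns targets in the universe that depend on `target`.
    result = subprocess.run(
        ["bazel", "query", f"rdeps(//..., {label})", "--output=label"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    label = sys.argv[1] if len(sys.argv) > 1 else "//lib:core"  # hypothetical label
    print("\n".join(reverse_deps(label)))
```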
  • Sr. Full Stack Developer

    Brooksource 4.1 company rating

    Boston, MA

    Senior Developer (Full Stack) 100% Remote 6-month contract (potential for extension) As the Senior Developer (Full Stack), you will be responsible for modernizing legacy applications and developing cloud-native solutions for the Executive Office of Education (EOE). You will design and maintain both front-end and back-end components using Node.js, Angular, and TypeScript, while supporting older Java and .NET systems during their transition. This role involves collaborating with cross-functional teams to analyze existing systems, build scalable APIs, and implement secure, high-performing applications in an AWS environment. You will also mentor junior developers and ensure best practices in architecture, testing, and documentation. Minimum Qualifications: Strong experience in TypeScript, JavaScript, HTML, and CSS Proficiency with Angular for front-end development and Node.js/Express.js for back-end services Experience with Java and/or .NET for maintaining and refactoring legacy systems Familiarity with databases such as Postgres, Snowflake, Oracle, and SQL Server Knowledge of AWS services and cloud-native development Nice to Have: Exposure to CI/CD pipelines and DevOps tools (e.g., GitHub Actions, Jenkins) Experience with ORM tools like Sequelize or Hibernate Responsibilities: Design, develop, and maintain full-stack web applications using Node.js and Angular Assess and refactor legacy applications into modern architectures Build RESTful APIs and integrate with internal/external services Collaborate with teams and mentor junior developers on modern frameworks Write unit/integration tests and perform code reviews What's In It For You: Weekly Paychecks Opportunity to lead modernization initiatives in a fully AWS-implemented environment Collaborative team culture with cutting-edge technologies
    $92k-118k yearly est. 4d ago
  • Senior Software Developer

    Robert Half 4.5 company rating

    Itasca, IL

    This is a new role that opened up due to growth. Looking for a strong backend engineer with deep experience building large-scale, high-volume, low-latency applications who truly understands the architectural challenges behind systems that operate at an “Amazon Prime Day” level of traffic. This person should know what to plan for, who to collaborate with, and how to step in during critical, fire-drill situations. The team is mostly remote, with a preference for candidates in Southern California or Chicago, since they meet in person quarterly for design and planning sessions. The role is about 50% coding on a high-volume backend application that powers fraud-detection JavaScript embedded across major merchant websites: collecting device data, identifying bots, detecting fraud, and supporting account protection. The frontend is minimal; expertise in Java, Spring Boot, Postgres, Oracle, and familiarity with non-relational databases (DynamoDB is fine) is essential. The team has 17 developers total, with this role joining the backend server group. Fintech or trading experience is a strong plus. Top Requirements: 5+ years of experience with Java; experience developing large-scale applications; fintech or trading experience is a plus.
    $90k-117k yearly est. 4d ago
  • Senior Software Engineer

    Vernovis 4.0 company rating

    Columbus, OH

    Job Title: Spark 3 Developer Who We Are: Vernovis is a Total Talent Solutions company specializing in Technology, Cybersecurity, Finance & Accounting functions. At Vernovis, we help professionals achieve their career goals by matching them with innovative projects and dynamic contract opportunities across Ohio and the Midwest. Client Overview: Vernovis is partnering with a leading organization in scientific data management and innovation to modernize its big data platform. This initiative involves transitioning legacy systems, such as Cascading, Hadoop, and MapReduce, to Spark 3, optimizing for scalability and efficiency. As part of this well-established organization, your work will contribute to transforming how big data environments are managed and processed. What You'll Do: Legacy Workflow Migration: Lead the conversion of existing Cascading, Hadoop, and MapReduce workflows to Spark 3, ensuring seamless transitions. Performance Optimization: Utilize Spark 3 features like Adaptive Query Execution (AQE) and Dynamic Partition Pruning to optimize data pipelines. Collaboration: Work closely with infrastructure teams and stakeholders to ensure alignment with modernization initiatives. Big Data Ecosystem Integration: Develop solutions that integrate with platforms like Hadoop, Hive, Kafka, and cloud environments (AWS, Azure). Support Modernization Goals: Contribute to key organizational initiatives focused on next-generation data optimization and modernization. What Experience You'll Have: Spark 3 Expertise: 3+ years of experience with Apache Spark, including Spark 3.x development and optimization. Migration Experience: Proven experience transitioning from Cascading, Hadoop, or MapReduce to Spark 3. Programming Skills: Proficiency in Scala, Python, or Java. Big Data Ecosystem: Strong knowledge of Hadoop, Hive, and Kafka. Performance Tuning: Advanced skills in profiling, troubleshooting, and optimizing Spark jobs. Cloud Platforms: Familiarity with AWS (EMR, Glue, S3) or Azure (Databricks, Data Lake). The Vernovis Difference: Vernovis offers Health, Dental, Vision, Voluntary Short & Long -Term Disability, Voluntary Life Insurance, and 401K. Vernovis does not accept inquiries from Corp to Corp recruiting companies. Applicants must be currently authorized to work in the United States on a full-time basis and not violate any immigration or discrimination laws. Vernovis provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
    $89k-117k yearly est. 3d ago
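The Vernovis posting above specifically names Spark 3 features such as Adaptive Query Execution (AQE) and Dynamic Partition Pruning. A minimal PySpark sketch of enabling those settings in a migrated job (paths, columns, and partitioning are invented, not the client's) could look like:

```python
# Sketch: a Spark 3 session with AQE and dynamic partition pruning enabled,
# followed by the kind of join/aggregate a Cascading or MapReduce flow becomes.
# Bucket paths, columns, and partitioning are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("legacy-migration-demo")
    .config("spark.sql.adaptive.enabled", "true")                         # AQE
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")      # fewer tiny output partitions
    .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "true")
    .getOrCreate()
)

events = spark.read.parquet("s3://example-bucket/events/")        # partitioned fact table
customers = spark.read.parquet("s3://example-bucket/customers/")  # smaller dimension table

daily_counts = (
    events.join(customers, "customer_id")
          .where(customers.region == "US")
          .groupBy("event_date")
          .count()
)
daily_counts.write.mode("overwrite").parquet("s3://example-bucket/daily_counts/")
```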
  • ETL/ELT Data Engineer (Secret Clearance) - Hybrid

    LaunchCode 2.9 company rating

    Austin, TX

    LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies! Details: Full-Time W2, Salary Immediate opening Hybrid - Austin, TX (onsite 1-2 times a week) Pay $85K-$120K Minimum Experience: 4 years Security Clearance: Active DoD Secret Clearance Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered. Job description Job Summary A Washington, DC-based software solutions provider founded in 2017, specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end-users, with innovation driven by its Innovation centers. The company has a presence in Boston, MA, Colorado Springs, CO, San Antonio, TX, and St. Louis, MO. Why the company? Environment of Autonomy Innovative Commercial Approach People over process We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory (ASWF), a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto “By Soldiers, For Soldiers,” ASWF equips service members to develop mission-critical software solutions independently-especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age. Required Skills: Active DoD Secret Clearance (Required) 4+ years of experience in data science, data engineering, or similar roles. Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow. Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra). Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow). Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes. Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation. CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant. Nice to Have: Familiarity with SBIR technologies and transformative platform shifts Experience working in Agile or DevSecOps environments 2+ years of experience interfacing with Platform Engineers and data visibility team, manage AWS resources, and GitLab admin #LI-hybrid #austintx #ETLengineer #dataengineer #army #aswf #clearancejobs #clearedjobs #secretclearance #ETL
    $85k-120k yearly 2d ago
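The LaunchCode role above lists ETL/ELT pipelines built with tools such as Python, SQL, Spark, or Airflow. As a hedged sketch (Airflow 2.x with the Amazon provider assumed; the API endpoint, bucket, and connection IDs are invented), a minimal daily extract-and-stage DAG might look like:

```python
# Sketch of a minimal daily ELT DAG: pull rows from an API and stage them to S3.
# Assumes Airflow 2.x with apache-airflow-providers-amazon; all names are illustrative.
from datetime import datetime
import json

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.amazon.aws.hooks.s3 import S3Hook

def extract_and_stage(ds: str, **_):
    # `ds` is the logical date injected by Airflow for the run being processed.
    rows = requests.get("https://api.example.com/records", params={"date": ds}, timeout=30).json()
    S3Hook(aws_conn_id="aws_default").load_string(
        string_data=json.dumps(rows),
        key=f"staging/records/{ds}.json",
        bucket_name="example-staging-bucket",
        replace=True,
    )

with DAG(
    dag_id="daily_records_elt",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_stage", python_callable=extract_and_stage)
```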
  • Senior Data Engineer

    Concert 4.0 company rating

    Nashville, TN

    Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at *************** YOUR ROLE Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights. You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care. As a Senior Data Engineer you will: Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams. Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability. Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency. Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources. Work with SageMaker for machine learning data preparation and integration. Ensure data security, privacy, and compliance with industry regulations. Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs. Monitor and troubleshoot data pipelines, identifying and resolving issues promptly. Implement best practices for data engineering, including code reviews, testing, and automation. Mentor junior data engineers and share knowledge on data engineering best practices. Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies. After 3 months on the job you will have: Developed a strong understanding of Concert's data engineering infrastructure Learned the business domain and how it maps to the information architecture Made material contributions towards existing key results After 6 months you will have: Led a major initiative Become the first point of contact when issues related to the data warehouse are identified After 12 months you will have: Taken responsibility for the long term direction of the data engineering infrastructure Proposed and executed key results with an understanding of the business strategy Communicated the business value of major technical initiatives to key non-technical business stakeholders WHAT LEADS TO SUCCESS Self-Motivated A team player with a positive attitude and a proactive approach to problem-solving. Executes Well You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well. Comfort with Ambiguity You aren't afraid of uncertainty and blazing new trails, you care about building towards a future that is different from today. 
Technical Bravery You are comfortable with new technologies and eager to dive in to understand data in the raw and in its processed states. Mission-focused You are personally motivated to drive more affordable, equitable and effective integration of genomic technologies into clinical care. Effective Communication You build rapport and great working relationships with senior leaders, peers, and use the relationships you've built to drive the company forward RELEVANT SKILLS & EXPERIENCE Minimum of 4 years experience working as a data engineer Bachelor's degree in software or data engineering or comparable technical certification / experience Ability to effectively communicate complex technical concepts to both technical and non-technical audiences. Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions Strong understanding of data warehousing principles and best practices Experience with DBT for data transformation and modeling. Proficiency in SQL and at least one programming language (e.g., Python, Scala) Familiarity or experience with the following tools / concepts are a plus: BI tools such as Metabase; Healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis data streams COMPENSATION Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will commensurate with experience. This position will report to the VP of Engineering. LOCATION Concert is based in Nashville, Tennessee and supports a remote work environment. For further questions, please contact: ******************.
    $75k-102k yearly est. 2d ago
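The Concert role above combines Redshift, S3, Lambda, and Step Functions for ELT. Purely as an invented illustration (cluster, schema, IAM role ARN, and bucket names are placeholders, not Concert's), a Lambda-style handler that issues a COPY through the Redshift Data API could look like:

```python
# Sketch: load a staged S3 file into Redshift via the Data API (no persistent connection),
# the kind of step a Step Functions state machine might invoke from Lambda.
# Cluster, database, IAM role, and object names are all hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    key = event["s3_key"]  # e.g. "staging/claims/2024-01-01.csv"
    sql = f"""
        COPY analytics.claims_staging
        FROM 's3://example-staging-bucket/{key}'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS CSV IGNOREHEADER 1;
    """
    resp = redshift_data.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=sql,
    )
    # The statement runs asynchronously; the Id can be polled for completion.
    return {"statement_id": resp["Id"]}
```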
  • Senior Data Engineer

    Bayforce 4.4 company rating

    Charlotte, NC

    **NO 3rd Party vendor candidates or sponsorship** Role Title: Senior Data Engineer Client: Global construction and development company Employment Type: Contract Duration: 1 year Preferred Location: Remote based in ET or CT time zones Role Description: The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering-from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers-while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs. Key Responsibilities Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric. Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads). Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling. Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices. Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis. Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements. Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets. Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices. Requirements: Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering with strong focus on Azure Cloud. Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support. Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns. Advanced PySpark/Spark experience for complex transformations and performance optimization. Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency). Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation). Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns. Preferred Skills Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks). Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance. Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing). Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting. 
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
    $70k-93k yearly est. 4d ago
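The Bayforce description above emphasizes API-heavy ingestion with authentication, pagination, retries, and rate-limit handling. A small sketch of that pattern (endpoint, token handling, and paging parameters are invented; urllib3 1.26+ assumed for the retry options) might be:

```python
# Sketch: paginated REST ingestion with bearer auth, retries, and backoff,
# of the kind that feeds a Bronze landing layer. Endpoint and parameters are illustrative.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def build_session(token: str) -> requests.Session:
    session = requests.Session()
    retry = Retry(total=5, backoff_factor=1.0,
                  status_forcelist=[429, 500, 502, 503, 504],
                  allowed_methods=["GET"])
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.headers.update({"Authorization": f"Bearer {token}"})
    return session

def fetch_all(session: requests.Session, url: str):
    page = 1
    while True:
        resp = session.get(url, params={"page": page, "page_size": 500}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            break
        yield from batch
        page += 1

# Example (hypothetical endpoint):
# records = list(fetch_all(build_session("<token>"), "https://api.example.com/v1/projects"))
```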
  • Junior Data Engineer

    Brooksource 4.1 company rating

    Columbus, OH

    Contract-to-Hire Columbus, OH (Hybrid) Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes. Job Responsibilities Automate key processes and enhance data quality Improve injection processes and enhance machine learning capabilities Manage substitutions and allocations to streamline product ordering Work on logistics-related data engineering tasks Build and maintain ML models for predictive analytics Interface with various customer systems Collaborate on integrating AI models into customer service Qualifications Bachelor's degree in related field 0-2 years of relevant experience Proficiency in SQL and Python Understanding of GCP/BigQuery (or any cloud experience, basic certifications a plus). Knowledge of data science concepts. Business acumen and understanding (corporate experience or internship preferred). Familiarity with Tableau Strong analytical skills Attitude for collaboration and knowledge sharing Ability to present confidently in front of leaders Why Should You Apply? You will be part of custom technical training and professional development through our Elevate Program! Start your career with a Fortune 15 company! Access to cutting-edge technologies Opportunity for career growth Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
    $86k-117k yearly est. 4d ago
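The Brooksource posting above calls for SQL, Python, and GCP/BigQuery. As a simple illustration (project, dataset, table, and columns are invented; google-cloud-bigquery and default credentials assumed), a parameterized query pulled into pandas might look like:

```python
# Sketch: run a parameterized BigQuery query from Python and load it into pandas,
# e.g. for an inventory/substitutions analysis. All identifiers are illustrative.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # uses application default credentials

sql = """
    SELECT product_id, COUNT(*) AS substitution_count
    FROM `example-project.logistics.substitutions`
    WHERE order_date >= @since
    GROUP BY product_id
    ORDER BY substitution_count DESC
    LIMIT 20
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01")]
)
df = client.query(sql, job_config=job_config).to_dataframe()
print(df.head())
```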
  • Data Engineer (Databricks)

    ComResource 3.6 company rating

    Columbus, OH

    ComResource is searching for a highly skilled Data Engineer with a background in SQL and Databricks who can handle the design and construction of scalable management systems, ensure that all data systems meet company requirements, and also research new uses for data acquisition. Requirements: Design, construct, install, test and maintain data management systems. Build high-performance algorithms, predictive models, and prototypes. Ensure that all systems meet the business/company requirements as well as industry practices. Integrate up-and-coming data management and software engineering technologies into existing data structures. Develop set processes for data mining, data modeling, and data production. Create custom software components and analytics applications. Research new uses for existing data. Employ an array of technological languages and tools to connect systems together. Recommend different ways to constantly improve data reliability and quality. Qualifications: 5+ years data quality engineering Experience with Cloud-based systems, preferably Azure Databricks and SQL Server testing Experience with ML tools and LLMs Test automation frameworks Python and SQL for data quality checks Data profiling and anomaly detection Documentation and quality metrics Healthcare data validation experience preferred Test automation and quality process development Plus: Azure Databricks Azure Cognitive Services integration Databricks Foundation Model integration Claude API implementation a plus Python and NLP frameworks (spaCy, Hugging Face, NLTK)
    $79k-102k yearly est. 2d ago
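The ComResource role above centers on Python and SQL data-quality checks, profiling, and anomaly detection. A minimal, self-contained pandas sketch of those checks (column names and the z-score threshold are invented) could be:

```python
# Sketch: basic data-quality checks (null rates, duplicate keys, and a simple
# z-score outlier flag) of the kind that feed quality metrics and documentation.
# Column names and thresholds are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str, numeric_col: str) -> dict:
    null_rates = df.isna().mean().to_dict()          # per-column fraction of missing values
    duplicate_keys = int(df[key].duplicated().sum()) # repeated primary-key values
    series = df[numeric_col].dropna()
    z = (series - series.mean()) / series.std(ddof=0)
    outliers = int((z.abs() > 3).sum())              # naive anomaly flag
    return {"null_rates": null_rates, "duplicate_keys": duplicate_keys, "outliers": outliers}

claims = pd.DataFrame({
    "claim_id": [1, 2, 2, 4],
    "amount": [120.0, 95.5, None, 10_000.0],
})
print(quality_report(claims, key="claim_id", numeric_col="amount"))
```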
  • Senior ServiceNow Developer

    Revel It 4.3 company rating

    Newark, OH

    *This is a direct-hire, full-time role; only US Citizens and Green Card holders are accepted. Update: Seeking candidates willing to travel to Newark when needed, which could be once a week or every other week. Note: There is a strong possibility we'll be moving this team to our Columbus office in 2026. This Senior ServiceNow Developer position is responsible for analyzing, designing, developing, implementing, and maintaining ServiceNow applications tailored to specifications and organizational needs. Designs, develops, deploys, and supports custom applications, integrations, and workflows within the ServiceNow platform. Collaborates with architects, developers, and cross-functional teams to deliver and support business solutions. Responsibilities: Create and refine prototypes for user testing and feedback analysis. Review and maintain technical documentation, including architecture diagrams and user guides. Conduct quality assurance testing to identify and resolve defects or issues. Troubleshoot and resolve production issues and defects. Ensure compliance with company policies, technical and security standards, and recommend ServiceNow platform governance. Mentor other developers, assist in code reviews, and oversee deployments. Contribute to the evolution of standards and best practices. Ensure uptime and stability of the ServiceNow platform. Maintain awareness of and adherence to client's compliance requirements and risk management concepts, expectations, policies and procedures and apply them to daily tasks. Deliver a consistent, high level of service within our Serving More standards. Other duties as assigned. Requirements: High School diploma or equivalent required Bachelor's in computer science, software engineering or related field experience preferred 5+ years development experience with ServiceNow 4+ years with ServiceNow modules such as ITSM, ITOM, HRSD, or CSM Familiarity with JavaScript, HTML, CSS, and other relevant technologies ServiceNow Application Developer (CAD) and/or ServiceNow System Administrator (CSA) certifications preferred Experience with Git, Visual Studio, and API integration is a plus
    $91k-118k yearly est. 2d ago
  • MES Systems Engineer

    Venteon 3.9 company rating

    Toledo, OH

    This role strengthens the Operational Technology (OT) environment by improving data acquisition, system integration, and shop-floor processes across global manufacturing sites. It brings IT and cybersecurity best practices into OT environments while supporting Industrial IoT, automation, analytics, and AI-driven initiatives. KEY RESPONSIBILITIES Technical Leadership Lead OT/IT integration projects that connect plant-floor equipment with business systems to improve operational performance. Collaborate with IT, cybersecurity, engineering, maintenance, and operations teams to meet project goals. Stay current on industrial automation, IIoT, analytics, and OT cybersecurity trends. System Integration & Data Enablement Build and manage data collection frameworks for real-time monitoring and historical analysis. Support predictive maintenance, process optimization, and AI-driven insights. Maintain documentation for system architecture, configurations, and operational procedures. Information Technology & Cybersecurity Apply enterprise IT best practices across OT environments (asset management, change control, disaster recovery, etc.). Partner with cybersecurity teams on risk assessments, security controls, and incident response. Communication & Training Serve as the liaison between vendors, stakeholders, and technical teams. Deliver training to support technology adoption and effective system usage. Perform other duties as assigned. QUALIFICATIONS Bachelor's degree in Engineering, Information Systems, Industrial Technology, or related field preferred. 5+ years of technical leadership in manufacturing, industrial automation, or IT/OT integration. 10+ years in IT, OT, and/or control systems. Experience integrating OT systems with ERP/MES-Plex and Plex A&O (Mach2) strongly preferred. TRAVEL Up to 50% travel to North American plants, including Mexico. KNOWLEDGE & SKILLS Strong communication, stakeholder management, and organizational abilities. Expertise in IIoT platforms; PTC Kepware highly preferred. Understanding of SCADA, PLCs, sensors, industrial protocols, and shop-floor technologies. Experience with Industry 4.0, CMMS/EAM, predictive maintenance, and equipment performance monitoring. Familiarity with Zero Trust frameworks and cybersecurity tools (firewalls, NAC, SIEM). Experience with visualization tools such as Power BI or Ignition.
    $90k-124k yearly est. 4d ago
  • Senior DevOps Engineer

    Exiger 4.0 company rating

    Richmond, VA

    Exiger Product and Technology is an experienced team of software professionals with a wide range of specialties and interests. We are building cognitive-computing based technology solutions to help organizations worldwide prevent compliance breaches, respond to risk, remediate major issues and monitor ongoing business activities. We are building out environments that will pass government certification. You will be working with a growing team of developers, data scientists and QA engineers on maintaining our existing services and infrastructure, while building the next generation of our engineering stack. Exiger is seeking a motivated, self-driven Infrastructure Engineer who builds microservices and data, works within a continuous integration and delivery pipeline, and embraces test automation as a discipline. Key responsibilities Development and maintenance of infrastructure as code (IaC) base Maintain/deploy Exiger microservices and dependent applications through IaC Advocate, coordinate and collaborate on internal infrastructure upgrades and maintenance Utilize logging/monitoring/alerting tools to maintain and continuously improve system health with multiple AWS deployments Develop/manage package deployments of on-prem and cloud instances of Ion Channel Development and Improvement of CI/CD and DevOps workflows using Travis CI, Docker and AWS Use GitHub for code reviews of team member pull requests Knowledge and skills Experience with cloud hosting platforms (AWS, Google, Azure) Experience with containerization (Docker) Programming languages (Python, Bash, Golang) Knowledge of database tools and infrastructure (PostgreSQL, MySQL, SQL) Knowledge of cloud native and DevOps best practices Experience with multi-account application deployment Experience with logging/monitoring (Grafana, Kibana, ELK, Splunk) We're an amazing place to work. Why? Discretionary Time Off for all employees, with no maximum limits on time off Industry leading health, vision, and dental benefits Competitive compensation package 16 weeks of fully paid parental leave Flexible, hybrid approach to working from home and in the office where applicable Focus on wellness and employee health through stipends and dedicated wellness programming Purposeful career development programs with reimbursement provided for educational certifications Exiger is revolutionizing the way corporations, government agencies and banks manage risk and compliance with a combination of technology-enabled and SaaS solutions. In recognition of the growing volume and complexity of data and regulation, Exiger is committed to creating a more sustainable risk and compliance environment through its holistic and innovative approach to problem solving. Exiger's mission to make the world a safer place to do business drives its award-winning AI technology platform, DDIQ, built to anticipate the market's most pressing needs related to evolving ESG, cyber, financial crime, third-party and supply chain risk. Exiger has won 30+ AI, RegTech and Supply Chain partner awards. Exiger's core values are courage, excellence, expertise, innovation, integrity, teamwork and trust. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. Exiger's hybrid work policy is periodically reviewed and adjusted to align with evolving business needs.
    $86k-116k yearly est. 60d+ ago
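The Exiger posting above mentions logging/monitoring/alerting across multiple AWS deployments. As a hypothetical sketch (the namespace, endpoint, and dimension values are invented, not Exiger's), publishing a simple health metric that a CloudWatch alarm can page on might look like:

```python
# Sketch: probe a service endpoint and publish a 0/1 health metric to CloudWatch,
# which an alarm can then act on. Namespace, endpoint, and dimensions are made up.
import boto3
import requests

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region is illustrative

def report_health(endpoint: str, service: str) -> None:
    try:
        healthy = requests.get(endpoint, timeout=5).status_code == 200
    except requests.RequestException:
        healthy = False
    cloudwatch.put_metric_data(
        Namespace="Example/Platform",
        MetricData=[{
            "MetricName": "ServiceHealthy",
            "Dimensions": [{"Name": "Service", "Value": service}],
            "Value": 1.0 if healthy else 0.0,
            "Unit": "Count",
        }],
    )

report_health("https://internal.example.com/healthz", service="search-api")
```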
  • Senior DevOps Engineer II

    Extend A Care for Kids 3.5 company rating

    Remote

    About Extend: Extend is revolutionizing the post-purchase experience for retailers and their customers by providing merchants with AI-driven solutions that enhance customer satisfaction and drive revenue growth. Our comprehensive platform offers automated customer service handling, seamless returns/exchange management, end-to-end automated fulfillment, and product protection and shipping protection alongside Extend's best-in-class fraud detection. By integrating leading-edge technology with exceptional customer service, Extend empowers businesses to build trust and loyalty among consumers while reducing costs and increasing profits. Today, Extend works with more than 1,000 leading merchant partners across industries, including fashion/apparel, cosmetics, furniture, jewelry, consumer electronics, auto parts, sports and fitness, and much more. Extend is backed by some of the most prominent technology investors in the industry, and our headquarters is in downtown San Francisco. What You'll Do: Create reusable infrastructure components using infrastructure-as-code technologies to allow teams to manage their own infrastructure needs Mature the CI/CD pipeline to enable teams to scale in a self-service way to help reduce deployment cost and time Designs and implements DevOps tooling that accelerates AI innovation and empowers teams to build, deploy, and monitor intelligent agentic systems. Collaborate with product engineering teams to design scalable infrastructure and deployment patterns for customer facing solutions Act as an expert in AWS technologies by providing guidance on appropriate AWS solutions to address business needs Develop, refine, and drive adherence to non-functional requirements for new product development in areas of security, reliability, and scalability Mentor and provide guidance to new engineers on best practices and designs related to CI / CD or AWS technologies Lead dynamic project efforts related to improving or enabling new technologies throughout Extend What We Are Looking For: 6+ years experience in a DevOps engineering role 3+ years experience with CDK, AWS CloudFormation, or other infrastructure-as-code systems (like Terraform) 3+ years experience or certification in AWS serverless technologies (API Gateway, Lambda, S3, DynamoDB) Experience developing and maintaining scalable backend systems and APIs using modern frameworks and cloud infrastructure Proficiency with AI technologies and agentic workflows such as AWS Bedrock, Mastra, LangChain (or similar technologies) Knowledge of best practices around security roles/policies for AWS IAM Experience working with monitoring services (Coralogix, CloudWatch, DataDog, OpenTelemetry, Grafana) Experience with CI/CD Tooling such as GitHub Actions, CodeBuild, or others Ability to perform in a high energy environment with dynamic job responsibilities and priorities Nice to Haves: Experience with AWS Cloud Development Kit(CDK) Experience with Typescript Expected Pay Range: $170,000 - $198,000 per year salaried* * The target base salary range for this position is listed above. Individual salaries are determined based on a number of factors including, but not limited to, job-related knowledge, skills and experience. Life at Extend: Working with a great team from diverse backgrounds in a collaborative and supportive environment. Competitive salary based on experience, with full medical and dental & vision benefits. Stock in an early-stage startup growing quickly. Generous, flexible paid time off policy. 
401(k) with Financial Guidance from Morgan Stanley.
    $170k-198k yearly 60d+ ago
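The Extend posting above asks for CDK (or similar IaC) plus AWS serverless building blocks such as API Gateway and Lambda. A minimal sketch in CDK for Python (CDK v2 assumed; the stack name, handler, and asset path are invented) of that reusable pattern:

```python
# Sketch: a tiny CDK v2 stack wiring API Gateway to a Lambda function,
# the kind of reusable component a platform team might hand to product teams.
# Stack name, handler, and asset path are illustrative.
from aws_cdk import App, Stack, aws_apigateway as apigw, aws_lambda as _lambda
from constructs import Construct

class ApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        handler = _lambda.Function(
            self, "Handler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("lambda_src"),  # directory containing app.py
        )
        # Proxies all routes on the REST API to the Lambda function.
        apigw.LambdaRestApi(self, "Api", handler=handler)

app = App()
ApiStack(app, "ExampleApiStack")
app.synth()
```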
  • DevOps Engineer (hybrid)

    Ursa Space Systems 3.8 company rating

    Ithaca, NY

    DevOps Engineer Ursa Space Systems is building ground-breaking solutions to deliver global intelligence to organizations around the world. Through our SAR/EO/RF satellite network, and data fusion expertise, Ursa Space detects real-time changes in the physical world to expand transparency. Our subscription and custom services enable you to access satellite imagery and analytic results with no geographic, political or weather-related limitations. Job Summary Ursa is looking for a skilled DevOps Engineer to join our growing team! There is a lot of cross-pollination here at Ursa Space. You will have the opportunity to work with a diverse team of highly-skilled engineers, working on a variety of projects. There are no egos here - we are looking for the best ideas and are eager to hear your input! This position will work with Amazon Web Services (AWS) cloud infrastructure, handling design, implementation, and maintenance. You will also work closely with software engineers to improve the platform and implement analytic solutions. The ideal candidate will have experience with AWS services and the use of Cloud Development Kit (CDK) and Terraform to manage resources with code. They will also have a good understanding of the containerization and orchestration of workloads. The DevOps Engineer will report directly to the Senior IT Systems Engineer. This position is exempt and not eligible for overtime pay. Ideal candidates would be in or around Ithaca, NY, where our company headquarters are located. Job Responsibilities Administer and maintain AWS infrastructure in collaboration with senior team members Write, deploy and maintain scalable infrastructure as code Fulfill company-wide needs for cloud resources Provide AWS support to engineers, scientists, and analysts Contribute to the development of IaC, CI/CD, and other engineering standards Assist other engineers with architecting cloud solutions Monitor platforms and troubleshoot technical issues Assist with migration of legacy systems to redefined architectures Coordinate with Ursa customers and vendors to facilitate exchange of data Automate repetitive tasks where possible Learn and stay updated on new technologies, products, and releases All other duties as assigned Requirements B.S. in Computer Science and/or a related field, or equivalent work experience 3-5 years of experience with AWS solutions architecture, administration, and security best practices Experience with Infrastructure as Code (CDK, Terraform, CloudFormation, or similar) Experience with building and maintaining CI/CD pipelines (CodePipeline/CodeBuild, GitLab, Github Actions, etc.) Intermediate Python programming skills with understanding of OOP principles Experience with containerization technologies (Docker, Kubernetes/EKS) Working knowledge of AWS managed services for networking, systems administration, monitoring, and security Strong problem-solving and troubleshooting skills Excellent communication and collaboration abilities Preferred Skills AWS Associate certifications (Solutions Architect, SysOps Administrator, Developer) Kubernetes tools (Helm, autoscalers, Argo Workflows, etc.) DataDog, Prometheus/Grafana, and other observability tools SQL (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Redis, etc.) ArcGIS, STAC, and other industry domain tools Experience with multiple IaC frameworks AWS Professional or Specialty certifications AWS GovCloud experience Familiarity with compliance standards (NIST, CMMC, etc.)
Exposure to geospatial/satellite imagery analysis Prior start-up experience Prior platform engineering experience Located in or around Company Headquarters in Ithaca, NY Compensation Range $100,000 - $130,000 Location We are headquartered in Ithaca, NY and have a remote workforce in other locations throughout the United States. Please note: applications without a relevant cover letter or a cover letter written with AI will not be considered. In your cover letter, we would like to hear your personal voice and learn about your sincere interest in Ursa Space Systems. Benefits and Perks Competitive Compensation Discretionary PTO & Flexible Scheduling Stock Options 401(k) Match Medical, Dental and Vision Coverage for you and your dependents FSA & HSA Plans Employer-paid Life Insurance Employer-paid LTD and STD for Parental and Family Care 11 Paid Holidays Employee Resource Groups Educational Assistance Program Professional Development Opportunities And more… Company Values Use the team Figure it out and own it Aim for elegant simplicity Empower diversity & inclusivity Do the right thing Be scrappy Ursa Space Systems, Inc is an equal opportunity employer and does not discriminate on the basis of any legally protected status or characteristic. Protected veterans and individuals with disabilities are encouraged to apply.
    $100k-130k yearly 2d ago
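The Ursa Space posting above includes automating repetitive tasks and applying AWS administration best practices. As an invented example of that kind of chore (the required tag key and region are placeholders), a small audit script might look like:

```python
# Sketch: report running EC2 instances missing a required cost-allocation tag,
# a typical target for the "automate repetitive tasks" responsibility. Tag key is made up.
import boto3

REQUIRED_TAG = "project"

def untagged_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    print("\n".join(untagged_instances()) or "all tagged")
```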
  • IT DevOps Engineer

    Upstart Services 4.0 company rating

    Remote

    About Upstart Upstart is the leading AI lending marketplace partnering with banks and credit unions to expand access to affordable credit. By leveraging Upstart's AI marketplace, Upstart-powered banks and credit unions can have higher approval rates and lower loss rates across races, ages, and genders, while simultaneously delivering the exceptional digital-first lending experience their customers demand. More than 80% of borrowers are approved instantly, with zero documentation to upload. Upstart is a digital-first company, which means that most Upstarters live and work anywhere in the United States. However, we also have offices in San Mateo, California; Columbus, Ohio; and Austin, Texas. Most Upstarters join us because they connect with our mission of enabling access to effortless credit based on true risk. If you are energized by the impact you can make at Upstart, we'd love to hear from you! The Team As an IT DevOps Engineer, you'll play a critical role in architecting, implementing, and scaling our infrastructure; ensuring resilience, security, and optimal performance across all systems. This role requires a seasoned DevOps expert who excels in designing large-scale systems, automating complex workflows, and leading cross-functional projects that enhance the speed and efficiency of our IT development and deployment processes. You'll work closely with teams across engineering, IT, and InfoSec to drive IT and Security outcomes using best practices and foster a culture of continuous improvement. Position Location - This role is available in the following locations: Remote, San Mateo, Columbus, Austin Time Zone Requirements - This team operates on the East/West Coast time zones. Travel Requirements - This team has regular on-site collaboration sessions that occur a few days per quarter at various locations in the US. Upstart will cover all travel-related expenses for these meetups. How you'll make an impact: Lead the design and implementation of scalable, highly available, and secure infrastructure to support IT technology tools like Okta, GSuite, Slack, Jamf, and Palo Alto Firewalls. Champion a culture of continuous improvement within IT, identifying opportunities for process optimization, automation, and enhanced security across all IT technology systems Drive infrastructure automation through Infrastructure as Code (IaC) using tools such as Terraform or Ansible, ensuring reliability, repeatability, and cost efficiency. Architect and lead comprehensive observability solutions and incident response that includes logging, tracing, and metrics collection to support IT Engineering and operations, enhancing troubleshooting and performance analysis. Collaborate with InfoSec to implement security best practices and drive compliance alignment across IT technologies, including role-based access control, automated compliance monitoring, and vulnerability management. Mentor and guide junior engineers in IT DevOps practices, promoting a collaborative culture focused on innovation, performance, and security. What we're looking for: Minimum requirements: 5+ years of experience in DevOps or IT infrastructure engineering, with a proven record in architecting and managing complex, IT-centric systems. Experience leading large cross-team initiatives at companies that have gone through periods of rapid business or organizational growth Advanced skills in cloud infrastructure (preferably AWS) and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. 
Experience in building and managing CI/CD pipelines for automated deployments, security integrations, and compliance workflows. Strong scripting skills in languages like Python and Bash for automation and integration tasks. Preferred qualifications: Extensive experience designing and implementing scalable, resilient IT infrastructure with multiple integrations across cloud providers. Experience with a large subset of the following technologies: Okta, Google Workspace, Jamf, Slack Enterprise Grid, Workato, Sumo Logic. Excellent problem-solving skills and ability to work under pressure. Relevant certifications such as AWS Certified DevOps Engineer, Okta Certified Administrator, or security certifications (e.g., CISSP). At Upstart, your base pay is one part of your total compensation package. The anticipated base salary for this position is expected to be within the below range. Your actual base pay will depend on your geographic location; with our “digital first” philosophy, Upstart uses compensation regions that vary depending on location. Individual pay is also determined by job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process. In addition, Upstart provides employees with target bonuses, equity compensation, and generous benefits packages (including medical, dental, vision, and 401k). United States | Remote - Anticipated Base Salary Range $143,700-$198,700 USD Upstart is a proud Equal Opportunity Employer. We are dedicated to ensuring that underrepresented classes receive better access to affordable credit, and are just as committed to embracing diversity and inclusion in our hiring practices. We celebrate all cultures, backgrounds, perspectives, and experiences, and know that we can only become better together. If you require reasonable accommodation in completing an application, interviewing, completing any pre-employment testing, or otherwise participating in the employee selection process, please email candidate_accommodations@upstart.com ************************************************
    $143.7k-198.7k yearly 11d ago
  • Senior DevOps Engineer

    Exiger 4.0 company rating

    Jersey City, NJ

    Exiger Product and Technology is an experienced team of software professionals with a wide range of specialties and interests. We are building cognitive-computing based technology solutions to help organizations worldwide prevent compliance breaches, respond to risk, remediate major issues, and monitor ongoing business activities. We are building out environments that will pass government certification. You will work with a growing team of developers, data scientists, and QA engineers to maintain our existing services and infrastructure while building the next generation of our engineering stack. Exiger is seeking a motivated, self-driven Infrastructure Engineer who builds microservices and data infrastructure, works within a continuous integration and delivery pipeline, and embraces test automation as a discipline.
Key Responsibilities
Development and maintenance of the infrastructure-as-code (IaC) base
Maintain/deploy Exiger microservices and dependent applications through IaC
Advocate, coordinate, and collaborate on internal infrastructure upgrades and maintenance
Utilize logging/monitoring/alerting tools to maintain and continuously improve system health across multiple AWS deployments
Develop/manage package deployments for on-prem and cloud instances of Ion Channel
Develop and improve CI/CD and DevOps workflows using Travis CI, Docker, and AWS
Use GitHub for code reviews of team members' pull requests
Knowledge and Skills
Experience with cloud hosting platforms (AWS, Google Cloud, Azure)
Experience with containerization (Docker)
Programming languages (Python, Bash, Golang)
Knowledge of database tools and infrastructure (PostgreSQL, MySQL, SQL)
Knowledge of cloud-native and DevOps best practices
Experience with multi-account application deployment
Experience with logging/monitoring tools (Grafana, Kibana, ELK, Splunk)
We're an amazing place to work. Why?
Discretionary Time Off for all employees, with no maximum limits on time off
Industry-leading health, vision, and dental benefits
Competitive compensation package
16 weeks of fully paid parental leave
Flexible, hybrid approach to working from home and in the office where applicable
Focus on wellness and employee health through stipends and dedicated wellness programming
Purposeful career development programs with reimbursement provided for educational certifications
Our Commitment to Diversity & Inclusion
At Exiger, we know our people are the core of our excellence. The collective sum of the individual differences, life experiences, knowledge, inventiveness, innovation, self-expression, unique capabilities, and talent that our employees invest in their work represents a significant part of not only our culture, but also our reputation and what we have been able to achieve as a global organization. We embrace and encourage our employees' differences in age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other characteristics that make our employees unique. These unique characteristics come together to form the fabric of our organization and our culture, and they enhance our ability to serve our clients while helping them solve their business issues. All qualified candidates will be considered in accordance with this policy. At Exiger, we believe we all have a responsibility to treat others with dignity and respect at all times.
All employees are expected to exhibit conduct that reflects our global commitment to diversity and inclusion in any environment while acting on behalf of, and representing, Exiger. This position is not eligible for residents of California, Colorado, or New York. Must be authorized to work in the United States. Candidates must be clearable for a Secret/Top Secret US government clearance. Exiger is revolutionizing the way corporations, government agencies, and banks manage risk and compliance with a combination of technology-enabled and SaaS solutions. In recognition of the growing volume and complexity of data and regulation, Exiger is committed to creating a more sustainable risk and compliance environment through its holistic and innovative approach to problem solving. Exiger's mission to make the world a safer place to do business drives its award-winning AI technology platform, DDIQ, built to anticipate the market's most pressing needs related to evolving ESG, cyber, financial crime, third-party, and supply chain risk. Exiger has won 30+ AI, RegTech, and Supply Chain partner awards. Exiger's core values are courage, excellence, expertise, innovation, integrity, teamwork, and trust. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law. Exiger's hybrid work policy is periodically reviewed and adjusted to align with evolving business needs.
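To make the multi-account monitoring responsibilities above more concrete, here is a minimal, hypothetical Python sketch using boto3: it assumes a cross-account role into each AWS account and reports which CloudWatch alarms are firing. The account IDs, role name, and region are illustrative assumptions, not anything specific to Exiger's environment.

```python
# Minimal sketch: checking CloudWatch alarm state across multiple AWS accounts,
# the kind of multi-account health monitoring described in the posting above.
# The account IDs, role name, and region are hypothetical placeholders.
import boto3

ACCOUNTS = ["111111111111", "222222222222"]   # hypothetical AWS account IDs
ROLE_NAME = "InfraMonitoringRole"             # hypothetical cross-account role
REGION = "us-east-1"


def assume_role_session(account_id: str) -> boto3.Session:
    """Assume a role in the target account and return a scoped session."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
        RoleSessionName="health-check",
    )["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
        region_name=REGION,
    )


def alarms_in_alarm_state(session: boto3.Session) -> list[str]:
    """Return the names of CloudWatch alarms currently in the ALARM state."""
    cloudwatch = session.client("cloudwatch")
    response = cloudwatch.describe_alarms(StateValue="ALARM")
    return [alarm["AlarmName"] for alarm in response["MetricAlarms"]]


if __name__ == "__main__":
    for account in ACCOUNTS:
        firing = alarms_in_alarm_state(assume_role_session(account))
        print(f"{account}: {len(firing)} alarm(s) firing", firing)
```

In practice a script like this would typically run on a schedule (or be replaced by centralized alerting in Grafana or a similar tool the posting mentions), but it illustrates the cross-account role-assumption pattern that multi-account AWS deployments rely on.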
    $91k-123k yearly est. 60d+ ago
  • Senior DevOps Engineer

    Brookfield 4.3company rating

    Cleveland, OH jobs

    We Are Brookfield Properties: At Brookfield Properties, our people are the foundation of our success. The Brookfield Properties Corporate team brings together subject matter experts who lead with confidence, adaptability, and resourcefulness. The corporate group works across all sectors of Brookfield's real estate business - including housing, logistics, hospitality, office, and retail - collaborating with our best-in-class asset managers. Efficiency is at the core of what we do. We seek to simplify, standardize, automate, and optimize - creating smarter solutions and maximizing value across every facet of Brookfield's business. When you join the Brookfield Properties Corporate team, you become part of a high-performing, collaborative environment where innovation and impact thrive.
We are seeking a Senior DevOps Engineer to join Brookfield Properties in Cleveland, OH or Chicago, IL. This is a deeply technical role responsible for building the foundational infrastructure powering Brookfield's most strategic initiatives: artificial intelligence, machine learning, and enterprise-scale data platforms. Reporting to the Cloud Architect, this engineer drives multi-cloud platform design (AWS and Azure), with a focus on reliability, automation, and scalability. This position is critical in enabling Brookfield's modern data ecosystem, including secure, high-performance AI/ML platforms, enterprise data lakes, and intelligent applications. You'll partner with cloud, security, data engineering, and ML teams to deliver automated, compliant, and production-grade platforms that power analytical and operational workloads.
Role & Responsibilities:
AI & ML Platform Engineering
Design and build scalable cloud-native infrastructure for AI/ML platforms
Automate deployment of infrastructure as code and identify and execute on other areas for automation within cloud solutions
Implement end-to-end pipelines
Collaborate with ML teams to tune infrastructure for performance, reproducibility, and cost-efficiency
Data Lake & Data Warehouse Infrastructure
Support the architecture and operations of infrastructure supporting enterprise-scale data lakes and cloud data warehouses (Redshift, Snowflake)
Automate ingestion, transformation, and lifecycle policies using IaC and orchestration tools
Support big data frameworks
Ensure compliance, encryption, retention, and access control are enforced across all platforms
Multi-Cloud Infrastructure & Automation
Design modular, reusable infrastructure-as-code across AWS and Azure
Integrate security, cost optimization, DR, and compliance as code into platform blueprints
Build GitOps-based deployment pipelines for infrastructure, ML services, and platform updates
Implement policy-as-code for environment governance
Cybersecurity
Secure cloud infrastructure across AWS, Azure, and GCP, embedding defense-in-depth and zero-trust principles throughout network and compute layers
Implement secure networking architectures, including private connectivity, encryption in transit, and segmentation of critical workloads
Harden CI/CD pipelines with automated vulnerability scanning, secret management, and signed artifact verification
Collaborate with Security Operations to ensure cloud telemetry, threat detection, and incident response are integrated into platform monitoring
CI/CD, Monitoring & Observability
Build and manage scalable CI/CD pipelines supporting data, ML, and app workloads
Integrate security scanning, test automation, and artifact promotion
Deploy observability tooling across ML and data pipelines
Enable intelligent alerting and logging for infrastructure, pipelines, and AI services
Cross-Functional Collaboration & Strategy
Work with data engineers, ML scientists, software teams, and security to deliver cohesive platforms
Shape strategy and future-state architecture for AI enablement and MLOps
Mentor engineers on DevOps, cloud operations, IaC, cloud-native platforms, and data/ML workflows
Continuously improve automation maturity, developer velocity, and platform resiliency
Your Qualifications:
8+ years in DevOps, cloud platform engineering, or SRE roles in enterprise environments
Proven experience with AWS and Azure for data platforms, ML infrastructure, and DevOps automation
Hands-on experience with SageMaker, Azure ML, Kubeflow, MLflow, or other enterprise-grade MLOps platforms
IaC expertise with Terraform or ARM/Bicep is a plus
Fluent in Python and/or Bash for scripting, automation, and platform integrations
Experience building and operating data lakes and data warehouses in the cloud (e.g., S3/ADLS, Redshift, Snowflake)
Strong skills in CI/CD pipelines and DevSecOps practices
Experience with monitoring and logging systems
Understanding of security, compliance, encryption, IAM, and policy-as-code in a cloud environment
Excellent collaboration and mentoring capabilities; strong communication across technical and business stakeholders
Your Career @ Brookfield Properties: At Brookfield Properties, your career progression is important to us. As a successful employee, you will have the opportunity to grow within your team, department, and across the Brookfield organization. Our leadership teams are dedicated to the accomplishments of their employees. We also invest time in training and developing our people. End your job search and find your career today, at Brookfield Properties.
Why Brookfield Properties? We imagine, create, and operate on a foundation of values to build a better world, together. Brookfield Properties strives to create spaces where going to work never feels routine. As a Brookfield Properties employee, you will enjoy many benefits such as 401(k) matching, tuition reimbursement, summer Fridays, paid maternity leave, and more. There is also a generous employee referral program because we want our existing team members to help us build a more diverse workplace through their networks.
Compensation & Benefits:
Salary Type: Exempt
Pay Frequency: Bi-weekly
Annual Base Salary Range: $155,000-$170,000
Medical & Pharmacy Coverage: Yes, under Brookfield Medical Plan
Dental Coverage: Yes, under Brookfield Medical Plan
Vision Coverage: Yes, under Brookfield Medical Plan
Retirement: 401(k)
Insurance: Employer-paid life & short/long-term disability
Brookfield Properties is an equal opportunity employer, and we foster an inviting, inclusive, and collaborative environment. We are proud to create a diverse environment and to be an equal opportunity employer. We are grateful for your interest in this position; however, only candidates selected for pre-screening will be contacted. #BPUS
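To illustrate the MLOps tooling this posting names, here is a minimal, hypothetical Python sketch of experiment tracking and model registration with MLflow (one of the platforms listed in the qualifications). The local sqlite tracking URI, experiment name, registered model name, and the toy scikit-learn model are illustrative assumptions, not anything specific to Brookfield's stack.

```python
# Minimal sketch (illustrative only): track a training run and register the
# resulting model with MLflow. All names and the toy dataset are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A local sqlite backend supports the model registry; a real deployment would
# point at a shared tracking server instead of a local file.
mlflow.set_tracking_uri("sqlite:///mlflow.db")
mlflow.set_experiment("demo-classifier")

# Toy data standing in for a real training set.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, the evaluation metric, and the model artifact, then
    # register a model version so deployment automation can promote it.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(
        model, artifact_path="model", registered_model_name="demo-classifier"
    )
```

Registering each run's model version is what lets the GitOps-style pipelines described above promote a specific, reproducible artifact between environments rather than redeploying whatever was trained last.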
    $155k-170k yearly 16d ago
  • Remote Software Engineer - Bioinformatics

    Kforce 4.8company rating

    San Diego, CA jobs

    Kforce's client, a growing Biotechnology/Pharmaceutical technology company, is seeking a remote Software Engineer to work as part of the Research Software team in Functional Genomics. We are working directly with the Hiring Manager on this search assignment. This position is 100% remote. The company offers a competitive compensation package including base salary, annual bonus, and Stock/RSUs. The Software Engineer will work as part of a multidisciplinary and highly innovative team designing and implementing tools to support their drug discovery research efforts. Work will include Docker-based web services delivering rich, data-driven user interfaces for the interrogation of complex biological and chemical data.
Responsibilities:
* Develop tools for various research teams that help medicinal chemists, biologists, and computational biologists better capture and leverage in-house and external data in their research
* Maintain legacy Java software and migrate it to contemporary technologies (Java or Node/TypeScript)
* Create new backend services to accommodate our expanding infrastructure
* Enhance institutional data access using tools like RESTful APIs and modern web UI/UX frameworks
* Work directly with end users to troubleshoot and/or design and prioritize feature enhancements to existing tools
Requirements:
* BS or MS degree in Computer Science, Computer Engineering, Biomedical, Biotechnology, or a related field preferred
* 3+ years of software development experience
* Proficient with Java; JavaScript or TypeScript would be ideal
* Experience with Node.js, Git, and Python is a plus
* Knowledge of Java GUI frameworks like Swing or AWT is a plus
* Knowledge of web frameworks like Jersey or Spring
* Knowledge of Maven and Gradle
* Experience with software test automation is a plus but not required
* Experience with relational databases
* Experience deploying services using AWS services and Docker is a plus
* Linux skills are preferred
* Must have excellent communication and requirement-gathering skills
* An ability to be productive and successful in an intense work environment
* Able to learn new technologies quickly and jump in anywhere in our stack
* Experience with bioinformatics and genomics is a plus
Nice to Have:
* Experience with GraphQL
* Experience with NoSQL databases
* Experience with a web app component-based framework
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
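As a rough illustration of the kind of data-serving RESTful service this posting describes, here is a minimal, hypothetical Python sketch using Flask. The role is primarily Java/TypeScript (Python is listed only as a plus), and the endpoint paths, field names, and sample compound records are invented placeholders, not the client's actual API.

```python
# Minimal sketch (illustrative only): a small REST service exposing research
# data over HTTP for a data-driven UI. Endpoints and records are hypothetical.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# In-memory stand-in for a relational database of compounds and assay results.
COMPOUNDS = {
    "CMPD-001": {"id": "CMPD-001", "smiles": "CCO", "target": "EGFR", "ic50_nm": 120.0},
    "CMPD-002": {"id": "CMPD-002", "smiles": "c1ccccc1", "target": "KRAS", "ic50_nm": 85.5},
}


@app.route("/api/compounds", methods=["GET"])
def list_compounds():
    """Return all compounds as JSON for a front-end to render."""
    return jsonify(list(COMPOUNDS.values()))


@app.route("/api/compounds/<compound_id>", methods=["GET"])
def get_compound(compound_id: str):
    """Return a single compound, or a 404 if the identifier is unknown."""
    compound = COMPOUNDS.get(compound_id)
    if compound is None:
        abort(404, description=f"Unknown compound: {compound_id}")
    return jsonify(compound)


if __name__ == "__main__":
    # In the setup the posting describes, a service like this would run inside
    # a Docker container; here it just starts Flask's development server.
    app.run(host="0.0.0.0", port=8080)
```

A quick manual test would be `curl http://localhost:8080/api/compounds/CMPD-001`, which returns the matching record as JSON; the same shape of endpoint could equally be implemented in Java (Jersey/Spring) or Node/TypeScript, the stacks the posting emphasizes.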
    $106k-148k yearly est. 52d ago

Learn more about TWO95 International jobs

View all jobs