DevOps Engineer | Machine Learning Platforms
Pittsburgh, PA jobs
MLOps Engineer (Remote | Pittsburgh, PA area)
On-site: 1 day/month
We are seeking a highly skilled MLOps Engineer to support the end-to-end deployment, monitoring, and optimization of our machine learning models. In this role, you will serve as the critical link between Data Science and Operations, ensuring that models are scalable, reliable, and production-ready.
This position is fully remote, but candidates must reside in the Pittsburgh area and be available for monthly on-site meetings.
About eNGINE
eNGINE builds Technical Teams. We are a Solutions and Placement firm shaped by decades of interaction with Technical professionals. Our inspiration is continuous learning and engagement with the markets we serve, the talent we represent, and the teams we build. Our Consulting Workforce is encouraged to enjoy career fulfillment in the form of challenging projects, schedule flexibility, and paid training/certifications. Successful outcomes start and finish with eNGINE.
Key Responsibilities
Pipeline Development: Design, build, and maintain CI/CD pipelines supporting the full machine learning lifecycle, from training to deployment.
Infrastructure Management: Orchestrate and maintain containerized environments using Docker and Kubernetes; manage cloud resources for scalable and efficient inference.
Model Monitoring: Build systems to monitor model performance, detect data drift, ensure uptime, and maintain compliance with reliability standards.
Automation: Automate training, testing, deployment, and retraining processes to reduce manual steps and increase operational efficiency.
Collaboration: Partner with Data Scientists, Software Engineers, and Product teams to integrate ML into production systems and support ongoing enhancements.
Optimization: Continuously evaluate model pipelines and infrastructure for improvements in cost, performance, and scalability.
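The model-monitoring responsibility above often starts with a simple drift statistic compared between training-time and live feature distributions. As an illustrative sketch (not part of the posting; all names and thresholds here are hypothetical conventions), a population stability index (PSI) check can be written with the standard library alone:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Both inputs are lists of floats; bin edges come from the baseline range.
    A common rule of thumb: PSI < 0.1 suggests little drift, > 0.25 warrants action.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket(xs):
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        total = len(xs)
        # small additive floor so empty bins don't blow up the log ratio
        return [(counts.get(b, 0) + 0.5) / (total + 0.5 * bins) for b in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]       # training-time feature values
shifted = [0.5 + i / 200 for i in range(100)]  # skewed live traffic

print(psi(baseline, baseline) < 0.1)   # identical samples: negligible drift
print(psi(baseline, shifted) > 0.25)   # shifted samples: investigate
```

In practice a check like this would run per feature on a schedule and feed the alerting stack, with thresholds tuned to the model's tolerance for input shift.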
Technical Requirements
Programming: Expert-level Python, including NumPy, Pandas, scikit-learn, and at least one major deep learning framework (PyTorch or TensorFlow).
Infrastructure: Strong hands-on experience with Docker, Kubernetes, and IaC tools such as Terraform or CloudFormation.
MLOps Tooling: Familiarity with MLflow, Kubeflow, or similar model management platforms.
Cloud Platforms: Practical experience working with AWS, GCP, or Azure ML services.
Best Practices: Solid understanding of version control, automated testing, documentation, and reproducible ML workflows.
Qualifications
Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related technical field.
Proven experience deploying machine learning models to production environments, not just experimentation.
Prior experience supporting or building ML-driven digital products or platforms strongly preferred.
Demonstrated ability to work effectively across cross-functional engineering and data teams.
Strong problem-solving abilities, attention to detail, and a passion for building stable, scalable ML systems.
Next Steps
No C2C, relocation, or sponsorship for this role
For finer details on how eNGINE can impact your career, apply today!
Software Engineer III [80606]
New York, NY jobs
Onward Search is partnering with a leading tech client to hire a Software Engineer III to help build the next generation of developer infrastructure and tooling. If you're passionate about making developer workflows faster, smarter, and more scalable, this is the role for you!
Location: 100% Remote (EST & CST Preferred)
Contract Duration: 6 months
What You'll Do:
Own and maintain Bazel build systems and related tooling
Scale monorepos to millions of lines of code
Collaborate with infrastructure teams to define best-in-class developer workflows
Develop and maintain tools for large-scale codebases
Solve complex problems and improve developer productivity
What You'll Need:
Experience with Bazel build system and ecosystem (e.g., rules_jvm_external, IntelliJ Bazel plugin)
Fluency in Java, Python, Starlark, and TypeScript
Strong problem-solving and collaboration skills
Passion for building highly productive developer environments
Perks & Benefits:
Medical, Dental, and Vision Insurance
Life Insurance
401k Program
Commuter Benefits
eLearning & Education Reimbursement
Ongoing Training & Development
This is a fully remote, contract opportunity for a motivated engineer who loves working in a flow-focused environment and improving developer experiences at scale.
Senior Software Engineer
Charlotte, NC jobs
Senior Software Engineer (Full Stack)
Jacksonville, FL
We are seeking a highly skilled and motivated Senior Software Engineer to join a fast-paced, agile development team. In this fully remote role, you will leverage your full-stack expertise to design, develop, and deliver cutting-edge software solutions using C#, Angular, SQL, and Azure. You will also play a key role in mentoring team members and contributing to the team's technical growth.
Responsibilities
Design, develop, and maintain robust, scalable, and secure full-stack applications.
Collaborate closely with cross-functional teams to define, plan, and deliver high-quality features.
Write clean, efficient, and maintainable code that adheres to industry best practices.
Optimize and troubleshoot applications to ensure peak performance and reliability.
Utilize Azure services to build and deploy cloud-native solutions.
Design and maintain databases using SQL, ensuring data integrity and optimal performance.
Lead code reviews and provide mentorship to junior developers, fostering a culture of continuous improvement.
Actively participate in sprint planning, retrospectives, and other Agile ceremonies.
Stay current with emerging technologies and contribute to technical decision-making.
Qualifications
5+ years of professional experience in full-stack development.
Proficiency in C#, Angular, SQL, and Azure.
Strong understanding of object-oriented programming and modern design patterns.
Experience building RESTful APIs and integrating third-party services.
Familiarity with Agile development methodologies.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills, with the ability to mentor and guide others.
Preferred Skills
Experience with DevOps practices, CI/CD pipelines, and infrastructure-as-code.
Knowledge of microservices architecture and containerization (e.g., Docker, Kubernetes).
Understanding of security best practices in web and cloud development.
Sr. Full Stack Developer
Boston, MA jobs
Senior Developer (Full Stack)
100% Remote
6-month contract (potential for extension)
As the Senior Developer (Full Stack), you will be responsible for modernizing legacy applications and developing cloud-native solutions for the Executive Office of Education (EOE). You will design and maintain both front-end and back-end components using Node.js, Angular, and TypeScript, while supporting older Java and .NET systems during their transition. This role involves collaborating with cross-functional teams to analyze existing systems, build scalable APIs, and implement secure, high-performing applications in an AWS environment. You will also mentor junior developers and ensure best practices in architecture, testing, and documentation.
Minimum Qualifications:
Strong experience in TypeScript, JavaScript, HTML, and CSS
Proficiency with Angular for front-end development and Node.js/Express.js for back-end services
Experience with Java and/or .NET for maintaining and refactoring legacy systems
Familiarity with databases such as Postgres, Snowflake, Oracle, and SQL Server
Knowledge of AWS services and cloud-native development
Nice to Have:
Exposure to CI/CD pipelines and DevOps tools (e.g., GitHub Actions, Jenkins)
Experience with ORM tools like Sequelize or Hibernate
Responsibilities:
Design, develop, and maintain full-stack web applications using Node.js and Angular
Assess and refactor legacy applications into modern architectures
Build RESTful APIs and integrate with internal/external services
Collaborate with teams and mentor junior developers on modern frameworks
Write unit/integration tests and perform code reviews
What's In It For You:
Weekly Paychecks
Opportunity to lead modernization initiatives in a fully AWS-implemented environment
Collaborative team culture with cutting-edge technologies
Senior Software Engineer
Columbus, OH jobs
Job Title: Spark 3 Developer
Who We Are:
Vernovis is a Total Talent Solutions company specializing in Technology, Cybersecurity, Finance & Accounting functions. At Vernovis, we help professionals achieve their career goals by matching them with innovative projects and dynamic contract opportunities across Ohio and the Midwest.
Client Overview:
Vernovis is partnering with a leading organization in scientific data management and innovation to modernize its big data platform. This initiative involves transitioning legacy systems, such as Cascading, Hadoop, and MapReduce, to Spark 3, optimizing for scalability and efficiency. As part of this well-established organization, your work will contribute to transforming how big data environments are managed and processed.
What You'll Do:
Legacy Workflow Migration: Lead the conversion of existing Cascading, Hadoop, and MapReduce workflows to Spark 3, ensuring seamless transitions.
Performance Optimization: Utilize Spark 3 features like Adaptive Query Execution (AQE) and Dynamic Partition Pruning to optimize data pipelines.
Collaboration: Work closely with infrastructure teams and stakeholders to ensure alignment with modernization initiatives.
Big Data Ecosystem Integration: Develop solutions that integrate with platforms like Hadoop, Hive, Kafka, and cloud environments (AWS, Azure).
Support Modernization Goals: Contribute to key organizational initiatives focused on next-generation data optimization and modernization.
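The Spark 3 features named above (AQE, Dynamic Partition Pruning) are switched on through Spark SQL configuration rather than code changes. A minimal sketch of the settings involved, with property names per the Spark 3.x configuration reference; treat this as an illustration, not the client's actual tuning profile:

```python
# Spark 3 SQL settings commonly enabled when migrating legacy
# Cascading/MapReduce workloads. Values are typical starting points,
# not a tuned production profile.
spark3_migration_conf = {
    # Adaptive Query Execution: re-optimize the physical plan at runtime
    "spark.sql.adaptive.enabled": "true",
    # Merge small shuffle partitions once AQE has runtime statistics
    "spark.sql.adaptive.coalescePartitions.enabled": "true",
    # Dynamic Partition Pruning: skip partitions based on join-side filters
    "spark.sql.optimizer.dynamicPartitionPruning.enabled": "true",
}

# These pairs would be applied via SparkSession.builder.config(...), e.g.:
#   builder = SparkSession.builder.appName("migration")
#   for key, value in spark3_migration_conf.items():
#       builder = builder.config(key, value)
print(sorted(spark3_migration_conf))
```

AQE and coalescing mostly help shuffle-heavy stages inherited from MapReduce-style jobs, while DPP pays off on partitioned fact/dimension joins.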
What Experience You'll Have:
Spark 3 Expertise: 3+ years of experience with Apache Spark, including Spark 3.x development and optimization.
Migration Experience: Proven experience transitioning from Cascading, Hadoop, or MapReduce to Spark 3.
Programming Skills: Proficiency in Scala, Python, or Java.
Big Data Ecosystem: Strong knowledge of Hadoop, Hive, and Kafka.
Performance Tuning: Advanced skills in profiling, troubleshooting, and optimizing Spark jobs.
Cloud Platforms: Familiarity with AWS (EMR, Glue, S3) or Azure (Databricks, Data Lake).
The Vernovis Difference:
Vernovis offers Health, Dental, Vision, Voluntary Short- & Long-Term Disability, Voluntary Life Insurance, and 401(k).
Vernovis does not accept inquiries from Corp to Corp recruiting companies. Applicants must be currently authorized to work in the United States on a full-time basis and not violate any immigration or discrimination laws.
Vernovis provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.
Software Engineer
Columbus, OH jobs
Software Engineer - Internal Product Team
Division: Impower Solutions (Agility Partners)
About Impower
Impower is the technology consulting division of Agility Partners, specializing in automation & AI, data engineering & analytics, software engineering, and digital transformation. We deliver high-impact solutions with a focus on innovation, efficiency, and client satisfaction.
Role Overview
We're building a high-performing internal product team to scale our proprietary tech stack. As a Software Engineer, you'll contribute to the development of internal platforms using modern technologies. You'll collaborate with product and engineering peers to deliver scalable, maintainable solutions that drive Impower's consulting capabilities.
Key Responsibilities
Development & Implementation
Build scalable APIs using TypeScript and Bun for high-performance backend services.
Develop intelligent workflows and AI agents leveraging Temporal, enabling robust orchestration and automation.
Move and transform data using Python and DBT, supporting analytics and operational pipelines.
Contribute to full-stack development of internal websites using Next.js (frontend), Elysia (API layer), and Azure SQL Server (database).
Implement CI/CD pipelines using GitHub Actions, with a focus on automated testing, secure deployments, and environment consistency.
Deploy and manage solutions in Azure, including provisioning and maintaining infrastructure components such as App Services, Azure Functions, Storage Accounts, and SQL databases.
Monitor and troubleshoot production systems using SigNoz, ensuring observability across services with metrics, traces, and logs to maintain performance and reliability.
Write clean, testable code and contribute to unit, integration, and end-to-end test suites.
Collaborate in code reviews, sprint planning, and backlog grooming to ensure alignment and quality across the team.
Innovation & Strategy
Stay current with emerging technologies and frameworks, especially in the areas of agentic AI, orchestration, and scalable infrastructure.
Propose improvements to internal platforms based on performance metrics, developer experience, and business needs.
Contribute to technical discussions around design patterns, tooling, and long-term platform evolution.
Help evaluate open-source tools and third-party services that could accelerate development or improve reliability.
Delivery & Collaboration
Participate in agile ceremonies including sprint planning, standups, and retrospectives.
Collaborate closely with product managers, designers, and other engineers to translate requirements into working solutions.
Communicate progress, blockers, and technical decisions clearly and proactively.
Take ownership of assigned features and enhancements from ideation through deployment and support.
Leadership
Demonstrate ownership and accountability in your work, contributing to a culture of reliability and continuous improvement.
Share knowledge through documentation, pairing, and informal mentoring of junior team members.
Engage in code reviews to uphold quality standards and foster team learning.
Actively participate in team discussions and help shape a collaborative, inclusive engineering culture.
Qualifications
2-4 years of experience in software engineering, ideally in a product-focused or platform engineering environment.
Proficiency in TypeScript and Python, with hands-on experience in full-stack development.
Experience building APIs and backend services using Bun, Elysia, or similar high-performance frameworks (e.g., Fastify, Express, Flask).
Familiarity with Next.js for frontend development and Azure SQL Server for relational data storage.
Experience with workflow orchestration tools such as Temporal, Airflow, or Prefect, especially for building intelligent agents or automation pipelines.
Proficiency in data transformation using DBT, with a solid understanding of analytics engineering principles.
Strong understanding of CI/CD pipelines using GitHub Actions, including automated testing, environment management, and secure deployments.
Exposure to observability platforms such as SigNoz, Grafana, Prometheus, or OpenTelemetry, with a focus on metrics, tracing, and log aggregation.
Solid grasp of software testing practices and version control (Git).
Excellent communication skills, a collaborative mindset, and a willingness to learn and grow within a team.
Why Join Us?
Build impactful internal products that shape the future of Impower's consulting capabilities.
Work with cutting-edge technologies in a collaborative, innovation-driven environment.
Enjoy autonomy, growth opportunities, and a culture that values excellence and people.
ETL/ELT Data Engineer (Secret Clearance) - Hybrid
Austin, TX jobs
LaunchCode is recruiting for a Software Data Engineer to work at one of our partner companies!
Details:
Full-Time W2, Salary
Immediate opening
Hybrid - Austin, TX (onsite 1-2 times a week)
Pay $85K-$120K
Minimum Experience: 4 years
Security Clearance: Active DoD Secret Clearance
Disclaimer: Please note that we are unable to provide work authorization or sponsorship for this role, now or in the future. Candidates requiring current or future sponsorship will not be considered.
Job description
Job Summary
Founded in 2017, this Washington, DC-based software solutions provider specializes in delivering mission-critical and enterprise solutions to the federal government. Originating from the Department of Defense's software factory ecosystem, the company focuses on Command and Control, Cybersecurity, Space, Geospatial, and Modeling & Simulation. The company leverages commercial technology to enhance the capabilities of the DoD, IC, and their end users, with innovation driven by its innovation centers. The company has a presence in Boston, MA; Colorado Springs, CO; San Antonio, TX; and St. Louis, MO.
Why the company?
Environment of Autonomy
Innovative Commercial Approach
People over process
We are seeking a passionate Software Data Engineer to support the Army Software Factory (ASWF) in aligning with DoDM 8140.03 Cyber Workforce requirements and broader compliance mandates. The Army Software Factory, a first-of-its-kind initiative under Army Futures Command, is revolutionizing the Army's approach to software development by training and employing self-sustaining technical talent from across the military and civilian workforce. Guided by the motto "By Soldiers, For Soldiers," ASWF equips service members to develop mission-critical software solutions independently, which is especially vital for future contested environments where traditional technical support may be unavailable. This initiative also serves as a strategic prototype to modernize legacy IT processes and build technical readiness across the force to ensure battlefield dominance in the digital age.
Required Skills:
Active DoD Secret Clearance (Required)
4+ years of experience in data science, data engineering, or similar roles.
Expertise in designing, building, and maintaining scalable ETL/ELT pipelines using tools and languages such as Python, SQL, Apache Spark, or Airflow.
Strong proficiency in working with relational and NoSQL databases, including experience with database design, optimization, and query performance tuning (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
Demonstrable experience with cloud data platforms and services (e.g., AWS Redshift, S3, Glue, Athena; Azure Data Lake, Data Factory, Synapse; Google BigQuery, Cloud Storage, Dataflow).
Solid understanding of data warehousing concepts (e.g., Kimball, Inmon methodologies) and experience with data modeling for analytical purposes.
Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Java, Scala) for data manipulation, scripting, and automation.
CompTIA Security+ Certified or otherwise DoDM 8140.03 (formerly DoD 8570.01-M) compliant.
Nice to Have:
Familiarity with SBIR technologies and transformative platform shifts
Experience working in Agile or DevSecOps environments
2+ years of experience interfacing with Platform Engineers and data visibility teams, managing AWS resources, and performing GitLab administration
Senior Data Engineer
Nashville, TN jobs
Concert is a software and managed services company that promotes health by providing the digital infrastructure for reliable and efficient management of laboratory testing and precision medicine. We are wholeheartedly dedicated to enhancing the transparency and efficiency of health care. Our customers include health plans, provider systems, laboratories, and other important stakeholders. We are a growing organization driven by smart, creative people to help advance precision medicine and health care. Learn more about us at ***************
YOUR ROLE
Concert is seeking a skilled Senior Data Engineer to join our team. Your role will be pivotal in designing, developing, and maintaining our data infrastructure and pipelines, ensuring robust, scalable, and efficient data solutions. You will work closely with data scientists, analysts, and other engineers to support our mission of automating the application of clinical policy and payment through data-driven insights.
You will be joining an innovative, energetic, passionate team who will help you grow and build skills at the intersection of diagnostics, information technology and evidence-based clinical care.
As a Senior Data Engineer you will:
Design, develop, and maintain scalable and efficient data pipelines using AWS services such as Redshift, S3, Lambda, ECS, Step Functions, and Kinesis Data Streams.
Implement and manage data warehousing solutions, primarily with Redshift, and optimize existing data models for performance and scalability.
Utilize DBT (data build tool) for data transformation and modeling, ensuring data quality and consistency.
Develop and maintain ETL/ELT processes to ingest, process, and store large datasets from various sources.
Work with SageMaker for machine learning data preparation and integration.
Ensure data security, privacy, and compliance with industry regulations.
Collaborate with data scientists and analysts to understand data requirements and deliver solutions that meet their needs.
Monitor and troubleshoot data pipelines, identifying and resolving issues promptly.
Implement best practices for data engineering, including code reviews, testing, and automation.
Mentor junior data engineers and share knowledge on data engineering best practices.
Stay up-to-date with the latest advancements in data engineering, AWS services, and related technologies.
After 3 months on the job you will have:
Developed a strong understanding of Concert's data engineering infrastructure
Learned the business domain and how it maps to the information architecture
Made material contributions towards existing key results
After 6 months you will have:
Led a major initiative
Become the first point of contact when issues related to the data warehouse are identified
After 12 months you will have:
Taken responsibility for the long term direction of the data engineering infrastructure
Proposed and executed key results with an understanding of the business strategy
Communicated the business value of major technical initiatives to key non-technical business stakeholders
WHAT LEADS TO SUCCESS
Self-Motivated: A team player with a positive attitude and a proactive approach to problem-solving.
Executes Well: You are biased to action and get things done. You acknowledge unknowns and recover from setbacks well.
Comfort with Ambiguity: You aren't afraid of uncertainty or blazing new trails; you care about building toward a future that is different from today.
Technical Bravery: You are comfortable with new technologies and eager to dive in to understand data both in its raw and its processed states.
Mission-Focused: You are personally motivated to drive more affordable, equitable, and effective integration of genomic technologies into clinical care.
Effective Communication: You build rapport and strong working relationships with senior leaders and peers, and use those relationships to drive the company forward.
RELEVANT SKILLS & EXPERIENCE
Minimum of 4 years' experience working as a data engineer
Bachelor's degree in software or data engineering or comparable technical certification / experience
Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
Proven experience in designing and implementing data solutions on AWS, including Redshift, S3, Lambda, ECS, and Step Functions
Strong understanding of data warehousing principles and best practices
Experience with DBT for data transformation and modeling.
Proficiency in SQL and at least one programming language (e.g., Python, Scala)
Familiarity or experience with the following tools/concepts is a plus: BI tools such as Metabase; healthcare claims data, security requirements, and HIPAA compliance; Kimball's dimensional modeling techniques; ZeroETL and Kinesis Data Streams
COMPENSATION
Concert is seeking top talent and offers competitive compensation based on skills and experience. Compensation will be commensurate with experience. This position will report to the VP of Engineering.
LOCATION
Concert is based in Nashville, Tennessee and supports a remote work environment.
For further questions, please contact: ******************.
Senior Data Engineer
Charlotte, NC jobs
**NO 3rd Party vendor candidates or sponsorship**
Role Title: Senior Data Engineer
Client: Global construction and development company
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote based in ET or CT time zones
Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.
Key Responsibilities
Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads).
Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling.
Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements.
Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.
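The API-heavy ingestion bullet above (auth, throttling, retries, error handling) usually reduces to a retry-with-exponential-backoff wrapper around each page request. A stdlib-only sketch under stated assumptions; the exception class, endpoint behavior, and helper names are all hypothetical illustrations, not the client's framework:

```python
import time
import random

class TransientAPIError(Exception):
    """Raised for retryable failures (e.g., HTTP 429/5xx, timeouts)."""

def call_with_backoff(fetch, max_attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fetch() and retry transient failures with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except TransientAPIError:
            if attempt == max_attempts:
                raise
            # full jitter: delay drawn from [0, base * 2^(attempt-1)]
            sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

# Simulated endpoint that fails twice (rate limited) and then succeeds.
attempts = {"n": 0}
def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientAPIError("429 Too Many Requests")
    return {"rows": [1, 2, 3], "next_page": None}

result = call_with_backoff(flaky_fetch, sleep=lambda s: None)  # skip real sleeping in the demo
print(result["rows"], attempts["n"])  # [1, 2, 3] 3
```

Pagination then becomes a loop that feeds each page's request through the same wrapper until the source reports no next page, which keeps the retry and rate-limit policy in one place.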
Requirements:
Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering with strong focus on Azure Cloud.
Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
Strong hands-on experience with Synapse (pipelines, SQL pools and/or Spark), and modern cloud data platform patterns.
Advanced PySpark/Spark experience for complex transformations and performance optimization.
Heavy experience with API-based integrations (building ingestion frameworks, handling auth, pagination, retries, rate limits, and resiliency).
Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
Strong understanding of cloud data architectures including Data Lake, Lakehouse, and Data Warehouse patterns.
Preferred Skills
Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
Junior Data Engineer
Columbus, OH jobs
Contract-to-Hire
Columbus, OH (Hybrid)
Our healthcare services client is looking for an entry-level Data Engineer to join their team. You will play a pivotal role in maintaining and improving inventory and logistics management programs. Your day-to-day work will include leveraging machine learning and open-source technologies to drive improvements in data processes.
Job Responsibilities
Automate key processes and enhance data quality
Improve injection processes and enhance machine learning capabilities
Manage substitutions and allocations to streamline product ordering
Work on logistics-related data engineering tasks
Build and maintain ML models for predictive analytics
Interface with various customer systems
Collaborate on integrating AI models into customer service
Qualifications
Bachelor's degree in related field
0-2 years of relevant experience
Proficiency in SQL and Python
Understanding of GCP/BigQuery (or any cloud experience, basic certifications a plus).
Knowledge of data science concepts.
Business acumen and understanding (corporate experience or internship preferred).
Familiarity with Tableau
Strong analytical skills
Aptitude for collaboration and knowledge sharing
Ability to present confidently in front of leaders
Why Should You Apply?
You will be part of custom technical training and professional development through our Elevate Program!
Start your career with a Fortune 15 company!
Access to cutting-edge technologies
Opportunity for career growth
Eight Eleven Group provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, national origin, age, sex, citizenship, disability, genetic information, gender, sexual orientation, gender identity, marital status, amnesty or status as a covered veteran in accordance with applicable federal, state, and local laws.
Mainframe Developer
Cincinnati, OH jobs
The Planet Group has partnered with a Cincinnati company to locate a Mainframe Software Developer for a contract-to-hire role.
The successful candidate should have expertise in developing and maintaining enterprise-level systems within the Annuities and/or Life Insurance domains. This role requires strong experience working with DXC's PerformancePlus platform, along with a robust understanding of mainframe technologies and related tools. You'll be responsible for the full software development lifecycle, from analyzing requirements to supporting production applications.
Key Responsibilities:
Analyze business requirements and design complex mainframe-based systems.
Develop applications in line with design specifications and functional requirements.
Conduct unit testing to ensure software quality and reliability.
Provide ongoing maintenance and support for existing applications.
Create and maintain comprehensive system documentation.
Must-Have Qualifications:
10+ years of experience in mainframe software development.
Direct experience in Annuities and/or Life Insurance is required.
Expert-level proficiency in:
COBOL
DB2
SQL
CICS
File Aid
XPEDITER
Changeman
3+ years of hands-on experience with DXC's PerformancePlus, including:
Understanding of PerformancePlus architecture and table structures
Familiarity with Agent setup, including Hierarchies, Licensing, and Appointments
Experience with BonusWorkbench
In-depth knowledge of the commission calculator engine
Prior work with DTCC or NSCC-COM
Willingness to participate in after-hours production support on a rotating schedule
Interested candidates can apply by clicking on the link. Please note, because this is a contract-to-hire role, we can only accept candidates who are legally authorized to work in the US without sponsorship.
At The Planet Group, we connect Technology experts with opportunities that match their skills, goals, and ambition. From fast-moving startups to global enterprises, we partner with top organizations across industries, giving you access to roles where your contributions make a difference. Explore flexible options including contract, direct hire, and contract-to-hire, all supported by a team that puts people first.
Senior Devops Engineer
Richmond, VA jobs
Exiger Product and Technology is an experienced team of software professionals with a wide range of specialties and interests. We are building cognitive-computing based technology solutions to help organizations worldwide prevent compliance breaches, respond to risk, remediate major issues and monitor ongoing business activities.
We are building out environments that will pass government certification. You will be working with a growing team of developers, data scientists and QA engineers on maintaining our existing services and infrastructure, while building the next generation of our engineering stack.
Exiger is seeking a motivated, self-driven Infrastructure Engineer who builds microservices and data infrastructure, works within a continuous integration and delivery pipeline, and embraces test automation as a discipline.
Key responsibilities
Development and maintenance of infrastructure as code (IaC) base
Maintain/deploy Exiger microservices and dependent applications through IaC
Advocate, coordinate and collaborate on internal infrastructure upgrades and maintenance
Utilize logging/monitoring/alerting tools to maintain and continuously improve system health with multiple AWS deployments
Develop/manage package deployments of on-prem and cloud instances of Ion Channel
Development and improvement of CI/CD and DevOps workflows using Travis CI, Docker, and AWS
Use GitHub for code reviews of team member pull requests
Knowledge and skills
Experience with cloud hosting platforms (AWS, Google, Azure)
Experience with containerization (Docker)
Programming languages (Python, Bash, Golang)
Knowledge of database tools and infrastructure (PostgreSQL, MySQL, SQL)
Knowledge of cloud native and DevOps best practices
Experience with multi-account application deployment
Experience with logging/monitoring (Grafana, Kibana, ELK, Splunk)
We're an amazing place to work. Why?
Discretionary Time Off for all employees, with no maximum limits on time off
Industry leading health, vision, and dental benefits
Competitive compensation package
16 weeks of fully paid parental leave
Flexible, hybrid approach to working from home and in the office where applicable
Focus on wellness and employee health through stipends and dedicated wellness programming
Purposeful career development programs with reimbursement provided for educational certifications
Exiger is revolutionizing the way corporations, government agencies and banks manage risk and compliance with a combination of technology-enabled and SaaS solutions. In recognition of the growing volume and complexity of data and regulation, Exiger is committed to creating a more sustainable risk and compliance environment through its holistic and innovative approach to problem solving. Exiger's mission to make the world a safer place to do business drives its award-winning AI technology platform, DDIQ, built to anticipate the market's most pressing needs related to evolving ESG, cyber, financial crime, third-party and supply chain risk. Exiger has won 30+ AI, RegTech and Supply Chain partner awards.
Exiger's core values are courage, excellence, expertise, innovation, integrity, teamwork and trust.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
Exiger's hybrid work policy is periodically reviewed and adjusted to align with evolving business needs.
Senior DevOps Engineer II
Remote
About Extend:
Extend is revolutionizing the post-purchase experience for retailers and their customers by providing merchants with AI-driven solutions that enhance customer satisfaction and drive revenue growth. Our comprehensive platform offers automated customer service handling, seamless returns/exchange management, end-to-end automated fulfillment, and product protection and shipping protection alongside Extend's best-in-class fraud detection. By integrating leading-edge technology with exceptional customer service, Extend empowers businesses to build trust and loyalty among consumers while reducing costs and increasing profits.
Today, Extend works with more than 1,000 leading merchant partners across industries, including fashion/apparel, cosmetics, furniture, jewelry, consumer electronics, auto parts, sports and fitness, and much more. Extend is backed by some of the most prominent technology investors in the industry, and our headquarters is in downtown San Francisco.
What You'll Do:
Create reusable infrastructure components using infrastructure-as-code technologies to allow teams to manage their own infrastructure needs
Mature the CI/CD pipeline to enable teams to scale in a self-service way to help reduce deployment cost and time
Design and implement DevOps tooling that accelerates AI innovation and empowers teams to build, deploy, and monitor intelligent agentic systems
Collaborate with product engineering teams to design scalable infrastructure and deployment patterns for customer-facing solutions
Act as an expert in AWS technologies by providing guidance on appropriate AWS solutions to address business needs
Develop, refine, and drive adherence to non-functional requirements for new product development in areas of security, reliability, and scalability
Mentor and provide guidance to new engineers on best practices and designs related to CI / CD or AWS technologies
Lead dynamic project efforts related to improving or enabling new technologies throughout Extend
What We Are Looking For:
6+ years of experience in a DevOps engineering role
3+ years experience with CDK, AWS CloudFormation, or other infrastructure-as-code systems (like Terraform)
3+ years experience or certification in AWS serverless technologies (API Gateway, Lambda, S3, DynamoDB)
Experience developing and maintaining scalable backend systems and APIs using modern frameworks and cloud infrastructure
Proficiency with AI technologies and agentic workflows such as AWS Bedrock, Mastra, LangChain (or similar technologies)
Knowledge of best practices around security roles/policies for AWS IAM
Experience working with monitoring services (Coralogix, CloudWatch, DataDog, OpenTelemetry, Grafana)
Experience with CI/CD Tooling such as GitHub Actions, CodeBuild, or others
Ability to perform in a high energy environment with dynamic job responsibilities and priorities
Nice to Haves:
Experience with the AWS Cloud Development Kit (CDK)
Experience with Typescript
Expected Pay Range: $170,000 - $198,000 per year salaried*
* The target base salary range for this position is listed above. Individual salaries are determined based on a number of factors including, but not limited to, job-related knowledge, skills and experience.
Life at Extend:
Working with a great team from diverse backgrounds in a collaborative and supportive environment.
Competitive salary based on experience, with full medical, dental, and vision benefits.
Stock in an early-stage startup growing quickly.
Generous, flexible paid time off policy.
401(k) with Financial Guidance from Morgan Stanley.
Extend CCPA HR Notice
DevOps Engineer (hybrid)
Ithaca, NY jobs
Job Description
DevOps Engineer
Ursa Space Systems is building ground-breaking solutions to deliver global intelligence to organizations around the world. Through our SAR/EO/RF satellite network, and data fusion expertise, Ursa Space detects real-time changes in the physical world to expand transparency. Our subscription and custom services enable you to access satellite imagery and analytic results with no geographic, political or weather-related limitations.
Job Summary
Ursa is looking for a skilled DevOps Engineer to join our growing team! There is a lot of cross-pollination here at Ursa Space. You will have the opportunity to work with a diverse team of highly-skilled engineers, working on a variety of projects. There are no egos here - we are looking for the best ideas and are eager to hear your input!
This position will work with Amazon Web Services (AWS) cloud infrastructure, handling design, implementation, and maintenance. You will also work closely with software engineers to improve the platform and implement analytic solutions.
The ideal candidate will have experience with AWS services and the use of Cloud Development Kit (CDK) and Terraform to manage resources with code. They will also have a good understanding of the containerization and orchestration of workloads.
The DevOps Engineer will report directly to the Senior IT Systems Engineer. This position is exempt and not eligible for overtime pay. Ideal candidates would be in or around Ithaca, NY, where our company headquarters are located.
Job Responsibilities
Administer and maintain AWS infrastructure in collaboration with senior team members
Write, deploy and maintain scalable infrastructure as code
Fulfill company-wide needs for cloud resources
Provide AWS support to engineers, scientists, and analysts
Contribute to the development of IaC, CI/CD, and other engineering standards
Assist other engineers with architecting cloud solutions
Monitor platforms and troubleshoot technical issues
Assist with migration of legacy systems to redefined architectures
Coordinate with Ursa customers and vendors to facilitate exchange of data
Automate repetitive tasks where possible
Learn and stay updated on new technologies, products, and releases
All other duties as assigned
Requirements
B.S. in Computer Science and/or a related field, or equivalent work experience
3-5 years of experience with AWS solutions architecture, administration, and security best practices
Experience with Infrastructure as Code (CDK, Terraform, CloudFormation, or similar)
Experience with building and maintaining CI/CD pipelines (CodePipeline/CodeBuild, GitLab, GitHub Actions, etc.)
Intermediate Python programming skills with understanding of OOP principles
Experience with containerization technologies (Docker, Kubernetes/EKS)
Working knowledge of AWS managed services for networking, systems administration, monitoring, and security
Strong problem-solving and troubleshooting skills
Excellent communication and collaboration abilities
Preferred Skills
AWS Associate certifications (Solutions Architect, SysOps Administrator, Developer)
Kubernetes tools (Helm, autoscalers, Argo Workflows, etc.)
DataDog, Prometheus/Grafana, and other observability tools
SQL (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Redis, etc.)
ArcGIS, STAC, and other industry domain tools
Experience with multiple IaC frameworks
AWS Professional or Specialty certifications
AWS GovCloud experience
Familiarity with compliance standards (NIST, CMMC, etc.)
Exposure to geospatial/satellite imagery analysis
Prior start-up experience
Prior platform engineering experience
Located in or around Company Headquarters in Ithaca, NY
Compensation Range
$100,000 - $130,000
Location
We are headquartered in Ithaca, NY and have a remote workforce in other locations throughout the United States.
Please note: applications without a relevant cover letter or a cover letter written with AI will not be considered. In your cover letter, we would like to hear your personal voice and learn about your sincere interest in Ursa Space Systems.
Benefits and Perks
Competitive Compensation
Discretionary PTO & Flexible Scheduling
Stock Options
401(k) Match
Medical, Dental and Vision Coverage for you and your dependents
FSA & HSA Plans
Employer-paid Life Insurance
Employer-paid LTD and STD for Parental and Family Care
11 Paid Holidays
Employee Resource Groups
Educational Assistance Program
Professional Development Opportunities
And more…
Company Values
Use the team
Figure it out and own it
Aim for elegant simplicity
Empower diversity & inclusivity
Do the right thing
Be scrappy
Ursa Space Systems, Inc is an equal opportunity employer and does not discriminate on the basis of any legally protected status or characteristic. Protected veterans and individuals with disabilities are encouraged to apply.
DevOps Engineer, Global
Remote
Vantage Data Centers powers, cools, protects and connects the technology of the world's well-known hyperscalers, cloud providers and large enterprises. Developing and operating across North America, EMEA and Asia Pacific, Vantage has evolved data center design in innovative ways to deliver dramatic gains in reliability, efficiency and sustainability in flexible environments that can scale as quickly as the market demands.
IT Department
The IT Department at Vantage Data Centers plays a pivotal role in driving innovation and efficiency across our technology landscape. We not only design, build, and maintain the critical networking and server infrastructure that powers each data center, but we also collaborate closely with technology partners to stay ahead of industry advancements and make informed decisions. Our team is dedicated to developing and implementing IT standards, automating processes, and supporting the broader organization in their automation journey. From hands-on infrastructure design to adopting cutting-edge automation tools and best practices, we strive to deliver high-performance, cost-effective solutions that enable rapid growth and seamless operations for Vantage Data Centers.
IT Standards Team
Our team is responsible for helping other technology teams on their automation journey, and for developing IT standards that support the IT organization. We embrace many approaches and technologies to speed up the delivery and operations of our Data Centers. From Zero-Touch provisioning of network equipment to the deployment of applications on containerization platforms, we apply our software and operation industry expertise everywhere we can. We question the status-quo and are not afraid to suggest new ways to do things. Individual contributors are encouraged to speak up, propose new insights and take an active role in the definition of our roadmap.
Position Overview
This role can be based remotely in the US.
Our team automates and standardizes the installation, configuration, deployment, and operation of Vantage Data Centers technology platforms. To support our growth, we are looking for an experienced Cloud Engineer to join our team and help us deliver more automation to our internal customers!
You will be using tools such as Ansible, Terraform, and Aria to implement IaC practices. You will write declarative code, create playbooks, and orchestrate workflows that will allow people to save time and standardize our operations. You will contribute to administration and architecture for our hybrid cloud environment. You will suggest improvements to our current workflows, bring a positive, "there is a better way" attitude, and be a key contributor to our technology automation platform!
Essential Job Functions
Reduce the time it takes to provision infrastructure
Create runbooks, scripts, and unit tests to reduce manual labor on repetitive tasks
Work with the other IT teams to resolve how we can improve their existing workflows with the use of automation
Work across the organization to advise and influence improved cloud adoption and governance
Maintain, improve, and support the tools and platforms that are managed by the team
Duties
Author and maintain Puppet roles and profiles to automate the configuration of our Windows and Linux servers
Influence reliable, efficient, scalable, enterprise-grade solutions across the organization
Share your knowledge and expertise with peers by documenting your work and leading and organizing brown bag sessions
Implement standard methodologies for systems automation and platform operations
Ensure the availability and security of the tools and platforms supported by the team
Job Requirements
Bachelor of Science degree in computer science, software development, information technology or equivalent experience required
7 years of hands-on experience in a DevOps, Cloud, or Platform Engineer role is required; 8 to 12 years preferred
Strong, hands-on experience developing and administering tools such as Puppet, Ansible, Chef, Terraform
Consistent track record of building and maintaining CI/CD pipelines with tools like Jenkins, GitHub Actions, GitLab CI
Intermediate to advanced understanding of programming and scripting languages such as Ruby, Java, Python, Bash, .NET, and PowerShell, with the advanced ability to author and automate code quality standards.
Ability to write idempotent declarative configuration management using Puppet, supported by a clear understanding of operating systems and applications present in the Vantage enterprise.
Advanced understanding of secure coding practices, compliance as code, and software attestation and signing.
Advanced understanding of Terraform and ARM.
Ability to use Chocolatey to build custom NuGet packages
Intermediate to advanced knowledge of VMware VCF 9 components, vCenter, and vSphere, with the ability to codify Content Library, Aria Automation, Tanzu, and NSX.
Advanced understanding of Azure networking, storage, compute, and serverless.
Advanced knowledge of the ELK stack, Kubernetes, and OpenShift
Intermediate knowledge of FluxCD, Helm
Intermediate knowledge of policy-as-code tools such as Checkov and Open Policy Agent
Advanced knowledge of cloud native design principles and best practices, and the ability to apply these principles to hybrid cloud environments
Experience working with ITIL practices
Familiarity with regulatory compliance frameworks such as GDPR, PCI, SOX
Excellent written and oral communication skills
Ability to work independently and as a collaborative team member
Travel is expected to be less than 5%
Data Center industry experience is strongly preferred, but not required
Physical Demands and Special Requirements
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
While performing the duties of this job, the employee is occasionally required to stand; walk; sit; use hands to handle or feel objects; reach with hands and arms; climb stairs; balance; stoop or kneel; talk and hear. The employee must occasionally lift and/or move up to 25 pounds.
Additional Details
Salary Range: $150,000 - $160,000 Base + Bonus (this range is based on CO market data and may vary in other locations)
This position is eligible for company benefits including but not limited to medical, dental, and vision coverage, life and AD&D, short and long-term disability coverage, paid time off, employee assistance, participation in a 401k program that includes company match, and many other additional voluntary benefits.
Compensation for the role will depend on a number of factors, including your qualifications, skills, competencies, and experience and may fall outside of the range shown.
#LI-Remote #LI-CM1
We operate with No Ego and No Arrogance. We work to build each other up and support one another, appreciating each other's strengths and respecting each other's weaknesses. We find joy in our work and each other, actively seeking opportunities to inject fun into what we do. Our hard and efficient work is rewarded with an above market total compensation package. We offer a comprehensive suite of health and welfare, retirement, and paid leave benefits exceeding local expectations.
Throughout the year, the advantage of being part of the Vantage team is evident with an array of benefits, recognition, training and development, and the knowledge that your contribution adds value to the company and our community.
Don't meet all the requirements? Please still apply if you think you are the right person for the position. We are always keen to speak to people who connect with our mission and values.
Vantage Data Centers is an Equal Opportunity Employer
Vantage Data Centers does not accept unsolicited resumes from search firm agencies. Fees will not be paid in the event a candidate submitted by a recruiter without an agreement in place is hired; such resumes will be deemed the sole property of Vantage Data Centers.
We'll be accepting applications for at least one week from the date this role is posted. If you're interested, we encourage you to apply soon. We're excited to find the right person and will keep the role open until we do!
Lead DevOps Engineer
Austin, TX jobs
We're hiring our first DevOps Engineer to architect and own our production infrastructure from the ground up, including designing and implementing production-grade Kubernetes clusters on AWS for our multi-tenant SaaS platform serving customers in highly regulated industries (finance, defense, others). This is a unique opportunity to be a part of foundational decisions that will shape our platform for years to come. You'll design and implement secure, compliant, scalable infrastructure on AWS while building the DevOps culture and practices for our growing engineering team. This role involves architecting secure AWS Organizations structure with multi-account strategy, implementing VPC architectures with proper network isolation, and managing IaC across multiple AWS accounts. You'll build security-first infrastructure meeting SOC 2, HIPAA, NIST 800-53, and government compliance requirements, while owning the observability stack and managing complex distributed systems including PostgreSQL, Redis, Kafka, Neo4j, and Temporal workflow orchestration.
Lead DevOps Engineer Qualifications:
We encourage candidates to apply even if they don't have 100% of the below qualifications. We believe in a holistic approach when evaluating talent for our team and post new roles often, so even if this role isn't quite right, we want to meet you!
Bachelor's degree in STEM (Science, Technology, Engineering, Math) or equivalent work experience
8+ years of production-level experience in AWS environments
5+ years of production experience running Kubernetes on AWS (EKS strongly preferred)
5+ years of experience managing production database instances (managed and self-hosted)
5+ years of experience with AWS Organizations, multi-account strategy, SCPs, and cross-account IAM patterns
5+ years of expertise designing secure VPC architectures (CIDR planning, subnet strategies, routing, VPC peering/Transit Gateway, PrivateLink)
3+ years of experience with multi-tenant SaaS architectures and isolation patterns (network, IAM, data)
3+ years of experience implementing compliance frameworks (SOC 2, HIPAA, FedRAMP, or similar)
5+ years of expertise with IaC (Terraform preferred, open to other tools)
5+ years applying a strong security mindset, including encryption key management, Zero Trust networking, and service mesh
3+ years of experience operating high-availability PostgreSQL and distributed databases in production with monitoring and troubleshooting (managed DB services as well)
Proven ability to own infrastructure decisions and work autonomously
Experience with Agile team methodologies including daily stand-ups
Strong experience in effective technical communication and problem-solving within multidisciplinary teams
Comfortable working autonomously with high ownership and accountability in making infrastructure decisions that impact enterprise customers in regulated industries
Like-to-have Qualifications:
Deep understanding of GitHub Actions and build pipelines in AWS
Experience with Temporal or Airflow workflow orchestration platforms
Graph database experience (Neo4j, Memgraph)
Search database experience (OpenSearch, Elasticsearch)
Vector database experience (Qdrant, Milvus)
AWS RDS Proxy and PostgreSQL performance tuning expertise
FIPS 140-2 compliance implementation experience
Experience with modern reverse proxies (Caddy or similar)
Contributions to open-source DevOps tooling
Experience with observability stacks (OpenTelemetry, Jaeger, Prometheus, Grafana)
Experience with secrets management solutions (Infisical, Vault, AWS Secrets Manager)
Perks:
Tremendous growth potential
Open and unlimited PTO & Sick time
Flexible hours, work from home as needed
Medical, dental, vision, & HSA options
401K
Parking Included
Onsite gym & showers
Stocked kitchen with healthy snacks, drinks, coffee, etc
Team events including happy hours, catered lunches, and other fun outings
Innovative, collaborative, and fun work environment that fosters a positive and supportive culture for growth
Full-time Employment: Ability to work as a full-time employee
Senior Devops Engineer
Jersey City, NJ jobs
Exiger Product and Technology is an experienced team of software professionals with a wide range of specialties and interests. We are building cognitive-computing based technology solutions to help organizations worldwide prevent compliance breaches, respond to risk, remediate major issues and monitor ongoing business activities.
We are building out environments that will pass government certification. You will be working with a growing team of developers, data scientists and QA engineers on maintaining our existing services and infrastructure, while building the next generation of our engineering stack.
Exiger is seeking a motivated, self-driven Infrastructure Engineer who builds microservices and data infrastructure, works within a continuous integration and delivery pipeline, and embraces test automation as a discipline.
Key responsibilities
Development and maintenance of infrastructure as code (IaC) base
Maintain/deploy Exiger microservices and dependent applications through IaC
Advocate, coordinate and collaborate on internal infrastructure upgrades and maintenance
Utilize logging/monitoring/alerting tools to maintain and continuously improve system health with multiple AWS deployments
Develop/manage package deployments of on-prem and cloud instances of Ion Channel
Development and improvement of CI/CD and DevOps workflows using Travis CI, Docker, and AWS
Use GitHub for code reviews of team member pull requests
Knowledge and skills
Experience with cloud hosting platforms (AWS, Google, Azure)
Experience with containerization (Docker)
Programming languages (Python, Bash, Golang)
Knowledge of database tools and infrastructure (PostgreSQL, MySQL, SQL)
Knowledge of cloud native and DevOps best practices
Experience with multi-account application deployment
Experience with logging/monitoring (Grafana, Kibana, ELK, Splunk)
We're an amazing place to work. Why?
Discretionary Time Off for all employees, with no maximum limits on time off
Industry leading health, vision, and dental benefits
Competitive compensation package
16 weeks of fully paid parental leave
Flexible, hybrid approach to working from home and in the office where applicable
Focus on wellness and employee health through stipends and dedicated wellness programming
Purposeful career development programs with reimbursement provided for educational certifications
Our Commitment to Diversity & Inclusion
At Exiger, we know our people are the core of our excellence. The collective sum of the individual differences, life experiences, knowledge, inventiveness, innovation, self-expression, unique capabilities, and talent that our employees invest in their work represent a significant part of not only our culture, but our reputation and what we have been able to achieve as a global organization.
We embrace and encourage our employees' differences in age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other characteristics that make our employees unique. These unique characteristics come together to form the fabric of our organization and our culture, and enhance our ability to serve our clients while helping them to solve their business issues. All qualified candidates will be considered in accordance with this policy.
At Exiger we believe we all have a responsibility to treat others with dignity and respect at all times. All employees are expected to exhibit conduct that reflects our global commitment to diversity and inclusion in any environment while acting on behalf of, and representing, Exiger.
This position is not eligible for residents of California, Colorado, or New York. Must be authorized to work in the United States. Candidates must be clearable for Secret/Top Secret US government clearance.
Exiger is revolutionizing the way corporations, government agencies and banks manage risk and compliance with a combination of technology-enabled and SaaS solutions. In recognition of the growing volume and complexity of data and regulation, Exiger is committed to creating a more sustainable risk and compliance environment through its holistic and innovative approach to problem solving. Exiger's mission to make the world a safer place to do business drives its award-winning AI technology platform, DDIQ, built to anticipate the market's most pressing needs related to evolving ESG, cyber, financial crime, third-party and supply chain risk. Exiger has won 30+ AI, RegTech and Supply Chain partner awards.
Exiger's core values are courage, excellence, expertise, innovation, integrity, teamwork and trust.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability or protected veteran status, or any other legally protected basis, in accordance with applicable law.
Exiger's hybrid work policy is periodically reviewed and adjusted to align with evolving business needs.
Senior DevOps Engineer
Cleveland, OH jobs
We Are Brookfield Properties:
At Brookfield Properties, our people are the foundation of our success.
The Brookfield Properties Corporate team brings together subject matter experts who lead with confidence, adaptability, and resourcefulness. The corporate group works across all sectors of Brookfield's real estate business - including housing, logistics, hospitality, office, and retail - collaborating with our best-in-class asset managers.
Efficiency is at the core of what we do. We seek to simplify, standardize, automate, and optimize, creating smarter solutions and maximizing value across every facet of Brookfield's business. When you join the Brookfield Properties Corporate team, you become part of a high-performing, collaborative environment where innovation and impact thrive.
We are seeking a Senior DevOps Engineer to join Brookfield Properties in Cleveland, OH or Chicago, IL. This is a deeply technical role responsible for building the foundational infrastructure powering Brookfield's most strategic initiatives: artificial intelligence, machine learning, and enterprise-scale data platforms. Reporting to the Cloud Architect, this engineer drives multi-cloud platform design (AWS and Azure), with a focus on reliability, automation, and scalability.
This position is critical in enabling Brookfield's modern data ecosystem, including secure, high-performance AI/ML platforms, enterprise data lakes, and intelligent applications. You'll partner with cloud, security, data engineering, and ML teams to deliver automated, compliant, and production-grade platforms that power analytical and operational workloads.
Role & Responsibilities:
AI & ML Platform Engineering
Design and build scalable cloud-native infrastructure for AI/ML platforms
Automate infrastructure deployment through code, and identify and execute on further automation opportunities within cloud solutions
Implement end-to-end ML pipelines
Collaborate with ML teams to tune infrastructure for performance, reproducibility, and cost-efficiency.
Data Lake & Data Warehouse Infrastructure
Support the architecture and operations of infrastructure supporting enterprise-scale data lakes and cloud data warehouses (Redshift, Snowflake)
Automate ingestion, transformation, and lifecycle policies using IaC and orchestration tools
Support big data frameworks
Ensure compliance, encryption, retention, and access control are enforced across all platforms
Multi-Cloud Infrastructure & Automation
Design modular, reusable infrastructure-as-code across AWS and Azure
Integrate security, cost optimization, DR, and compliance as code into platform blueprints
Build GitOps-based deployment pipelines for infrastructure, ML services, and platform updates
Implement policy-as-code for environment governance
Cybersecurity
Secure cloud infrastructure across AWS, Azure & GCP, embedding defense-in-depth and zero-trust principles throughout network and compute layers
Implement secure networking architectures including private connectivity, encryption in transit, and segmentation of critical workloads
Harden CI/CD pipelines with automated vulnerability scanning, secret management, and signed artifact verification
Collaborate with Security Operations to ensure cloud telemetry, threat detection, and incident response are integrated into platform monitoring
CI/CD, Monitoring & Observability
Build and manage scalable CI/CD pipelines supporting data, ML, and app workloads
Integrate security scanning, test automation, and artifact promotion
Deploy observability tooling across ML and data pipelines
Enable intelligent alerting and logging for infrastructure, pipelines, and AI services
Cross-Functional Collaboration & Strategy
Work with data engineers, ML scientists, software teams, and security to deliver cohesive platforms
Shape strategy and future-state architecture for AI enablement and MLOps
Mentor engineers on DevOps, Cloud Operations, IaC, cloud-native platforms, and data/ML workflows
Continuously improve automation maturity, developer velocity, and platform resiliency
Your Qualifications:
8+ years in DevOps, cloud platform engineering, or SRE roles in enterprise environments
Proven experience with AWS and Azure for data platforms, ML infrastructure, and DevOps automation
Hands-on with SageMaker, Azure ML, Kubeflow, MLflow or other enterprise-grade MLOps platforms
IaC expertise with Terraform; ARM/Bicep experience is a plus
Fluent in Python and/or Bash for scripting, automation, and platform integrations
Experience building and operating data lakes and data warehouses in the cloud (e.g., S3/ADLS, Redshift, Snowflake)
Strong skills in CI/CD pipelines and DevSecOps practices
Experience with monitoring and logging systems
Understanding of security, compliance, encryption, IAM, and policy-as-code in a cloud environment
Excellent collaboration and mentoring capabilities; strong communication across technical and business stakeholders
Your Career @ Brookfield Properties:
At Brookfield Properties, your career progression is important to us. As a successful employee, you will have the opportunity to grow within your team, department, and across the Brookfield organization. Our leadership teams are dedicated to the accomplishments of their employees. We also invest time into training and developing our people.
End your job search and find your career today at Brookfield Properties.
Why Brookfield Properties?
We imagine, create, and operate on a foundation of values to build a better world, together. Brookfield Properties strives to create spaces where going to work never feels routine. As a Brookfield Properties employee, you will enjoy many benefits such as 401K matching, tuition reimbursement, summer Fridays, paid maternity leave and more. There is also a generous employee referral program because we want our existing team members to help us build a more diverse workplace through their networks.
Compensation & Benefits:
Salary Type: Exempt
Pay Frequency: Bi-weekly
Annual Base Salary Range: $155,000-$170,000
Medical & Pharmacy Coverage: Yes, under Brookfield Medical Plan
Dental Coverage: Yes, under Brookfield Medical Plan
Vision Coverage: Yes, under Brookfield Medical Plan
Retirement: 401(k)
Insurance: Employer-paid life & short/long term disability
Brookfield Properties is an equal opportunity employer, and we foster an inviting, inclusive and collaborative environment.
We are proud to create a diverse environment and are proud to be an equal opportunity employer. We are grateful for your interest in this position, however, only candidates selected for pre-screening will be contacted.
#BPUS
Remote Software Engineer - Bioinformatics
San Diego, CA jobs
Kforce's client, a growing Biotechnology/Pharmaceutical technology company, is seeking a remote Software Engineer to work as part of the Research Software team in Functional Genomics. We are working directly with the Hiring Manager on this search assignment. This position is 100% remote. The company offers a competitive compensation package including base salary, annual bonus, and Stock/RSUs. The Software Engineer will work as part of a multidisciplinary and highly innovative team designing and implementing tools to support their drug discovery research efforts. Work will include Docker-based web services delivering rich, data-driven user interfaces for the interrogation of complex biological and chemical data.
Responsibilities:
* Develop tools for various research teams that help medicinal chemists, biologists and computational biologists better capture and leverage in-house and external data in their research
* Maintain legacy Java software and migrate legacy Java software into contemporary technologies (Java or Node/Typescript)
* Create new backend services to accommodate our expanding infrastructure
* Enhance institutional data access using tools like RESTful API's and modern web UI/UX frameworks
* Work directly with end-users to troubleshoot and/or design and prioritize feature enhancements to existing tools
Requirements:
* BS or MS degree in Computer Science, Computer Engineering, Biomedical, Biotechnology or related field preferred
* 3+ years of software development experience
* Proficient with Java; JavaScript or TypeScript would be ideal
* Experience with Node.js, Git, and Python is a plus
* Knowledge of Java GUI frameworks like Swing or AWT is a plus
* Knowledge of web frameworks like Jersey or Spring
* Knowledge of Maven and Gradle
* Experience with software test automation is a plus but not required
* Experience with relational databases
* Experience deploying services using AWS services and Docker is a plus
* Linux skills are preferred
* Must have excellent communication and requirement gathering skills
* An ability to be productive and successful in an intense work environment
* Able to learn new technologies quickly and jump in anywhere in our stack
* Experience with bioinformatics and genomics is a plus
Nice to Have:
* Experience with GraphQL
* Experience with NoSQL databases
* Experience with a web app component-based framework
Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.