
Latest News

How tech teams are making extraordinary progress working remotely during the COVID-19 shutdown

COVID-19 has confronted businesses with extreme challenges. To overcome them, companies have shifted their working patterns and embraced remote work to avoid negative impacts on employees’ health as well as on the business. COVID-19 has made the IT sector’s shift from office culture to remote work especially prominent. Tech teams that already had DevOps processes in place benefited greatly and felt little disruption when moving their workplace from the office to home.

Optimizing container workload infrastructure while respecting instance-level dependencies

Ocean by Spot continuously makes sure that all pods’ requirements are met so they can be scheduled by Kubernetes on the right nodes, with intelligent bin packing for optimized resource usage. Some use cases, however, involve instance-level dependencies. To ensure that these instance-level dependencies are met, we are pleased to share that Ocean launch specifications now support a maximum number of instances allowed to run concurrently.
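Based on that description, a launch specification capped at a fixed number of concurrent instances might look roughly like the sketch below. The field names follow Spot’s general API conventions but are illustrative, not taken from the announcement:

```json
{
  "launchSpec": {
    "oceanId": "o-12345abc",
    "name": "gpu-workloads",
    "instanceTypes": ["p3.2xlarge"],
    "resourceLimits": {
      "maxInstanceCount": 5
    }
  }
}
```

With a cap like this in place, Ocean would scale the launch specification up as usual but never run more than five of these instances at once, which is the kind of instance-level constraint the feature targets.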

Scaling OpenShift Container Resources using Ansible

Assume you have a process to determine the optimal settings for CPU and memory for each container running in your environment. Since we know resource demand is continuously changing, let’s also assume these settings are being produced periodically by this process. How can you configure Ansible to implement these settings each time you run the associated playbook for the container in question?
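One minimal way to sketch this is an Ansible task that merges externally computed settings into the container spec via the `kubernetes.core.k8s` module. The file name, deployment name, namespace, and settings format below are assumptions for illustration, not details from the article:

```yaml
# playbook.yml -- apply externally computed CPU/memory settings.
# Assumes settings.json is produced by your sizing process, e.g.
#   {"cpu": "500m", "memory": "512Mi"}
- hosts: localhost
  gather_facts: false
  vars:
    settings: "{{ lookup('file', 'settings.json') | from_json }}"
  tasks:
    - name: Merge computed resource settings into the container spec
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: my-app        # illustrative name
            namespace: my-project
          spec:
            template:
              spec:
                containers:
                  - name: my-app
                    resources:
                      requests:
                        cpu: "{{ settings.cpu }}"
                        memory: "{{ settings.memory }}"
                      limits:
                        cpu: "{{ settings.cpu }}"
                        memory: "{{ settings.memory }}"
```

Because the module merges the partial definition into the existing object, rerunning the playbook after each new settings file keeps the container’s resources in step with the sizing process.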

Getting SRE Buy-in from C-Levels for Error Budgets and SLOs, Part 3

You now have postmortems properly implemented, automated, and well-structured. You’re generating reports and data automatically based on all your incidents. Two levels of management have agreed to your SRE buy-in efforts. That is a huge accomplishment! If you’re here, you’re making great traction adopting SRE best practices, but the battle is not won yet. The hardest, but most strategically important, effort will be proving to your C-levels why they should buy into SRE.

Protecting Critical Infrastructure in Kubernetes and Rancher

“As we expand, it’s critical for our team to have both a fast and automated rollout process for each customer environment. In the end, each of our users’ access experience must be identical. Rancher is one product that’s critical to that strategy.” – Jeff Klink, VP Engineering, Cloud and Security Specialist, Sera4

Security worries keep many of us awake at night – no matter our industry.

Custom Alerts Using Prometheus in Rancher

This article is a follow-up to Custom Alerts Using Prometheus Queries. In this post, we will again demo installing Prometheus and configuring Alertmanager to send emails when alerts are fired, but in a much simpler way – using Rancher all the way through. We’ll see how easy it is to accomplish this without the dependencies used in the previous article.
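As a taste of the moving parts involved, a Prometheus alerting rule and an Alertmanager email receiver look roughly like the sketch below. The metric, threshold, and SMTP details are illustrative placeholders, not the article’s exact configuration:

```yaml
# Prometheus rule file: fire when a container stays above ~500MB
groups:
  - name: demo.rules
    rules:
      - alert: HighMemoryUsage
        expr: container_memory_usage_bytes{namespace="demo"} > 5e8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Container memory above 500MB for 5 minutes"
---
# Alertmanager config: route fired alerts to an email receiver
route:
  receiver: mail-team
receivers:
  - name: mail-team
    email_configs:
      - to: "oncall@example.com"
        from: "alertmanager@example.com"
        smarthost: "smtp.example.com:587"
        auth_username: "alertmanager@example.com"
        auth_password: "REPLACE_ME"
```

The point of the Rancher-based walkthrough is that these two pieces can be wired up from the UI instead of being hand-edited and mounted as files.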

You've launched your first Kubernetes cluster, now what?

As Kubernetes continues to grow in popularity at a staggering rate, it’s only natural that more and more people want to see what all the fuss is about. We’ve seen firsthand how excited people are to try it out since launching #KUBE100 (our Kubernetes beta) – we’ve had tremendous interest and some great feedback so far. If you’re reading this and you have no idea what #KUBE100 is, it’s the name we gave to our k3s-powered, managed Kubernetes beta program.

How to use JFrog CLI to Create, Update, Distribute & Delete Release Bundles

This blog post will show you how to use JFrog CLI with JFrog Distribution workflows. JFrog Distribution manages your software releases in a centralized platform. It enables you to securely distribute release bundles to multiple remote locations and update them as new release versions are produced. For those of you who are not yet familiar with the JFrog CLI, it is an easy-to-use client that simplifies working with JFrog solutions through a simple interface.

Monitoring Amazon EKS logs and metrics with the Elastic Stack

To achieve unified observability, we need to gather all of the logs, metrics, and application traces from an environment. Storing them in a single datastore drastically increases our visibility, allowing us to monitor other distributed environments as well. In this blog, we will walk through one way to set up observability of your Kubernetes environment using the Elastic Stack — giving your team insight into the metrics and performance of your deployment.
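As one concrete example of the log-gathering half, a Filebeat configuration run as a DaemonSet on each EKS node might look roughly like this. The Elasticsearch host and credentials are placeholders, and the article’s exact setup may differ:

```yaml
# filebeat.yml -- ship container logs from each EKS node
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    processors:
      # enrich each event with pod, namespace, and node metadata
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ["https://my-deployment.example.com:9243"]  # placeholder endpoint
  username: "elastic"
  password: "${ELASTIC_PASSWORD}"
```

Pairing this with Metricbeat’s kubernetes module for metrics, and APM agents for traces, is what puts logs, metrics, and application traces into the single datastore described above.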