
Latest News

How to pronounce Kubernetes so you don't get laughed at

We’ve all been there. A new tool is trending, it’s being mentioned all over the place, and you get dropped into a conversation about it. The last thing you want to do is embarrass yourself by mispronouncing something and revealing you don’t know anything about the latest thing. We want to help you avoid that. When it comes to Kubernetes, there are really only four terms you might need some help pronouncing.

How to find (and use) your GKE logs with Cloud Logging

Logs are an important part of troubleshooting and it’s critical to have them when you need them. When it comes to logging, Google Kubernetes Engine (GKE) is integrated with Google Cloud’s Logging service. But perhaps you’ve never investigated your GKE logs, or Cloud Logging? Here’s an overview of how logging works in GKE, and how to configure, find, and interact effectively with the GKE logs stored in Cloud Logging.
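For a concrete starting point (our own sketch, not taken from the article), here is a minimal example using the google-cloud-logging Python client to pull recent warning-level container logs for a GKE cluster; the project ID and cluster name are placeholders you would replace with your own.

    # Minimal sketch: assumes the google-cloud-logging package is installed and
    # Application Default Credentials are configured; "my-project" and
    # "my-cluster" are placeholder names.
    import itertools
    from google.cloud import logging as gcp_logging

    log_client = gcp_logging.Client(project="my-project")

    # GKE container logs are stored under the k8s_container resource type.
    log_filter = (
        'resource.type="k8s_container" '
        'resource.labels.cluster_name="my-cluster" '
        'severity>=WARNING'
    )

    entries = log_client.list_entries(filter_=log_filter, order_by=gcp_logging.DESCENDING)
    for entry in itertools.islice(entries, 20):
        print(entry.timestamp, entry.severity, entry.payload)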

MicroK8s now native on Windows and macOS

Windows and macOS developers can now use MicroK8s natively! Use kubectl at the Windows or Mac command line to interact with MicroK8s locally, just as you would on Linux. Clean integration into the desktop means better workflows for developing, building and testing your containerised apps. MicroK8s is a conformant upstream Kubernetes, packaged for simplicity and resilience. It provides sensible defaults and bundles the most commonly used components for at-your-fingertips access.
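If you prefer to script against the cluster rather than type kubectl commands, here is a minimal sketch (ours, not from the announcement) using the official Python kubernetes client against the same kubeconfig kubectl reads; it assumes MicroK8s is running and your kubeconfig already points at it.

    # Minimal sketch: assumes the "kubernetes" Python package is installed and
    # the active kubeconfig points at the local MicroK8s cluster.
    from kubernetes import client, config

    config.load_kube_config()  # reads the same kubeconfig that kubectl uses
    core = client.CoreV1Api()

    # Roughly the programmatic equivalent of "kubectl get nodes" and
    # "kubectl get pods --all-namespaces".
    for node in core.list_node().items:
        print("node:", node.metadata.name)

    for pod in core.list_pod_for_all_namespaces().items:
        print("pod:", pod.metadata.namespace, pod.metadata.name)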

Longhorn: Rancher's Journey from Zero to GA

When Frodo was commissioned on a seemingly straightforward journey to retrieve the One Ring, I doubt he realized the adventure and commitment he was embarking on. Rancher Labs started on a similarly daring journey almost four years ago. It didn’t take a wizard showing up uninvited at dinner to convince us of this. From the beginning of Rancher Labs, our founding team had a deep conviction about the importance of storage in the future of cloud-first computing.

Longhorn Simplifies Distributed Block Storage in Kubernetes

Today we’re announcing the general availability of Longhorn, an enterprise-grade, cloud-native container storage solution. Longhorn directly answers the need for an enterprise-grade, vendor-neutral persistent storage solution that supports the easy development of stateful applications within Kubernetes. We’ve been working on Longhorn for almost as long as we’ve been around as a company.

Introduction to KUDO: Automate Day-2 Operations (II)

In a previous article, we discussed KUDO and its benefits when you want to create or manage Operators. In this article we will focus on how to start working with KUDO: installation, using a predefined Operator, and creating your own. Installing KUDO: the first step is to install the CLI plugin so that you can manage KUDO via the CLI. Depending on your OS you can use a package manager like Brew or Krew; however, installing the binary directly is also a straightforward way to proceed.
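As a rough illustration of those first steps (not from the article), here is a minimal Python sketch that shells out to the kubectl-kudo plugin once it is installed; using the zookeeper Operator as the predefined Operator is our own assumption for the example.

    # Minimal sketch: assumes the kubectl-kudo CLI plugin is already on PATH
    # and the active kubeconfig points at a reachable cluster.
    import subprocess

    # Install the KUDO manager into the cluster.
    subprocess.run(["kubectl", "kudo", "init"], check=True)

    # Install a predefined Operator (zookeeper is used here purely as an example).
    subprocess.run(["kubectl", "kudo", "install", "zookeeper"], check=True)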

Everything You Need to Know about Kubernetes Services Networking in Your Rancher Cluster

As a leading, open-source multi-cluster orchestration platform, Rancher lets operations teams deploy, manage and secure enterprise Kubernetes. Rancher also gives users a set of CNI options to choose from, including open-source Project Calico.

CVE-2020-8555 And What We've Done About It

A security vulnerability (CVE-2020-8555) with Medium severity has been reported that affects certain versions of Kubernetes. Note that an attack using this vulnerability requires permission to create a pod or StorageClass, which would typically only be granted to internal administrators or developers within an organization. It is possible to mitigate an attack by implementing policies using Gatekeeper and by restricting access to StorageClass resources using Kubernetes access controls.
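As one illustration of the access-control side of that mitigation (a sketch of ours, not taken from the advisory), the official Python kubernetes client can create a ClusterRole that grants only read access to StorageClass objects; the role name is hypothetical, and a Gatekeeper policy would be a separate, complementary control.

    # Minimal sketch: assumes cluster-admin credentials in the active kubeconfig;
    # the role name "storageclass-read-only" is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Users bound to this role can view StorageClasses but cannot create or
    # modify them, limiting the permissions the attack relies on.
    read_only_storageclass = client.V1ClusterRole(
        metadata=client.V1ObjectMeta(name="storageclass-read-only"),
        rules=[
            client.V1PolicyRule(
                api_groups=["storage.k8s.io"],
                resources=["storageclasses"],
                verbs=["get", "list", "watch"],
            )
        ],
    )
    rbac.create_cluster_role(body=read_only_storageclass)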

Flexibly route traffic to designated Kubernetes infrastructure nodes

Ocean by Spot is a Kubernetes data plane service that provides a serverless infrastructure engine for running containers. Ocean is designed so that pods and workloads can take advantage of underlying cloud infrastructure capabilities such as compute, networking and storage, across different pricing models, lifecycles, and performance and availability levels, without having to know anything about the infrastructure itself.