
Latest News

Essential Kubernetes Extensions Explained

If you’ve done your research, you probably know that Kubernetes is only one piece of the puzzle. Production-grade deployments require many moving pieces, including logging, monitoring, governance, and more. You’ll also need some key extensions: some that Kubernetes can’t go without, and others that will make your life a lot easier. Let’s take a closer look.

Deploy Kubernetes Clusters on Microsoft Azure with Rancher

If you’re in enterprise IT, you’ve probably already looked into Microsoft’s Azure public cloud. Microsoft Azure offers excellent enterprise-grade features and tightly integrates with Office 365 and Active Directory. It also provides a managed Kubernetes service, AKS, that you can provision from the Azure portal.
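For a rough idea of what provisioning looks like outside the portal, here is a minimal Azure CLI sketch (the resource group, cluster name, and region are placeholders, not from the article):

```shell
# Create a resource group to hold the cluster (name and region are examples)
az group create --name demo-rg --location eastus

# Provision a three-node AKS cluster with the monitoring add-on enabled
az aks create \
  --resource-group demo-rg \
  --name demo-aks \
  --node-count 3 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Merge the cluster credentials into your kubeconfig so kubectl can connect
az aks get-credentials --resource-group demo-rg --name demo-aks
```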

Using Codefresh to Deploy a Windows Server Application to Google Kubernetes Engine

While Kubernetes has traditionally been used for Linux workloads, running Windows applications is an important need for many organizations with critical applications on Windows Server. Docker already supports native Windows containers, so the missing piece was Windows node support in Kubernetes clusters. Google Cloud has recognized this gap and now offers Windows support for Kubernetes clusters.
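As a minimal sketch of what this looks like from the workload side, a Deployment typically pins itself to Windows nodes with a nodeSelector on the standard kubernetes.io/os label (the names and container image below are illustrative, not from the article):

```yaml
# Sketch: a Deployment scheduled onto Windows nodes via nodeSelector.
# The name and container image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # standard label set on Windows nodes
      containers:
        - name: iis
          image: mcr.microsoft.com/windows/servercore/iis   # example Windows image
          ports:
            - containerPort: 80
```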

Making the Most of Helm 3

Building on the success of Helm 2, Helm 3 has recently been released, and the server-side component, Tiller, is finally gone! Helm works out of the box with Codefresh, so releasing your Helm 3 applications is as easy as pie. In this blog post, you will learn about viewing Helm releases and monitoring Helm environments. Still using Helm 2? Not to worry! With the click of a button in Codefresh, you can manage both Helm 2 and Helm 3 clusters simultaneously!
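For the uninitiated, the Tiller-free Helm 3 workflow is entirely client-side; here is a quick sketch (the repository, chart, and release names are just examples):

```shell
# Add a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Helm 3 requires an explicit release name; no Tiller is involved
helm install my-release bitnami/nginx

# Inspect current releases and the revision history of one release
helm ls
helm history my-release
```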

A 'No-BS' Checklist for Kubernetes

KubeCon + CloudNativeCon sponsored this post, in anticipation of KubeCon + CloudNativeCon EU in Amsterdam. If you’re new to Kubernetes and have been tasked with researching a vendor-supported platform for your enterprise, chances are you’re feeling overwhelmed. You’ll encounter a seemingly never-ending list of vendors, all promising more or less the same thing. To help you navigate the space and ask vendors the right questions, we created this no-BS Kubernetes checklist.

How to monitor OPA Gatekeeper with Prometheus metrics

In this blog post, we explain how to monitor Open Policy Agent (OPA) Gatekeeper with Prometheus metrics. If you have deployed OPA Gatekeeper, monitoring this admission controller is as important as monitoring the rest of the Kubernetes control-plane components, like the API server, kubelet, or controller-manager. If Gatekeeper breaks, Kubernetes won’t admit new pods into your cluster; and if it’s slow, your cluster’s scaling performance will degrade.
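As a minimal sketch, scraping Gatekeeper boils down to pointing Prometheus at its metrics endpoint. The namespace, pod label, and port below reflect Gatekeeper’s defaults as best we know them; verify them against your own deployment:

```yaml
# prometheus.yml fragment: scrape the Gatekeeper controller's metrics.
# The gatekeeper-system namespace, control-plane label, and port 8888
# are assumed defaults; adjust to match your deployment.
scrape_configs:
  - job_name: gatekeeper
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [gatekeeper-system]
    relabel_configs:
      # Keep only the Gatekeeper controller-manager pods
      - source_labels: [__meta_kubernetes_pod_label_control_plane]
        regex: controller-manager
        action: keep
      # Rewrite the scrape address to target the metrics port
      - source_labels: [__address__]
        regex: ([^:]+)(?::\d+)?
        replacement: $1:8888
        target_label: __address__
```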

How tech teams are making extraordinary progress while working remotely during the COVID-19 shutdown

COVID-19 has confronted businesses with extreme challenges. To overcome them, companies have shifted their working patterns and embraced remote work to protect employee health as well as the business. The move to remote work has been especially prominent in IT-sector office culture. Tech teams that already had DevOps processes in place benefited greatly and felt little disruption in shifting their workplace from the office to home.

Optimizing container workload infrastructure while respecting instance-level dependencies

Ocean by Spot continuously makes sure that all pods’ requirements are met so they can be scheduled by Kubernetes on the right nodes, with intelligent bin packing for optimized resource usage. Some use cases, however, involve instance-level dependencies. To ensure those dependencies are met, we are pleased to share that Ocean launch specifications now support a maximum number of instances allowed to run concurrently.
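The exact payload isn’t reproduced here, but a launch-specification update along these lines would cap concurrent instances. Treat the field names below as a hypothetical sketch in the spirit of the Spot API, and check the Ocean docs for the authoritative schema:

```yaml
# Hypothetical sketch of an Ocean launch specification with a cap on
# concurrent instances. The field names (resourceLimits, maxInstanceCount)
# are assumptions, not taken verbatim from the Spot documentation.
launchSpec:
  name: gpu-workloads          # example launch spec name
  resourceLimits:
    maxInstanceCount: 5        # never run more than 5 instances from this spec
```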

Scaling OpenShift Container Resources using Ansible

Assume you have a process that determines the optimal CPU and memory settings for each container running in your environment. Since resource demand changes continuously, let’s also assume these settings are produced periodically by that process. How can you configure Ansible to apply these settings each time you run the associated playbook for the container in question?
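One plausible way to wire this up is with the kubernetes.core.k8s Ansible module, patching the DeploymentConfig with the externally computed values. The sketch below uses placeholder names and values, and the article itself may take a different approach:

```yaml
# playbook.yml: apply externally computed CPU/memory settings to an
# OpenShift DeploymentConfig. Names, namespace, and values are placeholders.
- hosts: localhost
  gather_facts: false
  vars:
    # In practice these would be produced by your sizing process
    app_cpu: 500m
    app_memory: 512Mi
  tasks:
    - name: Patch container resources on the DeploymentConfig
      kubernetes.core.k8s:
        state: present
        merge_type: merge   # JSON merge patch; strategic merge is unavailable for non-core kinds
        definition:
          apiVersion: apps.openshift.io/v1
          kind: DeploymentConfig
          metadata:
            name: my-app
            namespace: my-project
          spec:
            template:
              spec:
                containers:
                  # Note: a merge patch replaces the whole containers list,
                  # so include the full container spec in real use
                  - name: my-app
                    resources:
                      limits:
                        cpu: "{{ app_cpu }}"
                        memory: "{{ app_memory }}"
```

Rerunning the playbook with fresh values (for example, ansible-playbook playbook.yml -e app_cpu=750m) keeps the cluster in step with whatever your sizing process last produced.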