
Rancher

Achieving Major Efficiencies through Migration from OpenShift to Rancher

Sometimes technology partnerships are greater than the sum of their parts. That’s the case with two Swiss companies that have come together to deliver Kubernetes solutions to their customers. VSHN is Switzerland’s leading 24/7 cloud operations partner and first Kubernetes Certified Service Provider. amazee.io is an open source container hosting provider that offers flexible solutions built for speed, security and scalability.

Competition or Coopetition in the Persistent Storage Market?

Rancher Labs’ recent launch of Longhorn was a response to DevOps teams’ distress call for a cloud-native persistent storage solution for Kubernetes. At the time, industry pundit Chris Mellor posted that the company had entered into direct competition with its partners Portworx and StorageOS. A healthy dose of coopetition may be more like it.

The Power of Open Source Software: Rancher Academy Issues 1,000th Certificate

The Rancher Academy launched on May 15, 2020. Here we are, 94 days later, and we’ve issued our 1,000th certificate to a graduate of the Certified Rancher Operator: Level 1 course. Rancher is open source software, so anyone can download it and use it. With that freedom, though, comes a cost: each of us learns only the pieces we need for our own use case. Seen through that narrow lens, the full potential of Rancher is easy to miss, and the experience varies widely from one person to the next.

August 2020 Online Meetup - Rancher 2.5 Preview - EKS Lifecycle Management

Hosted Kubernetes services from cloud providers, such as EKS, alleviate much of the operational burden of Kubernetes. However, the cluster operator is still responsible for upgrades and all the day 2 operations for the applications running on the cluster. In this meetup we'll discuss how Rancher can help manage the lifecycle of EKS clusters, and we'll walk through importing existing EKS clusters and provisioning new ones through Rancher. We will also look at how to deploy Rancher logging and monitoring onto the cluster to handle day 2 operations.
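
As a taste of the import workflow, here is a minimal sketch, assuming boto3 and AWS credentials are already configured, that lists the EKS clusters in a region along with their Kubernetes version and status: the kind of inventory you would review before registering those clusters in Rancher. The Rancher-side import itself happens through the Rancher UI or API and isn't shown here.

```python
# Minimal sketch: inventory existing EKS clusters in a region.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3


def list_eks_clusters(region="us-west-2"):
    """Print each EKS cluster's name, Kubernetes version and status."""
    eks = boto3.client("eks", region_name=region)
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        print(f"{name}: Kubernetes {cluster['version']}, status={cluster['status']}")


if __name__ == "__main__":
    list_eks_clusters()
```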

Creating Memorable Gaming Experiences with Kubernetes

If you’re a gamer, you probably know how immersed you can get in your favorite game. Or if you’re the parent or partner of a gamer, you probably know what it’s like to try to get the attention of someone who is in “gaming mode.” Creating worlds and enriching players’ lives is in Ubisoft’s DNA.

KMC - How Helm 3 and Helm Charts Create Reproducible Security

Helm 3 is developing a set of best practices that help make Kubernetes applications more secure. Having recently graduated from incubation to become a full-fledged Cloud Native Computing Foundation project, Helm has been building out its own ecosystem and is working towards mature tooling. Join Rancher and JFrog as they share more detail on the updates in Helm 3 and how Helm Charts create reproducible security in the Kubernetes ecosystem.
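
To make "reproducible" concrete, here is a minimal sketch that verifies a packaged chart archive against a known sha256 digest before deployment, the same style of digest pinning Helm applies to chart dependencies in Chart.lock. The chart filename and expected digest are hypothetical placeholders.

```python
# Minimal sketch: verify a packaged Helm chart (.tgz) against a known
# sha256 digest before deploying it. Chart path and expected digest are
# hypothetical placeholders.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file and return its sha256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_chart(chart: Path, expected: str) -> bool:
    actual = sha256_of(chart)
    print(f"{chart.name}: {actual}")
    return actual == expected


if __name__ == "__main__":
    ok = verify_chart(Path("mychart-1.2.3.tgz"), expected="<digest from your chart repository>")
    print("Chart digest matches." if ok else "Digest mismatch: do not deploy.")
```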

Disaster Recovery Preparedness for Your Kubernetes Clusters

In the pre-Kubernetes, pre-container world, backup and recovery solutions were generally implemented at the virtual machine (VM) level. That works for traditional applications, where an application runs on a single VM. But when applications are containerized and managed with an orchestrator like Kubernetes, that approach falls apart. Effective disaster recovery (DR) plans for Kubernetes must therefore be designed for containerized architectures and natively understand the way Kubernetes functions.
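
To make the distinction concrete, here is a minimal sketch using the official kubernetes Python client (and a working kubeconfig) that dumps the Deployments and ConfigMaps in a namespace to YAML files. It only illustrates that the unit of backup in Kubernetes is the API object rather than a VM image; a real DR tool would also capture CRDs, Secrets, RBAC and persistent volume data.

```python
# Minimal sketch: export Deployments and ConfigMaps from a namespace as YAML.
# Assumes the `kubernetes` Python client (with PyYAML) and a working kubeconfig.
import yaml
from kubernetes import client, config


def dump_namespace(namespace="default"):
    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()
    serializer = client.ApiClient()  # used to turn API objects into plain dicts

    objects = (apps.list_namespaced_deployment(namespace).items
               + core.list_namespaced_config_map(namespace).items)

    for obj in objects:
        kind = type(obj).__name__  # e.g. V1Deployment, V1ConfigMap
        data = serializer.sanitize_for_serialization(obj)
        filename = f"{namespace}-{kind}-{obj.metadata.name}.yaml"
        with open(filename, "w") as f:
            yaml.safe_dump(data, f)
        print(f"wrote {filename}")


if __name__ == "__main__":
    dump_namespace("default")
```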

The No. 1 Rule of Disaster Recovery

Let’s imagine you are running a hosting shop with highly visible production applications. Your team has backups, and you have a disaster recovery (DR) policy. You think you are ready to handle any real-world scenario, in addition to checking all your compliance boxes. Your third-party backup tools are creating backups, and the solutions you’ve implemented come with a brochure that touts restore capability.
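
One way to move beyond the brochure is a regular restore drill. Below is a minimal sketch, assuming the kubernetes Python client and that a restore has already been run into a scratch namespace, that compares Deployment names between the original and restored namespaces. The namespace names are hypothetical, and a real drill would verify application data, not just object presence.

```python
# Minimal sketch of a restore drill check: does the restored namespace
# contain every Deployment from the source namespace? Namespace names
# below are hypothetical placeholders.
from kubernetes import client, config


def deployment_names(namespace):
    apps = client.AppsV1Api()
    return {d.metadata.name for d in apps.list_namespaced_deployment(namespace).items}


def check_restore(source_ns="production", restored_ns="restore-drill"):
    config.load_kube_config()
    missing = deployment_names(source_ns) - deployment_names(restored_ns)
    if missing:
        print(f"Restore drill FAILED; missing Deployments: {sorted(missing)}")
    else:
        print("Restore drill passed: every Deployment from the source namespace is present.")


if __name__ == "__main__":
    check_restore()
```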

KMC - Automated Optimization of Kubernetes Performance

Using the Rancher platform and services, enterprise IT and DevOps teams can overcome the complexity of standing up and running multiple Kubernetes clusters. However, as deployments scale and the number of apps and workloads running on Kubernetes multiplies, complexity grows exponentially. Much of the difficulty centers on finding the best configuration settings for applications. Manual, trial-and-error approaches are ineffective, and simply overprovisioning everything isn’t a viable strategy.
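
To see why, consider how quickly even a small per-workload tuning grid grows. The sketch below uses arbitrary example values, not recommendations, purely to illustrate the combinatorics.

```python
# Illustrative sketch: even a modest tuning grid per workload explodes
# combinatorially, which is why manual trial-and-error tuning breaks down.
from math import prod

# Hypothetical knobs a team might tune by hand for a single workload.
grid = {
    "cpu_request":    ["250m", "500m", "1", "2"],
    "memory_request": ["256Mi", "512Mi", "1Gi", "2Gi"],
    "replicas":       [1, 2, 3, 5, 8],
    "jvm_heap":       ["256m", "512m", "1g"],
}

per_workload = prod(len(values) for values in grid.values())  # 4 * 4 * 5 * 3 = 240
workloads = 50  # a mid-sized cluster

print(f"{per_workload} combinations for one workload")
print(f"{per_workload ** workloads:.2e} combinations across {workloads} workloads")
```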