
Latest News

Why Civo joined the Open Cloud Coalition

At Civo, we’ve always believed that the cloud industry should be fair, open, and accessible to companies of all sizes - not just the select few with deep pockets and large platforms. When the Open Cloud Coalition (OCC) was first proposed, we immediately saw it as an opportunity to cut through the noise and amplify the important voices of the cloud industry.

Autoscaling Amazon EKS with Karpenter: A Step-by-Step Guide

Managing resource scaling in EKS clusters can quickly become complex, especially with fluctuating workloads. Traditional autoscaling solutions often require predefined configurations, which can lead to inefficiencies and unnecessary costs. This is where Karpenter comes in—a powerful, open-source project designed to dynamically manage and scale nodes in response to real-time application demands.
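The guide walks through Karpenter setup step by step; as a rough sketch of the idea (not taken from the guide itself), Karpenter is driven by NodePool custom resources that describe what kinds of nodes it is allowed to launch. The snippet below applies a minimal NodePool with the official Kubernetes Python client. The karpenter.sh/v1 group and version, the spec fields, and the "default" EC2NodeClass name are assumptions and may differ by Karpenter release.

```python
# Minimal sketch: registering a Karpenter NodePool so Karpenter can launch
# nodes in response to pending pods. API group/version and spec fields are
# assumptions and may vary with your Karpenter release.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

node_pool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "default"},
    "spec": {
        "template": {
            "spec": {
                "requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["spot", "on-demand"],
                    }
                ],
                "nodeClassRef": {
                    "group": "karpenter.k8s.aws",
                    "kind": "EC2NodeClass",
                    "name": "default",  # assumes an EC2NodeClass named "default" exists
                },
            }
        },
        "limits": {"cpu": "100"},  # cap the total CPU Karpenter may provision
    },
}

# NodePool is cluster-scoped, so use the cluster-scoped CustomObjectsApi call.
client.CustomObjectsApi().create_cluster_custom_object(
    group="karpenter.sh",
    version="v1",
    plural="nodepools",
    body=node_pool,
)
```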

Get complete Kubernetes observability by monitoring your CRDs with Datadog Container Monitoring

Custom resources are critical components in Kubernetes production environments. They enable users to tailor Kubernetes resources to their specific applications or infrastructure needs, automate processes through operators, simplify the management of complex applications, and integrate with non-native applications such as Kafka and Elasticsearch.
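To make "custom resources" concrete, here is a minimal sketch of how an operator or monitoring agent reads them programmatically: it lists instances of an imaginary KafkaTopic CRD (the example.com/v1 group and the kafkatopics plural are placeholders, not real Datadog or Kafka API names) using the Kubernetes Python client.

```python
# Minimal sketch: reading instances of a custom resource the way an operator
# or monitoring agent would. Group/version/plural below are hypothetical
# placeholders; substitute the CRD you actually run.
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

topics = custom_api.list_namespaced_custom_object(
    group="example.com",
    version="v1",
    namespace="default",
    plural="kafkatopics",
)

for item in topics.get("items", []):
    name = item["metadata"]["name"]
    # The spec layout is defined by the CRD's OpenAPI schema, so the field
    # accessed here is illustrative only.
    partitions = item.get("spec", {}).get("partitions")
    print(f"{name}: partitions={partitions}")
```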

A guide on scaling out your Kubernetes pods with the Watermark Pod Autoscaler

While overprovisioning Kubernetes workloads can provide stability during the launch of new products, it’s often only sustainable because large companies have substantial budgets and favorable deals with cloud providers. As highlighted in Datadog’s State of Cloud Costs report, cloud spending continues to grow, but a significant portion of that cost is often due to inefficiencies like overprovisioning.
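For context on what the Watermark Pod Autoscaler looks like in practice, the sketch below creates a WatermarkPodAutoscaler custom resource that scales a hypothetical "checkout" Deployment out above a high watermark and back in below a low watermark. The datadoghq.com/v1alpha1 group/version and the spec layout are assumptions based on the open-source WPA project and may differ across releases.

```python
# Minimal sketch: a WatermarkPodAutoscaler that scales pods between a low and
# a high watermark on an external metric. Names, metric, and watermark values
# are hypothetical; the spec layout is an assumption and may vary by version.
from kubernetes import client, config

config.load_kube_config()

wpa = {
    "apiVersion": "datadoghq.com/v1alpha1",
    "kind": "WatermarkPodAutoscaler",
    "metadata": {"name": "checkout-wpa", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "checkout",  # hypothetical Deployment
        },
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [
            {
                "type": "External",
                "external": {
                    "metricName": "requests_per_pod",  # hypothetical metric
                    "highWatermark": "400",  # scale out above this value
                    "lowWatermark": "150",   # scale in below this value
                },
            }
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="datadoghq.com",
    version="v1alpha1",
    namespace="default",
    plural="watermarkpodautoscalers",
    body=wpa,
)
```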

Kubernetes autoscaling guide: determine which solution is right for your use case

Kubernetes offers the ability to scale infrastructure to accommodate fluctuating demand, enabling organizations to maintain availability and high performance during surges in traffic and reduce costs during lulls. But scaling comes with tradeoffs and must be done carefully: organizations that overprovision their workloads or clusters wind up paying for resources that go unused.
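As one point of comparison among the solutions such a guide covers, a plain Horizontal Pod Autoscaler is often the starting point. The sketch below creates an autoscaling/v1 HPA for a hypothetical "web" Deployment with the Kubernetes Python client; the names, namespace, and thresholds are placeholders.

```python
# Minimal sketch: an autoscaling/v1 HorizontalPodAutoscaler that keeps a
# Deployment between 2 and 10 replicas, targeting roughly 70% average CPU.
# The "web" Deployment and "default" namespace are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1",
            kind="Deployment",
            name="web",
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above ~70% average CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```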

How AI Helped Migrate 37 Apps From Heroku To AWS in Under 2 Hours

Startups relying on Heroku often hit roadblocks as they scale. Rising costs, technical limitations, and lack of control over infrastructure force many to explore alternatives. One such startup recently migrated 37 applications from Heroku to AWS using Qovery’s DevOps AI Migration Agent. Here’s how they accomplished this migration in less than two hours, saving days of manual work.

Kubernetes is Not Just a Platform - It's a Whole Ecosystem

As someone building a platform intended to make Kubernetes operations easier for everyone, I’ve learned a lot about running Kubernetes in production. The main thing I’ve noticed folks getting wrong is that Kubernetes isn’t simply a platform; it’s an entire ecosystem.

Build Your Own Developer Platform in 90 Minutes

In today’s fast-paced technology landscape, creating a robust developer platform is essential for streamlining software development processes and ensuring efficient collaboration across teams. In this talk, we will explore how you can build your own developer platform in just 90 minutes using a powerful combination of Backstage, ArgoCD, and Kubernetes (K8s).

From AWS EKS to AWS EKS with Qovery - 6 Reasons Why This Startup Migrated

Sometimes, even when you’re running on solid infrastructure like AWS EKS, a growing company needs to rethink how it manages that infrastructure. One startup, fresh off a successful Series A raise, found itself in just that situation. It had been running two EKS clusters — Production and Staging — managed entirely by its CTO, an experienced AWS power user.