
Latest News

Join the ITOps AI Revolution: Actionable Insights with VMware Tanzu Insights

Many organizations struggle to manage thousands of services and applications. A typical environment combines modern cloud applications, on-premises workloads, and workloads that are in the process of moving to the cloud. IT and operations teams can easily be overwhelmed by the sheer volume of data and activity generated across these systems.

How to detect and prevent memory leaks in Kubernetes applications

In our last blog, we talked about the importance of setting memory requests when deploying applications to Kubernetes. We explained how memory requests let you specify how much memory (RAM) Kubernetes should reserve for a pod before scheduling it. However, this only helps your pod get deployed. What happens when your pod is running and gradually consumes more RAM over time?
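
As a minimal illustration (not taken from the original post), here is a hypothetical sketch using the official Kubernetes Python client; the Deployment name, image, and sizes are placeholder assumptions. It sets both a memory request, which the scheduler reserves on a node, and a memory limit, so a container that leaks RAM is OOM-killed and restarted instead of starving its node:

```python
# Hypothetical sketch: a Deployment whose container declares a memory request
# (used at scheduling time) and a memory limit (hard ceiling; exceeding it
# gets the container OOM-killed and restarted). Names/image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

container = client.V1Container(
    name="web",
    image="example.com/web:1.0",  # placeholder image
    resources=client.V1ResourceRequirements(
        requests={"memory": "256Mi", "cpu": "250m"},  # reserved for the pod
        limits={"memory": "512Mi"},                   # leak protection: OOMKilled above this
    ),
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

With a limit in place, a gradual leak shows up as repeated OOMKilled restarts in the pod's status, which is a far more visible signal than a slowly degrading node.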

Build Your Own Network with Linux and WireGuard

Last Christmas, I bought my wife “Explain the cloud like I am 10” after she told me many times that it was hard for her to relate to what I do in my daily work at Qovery. While, so far, I have been the sole reader to enjoy the book, I was wondering while reading it whether there were any resources explaining how to build all of that. Most topics are software-oriented. So, in this article, I am going to explain how to build your own cloud network 🎊

Multi-Service Progressive Delivery with Argo Rollouts

In the previous article of the series, we explained how to use ConfigMap generators to apply Progressive Delivery to your configuration (and not just your container images). In this post, we cover another popular question: how to use Argo Rollouts with multiple services. Argo Rollouts is a Kubernetes controller that lets you perform advanced deployment methods in a Kubernetes cluster. By default, it only supports a single service/application.
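
For context, here is a hypothetical sketch of that single-service default: a basic canary Rollout created through the Kubernetes CustomObjectsApi in Python (the standard client has no typed model for the Argo Rollouts CRD). The names and image are placeholder assumptions; the field names follow the public Rollout spec (stableService, canaryService, steps):

```python
# Hypothetical sketch: a single-service canary Rollout created via the
# CustomObjectsApi. All names and the image are placeholders.
from kubernetes import client, config

config.load_kube_config()

rollout = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Rollout",
    "metadata": {"name": "demo"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "demo"}},
        "template": {
            "metadata": {"labels": {"app": "demo"}},
            "spec": {"containers": [{"name": "demo", "image": "example.com/demo:1.0"}]},
        },
        "strategy": {
            "canary": {
                "stableService": "demo-stable",  # Service the controller keeps on the stable ReplicaSet
                "canaryService": "demo-canary",  # Service the controller points at the canary ReplicaSet
                "steps": [{"setWeight": 20}, {"pause": {}}],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="argoproj.io", version="v1alpha1", namespace="default",
    plural="rollouts", body=rollout,
)
```

Note that the strategy above references exactly one stable/canary service pair, which is the limitation the article goes on to address.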

Configuration Drift: Understanding, Avoiding, Managing and Resolving in Kubernetes

If you work with Kubernetes, you know that any number of issues can pose a serious threat to the stability and security of your deployments. One that is subtly damaging is configuration drift, which occurs when the actual state of your system's configuration strays from the state you defined. Configuration drift in Kubernetes can happen when people make changes manually, systems aren't synchronized properly, or monitoring falls short.
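
As a rough illustration of what detecting drift can look like in practice, here is a hypothetical Python sketch (the Deployment name, namespace, and desired values are placeholder assumptions) that compares a Deployment's live image and replica count against the values declared in your version-controlled manifests and reports any divergence:

```python
# Hypothetical drift check: compare declared configuration against live cluster
# state and flag mismatches. Names and desired values are placeholders.
from kubernetes import client, config

DESIRED = {"image": "example.com/web:1.0", "replicas": 3}  # e.g. parsed from Git-tracked manifests

config.load_kube_config()
live = client.AppsV1Api().read_namespaced_deployment(name="web", namespace="default")

drift = {}
live_image = live.spec.template.spec.containers[0].image
if live_image != DESIRED["image"]:
    drift["image"] = (DESIRED["image"], live_image)
if live.spec.replicas != DESIRED["replicas"]:
    drift["replicas"] = (DESIRED["replicas"], live.spec.replicas)

if drift:
    for field, (wanted, actual) in drift.items():
        print(f"drift detected in {field}: declared {wanted!r}, running {actual!r}")
else:
    print("no drift: live state matches the declared configuration")
```

GitOps tooling automates exactly this kind of comparison continuously and can reconcile the cluster back to the declared state instead of merely reporting the gap.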

Tracing Your Steps Toward Full Kubernetes Observability

Kubernetes is one of the most important and influential technologies for building and operating software today because it’s so incredibly capable. It’s flexible, available, resilient, scalable, feature-rich and backed by a global community of innovators, which is an impressive set of qualities for any single technology.

Kubernetes Autoscaling for Continuous Integration/Continuous Deployment

In Continuous Integration/Continuous Deployment (CI/CD), the ability to adapt swiftly to fluctuating workloads is paramount. Kubernetes, with its dynamic orchestration capabilities, offers an invaluable toolset for achieving seamless scalability. This article explores the concept of Kubernetes autoscaling and its pivotal role in optimising CI/CD pipelines.
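
As a small, hypothetical sketch of that idea (the Deployment name, namespace, and thresholds are assumptions, not from the article), the following uses the autoscaling/v2 API through the Kubernetes Python client to scale a pool of CI build agents up and down with CPU utilisation:

```python
# Hypothetical sketch: a HorizontalPodAutoscaler that scales a "ci-agents"
# Deployment between 1 and 20 replicas based on average CPU utilisation.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="ci-agents"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="ci-agents",
        ),
        min_replicas=1,
        max_replicas=20,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(type="Utilization", average_utilization=70),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="ci", body=hpa,
)
```

The same pattern extends to custom metrics such as queue length of pending pipeline jobs, which is often a better scaling signal for CI/CD workloads than CPU alone.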

Free Preview Environments For Open-Source Projects

We at Qovery are excited to offer our Preview Environments for free to all open-source projects. A Preview Environment is like a sandbox where developers can see how changes to the code will work before these changes are final. This is great for projects where many parts, like the backend, frontend, and databases, must talk to each other.