The latest News and Information on Containers, Kubernetes, Docker and related technologies.
At Codefresh, we are always happy to see companies and organizations adopt Argo CD and get all the benefits of GitOps. But as they grow, we see a common pattern: organizations come to Codefresh and ask how we can help them scale out their Argo CD (and sometimes Argo Rollouts) initiative across the organization. After talking with them about the blockers, we almost always find the same root cause.
Kubernetes is an excellent solution for building flexible and scalable infrastructure to run dynamic workloads. However, as our footprint expands, we inevitably face the challenge of scaling and managing multiple clusters concurrently. This can introduce a lot of complexity into our day-to-day workload maintenance and makes it harder to keep all our policies and services up to date across every environment.
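To make that upkeep problem concrete, here is a minimal sketch, using the Python kubernetes client, of manually pushing the same policy to every cluster in a local kubeconfig. The NetworkPolicy manifest and script name are hypothetical; the point is that this imperative loop grows with each new environment, which is exactly the toil that declarative, continuously reconciled tools such as Argo CD aim to remove.

```python
# sync_policy.py - naive "apply the same policy to every cluster" loop.
# Assumes every cluster is reachable via a context in the local kubeconfig;
# the NetworkPolicy below is a hypothetical example manifest.
from kubernetes import client, config, utils

DENY_ALL_INGRESS = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "default"},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
}

def sync_policy_to_all_clusters():
    # Every context in the kubeconfig is treated as one managed cluster.
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        # Build an API client bound to this specific cluster/context.
        api_client = config.new_client_from_config(context=name)
        try:
            utils.create_from_dict(api_client, DENY_ALL_INGRESS)
            print(f"[{name}] policy created")
        except utils.FailToCreateError as exc:
            # Usually means the policy already exists; a real tool would
            # diff and patch instead of blindly creating.
            print(f"[{name}] skipped: {exc}")

if __name__ == "__main__":
    sync_policy_to_all_clusters()
```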
With the rise of cloud computing, containerization, and microservices architecture, developers are adopting new approaches to building and deploying applications that are more scalable and resilient. Microservices architecture, in particular, has gained significant popularity due to its ability to break down monolithic applications into smaller, independent services.
Tradeoff: a balance achieved between two desirable but incompatible features; a compromise. Schooling often promotes the idea that there is a right and a wrong answer to questions… It does little to prepare us for how often there are multiple right answers and no definitive best path forward. In a time when we have unlimited information at our fingertips, you can throw a stone and hit a thousand people with an opinion.
In a previous blog post, we explained how containers’ CPU and memory requests can affect how they are scheduled. We also introduced some of the effects CPU and memory limits can have on applications, assuming that CPU limits were enforced by the Completely Fair Scheduler (CFS) quota. In this post, we are going to dive a bit deeper into CPU and share some general recommendations for specifying CPU requests and limits.
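Since the post builds on CPU limits being enforced through the CFS quota, a quick back-of-the-envelope helps: the limit is translated into a CPU-time budget per scheduling period, and a container that exhausts the budget is throttled until the next period begins. Below is a minimal sketch of that arithmetic, assuming the default 100 ms CFS period; the function name and example values are ours, not from the post.

```python
# Rough sketch of how a Kubernetes CPU limit maps onto CFS bandwidth control.
# The 100 ms period is the default cpu.cfs_period_us; the container limit
# below is a hypothetical example.
CFS_PERIOD_US = 100_000  # default cpu.cfs_period_us (100 ms)

def cfs_quota_us(cpu_limit_cores: float, period_us: int = CFS_PERIOD_US) -> int:
    """CPU time (in microseconds) the cgroup may consume per scheduling period."""
    return int(cpu_limit_cores * period_us)

# e.g. a container with `limits.cpu: 500m`
limit = 0.5
quota = cfs_quota_us(limit)
print(f"limit={limit} cores -> cpu.cfs_quota_us={quota} per {CFS_PERIOD_US} us period")
# Once the container burns through its 50,000 us of CPU time in a period,
# CFS throttles it until the next period starts, even if cores sit idle.
```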