The latest News and Information on Containers, Kubernetes, Docker and related technologies.
Learning how to monitor the Kubernetes API server is crucial when running cloud-native applications in Kubernetes environments. The Kubernetes API server can be considered the front end of the Kubernetes control plane: every interaction or request from users or internal Kubernetes components goes through this component. Monitoring the API server properly is therefore vital to ensuring your Kubernetes cluster works as expected.
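One common way to monitor the API server is to scrape its Prometheus-style /metrics endpoint. As a minimal sketch, the function below parses that text format and totals request counts per HTTP verb; the sample payload is hypothetical illustration data, not output from a real cluster:

```python
# Minimal sketch: parse Prometheus-style text such as the API server's
# /metrics endpoint returns, and total apiserver_request_total per verb.
# SAMPLE_METRICS is hypothetical illustration data.
from collections import defaultdict
import re

SAMPLE_METRICS = """\
# HELP apiserver_request_total Counter of apiserver requests broken out by verb.
# TYPE apiserver_request_total counter
apiserver_request_total{verb="GET",code="200"} 1042
apiserver_request_total{verb="LIST",code="200"} 310
apiserver_request_total{verb="GET",code="500"} 3
"""

LINE_RE = re.compile(r'^apiserver_request_total\{([^}]*)\}\s+(\d+)')

def request_totals_by_verb(metrics_text):
    """Sum apiserver_request_total samples per verb label."""
    totals = defaultdict(int)
    for line in metrics_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue  # skip comments and unrelated metrics
        labels, value = m.group(1), int(m.group(2))
        verb = dict(kv.split("=") for kv in labels.split(","))["verb"].strip('"')
        totals[verb] += value
    return dict(totals)

print(request_totals_by_verb(SAMPLE_METRICS))
# → {'GET': 1045, 'LIST': 310}
```

In a real deployment you would fetch the metrics over HTTPS with a service-account token rather than hard-coding a sample, and a jump in 5xx codes per verb is exactly the kind of signal the linked article covers.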
We spoke with two members of the SRE team, Alex Blyth and Zulhilmi Zainudin, to learn more about their role at Civo. Through this series, we aim to give you an overview of the different roles we have at Civo and the advice our team has to offer. You can discover more about our team in our “day in the life of a Go Dev” and “day in the life of an Intern” blogs.
Before I dive into the launch of Cycle’s latest feature (and it’s a big one!), I want to share some context about how we got here. Let’s rewind to 2015: containers, at least in their modern form, had just begun to take the developer ecosystem by storm. At the same time, we at Cycle were watching everything unfold: from Docker’s meteoric rise to the first few releases of tools like Kubernetes, Rancher, and so on.
Docker is a PaaS product, developed by Docker, Inc., for containerizing applications. It does so by combining app source code with the OS libraries and dependencies required to run that code in any environment. Kubernetes is a complementary tool, originally developed by Google, that orchestrates and scales those containerized applications after deployment. If one builds the containers and the other essentially scales them, why so much buzz around these two?
VMware Tanzu Operations Manager is a software appliance designed to give platform operators a much more pleasant and straightforward experience with BOSH, the infrastructure-as-code automation powerhouse. BOSH can provision and deploy software across hundreds of virtual machines, and it also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime.
This blog discusses running serverless containers in AWS EKS with Fargate: why and how to use this configuration, along with a working example. Recently, a customer reached out with an interesting request: they wanted us to run containers in serverless mode on AWS EKS, keeping Kubernetes features without the operational overhead of nodes. Side note: with standard EKS you have to manage node groups yourself and pay for their underlying instances; Fargate removes that burden.
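As a sketch of the kind of setup the post describes, an eksctl cluster config can declare a Fargate profile so that matching pods are scheduled onto Fargate instead of a managed node group. The cluster name, region, and namespace selector below are placeholders, not values from the article:

```yaml
# Hypothetical eksctl ClusterConfig: pods in the "serverless" namespace
# run on Fargate, so there is no EC2 node group to manage for them.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster   # placeholder name
  region: us-east-1    # placeholder region
fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless
```

With a profile like this in place, any pod created in the `serverless` namespace is launched on Fargate capacity, billed per pod rather than per node.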
Data is becoming increasingly essential to businesses globally, allowing insights to be gathered around critical processes and operations. Over time, the traditional systems put in place to hold our data have become unsuitable for modern needs as data volumes continue to grow. Edge computing has emerged to reshape the current computing landscape and allow data to be processed closer to where it is generated.