
Latest News

Kubernetes as a New Standard for Infrastructure Management

For IT teams inside large organizations used to managing any number of operating environments, Kubernetes is a breath of fresh, standardizing air. Forget its origins, forget any excitement over containers or microservices, and forget the sprawling ecosystem of related projects. What has some folks charged with managing Kubernetes deployments really excited is the prospect of managing all application infrastructure essentially the same way.

Building and deploying a Docker image to a Kubernetes cluster

Deploying Docker images to Kubernetes is a great way to run your application at scale. Getting started with your first Kubernetes deployment can be a little daunting if you are new to Docker and Kubernetes, but with a little bit of preparation, your application will be running in no time. In this blog post, we will cover the basic steps needed to build Docker images and deploy them to a Kubernetes cluster.
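As a rough sketch of what those steps can look like in code (not necessarily the exact workflow the post follows), the snippet below builds and pushes an image with the Docker SDK for Python and then creates a Deployment with the official Kubernetes Python client. The registry, image name, and namespace are placeholder assumptions.

```python
# Sketch: build and push a Docker image, then deploy it to a Kubernetes cluster.
# The registry, image name, tag, and namespace are illustrative placeholders.
import docker
from kubernetes import client, config

REPOSITORY = "registry.example.com/my-app"  # hypothetical registry/repository
TAG = "1.0.0"
IMAGE = f"{REPOSITORY}:{TAG}"

# Build the image from the Dockerfile in the current directory and push it.
docker_client = docker.from_env()
docker_client.images.build(path=".", tag=IMAGE)
docker_client.images.push(REPOSITORY, tag=TAG)

# Load the local kubeconfig (e.g. ~/.kube/config) and define a Deployment.
config.load_kube_config()
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-app",
                        image=IMAGE,
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# Create the Deployment; Kubernetes pulls the image and schedules the pods.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same Deployment could just as well be written as YAML and applied with kubectl; the client library is simply one way to script the step.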

Essential Observability Techniques for Continuous Delivery

Observability is an indispensable concept in continuous delivery, but it can be a little bewildering. Luckily for us, there are a number of tools and techniques to make our job easier! One way to improve observability in a continuous delivery environment is to monitor and analyze key metrics from builds and deploys. With tools such as Prometheus and its integrations into CI/CD pipelines, gathering and analyzing metrics is straightforward. Tracking these metrics early on is essential.
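As one concrete, purely illustrative example of the idea, a CI job can record build metrics with the prometheus_client library and push them to a Prometheus Pushgateway so that short-lived jobs can still be scraped. The gateway address, job name, and build step are assumptions, not anything the article prescribes.

```python
# Sketch: record a CI build's duration and failure count and push them to a
# Prometheus Pushgateway. Gateway address, job name, and metric names are
# illustrative assumptions.
import time
from prometheus_client import CollectorRegistry, Counter, Gauge, push_to_gateway

registry = CollectorRegistry()
build_duration = Gauge(
    "ci_build_duration_seconds", "Wall-clock duration of the CI build", registry=registry
)
build_failures = Counter(
    "ci_build_failures_total", "Number of failed CI builds", registry=registry
)


def run_build_and_tests():
    """Placeholder for the pipeline's real build-and-test step."""


start = time.time()
try:
    run_build_and_tests()
except Exception:
    build_failures.inc()
    raise
finally:
    build_duration.set(time.time() - start)
    # Prometheus scrapes the Pushgateway, so the metrics outlive the CI job.
    push_to_gateway("pushgateway.example.com:9091", job="ci_build", registry=registry)
```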

Achieving CI Velocity at Tigera using Semaphore

Tigera serves the networking and policy enforcement needs of more than 150,000 Kubernetes clusters across the globe and supports two product lines: open source Calico and Calico Enterprise. Our development team is constantly running smoke, system, unit, and functional verification tests, as well as all our E2Es for these products. Our CI pipelines are an extremely important part of our overall IT infrastructure and enable us to test our products and catch bugs before release.

Exploring AWS Lambda Deployment Limits

We have explored how to deploy Machine Learning models using AWS Lambda. Deploying ML models with AWS Lambda is suitable for early-stage projects, as there are certain limitations to Lambda functions. However, these limits are no reason to worry if you want to utilize AWS Lambda to its full potential for your Machine Learning project. For developers working with Lambda functions, the size of the deployment package is a constant concern.
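For context, the headline limits are a 50 MB zipped upload and a 250 MB unzipped deployment package (layers included). A minimal sketch of one common workaround, assuming the model lives in S3 as a pickle rather than inside the package, is to download it into /tmp on cold start and cache it for warm invocations; the bucket, key, and scikit-learn-style model below are hypothetical.

```python
# Sketch: keep the Lambda deployment package small by fetching the model from
# S3 at cold start instead of bundling it. The bucket, key, and the pickled,
# scikit-learn-style model are hypothetical.
import os
import pickle

import boto3

MODEL_BUCKET = "my-ml-models"          # hypothetical bucket
MODEL_KEY = "models/classifier.pkl"    # hypothetical object key
LOCAL_PATH = "/tmp/classifier.pkl"     # Lambda's writable scratch space

_model = None  # cached in the module so warm invocations reuse it


def _load_model():
    global _model
    if _model is None:
        if not os.path.exists(LOCAL_PATH):
            boto3.client("s3").download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
        with open(LOCAL_PATH, "rb") as f:
            _model = pickle.load(f)
    return _model


def handler(event, context):
    model = _load_model()
    prediction = model.predict([event["features"]])
    return {"prediction": float(prediction[0])}
```

Because the model is cached in a module-level variable, the S3 download and deserialization cost is paid only on cold starts; /tmp offers 512 MB of scratch space by default.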

Enhancing the DevOps Experience on Kubernetes with Logging

Keeping track of what’s going on in Kubernetes isn’t easy. It’s an environment where things move quickly, individual containers come and go, and a large number of independent processes involving separate users may all be happening at the same time. Container-based systems are by their nature optimized for rapid, efficient response to a heavy load of requests from multiple users in a highly abstracted environment and not for high-visibility, real-time monitoring.

Implementing infrastructure as code with Ansible

If you’re here, it means that your application is a hit and has come a long way through development and deployment. Your application is finally at a stage where you or your team need to set up more servers than you can handle manually, and you have to provision them fast. You also need to make sure that all of them have the same configuration, packages, and versions, so that your application behaves the same on every one of them.
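Ansible playbooks and the ansible CLI are the usual way to express this, and presumably what the article walks through; purely as an illustrative sketch, the same run can also be driven from Python with the ansible-runner library. The playbook name, hosts, and variables are placeholder assumptions.

```python
# Sketch: run a (hypothetical) site.yml playbook against a small inventory with
# ansible-runner, so every host converges on the same configuration.
import ansible_runner

result = ansible_runner.run(
    private_data_dir=".",        # directory holding the project and run artifacts
    playbook="site.yml",         # hypothetical playbook: packages, configs, versions
    inventory={                  # inline inventory; this usually lives in a file
        "all": {
            "hosts": {
                "web1.example.com": None,
                "web2.example.com": None,
            }
        }
    },
    extravars={"app_version": "1.4.2"},  # pin the same version on every host
)

print(result.status)  # "successful" or "failed"
print(result.rc)      # process return code
```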

Node.js Resiliency Concepts: Recovery and Self-Healing

In an ideal world where we reached 100% test coverage, our error handling was flawless, and all our failures were handled gracefully — in a world where all our systems reached perfection, we wouldn’t be having this discussion. Yet, here we are. Earth, 2020. By the time you read this sentence, somebody’s server has failed in production. A moment of silence for the processes we lost.