
Latest News

Edge Computing Explained

Data is becoming increasingly essential to businesses globally, allowing insights to be gathered about critical processes and operations. Over time, the traditional systems built to hold our data have become unsuitable for modern needs due to continuous data growth. Edge computing has emerged to reshape the computing environment by allowing data to be processed closer to where it is generated.

Kubernetes Audit Logs - Best Practices And Configuration

Kubernetes is the de facto leader among container orchestration tools. With the growing popularity of microservice-based development, Kubernetes has emerged as the go-to tool for deploying and managing large-scale enterprise applications. However, the plethora of features Kubernetes offers also makes it a complex tool to manage and operate. This article focuses on how to configure Kubernetes audit logs so that you have a record of the events happening in your cluster.
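Audit logging in Kubernetes is driven by a policy file passed to the API server. As a rough illustration (the rule set below is a hypothetical example, not the article's recommended policy), a minimal audit policy might look like this:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log Secret/ConfigMap access at Metadata level only,
  # so sensitive payloads are never written to the audit log
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Record full request and response bodies for writes to Deployments
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: "apps"
        resources: ["deployments"]
  # Drop read-only requests to reduce noise
  - level: None
    verbs: ["get", "list", "watch"]
  # Everything else at Metadata level
  - level: Metadata
```

The policy is wired up via the kube-apiserver flags `--audit-policy-file` and `--audit-log-path`; rules are evaluated in order, and the first matching rule determines the audit level for a request.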

Stress test your Kubernetes application with Speedscale's offering in the Datadog Marketplace

Properly testing a service’s APIs to ensure that it can handle production traffic presents many challenges for engineers—SREs need to guarantee the resiliency of their application, while developers must ensure that their features perform well at any given scale. Speedscale is a testing framework built for Kubernetes applications that enables you to load test with real-world production scenarios by replaying actual API traffic that your application has experienced.

Three multi-tenant isolation boundaries of Kubernetes

Many of the benefits of running Kubernetes come from the efficiencies you gain when you share the cluster – and thus the underlying compute and network resources it manages – between multiple services and teams within your organization. Each major service or team that shares the cluster is a tenant of the cluster, which is why this approach is referred to as multi-tenancy.
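The most common starting point for tenant isolation is the namespace, combined with a quota and a network policy. As a sketch (the `team-a` tenant name and quota figures below are hypothetical), isolating one tenant might look like:

```yaml
# Hypothetical tenant namespace
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Cap the tenant's share of cluster compute resources
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
---
# Allow ingress only from pods inside the tenant's own namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: team-a-isolate
  namespace: team-a
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}
```

Namespaces alone provide only a logical boundary; the quota prevents one tenant from starving the others, and the network policy keeps cross-tenant traffic out by default.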

VMware Tanzu Service Mesh Advanced to Improve Multi-Cloud Operations for Developers and DevOps Teams

The VMware Tanzu Service Mesh team is showing previews of upcoming multi-cloud operations capabilities focused on improving productivity for developers and operations teams. Here's a sneak peek of the features that were showcased this week at VMware Explore 2022 Europe.

Auto-scaling of Intel FlexRAN components based on MicroK8s and Ubuntu real-time kernel support

RAN has evolved incrementally with every generation of mobile telecommunications, enabling faster data transfers between user devices and core networks. The amount of data has grown dramatically as the number of interlinked devices has increased. Existing network architectures struggle to handle these growing workloads while processing, analysing and transferring data faster. The 5G ecosystem therefore requires virtual implementations of RAN.

One Click Visibility: Coralogix expands APM Capabilities to Kubernetes

Many observability solutions share a common, painful workflow: each data type is separated into its own user interface, creating a disjointed experience that increases cognitive load and slows down Mean Time to Diagnose (MTTD). At Coralogix, we aim to give our customers the maximum possible insights for the minimum possible effort. We’ve expanded our APM features (see documentation) to provide deep, contextual insights into applications – but we’ve done something different.

7 Essential Factors When Choosing a Platform Engineering Solution

Platform Engineering is gaining momentum, with analysts and industry experts calling it one of the most disruptive philosophies of the moment. But regardless of experts’ predictions and assumptions, what matters for organizations today is understanding what adopting an approach such as Platform Engineering actually entails, what a successful solution looks like, and how to adopt best practices for its implementation. That's what this article is about.
Sponsored Post

How to Test Autoscaling in Kubernetes

In an ideal world, you want precisely the capacity needed to handle your users' requests, from peak periods to off-peak hours. If you need three servers at peak periods and just one at off-peak hours, running three servers all the time drives up expenses, while running just one means that during peak periods your systems will be overwhelmed and some clients will be denied service.
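In Kubernetes, this elasticity is typically expressed as a HorizontalPodAutoscaler. As a rough sketch (the `web` deployment name and thresholds below are hypothetical), the peak/off-peak scenario above maps to a manifest like:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical deployment to scale
  minReplicas: 1         # one replica during off-peak hours
  maxReplicas: 3         # up to three replicas at peak
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

To test it, you generate sustained load against the service and watch the replica count with `kubectl get hpa -w`, confirming that replicas rise under load and fall back once the load stops.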