
Latest News

Why you should use Central Error Logging Services

Logs are vital for every application that runs in a server environment. They provide essential information about whether the system is operating properly, and reading through them gives you data on system issues, errors, and trends. However, it is not feasible to manually look up errors across thousands of log files on various servers. The solution? Central error logging services.
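
As a rough sketch of what centralizing errors looks like in practice, the Python example below forwards anything at WARNING level or above to a single collector while keeping a local copy; the host central-logs.example.com and the port are placeholders, not a reference to any particular service.

```python
import logging
import logging.handlers

# Placeholder central syslog endpoint; replace with your collector's host and port.
CENTRAL_HOST = ("central-logs.example.com", 514)

logger = logging.getLogger("orders-service")
logger.setLevel(logging.INFO)

# Ship WARNING and above to the central collector over syslog (UDP by default).
central = logging.handlers.SysLogHandler(address=CENTRAL_HOST)
central.setLevel(logging.WARNING)
central.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(central)

# Keep a local copy as well, so each box is still debuggable on its own.
logger.addHandler(logging.StreamHandler())

try:
    raise ValueError("payment gateway returned an unexpected response")
except ValueError:
    logger.exception("order processing failed")
```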

A Quick Guide to Log Shipping To Logz.io: Collectors, Code, and Clouds

One of the great things about Logz.io Log Management is that it’s based on the most popular open source logging technology out there: the ELK Stack (see our thoughts and plans on the recent Elastic license change). This means Logz.io users get to leverage the log shipping and collector options within the rich ELK ecosystem. So how do you know which log shipping technology to use?
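
If you just want to push a handful of events from code while evaluating shippers, a minimal sketch like the one below is enough; the listener URL, port, and token parameter are assumptions modeled on Logz.io’s HTTPS bulk listener, so confirm the exact endpoint for your region and account in the Logz.io docs before relying on it.

```python
import json
import requests

# Assumed endpoint shape for the HTTPS bulk listener; verify the URL, port,
# and query parameters for your region and account in the Logz.io docs.
LISTENER_URL = "https://listener.logz.io:8071/?token=YOUR_TOKEN&type=python"

events = [
    {"message": "user login failed", "level": "ERROR", "service": "auth"},
    {"message": "cache miss ratio above threshold", "level": "WARN", "service": "api"},
]

# Bulk listeners typically expect newline-delimited JSON, one event per line.
payload = "\n".join(json.dumps(event) for event in events)

response = requests.post(LISTENER_URL, data=payload, timeout=5)
response.raise_for_status()
print("shipped", len(events), "events, status", response.status_code)
```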

Troubleshooting Large Queues in RabbitMQ

If you’re a RabbitMQ user, chances are you’ve seen queues grow beyond their normal size, which means messages are consumed long after they were published. If you’re familiar with Kafka monitoring, you may know this as consumer lag; in RabbitMQ-land it’s usually called queue length or queue depth.
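
A quick way to spot queues that are growing is the management plugin’s HTTP API; in the sketch below, the host, credentials, and the 1,000-message threshold are illustrative assumptions to adjust for your cluster.

```python
import requests

# Assumes the management plugin is enabled on the default port with default
# credentials; adjust host, user, and password for your cluster.
MGMT_URL = "http://localhost:15672/api/queues"
AUTH = ("guest", "guest")
THRESHOLD = 1000  # arbitrary example threshold for "large"

queues = requests.get(MGMT_URL, auth=AUTH, timeout=5).json()

for queue in queues:
    depth = queue.get("messages", 0)
    if depth > THRESHOLD:
        print(f"{queue['vhost']}/{queue['name']}: {depth} messages waiting")
```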

Top Benefits of Cloud-Based Log Management

In the ultracompetitive times in which we live, organizations must leverage every asset at their disposal if they’re to survive and thrive. Log data is undoubtedly valuable, so having a proper log management strategy in place is vital for any tech team. Unfortunately, implementing a great log management strategy isn’t as easy as it sounds. It involves many factors, including the selection of an adequate tool.

Splunking AWS ECS Part 2: Sending ECS Logs To Splunk

Welcome to part 2 of our blog series, where we go through how to forward container logs from Amazon ECS and Fargate to Splunk. In part 1, "Splunking AWS ECS Part 1: Setting Up AWS And Splunk," we focused on understanding what ECS and Fargate are, along with how to get AWS and Splunk ready for log routing to Splunk’s Data-to-Everything platform.
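
The series covers the AWS-side routing; purely as a reference point, the sketch below shows the kind of payload Splunk’s HTTP Event Collector accepts, which is ultimately what the log router delivers. The host, token, and index are placeholders, and this is not the configuration walked through in the post.

```python
import requests

# Placeholder HEC endpoint and token; substitute your Splunk host and a real
# HTTP Event Collector token. verify=False is only for a lab with self-signed certs.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

event = {
    "event": {"message": "container started", "container": "web", "cluster": "demo-ecs"},
    "sourcetype": "ecs:container",
    "index": "main",
}

response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json=event,
    timeout=5,
    verify=False,
)
response.raise_for_status()
print(response.json())  # a successful ingest returns a small JSON acknowledgement
```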

Logz.io Celebrates the Release of OpenTelemetry v.1.0

OpenTelemetry 1.0 (OTel) is finally here (in fact, v1.0.1). The announcement brings the industry closer to a standard for observability. OpenTelemetry v1.0.1 focuses solely on tracing for now, while work continues on integrations for metrics and logs, so a single standard spanning all three signals is still a long way off. Metrics today are in beta, and this is where the community focus is being applied. Logging is even earlier in its lifecycle.
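
To get a feel for the tracing-only 1.0 release, here is a minimal sketch using the OpenTelemetry Python SDK with a console exporter; class and package names follow the 1.x Python SDK, so check the docs for your own language and version.

```python
# Requires: pip install opentelemetry-api opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a provider that prints finished spans to stdout; in production you
# would export to a collector or backend instead of the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo.instrumentation")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.method", "GET")
    with tracer.start_as_current_span("query-database"):
        pass  # the nested span records the child operation's timing
```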

Logging with the HAProxy Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller publishes two sets of logs: the ingress controller logs and the HAProxy access logs. After you install the HAProxy Kubernetes Ingress Controller, logging jumps to mind as one of the first features to configure. Logs will tell you whether the controller has started up correctly and which version of the controller you’re running, and they will assist in pinpointing any user experience issues.
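
A first step is often just reading the controller pod’s own log stream. The sketch below does that with the official Kubernetes Python client; the namespace and label selector are assumptions, so match them to however you installed the controller.

```python
# Requires: pip install kubernetes
from kubernetes import client, config

# Namespace and label selector are assumptions; adjust them to your installation
# (a Helm release, for example, usually labels pods with its chart or app name).
NAMESPACE = "haproxy-controller"
LABEL_SELECTOR = "app.kubernetes.io/name=kubernetes-ingress"

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
core = client.CoreV1Api()

pods = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR)
for pod in pods.items:
    print(f"--- {pod.metadata.name} ---")
    # tail_lines keeps the output manageable; drop it to read the full log.
    print(core.read_namespaced_pod_log(pod.metadata.name, NAMESPACE, tail_lines=20))
```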

Advanced Link Analysis: Part 1 - Solving the Challenge of Information Density

Link Analysis is a data analysis approach used to discover relationships and connections between data elements and entities. It is a very visual and interactive technique that can be done in the Splunk platform, and it is almost always driven by a person (an analyst or investigator) who explores the data to surface insights specific to the business problem at hand.
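
Outside Splunk, the underlying idea is easy to demonstrate: treat each event as an edge between two entities and look at the resulting graph. The sketch below uses networkx with made-up transaction pairs purely to illustrate the concept; it is not the Splunk workflow described in the post.

```python
# Requires: pip install networkx
import networkx as nx

# Made-up events: (account, merchant) pairs pulled from transaction records.
events = [
    ("acct-1", "merchant-A"), ("acct-2", "merchant-A"),
    ("acct-2", "merchant-B"), ("acct-3", "merchant-B"),
    ("acct-3", "merchant-C"), ("acct-1", "merchant-C"),
]

graph = nx.Graph()
graph.add_edges_from(events)

# Highly connected nodes are often where an analyst starts digging.
for node, degree in sorted(graph.degree, key=lambda item: item[1], reverse=True):
    print(node, "connections:", degree)
```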

Introducing Grafana Enterprise Logs, a core part of the Grafana Enterprise Stack integrated observability solution

Today, we are launching a new Grafana Labs product, Grafana Enterprise Logs. Powered by the Grafana Loki open source project for cloud native log aggregation, and built by the maintainers of the project, this offering is an exciting addition to our growing self-managed observability stack tailored for enterprises.
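
To show how Loki-backed log aggregation is consumed, the sketch below runs a LogQL query against Loki’s HTTP API; the URL, labels, and time range are placeholders, and a Grafana Enterprise Logs deployment will add its own endpoint and authentication on top.

```python
import time
import requests

# Placeholder endpoint; point this at your Loki (or GEL) query frontend.
LOKI_URL = "http://localhost:3100/loki/api/v1/query_range"

params = {
    "query": '{app="checkout"} |= "error"',    # LogQL: stream selector plus line filter
    "start": int((time.time() - 3600) * 1e9),  # last hour, in nanoseconds
    "end": int(time.time() * 1e9),
    "limit": 20,
}

result = requests.get(LOKI_URL, params=params, timeout=10).json()

for stream in result["data"]["result"]:
    for timestamp, line in stream["values"]:
        print(timestamp, line)
```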