
Graylog

How Graylog's Advanced Functionalities Help You Make Sense of All Your Data

The inherent limitations of most log managers, combined with the constraints of your current hardware, may force your enterprise to make some hard choices: less useful data goes unexamined, older information eventually gets deleted, and the amount of data accessed in real time is cut back to reduce excess workload.

Announcing Graylog v3.0 Beta 1

Today we are releasing the first public beta of Graylog v3.0. This release includes a whole new content pack system, an overhauled collector sidecar, new reporting capabilities, improved alerting with greater flexibility, support for Elasticsearch 6.x, a preview version of an awesome new search page called Views, and tons of other improvements and bug fixes.

Why Should You Bother With Information Technology Operations Analytics?

Your organization’s IT system is a complex network of intercommunicating devices that can provide you with an abundance of useful data - if you apply the right practices to gather and filter it. However, to understand how these sources interact and interconnect with one another, you will need to master the art of Information Technology Operations Analytics.

How to Read Log Files on Windows, Mac, and Linux

Logging is a data collection method that stores pieces of information about the events that take place in a computer system. There are different kinds of log files based on the kind of information they contain, the events that trigger log creation, and several other factors. This post focuses on log files created by the three main operating systems (Windows, Mac, and Linux) and on the main differences in how you access and read log files on each OS.
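As a quick illustration of those differences, here is a minimal sketch (not taken from the post itself) of one way to pull recent log entries on each OS from a script. The file paths and commands shown are common defaults - Debian-style syslog files on Linux, the unified logging `log show` command on macOS, and `wevtutil` for the Windows Event Log - and may vary by distribution, OS version, and permissions.

```python
# Minimal, illustrative sketch: read recent system log entries per OS.
# Paths/commands are common defaults and assumptions, not universal.
import platform
import subprocess
from pathlib import Path


def read_recent_logs(lines: int = 20) -> str:
    system = platform.system()

    if system == "Linux":
        # Debian/Ubuntu typically write to /var/log/syslog,
        # RHEL/CentOS to /var/log/messages. Reading may require root.
        for candidate in ("/var/log/syslog", "/var/log/messages"):
            path = Path(candidate)
            if path.exists():
                text = path.read_text(errors="replace")
                return "\n".join(text.splitlines()[-lines:])
        raise FileNotFoundError("No standard syslog file found")

    if system == "Darwin":
        # Modern macOS uses the unified logging system; `log show`
        # queries it instead of a plain-text file.
        result = subprocess.run(
            ["log", "show", "--last", "5m", "--style", "syslog"],
            capture_output=True, text=True, check=True,
        )
        return "\n".join(result.stdout.splitlines()[-lines:])

    if system == "Windows":
        # Windows keeps events in the Event Log, not plain-text files;
        # wevtutil dumps the newest System events as readable text.
        result = subprocess.run(
            ["wevtutil", "qe", "System", f"/c:{lines}", "/rd:true", "/f:text"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    raise RuntimeError(f"Unsupported OS: {system}")


if __name__ == "__main__":
    print(read_recent_logs())
```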

Log Analysis and the Challenge of Processing Big Data

To stay competitive, companies that want to run an agile business need log analysis to navigate the complex world of Big Data in search of actionable insight. However, scouring seemingly boundless data lakes for meaningful information is treacherous without the right tools. In the best case, the data amounts to terabytes (hence the name “Big Data”), if not petabytes.