
Logging

The latest News and Information on Log Management, Log Analytics and related technologies.

3 Ways LogStream Can Improve Your Data Agility

Four months into this new gig at Cribl, I wish I could bottle up the “lightbulb” moment I get when walking people through how Cribl LogStream can help them gain better control of their observability data. I hope the scenario walkthroughs below capture some of that magic and shed some light on how LogStream can improve your organization’s data agility, helping you do more with your data, quickly, and with fewer engineering resources.

A Splunk Approach to Baselines, Statistics and Likelihoods on Big Data

A common challenge I see when working with customers involves running complex statistics to produce a description of the expected behaviour of a value, and then using that information to assess the likelihood of a particular event happening. In short: we want something to tell us, "Is this event normal?" Sounds easy, right? Well, sometimes yes, sometimes no. Let's look at how you might answer this question and then dive into some of the issues it poses as things scale up.
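To make the idea concrete, here is a minimal sketch in plain Python rather than SPL, with made-up hosts and response times, of what a baseline-and-likelihood check looks like: compute a per-host mean and standard deviation, then score a new event by how far it falls from that baseline.

```python
# Minimal sketch (not the post's SPL): build a per-host baseline of a numeric
# field, then score a new event by how far it falls from that baseline.
from statistics import mean, stdev

# Hypothetical historical events: (host, response_time_ms)
history = [("web-01", 120), ("web-01", 130), ("web-01", 125),
           ("web-01", 118), ("web-02", 300), ("web-02", 310), ("web-02", 295)]

# Baseline: mean and standard deviation per host.
baseline = {}
for host in {h for h, _ in history}:
    values = [v for h, v in history if h == host]
    baseline[host] = (mean(values), stdev(values))

def zscore(host, value):
    """How many standard deviations this value sits from the host's baseline."""
    mu, sigma = baseline[host]
    return (value - mu) / sigma if sigma else 0.0

# "Is this event normal?" -- flag anything beyond 3 standard deviations.
event = ("web-01", 480)
print(f"z={zscore(*event):.1f}", "anomalous" if abs(zscore(*event)) > 3 else "normal")
```

In Splunk you would typically build the same kind of baseline with stats or eventstats (avg and stdev) and compare events against it with eval; the interesting problems begin when that baseline has to cover a very large number of series.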

Feature Spotlight: Centralized Log Collection

Speedscale is proud to announce its Centralized Log Collection capability. When diagnosing the source of problems in your API, more information is better. For most engineers, the diagnosis process usually starts with the application logs. Unfortunately, logs are usually either discarded or stored in observability systems that engineers don’t have direct access to. Compounding this issue, the log information is typically not correlated with the calls that were made against the API.
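As a rough illustration of that correlation gap (a generic sketch, not how Speedscale implements it), the snippet below tags every log line with the request ID of the API call that produced it, so a log message can later be tied back to the call it belongs to; the handler and header name are hypothetical.

```python
# Minimal sketch: attach a per-call request ID to every log line so logs can be
# correlated with the API call that produced them.
import logging
import uuid

logger = logging.getLogger("api")
logging.basicConfig(
    format="%(asctime)s %(levelname)s request_id=%(request_id)s %(message)s",
    level=logging.INFO,
)

def handle_request(payload):
    # Hypothetical handler: propagate the caller's request ID, or generate one.
    request_id = payload.get("x-request-id", str(uuid.uuid4()))
    extra = {"request_id": request_id}
    logger.info("request received", extra=extra)
    try:
        result = {"status": "ok"}          # ... real work would happen here ...
        logger.info("request completed", extra=extra)
        return result
    except Exception:
        logger.exception("request failed", extra=extra)
        raise

handle_request({"x-request-id": "abc-123"})
```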

Prevent Data Downtime with Anomaly Detection

A couple of months ago, a Splunk admin told us about a bad experience with data downtime. Every morning, the first thing she would do was check that her company’s data pipelines hadn’t broken overnight. She would log into her Splunk dashboard and run an SPL query to get the previous night’s ingest volume for their main Splunk index, making sure nothing looked out of the ordinary.
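Automating that morning check is straightforward in principle. Here is a minimal sketch, with made-up volumes and not the detector described in the post, that compares last night’s ingest for an index against the trailing week and alerts on a sharp drop.

```python
# Minimal sketch (made-up numbers): alert when last night's ingest volume for
# an index drops well below its trailing average.
from statistics import mean

# Hypothetical nightly ingest volumes (GB) for the main index, oldest first;
# the final entry is last night's total.
nightly_gb = [512, 498, 530, 505, 521, 515, 140]
history, last_night = nightly_gb[:-1], nightly_gb[-1]

baseline = mean(history)
drop = 1 - last_night / baseline

# Flag anything more than 50% below the trailing average as likely data downtime.
if drop > 0.5:
    print(f"ALERT: ingest was {last_night} GB, {drop:.0%} below the {baseline:.0f} GB baseline")
else:
    print("Ingest volume looks normal.")
```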

Using Oracle Cloud as a Data Lake Made Simple With Cribl LogStream

Cloud providers such as AWS, Azure, Google Cloud Platform, and Oracle Cloud all offer object storage solutions to economically store large volumes of data and retrieve it on demand. It’s far cheaper to store one petabyte of data in object storage than in block storage. As AWS S3 has become the de facto standard, many on-premises storage appliance vendors have incorporated S3 APIs to store and retrieve data. Oracle wisely continued that trend with OCI (Oracle Cloud Infrastructure).
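Because OCI exposes an Amazon S3 Compatibility API, standard S3 tooling can talk to OCI Object Storage with nothing more than a custom endpoint. The sketch below uses boto3 with placeholder namespace, region, bucket, and credentials; it illustrates the S3-compatible pattern, not Cribl’s LogStream configuration.

```python
# Minimal sketch, assuming OCI's S3 Compatibility API and placeholder credentials:
# write an object to an OCI Object Storage bucket using the standard boto3 S3 client.
import boto3

# Hypothetical values -- substitute your own tenancy namespace, region, and keys.
NAMESPACE = "mytenancynamespace"
REGION = "us-ashburn-1"

s3 = boto3.client(
    "s3",
    region_name=REGION,
    endpoint_url=f"https://{NAMESPACE}.compat.objectstorage.{REGION}.oraclecloud.com",
    aws_access_key_id="<customer-secret-key-id>",
    aws_secret_access_key="<customer-secret-key>",
)

# Store a small log archive in the bucket; LogStream would do this at volume.
s3.put_object(Bucket="observability-archive", Key="logs/2022/01/01/events.json.gz",
              Body=b"...compressed log data...")
```

In LogStream itself, the equivalent is typically an S3-compatible destination pointed at the same endpoint.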

The Five Tenets of Observability

A new year is a chance for a fresh start, and it’s a great opportunity to think about the monitoring and observability platform you’re using for your applications. If you’ve been using a legacy monitoring system, you’ve probably heard about observability all over the ‘net and want to figure out whether it’s really something you need to care about.

Make the most of your observability data with the Data Volume app

As a DevOps, SecOps, or IT operations manager, you’re surrounded by the technology that runs the entire organization: legacy infrastructure, multi-cloud environments, services, tools, and applications. All of these components generate a huge amount of data, some of which you need to leverage for full-stack observability to ensure the systems supporting the business are running efficiently.