Event-driven architecture is an efficient and effective way to process unpredictable, high-volume events in software. As more applications and services adopt this architecture, real-time monitoring of event-driven systems has become increasingly important.
Elasticsearch is used for a wide variety of data types, and metrics are one of them. With the introduction of Metricbeat many years ago, and later our APM Agents, the metrics use case has grown in popularity. Over the years, Elasticsearch has made many improvements in how it handles metrics aggregations and sparse documents. At the same time, TSVB visualizations were introduced to make visualizing metrics easier.
In part 1 of this post, we talked about how Cribl empowers security functions by giving our customers freedom of choice and control over their data. This post focuses on their experiences and the benefits they are getting from our suite of products. In a past life, I was in charge of security and operational logging at Transunion; around 2015, things started going crazy.
We are caught in a whirlwind of rapid data change. As more engineers, services, and sophisticated practices generate an astronomical amount of digital information, organizations face a growing data explosion. Coralogix offers a unique solution to this problem: with Coralogix Remote Query, the platform can drive cost savings without sacrificing insights or functionality.
Cost optimization has been one of the hottest topics in observability (and beyond!) lately. Everyone is striving to be efficient, spend money wisely, and get the most out of every dollar invested. At Logz.io, we recently embarked on a very interesting and fruitful data volume optimization journey, reducing our own internal log volume by a whopping 50%. In this article, I'll tell you exactly how we achieved this result.
Enterprises are getting increasingly tired of feeling locked into vendors, and rightfully so. As soon as you put your observability data into a SaaS vendor's storage, it becomes their data, and it's difficult to get it out or reuse it for other purposes. As a result, strategic independence is becoming increasingly important as organizations decide which data management tools they're going to invest time and resources into.
Log events come in all sorts of shapes and sizes. Some are delivered as a single event per line. Others are delivered as multi-line structures. Some come in as a stream of data that needs to be parsed. Still others come in as an array that should be split into discrete entries. Because Cribl Stream works on events one at a time, we have to ensure we are dealing with discrete events before o11y and security teams can use the information in them.
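To make the idea concrete, here is a minimal sketch in Python of what "event breaking" means for two of these shapes: a JSON array payload split into one event per element, and a newline-delimited payload split into one event per line. This is an illustration of the concept only, not Cribl Stream's actual API; the helper name `break_into_events` and the `_raw` field convention are assumptions for the example.

```python
import json

def break_into_events(raw: str) -> list[dict]:
    """Split a raw payload into discrete event dicts.

    Handles two common shapes:
      - a JSON array of events: one event per array element
      - newline-delimited text: one event per non-empty line
    """
    stripped = raw.strip()
    if stripped.startswith("["):
        # Array payload: each element becomes its own event.
        return [{"_raw": json.dumps(item)} for item in json.loads(stripped)]
    # Line-oriented payload: one event per non-empty line.
    return [{"_raw": line} for line in stripped.splitlines() if line.strip()]

# Example: an array payload and a line-oriented payload
array_payload = '[{"msg": "login"}, {"msg": "logout"}]'
lines_payload = "GET /index 200\nPOST /api 500\n"

events = break_into_events(array_payload) + break_into_events(lines_payload)
# Four discrete events, each ready for downstream processing
```

Multi-line structures (stack traces, for example) need a stateful breaker that buffers lines until a new-event marker appears, which is why real pipelines configure event-breaking rules per source rather than relying on one universal splitter.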
With governments doubling down on logging compliance, many public sector organizations have been focusing on optimizing their log management, especially to ensure they retain logs for the required periods of time. Logs, though seemingly straightforward, are the backbone of many mission-based use cases and therefore have the potential to accelerate mission success when centrally organized and leveraged strategically. In the public sector, logs are instrumental to a wide range of mission needs.