The latest News and Information on Log Management, Log Analytics and related technologies.
Cost optimization has been one of the hottest topics in observability (and beyond!) lately. Everyone is striving to be efficient, spend money wisely, and get the most out of every dollar invested. At Logz.io, we recently embarked on a fruitful data volume optimization journey, reducing our own internal log volume by a whopping 50%. In this article, I’ll tell you exactly how we achieved that result.
We are caught in a whirlwind of rapid data growth. As more engineers, services, and sophisticated practices generate an astronomical amount of digital information, organizations face a mounting data explosion. Coralogix offers a unique solution to this problem: with Coralogix Remote Query, the platform can drive cost savings without sacrificing insights or functionality.
Enterprises are increasingly tired of feeling locked into vendors, and rightfully so. As soon as you put your observability data into a SaaS vendor’s storage, it effectively becomes their data, and it is difficult to get it out or reuse it for other purposes. As a result, strategic independence is becoming increasingly important as organizations decide which data management tools they’re going to invest time and resources in.
With governments doubling down on logging compliance, many public sector organizations have been focusing on optimizing their log management, especially to ensure they retain logs for the required periods of time. Logs, though seemingly straightforward, are the backbone of many mission-based use cases and therefore have the potential to accelerate mission success when centrally organized and leveraged strategically. In the public sector, logs are instrumental in…
Log events come in all sorts of shapes and sizes. Some are delivered as a single event per line. Others arrive as multi-line structures. Some come in as a stream of data that needs to be parsed, while still others arrive as an array that should be split into discrete entries. Because Cribl Stream works on events one at a time, we have to ensure we are dealing with discrete events before o11y and security teams can use the information in those events.
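To make that normalization step concrete, here is a minimal sketch of the kind of pre-processing such a pipeline performs. This is an illustrative example only, not Cribl Stream’s actual API: the `RawEvent` type and `splitIntoEvents` function are hypothetical names, and the three payload shapes handled (a JSON array, a single JSON object, and a newline-delimited stream) are the cases described above.

```typescript
// Hypothetical sketch: these names are illustrative, not part of the
// Cribl Stream API.
type RawEvent = Record<string, unknown>;

// Normalize an incoming payload into discrete events. Handles three
// common shapes: a JSON array of events, a single JSON object, and a
// newline-delimited stream of JSON or plain-text lines.
function splitIntoEvents(payload: string): RawEvent[] {
  const trimmed = payload.trim();

  // Case 1: a JSON array -- split it into one event per element.
  if (trimmed.startsWith("[")) {
    try {
      const parsed = JSON.parse(trimmed);
      if (Array.isArray(parsed)) {
        return parsed.map((e) =>
          typeof e === "object" && e !== null
            ? (e as RawEvent)
            : { _raw: String(e) }
        );
      }
    } catch {
      // Not valid JSON; fall through to line splitting below.
    }
  }

  // Case 2: a single-line JSON object -- one discrete event.
  if (trimmed.startsWith("{") && !trimmed.includes("\n")) {
    try {
      return [JSON.parse(trimmed) as RawEvent];
    } catch {
      // Not valid JSON; treat it as a raw text line below.
    }
  }

  // Case 3: a newline-delimited stream -- one event per non-empty line.
  return trimmed
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      try {
        return JSON.parse(line) as RawEvent; // NDJSON line
      } catch {
        return { _raw: line }; // plain-text log line
      }
    });
}

// Example: a batched array payload becomes two discrete events.
console.log(splitIntoEvents('[{"msg":"a"},{"msg":"b"}]').length); // 2
```

Multi-line structures such as stack traces need the inverse treatment, joining continuation lines into a single event before splitting; that aggregation step is omitted here for brevity.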