
Latest News

Don't Drown in Your Data - Why you don't need a Data Lake

As a leader in Security Analytics, we at Elastic are often asked to recommend architectures for long-term data analysis, and more often than not, the concept of Limitless Data is new to the people asking. Other security analytics vendors, struggling to support long-term data retention and analysis, perpetuate the myth that organizations have no option but to deploy a slow and unwieldy data lake (or swamp) to store data for long periods. Let’s bust this myth.

Integrating BindPlane Into Your Splunk Environment Part 2

Collecting data into a monitoring environment that does not natively support the data source can be a challenge. BindPlane can help solve this problem. Because the BindPlane Agent is based on OpenTelemetry (and is designed to be as flexible as possible), it can bring in data from disparate sources that the Splunk Universal Forwarder does not easily support.
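
To make that concrete, here is a minimal sketch of an OpenTelemetry-collector-style configuration that tails a custom application log and ships it to Splunk over the HTTP Event Collector, using the contrib `filelog` receiver and `splunk_hec` exporter. The file path, endpoint, token, and index are placeholders, and the BindPlane Agent’s own configuration format may differ in detail.

```yaml
receivers:
  filelog:
    # Tail a log source the Universal Forwarder doesn't easily handle
    include: [ /var/log/custom-app/*.log ]

exporters:
  splunk_hec:
    # HEC endpoint and token are placeholders for your environment
    endpoint: "https://splunk.example.com:8088/services/collector"
    token: "${SPLUNK_HEC_TOKEN}"
    source: "custom-app"
    sourcetype: "custom:app"
    index: "main"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [splunk_hec]
```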

The Quixotic Expedition Into the Vastness of Edge Logs, Part 2: How to Use Cribl Search for Intrusion Detection

For today’s IT and security professionals, threats come in many forms, from external actors attempting to breach your network defenses to internal risks like rogue employees or insecure configurations. Left undetected, these threats can lead to serious consequences such as data loss, system downtime, and reputational damage. Detecting them is challenging, however, due to the sheer volume and complexity of data generated by today’s IT systems.
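
As a flavor of what such a hunt might look like, here is a hedged sketch of a Kusto-style Cribl Search query that counts failed SSH logins per source IP across edge logs; the dataset name, field names, and threshold are hypothetical.

```
dataset="edge_syslog"
| where _raw contains "Failed password"
| extend src_ip = extract(@"from (\d+\.\d+\.\d+\.\d+)", 1, _raw)
| summarize attempts = count() by src_ip
| where attempts > 20
| order by attempts desc
```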

Automatic log level detection reduces the cognitive load of identifying anomalies at 3 am

Let’s face it: when that alert goes off at 2:58 am, abruptly shaking you out of a deep slumber because a high-priority issue is hitting the application, you’re not 100% “on”. You need to shake the fog out of your head and focus on the urgent task of fixing the problem. This is where a strong log analytics tool can take on some of that cognitive load. Sumo Logic recently released new Log Search features that automatically detect log levels.
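
To illustrate the general idea (this is a conceptual sketch, not Sumo Logic’s actual implementation), a tool can infer a level from free-form log lines that carry no explicit level field; the patterns and sample lines below are hypothetical.

```python
import re

# Ordered patterns: first match wins. A conceptual sketch of inferring
# a log level from free-form lines; not Sumo Logic's implementation.
LEVEL_PATTERNS = [
    ("error", re.compile(r"\b(err(or)?|fatal|crit(ical)?|exception)\b", re.I)),
    ("warn",  re.compile(r"\bwarn(ing)?\b", re.I)),
    ("info",  re.compile(r"\binfo\b", re.I)),
    ("debug", re.compile(r"\b(debug|trace)\b", re.I)),
]

def detect_level(line: str) -> str:
    """Return the first level whose pattern matches the line, else 'unknown'."""
    for level, pattern in LEVEL_PATTERNS:
        if pattern.search(line):
            return level
    return "unknown"

for line in [
    "2024-05-01 02:58:01 ERROR payment service timed out",
    "2024-05-01 02:58:02 [warn] retry queue is filling up",
    "2024-05-01 02:58:03 request handled in 12ms",
]:
    print(f"{detect_level(line):7} | {line}")
```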

Leveraging Git for Cribl Stream Config: A Backup and Tracking Solution

Connecting your Cribl Stream instance to a remote Git repo is a great way to keep a backup of the Cribl config. It also makes it easy to track and review every Cribl Stream config change, improving accountability and auditing. Our goal: configure Cribl Stream with a remote Git repo and enable signed Git commits. Signed commits use cryptography to attach a digital signature to each commit.
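
At the Git level, the signing setup looks roughly like the following; the remote URL and key ID are placeholders, and within Cribl Stream itself the remote repo is typically configured through its Git settings rather than on the command line.

```sh
# Point the local Cribl Stream config repo at a remote (URL is a placeholder)
git remote add origin git@github.com:example-org/cribl-config.git

# Tell Git which GPG key to use, and sign every commit by default
git config user.signingkey YOUR_GPG_KEY_ID
git config commit.gpgsign true

# Commit a config change, then verify its signature
git commit -m "Update Cribl Stream pipelines"
git log --show-signature -1
```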

Store and analyze high-volume logs efficiently with Flex Logs

The volume of logs that organizations collect from across their systems is growing rapidly. Sources range from distributed infrastructure to data pipelines and APIs, and different types of logs demand different treatment. As a result, logs have become increasingly difficult to manage: organizations must reconcile conflicting needs for long-term retention, rapid access, and cost-effective storage.

Send your logs to multiple destinations with Datadog's managed Log Pipelines and Observability Pipelines

As your infrastructure and applications scale, so does the volume of your observability data. Managing a growing suite of tooling while balancing the need to control costs, avoid vendor lock-in, and maintain data quality across an organization is increasingly complex. With a variety of installed agents, log forwarders, and storage tools, the mechanisms you use to collect, transform, and route data should be able to evolve with your growth and meet the unique needs of your team.
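
The fan-out pattern itself is straightforward. As a hedged sketch in the style of the open source Vector engine (which Datadog’s Observability Pipelines Worker is based on), a single source can feed multiple sinks; the paths, bucket, and API key below are placeholders, and the exact keys are illustrative.

```yaml
sources:
  app_logs:
    type: file
    include:
      - /var/log/app/*.log

sinks:
  # Keep sending logs to Datadog for analysis
  datadog:
    type: datadog_logs
    inputs: [app_logs]
    default_api_key: "${DD_API_KEY}"

  # Simultaneously archive raw logs to S3 for cheap long-term retention
  archive:
    type: aws_s3
    inputs: [app_logs]
    bucket: log-archive
    region: us-east-1
    encoding:
      codec: json
```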

Data Lakes Explored: Benefits, Challenges, and Best Practices

A data lake is a repository for terabytes or petabytes of raw data stored in its original format. The data can originate from a variety of sources: IoT and sensor data, simple files, or binary large objects (BLOBs) such as video, audio, image, or multimedia files. Any manipulation of the data, such as putting it into a pipeline to make it usable, happens only when the data is extracted from the lake, an approach known as schema-on-read.
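
A short Python sketch of this schema-on-read pattern (the records and field names here are hypothetical): raw records stay untouched in the lake, and structure is imposed only at extraction time.

```python
import json
import math

# Schema-on-read sketch: records sit in the lake in their raw, original
# form; structure is imposed only when data is extracted for use.
raw_records = [
    '{"device": "sensor-7", "temp_c": 21.4, "ts": "2024-05-01T02:58:00Z"}',
    '{"device": "sensor-9", "ts": "2024-05-01T02:58:05Z"}',  # missing a field
    "raw binary blob \x00\x01",                              # not parseable
]

def extract(record: str) -> dict | None:
    """Apply structure at extraction time, tolerating messy raw input."""
    try:
        doc = json.loads(record)
    except json.JSONDecodeError:
        return None  # unusable blobs simply stay in the lake untouched
    return {"device": doc.get("device"), "temp_c": doc.get("temp_c", math.nan)}

usable = [row for row in map(extract, raw_records) if row is not None]
print(usable)
```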

Dark Data: Discovery, Uses, and Benefits of Hidden Data

Dark data is all of the unused, unknown, and untapped data across an organization. It is generated by users’ daily online interactions with countless devices and systems: everything from machine data to server log files to unstructured data derived from social media. Organizations may consider this data too old to provide value, incomplete or redundant, or locked in a format that can’t be accessed with available tools.