
Splunk

Data Lakehouses: Everything You Need To Know

A data lakehouse is a modern data architecture that combines the features of data lakes and data warehouses, and it has become popular among many organizations. Those features make a lakehouse ideal for a range of data analytics use cases. This article explains data lakehouses, including how they emerged, how they compare with data lakes and data warehouses, their architecture, and finally, the pros and cons of adopting one.

Building Resilience With the Splunk Platform One Use Case at a Time

You know that the Splunk platform is the ultimate tool to help advance your business on the path to resilience. You want to use it to see across hybrid environments, overcome alert fatigue, and get ahead of issues. Perhaps you're just starting your security journey and want to build an essential security foundation; perhaps you're new to observability and want to accelerate your troubleshooting. You might be working in retail, telecommunications, or the public sector.

Predictions: A Deeper Dive into the Rise of the Machines

As Gaurav described in his retail predictions blog, the impact of AI and automation on the retail industry should not be underestimated. The compound effects of technological improvements and labor shortages have created an ideal scenario for innovation. Here we'll take a deeper look at some of the AI and automation use cases we have seen in retail and outline some areas of focus to help you get started.

Log Aggregation: Everything You Need to Know for Aggregating Log Data

Log aggregation is the process of consolidating log data from all sources — network nodes, microservices and application components — into a unified, centralized repository. It is an important step in the continuous, end-to-end log management process, where aggregation is followed by log analysis, reporting and disposal. In this article, let's take a look at the process of log aggregation as well as its benefits.
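The consolidation step described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the source names, log lines and timestamp format below are all hypothetical, and it assumes every line begins with an ISO-8601 timestamp.

```python
from datetime import datetime

# Hypothetical log lines from three different sources, each prefixed
# with an ISO-8601 timestamp (names and formats are illustrative).
web_logs = ["2024-01-01T10:00:02 web GET /checkout 200"]
app_logs = ["2024-01-01T10:00:01 app order-service started"]
db_logs = ["2024-01-01T10:00:03 db slow query on orders table"]

def aggregate(*sources):
    """Consolidate log lines from many sources into one
    chronologically ordered stream."""
    merged = [line for source in sources for line in source]
    # Sort on the leading timestamp so events from different
    # sources interleave in the order they actually occurred.
    merged.sort(key=lambda line: datetime.fromisoformat(line.split()[0]))
    return merged

for line in aggregate(web_logs, app_logs, db_logs):
    print(line)
```

In a real deployment the "sources" would be forwarders or collectors shipping to a central store, but the core idea is the same: one repository, one time-ordered view.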

The Incident Commander Role: Duties & Best Practices for ICs

Imagine that a critical incident — a major outage, cyberattack or disaster — strikes your company out of nowhere. You'll want to minimize the damage and get back to normal operations as quickly as possible. But how will you do that if no one knows how to manage such incidents? This is where incident commanders come in: trained professionals who lead the response to critical incidents.

Common Event Format (CEF): An Introduction

In the world of software engineering, monitoring and logging are two essential processes that help developers keep track of the performance and behavior of their applications. To facilitate this process, several logging formats have been developed over the years, including the Common Event Format (CEF). In this blog post, we will take a closer look at what the Common Event Format is, how it works, and why it is important.
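To make the format concrete, here is a simplified look at what a CEF record contains: a header of seven pipe-delimited fields (version, device vendor, device product, device version, signature ID, name, severity) followed by a space-separated extension of key=value pairs. The record below and its vendor/field values are invented for illustration, and the parser ignores CEF's escaping rules for embedded pipe and equals characters.

```python
# A sample (fabricated) CEF record. Header: 7 pipe-delimited fields,
# then an extension of key=value pairs. No escaped "|" or "=" handled.
record = (
    "CEF:0|Acme|Firewall|1.0|100|Blocked connection|5|"
    "src=10.0.0.1 dst=10.0.0.2 spt=443"
)

def parse_cef(line):
    """Split a simplified CEF line into header fields and extensions."""
    _, body = line.split("CEF:", 1)
    parts = body.split("|", 7)  # 7 splits -> 7 header fields + extension
    header_names = [
        "version", "device_vendor", "device_product",
        "device_version", "signature_id", "name", "severity",
    ]
    header = dict(zip(header_names, parts[:7]))
    extension = dict(pair.split("=", 1) for pair in parts[7].split())
    return header, extension

header, ext = parse_cef(record)
print(header["device_vendor"])  # Acme
print(ext["src"])               # 10.0.0.1
```

The fixed header makes records machine-routable while the extension carries event-specific detail, which is why CEF works well as an interchange format between security tools.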

Data Analytics 101: The 4 Types of Data Analytics Your Business Needs

Data analytics refers to the discovery, management and communication of meaningful insights from historical information to drive business processes and improve decision making. So, let's take a look at data analytics today: specifically, the 4 types you need and what they'll tell you about your organization.

How Monitoring, Observability & Telemetry Come Together for Business Resilience

Systems going down because of an unforeseen incident? Got problems with your app or website? Is your audience missing out on products and services because your load times are too slow? Then monitoring and observability (and telemetry) should be of interest to you! In this article, we're covering everything, starting with the concepts and how they work.