
Latest News

The new era of observability: Why logs matter more than ever

20 years ago, software ate the world. The old ways of monitoring, failing over, or routinely rebooting quickly became inadequate, and with a new focus on software excellence, how we monitor and maintain applications had to be rethought. Even back then, when new software was released on an annual basis, it was clear that developers and futurists needed to build, inform, and optimize their approach, which required a deeper understanding of the application experience.

Introducing the Observability Center of Excellence: Taking Your Observability Game to the Next Level

Chasing false alerts — or worse, having your system go down with no alerts or telemetry to give you a heads-up — is the nightmare we all want to avoid. If you’ve experienced this, you’re not alone. Before joining Splunk, I spent 14 years as an observability practitioner and leader for several Fortune 500 companies, and in my 2.5 years with Splunk I have had the opportunity to work with customers of all shapes and sizes.

Using Observo AI as a Security Data Fabric

Data fabrics are cohesive data layers that bridge data sources with data consumers, including analytics platforms such as SIEMs. They automate tasks like data ingestion, integration, and curation across diverse data sources, improving the agility and responsiveness of data ecosystems. More specifically, a security data fabric adds capabilities such as governance and compliance, security enrichment, and the integration of security events.

Understanding Java Logs

Logs are the notetakers for your Java application. In a meeting, you might take notes so that you can remember important details later. Your Java logs do the same thing for your application. They document important information about the application’s ability to function and problems that keep it from working as intended. Logs give you information to help fix coding errors, but they also give your end users information that helps them monitor performance and security.
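To make this concrete, here is a minimal sketch of that kind of "notetaking" using the JDK's built-in java.util.logging package (the class and method names, such as OrderService and processOrder, are hypothetical examples, not from the article):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {
    // Standard JDK logger; many production apps use SLF4J/Logback instead
    private static final Logger LOGGER = Logger.getLogger(OrderService.class.getName());

    public void processOrder(String orderId) {
        // Routine note: useful later when reconstructing what the app did
        LOGGER.info("Processing order " + orderId);
        try {
            // ... business logic would go here ...
        } catch (RuntimeException e) {
            // Record the stack trace so the failure can be diagnosed from the logs
            LOGGER.log(Level.SEVERE, "Failed to process order " + orderId, e);
            throw e;
        }
    }
}
```

The key habit the analogy points at: log enough context (the order ID, the exception) that the "meeting notes" still make sense when you read them days later.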

How to Build a Data Migration Plan: A Step-by-Step Guide

Data volumes are growing at an extraordinary pace, with a compound annual growth rate (CAGR) of 28% projected over the next few years. For organizations dealing with logs, metrics, and traces, this massive data expansion brings both opportunities and challenges. As data volumes soar, having flexibility in where you store and analyze it—whether in a SIEM, object storage, or other platforms—has become essential.
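To see how quickly a 28% CAGR compounds, here is a small illustration (the 28% rate comes from the text above; the 100 TB starting volume is a made-up figure for the example):

```java
public class GrowthProjection {
    public static void main(String[] args) {
        double cagr = 0.28;       // 28% compound annual growth rate (from the text)
        double volumeTb = 100.0;  // hypothetical starting telemetry volume, in TB

        for (int year = 1; year <= 5; year++) {
            volumeTb *= 1 + cagr; // compound once per year
            System.out.printf("Year %d: %.1f TB%n", year, volumeTb);
        }
        // At 28% per year, volume more than triples in five years (1.28^5 ≈ 3.44)
    }
}
```

In other words, an organization storing 100 TB of logs today would be planning for roughly 340 TB within five years at that rate — which is why storage flexibility matters.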

Revisiting improved HTTP logging in ASP.NET Core 8

A few years ago, I had a play with the HTTP logging added in ASP.NET Core 6. ASP.NET Core 8 introduced a set of additional configuration options that I believe are essential to making this feature usable. I will recap the details from the previous post below, but for more context, the first part of this series is here. In this post, I'll go through some of the changes introduced in HTTP logging since then. Before I jump into the improvements, let's recap how to set up HTTP logging.

Using Trace Data for Effective Root Cause Analysis

Solving system failures and performance issues can be like solving a tough puzzle for engineers. But trace data can make it simpler. It helps engineers see how systems behave, find problems, and understand what's causing them. So let’s chat about why trace data is important, how it's used for finding the root cause of issues, and how it can help engineers troubleshoot more effectively.
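To sketch the idea, here is a toy model of what trace data captures (this is not a real tracing library — in practice you would use something like OpenTelemetry — but it shows how spans share a trace ID and link child work back to its caller, which is what makes root cause analysis possible):

```java
import java.util.UUID;

public class TraceDemo {
    // A toy span: one timed operation within a trace
    static final class Span {
        final String traceId;      // shared by every span in one request
        final String spanId = UUID.randomUUID().toString().substring(0, 8);
        final String parentSpanId; // links child work back to its caller
        final String name;
        final long startNanos = System.nanoTime();

        Span(String traceId, String parentSpanId, String name) {
            this.traceId = traceId;
            this.parentSpanId = parentSpanId;
            this.name = name;
        }

        void end() {
            long durationMicros = (System.nanoTime() - startNanos) / 1_000;
            System.out.printf("trace=%s span=%s parent=%s name=%s duration=%dus%n",
                    traceId, spanId, parentSpanId, name, durationMicros);
        }
    }

    public static void main(String[] args) {
        String traceId = UUID.randomUUID().toString().substring(0, 8);
        Span request = new Span(traceId, null, "GET /checkout");
        Span db = new Span(traceId, request.spanId, "SELECT orders");
        db.end();      // in a real trace, the slowest child span points at the bottleneck
        request.end();
    }
}
```

Because every span records its parent, an engineer can walk from a slow request down to the exact downstream call that caused it — that parent/child chain is the "puzzle map" the teaser describes.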

How to Integrate Docker with Logit.io

Docker is an open-source containerization platform, designed to help developers build, run, and share container applications. Users building and running these container applications need effective debugging and monitoring practices, and for this, many have turned to Docker logging. To explore why this matters, the latest edition of our how-to guide series covers Docker.