
The latest News and Information on Log Management, Log Analytics and related technologies.

A look at Azure monitoring and troubleshooting

Even now, plenty of businesses are still making the shift to the cloud. Chief decision-makers are plagued by fears about availability, potential downtime and security. Organizations adopting Microsoft Azure need to be able to make the transition confidently and without interruptions, which requires building out a strategy for monitoring the Azure environment.

Observability Onboarding Video Series Part 2 (of 3): Adding Use Cases!

The second video in this series walks you through the next stage of onboarding, with a focus on two key use cases: Monitoring Kubernetes Pods with Splunk Infrastructure Monitoring and Troubleshooting Microservices with Splunk Application Performance Monitoring.

What is Log Analytics? The Significance of Log Analytics Solutions Explained

Every action, transaction, and interaction in an application generates some sort of data. This data holds a wealth of information that, when collected and analyzed over a period of time, provides a comprehensive view of application behavior and performance. Logging is the most widely used technique for collecting data on application states, transactions, errors, and code flow tracking.
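As a minimal sketch of the idea, the example below emits structured (JSON) log lines that a log analytics tool can parse into fields; the service name, event names, and field names are hypothetical choices for illustration, not part of any particular product:

```python
import json
import logging

# Configure a logger whose records are raw JSON strings, so each line
# is machine-parseable by a downstream log analytics pipeline.
logger = logging.getLogger("checkout-service")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(event, **fields):
    """Emit one structured log line describing an application event."""
    line = json.dumps({"event": event, **fields})
    logger.info(line)
    return line

# Example: record a transaction with its outcome and latency.
log_event("order_placed", order_id="A-1001", status="ok", latency_ms=42)
```

Because every line carries the same named fields, queries such as "average latency_ms per event over the last hour" become straightforward aggregations rather than regex archaeology.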

How To Visualize Business Service Performance with Splunk ITSI

Given the complexity of modern digital landscapes, the ability to effectively monitor them and understand their impact on your business is not just desirable; it's a necessity. This is where Splunk IT Service Intelligence (ITSI) comes into play. ITSI offers a sophisticated platform for service insights and detailed analytics that can be used by digital operations teams as the first step of a troubleshooting workflow.

Pipeline Module: Event to Metric

At the most abstract level, a data pipeline is a series of steps for processing data, where the type of data being processed determines the types and order of the steps. In other words, a data pipeline is an algorithm, and standard data types can be processed in a standard way, just as solving an algebra problem follows a standard order of operations.
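The idea of a pipeline as an ordered series of steps can be sketched in a few lines; the step names and event format below are hypothetical, chosen only to illustrate an event-to-metric transformation (parse, filter, then aggregate into a per-service metric):

```python
from collections import defaultdict

def parse(raw_lines):
    """Parse step: turn raw lines of 'service status latency_ms' into events."""
    for line in raw_lines:
        service, status, latency = line.split()
        yield {"service": service, "status": status, "latency_ms": int(latency)}

def keep_errors(events):
    """Filter step: keep only error events."""
    return (e for e in events if e["status"] == "error")

def count_by_service(events):
    """Aggregate step: collapse events into a per-service error-count metric."""
    counts = defaultdict(int)
    for event in events:
        counts[event["service"]] += 1
    return dict(counts)

def run_pipeline(raw_lines):
    # The steps run in a fixed order, like a standard order of operations.
    return count_by_service(keep_errors(parse(raw_lines)))

lines = ["api ok 12", "api error 250", "db error 90", "api error 310"]
print(run_pipeline(lines))  # → {'api': 2, 'db': 1}
```

Each step is a plain function over a standard event shape, which is what makes the order of steps, and the pipeline as a whole, reusable across data sources.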

Demystifying Kubernetes Observability with Generative AI and LLMs

Generative AI and large language models (LLMs) are fundamentally changing the way we interact with data, especially in the realm of Kubernetes and observability. These technologies are reshaping our field, and there is a lot to understand and unpack so organizations like yours can make sense of it all. What data is important, and what isn't? How can LLMs make my day-to-day easier, and what do I need to do to ensure I don't get overwhelmed?