
Logging

The latest News and Information on Log Management, Log Analytics and related technologies.

How to Monitor Host Metrics with OpenTelemetry

Today's environments often present the challenge of collecting data from a mix of sources: multiple clouds, on-premises infrastructure, or a hybrid of the two. Each cloud provider has its own tools that send data to its own telemetry platform. OpenTelemetry can monitor cloud VMs, on-premises VMs, and bare-metal systems and send all of that data to a single, unified monitoring platform, across multiple operating systems and vendors.
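As a rough sketch of what that looks like in practice, the snippet below uses the OpenTelemetry Python SDK to publish a couple of host metrics as observable gauges and ship them over OTLP. The collector endpoint, the metric names, and the use of psutil are assumptions made for this example, not details taken from the article.

```python
# Minimal sketch -- assumptions: psutil for host stats, an OTLP endpoint on
# localhost:4317, and illustrative metric names. None of these come from the article.
import time

import psutil

from opentelemetry import metrics
from opentelemetry.metrics import CallbackOptions, Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.sdk.resources import Resource
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter


def cpu_utilization(options: CallbackOptions):
    # Report system-wide CPU utilization as a fraction between 0.0 and 1.0.
    yield Observation(psutil.cpu_percent(interval=None) / 100.0)


def memory_usage(options: CallbackOptions):
    # Report used physical memory in bytes.
    yield Observation(psutil.virtual_memory().used)


# Export metrics periodically to any OTLP-compatible backend (hypothetical endpoint).
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="localhost:4317", insecure=True),
    export_interval_millis=15_000,
)
provider = MeterProvider(
    resource=Resource.create({"service.name": "host-metrics-demo"}),
    metric_readers=[reader],
)
metrics.set_meter_provider(provider)

meter = metrics.get_meter("host.metrics.example")
meter.create_observable_gauge(
    "system.cpu.utilization", callbacks=[cpu_utilization], unit="1"
)
meter.create_observable_gauge(
    "system.memory.usage", callbacks=[memory_usage], unit="By"
)

if __name__ == "__main__":
    # Keep the process alive so the reader keeps exporting on its interval.
    while True:
        time.sleep(60)
```

In practice, many teams let the OpenTelemetry Collector gather host metrics without writing any custom code; the SDK version above is only meant to make the mechanics visible.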

A look at Azure monitoring and troubleshooting

Even now, plenty of businesses are still making the shift to the cloud, and chief decision-makers are plagued by fears about availability, potential downtime, and security. Organizations adopting Microsoft Azure need to be able to make the transition confidently and without interruptions, which requires building out a strategy for monitoring their Azure environment.

Observability Onboarding Video Series Part 2 (of 3): Adding Use Cases!

The second video in this series walks you through the next stage of onboarding, with a focus on two key use cases: Monitoring Kubernetes Pods with Splunk Infrastructure Monitoring and Troubleshooting Microservices with Splunk Application Performance Monitoring.

What is Log Analytics? The Significance of Log Analytics Solutions Explained

Every action, transaction, and interaction in an application generates some sort of data, and this data holds a wealth of information that, when collected and analyzed over time, provides a comprehensive view of application behavior and performance. Logging is the most widely used technique for collecting data on application states, transactions, errors, and code flow.
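To make that concrete, here is a minimal sketch using Python's standard logging module to emit structured, machine-parseable log lines for a transaction. The field names and JSON layout are illustrative choices, not anything prescribed by the article.

```python
# Minimal structured-logging sketch (field names and JSON layout are illustrative).
import json
import logging
import time


class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object so a log analytics tool can parse it."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Carry along any structured context passed via `extra=...`.
        for key in ("transaction_id", "duration_ms", "status"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Example: log the outcome of a (hypothetical) transaction with structured fields.
start = time.monotonic()
try:
    # ... do the actual work here ...
    logger.info(
        "order placed",
        extra={
            "transaction_id": "txn-1042",
            "duration_ms": round((time.monotonic() - start) * 1000, 2),
            "status": "ok",
        },
    )
except Exception:
    logger.exception(
        "order failed", extra={"transaction_id": "txn-1042", "status": "error"}
    )
```

Emitting one JSON object per event like this is what lets a log analytics platform later slice the same records by transaction, status, or duration.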

Pipeline Module: Event to Metric

At the most abstract level, a data pipeline is a series of steps for processing data, where the type of data being processed determines the types and order of the steps. In other words, a data pipeline is an algorithm, and standard data types can be processed in a standard way, just as solving an algebra problem follows a standard order of operations.
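To make the "pipeline as algorithm" framing concrete, here is a small sketch of an event-to-metric step: it parses raw log events, drops malformed ones, and aggregates the rest into counter-style metrics. The event shape, field names, and aggregation rules are assumptions made for illustration, not the actual pipeline module described in the article.

```python
# Illustrative event-to-metric pipeline: the event schema and aggregation
# rules here are assumptions, not the article's actual pipeline module.
from collections import Counter
from typing import Iterable


def parse(events: Iterable[dict]) -> Iterable[dict]:
    """Step 1: keep only well-formed events that carry the fields we aggregate on."""
    for event in events:
        if "service" in event and "status" in event:
            yield event


def to_metrics(events: Iterable[dict]) -> dict:
    """Step 2: collapse individual events into counter metrics keyed by dimensions."""
    counts: Counter = Counter()
    for event in events:
        counts[(event["service"], event["status"])] += 1
    return {
        f"requests_total{{service={svc},status={status}}}": n
        for (svc, status), n in counts.items()
    }


# Steps run in a fixed order, like an order of operations: parse, then aggregate.
raw_events = [
    {"service": "checkout", "status": "200"},
    {"service": "checkout", "status": "500"},
    {"service": "checkout", "status": "200"},
    {"service": "search", "status": "200"},
    {"malformed": True},
]
print(to_metrics(parse(raw_events)))
# -> {'requests_total{service=checkout,status=200}': 2,
#     'requests_total{service=checkout,status=500}': 1,
#     'requests_total{service=search,status=200}': 1}
```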

Demystifying Kubernetes Observability with Generative AI and LLMs

Generative AI and large language models (LLMs) are fundamentally changing the way we interact with data, especially in the realm of Kubernetes and observability. These technologies are reshaping our field, and there is a lot to understand and unpack so organizations like yours can make sense of it all. What data is important, and what isn’t? How can LLMs make my day-to-day easier, and what do I need to do to ensure I don’t get overwhelmed?