
The latest News and Information on Log Management, Log Analytics and related technologies.

Cut Costs, Not Visibility. Use S3 for Low-Cost Log Retention and Faster Response.

Why pay for continuous ingestion of data you rarely use? Learn how to maintain a lean data strategy by keeping long-term logs in cheap S3 storage, while retaining the power to "promote" specific slices into Splunk whenever an audit or investigation arises. See how Promote for Amazon S3 gives you the low cost of S3 storage without sacrificing the speed of local indexing in investigations.

Catch Every Moment in Kubernetes: Splunk's Observability Advantage

Discover why real-time, unsampled observability is critical for Kubernetes environments with Stephane Estevez from Splunk at KubeCon Europe 2026. Learn how Splunk’s unique approach helps you catch every important moment—even when containers vanish in milliseconds. Watch now for expert insights on cloud-native monitoring, observability, and Kubernetes best practices!

Claude Code + OpenTelemetry: Per-Session Cost and Token Tracking

I was looking at our Claude Code spend in the Anthropic console the other day. Aggregate cost, aggregate tokens — no breakdown by developer, no breakdown by session. I knew my Hackathon team had been using it heavily on building out new features for the OpenTelemetry Distro Builder. But heavily how? I had no idea. Turns out Claude Code has been emitting OpenTelemetry signals the whole time. Per-session cost, token counts, every tool call it makes on your codebase.
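Enabling that telemetry comes down to a few environment variables. A minimal sketch, assuming a local OTLP collector listening on the default gRPC port (the endpoint is a placeholder, and the variable names reflect recent Claude Code releases, so check the current monitoring docs before relying on them):

```shell
# Opt in to Claude Code's OpenTelemetry emission (assumed variable name).
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Export metrics (cost, token counts) and logs/events (tool calls) via OTLP.
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp

# Hypothetical local collector on the default OTLP gRPC port.
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Launch Claude Code from this shell so it inherits the settings:
# claude
```

From there, the collector can break the aggregate numbers down by session and user, routing per-session cost and token metrics to whatever backend you already dashboard on.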

Bindplane + VictoriaMetrics: Unified Telemetry for Metrics, Traces, and Logs at Scale

We’re excited to announce new native Bindplane destinations for the VictoriaMetrics ecosystem. It’s now easier to collect, process, and route OpenTelemetry metrics, traces, and logs at scale. You can directly connect VictoriaMetrics’ high-performance storage engines to Bindplane’s vendor-neutral, OpenTelemetry-native telemetry pipeline.
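Bindplane manages OpenTelemetry Collector-based agents under the hood, so the metrics leg of such a pipeline can be sketched as a plain collector config. A minimal sketch, assuming a single-node VictoriaMetrics instance on its default port (the hostname and ports are placeholders; the Bindplane destination handles this wiring for you):

```yaml
# Receive OTLP metrics, batch them, and remote-write to VictoriaMetrics.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch:
exporters:
  prometheusremotewrite:
    # VictoriaMetrics accepts Prometheus remote write at /api/v1/write.
    endpoint: http://victoriametrics:8428/api/v1/write
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite]
```

Because VictoriaMetrics speaks Prometheus remote write natively, the stock prometheusremotewrite exporter is enough for metrics; traces and logs would route to their own endpoints in the VictoriaMetrics ecosystem (e.g., VictoriaLogs).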

From Alerts to Answers: Introducing Coralogix Cases

Modern incident response doesn’t fail due to a lack of alerts firing. It fails because teams are overwhelmed by the sheer volume and the lack of context around them. Today, most observability and monitoring platforms generate a flood of alerts. Each one is triggered independently, even when they are symptoms of the same issue. Engineers are left trying to reconstruct the full picture while jumping between dashboards, Slack messages, and tickets.

Reinventing the Incident Responder's Day: Empowering Tier 2 SOC Analysts with Splunk's Agentic SOC Platform

The Tier 2 SOC Analyst, or Incident Responder (often hailed as the "Sherlock Holmes of the network"), faces an increasingly complex and relentless digital landscape. In a world where analysts are overwhelmed by alerts and held back by fragmented manual tooling and inefficient workflows, incident responders are charged with the critical task of identifying, analyzing, and mitigating security threats.

A 4-Month Bug Fixed in <10 Minutes with Olly

In today’s highly interconnected systems, the subtle relationships between services are rarely obvious. Modern, complex architectures generate telemetry that functions less as “flashing signs” and more as faint “breadcrumbs” to be followed across a vast network of signals. In 2025, about two-thirds of outages involved third-party systems like cloud platforms and APIs.

The Limits of MCP and How Olly Surpasses Them

Model Context Protocol (MCP) servers act as adapter layers between clients and AI-based workloads. Installing an MCP server into an IDE such as Cursor brings a wealth of information directly into the developer's primary tool, minimizing context switching and, especially in the world of observability, bringing telemetry closer to the code. But MCP is not without its limits. They may seem trivial at first, yet over time the inherent limitations of a basic MCP implementation become apparent.

How Coralogix's Data Pipeline Turns Obscure Data into Clear Business Value

Observability data arrives as a flood of signals, full of potential, but rarely consistent. Error messages and debug logs can reveal what businesses care about: reliability, customer experience, and revenue. The challenge is turning raw technical events into information the whole organization can act on. Many observability systems store data first and structure it later, forcing teams to rebuild context in dashboards and queries, often duplicating logic across services.