
The latest News and Information on Distributed Tracing and related technologies.

Tail sampling vs. head sampling in distributed tracing

In this video, Grafana Labs' Robin Gustafsson (k6 CEO and VP of Product) and Sean Porter (Distinguished Engineer) discuss the differences between head sampling and tail sampling approaches in distributed tracing. They explore why head sampling often amounts to sampling randomly and hoping for the best, while tail sampling — the approach used by Adaptive Traces in Grafana Cloud — allows you to intelligently capture the traces that actually matter to you.
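
To make the contrast concrete, here is a minimal sketch of head sampling using the OpenTelemetry Python SDK (the 10% ratio and configuration are illustrative, not taken from the video): the keep/drop decision is fixed when the root span starts, before anything is known about how the request will turn out.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Head sampling: the decision is made when the root span is created,
# so errors and slow requests are kept (or dropped) purely by chance.
provider = TracerProvider(
    sampler=ParentBased(TraceIdRatioBased(0.10))  # keep roughly 10% of traces
)
trace.set_tracer_provider(provider)
```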

Capture high-value traces without managing a pipeline: Tail sampling with Adaptive Traces

Tracing is the richest observability signal in common use today. In distributed systems, it reveals how requests flow across multiple services, allowing you to uncover and address performance bottlenecks. Teams often scale back or abandon tracing altogether, however, because most successful requests produce redundant data that’s noisy and expensive to store.
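
As a rough illustration of why tail sampling avoids that waste, here is a hypothetical decision policy in Python — this is not the Adaptive Traces API, only a sketch of the kind of rule a tail sampler can apply because it sees the trace after it has completed.

```python
from dataclasses import dataclass

@dataclass
class CompletedTrace:
    duration_ms: float
    has_error: bool

def keep(completed: CompletedTrace, slow_ms: float = 1000.0) -> bool:
    # The finished trace is available, so the decision can key off outcome:
    # keep every error and every slow request, drop the redundant fast successes.
    return completed.has_error or completed.duration_ms >= slow_ms
```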

Why OpenTelemetry instrumentation needs both eBPF and SDKs

As a vendor-neutral open standard, OpenTelemetry has become the default choice for application instrumentation. However, it’s important to remember that OpenTelemetry isn’t a single technology — it’s an ecosystem. Under the hood, it provides multiple options for instrumenting your applications. In this blog post, we explore two instrumentation approaches: OpenTelemetry eBPF Instrumentation and runtime-specific OpenTelemetry SDKs, like the OpenTelemetry Java agent.
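
For a sense of what the SDK side of that split looks like, here is a hedged sketch of in-process instrumentation with the OpenTelemetry Python SDK (the post's own example is the Java agent; the service and span names below are invented). Because this code runs inside the application, it can attach business context that an out-of-process eBPF probe can't easily see.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# The application itself creates spans and exports them over OTLP.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("payment-service")

def charge(amount_cents: int):
    with tracer.start_as_current_span("charge") as span:
        # Runtime context such as arguments and exceptions is available here,
        # which external eBPF instrumentation cannot easily capture.
        span.set_attribute("payment.amount_cents", amount_cents)
        # ... call the payment provider ...
```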

7 Strategies for IT Ops Teams to Monitor and Optimize Real-Time Commodity Pricing Systems for Financial Reliability

Real-time commodity pricing systems have become mission-critical infrastructure for financial institutions, trading desks, and enterprise resource planning operations. As of December 2025, with 72% of trading firms migrating to cloud-native CTRM and ETRM (commodity and energy trading and risk management) platforms, IT Ops teams face mounting pressure to maintain pricing accuracy, minimize latency, and ensure system resilience during volatile market conditions.

OpenTelemetry Metrics with 5 Practical Examples

Picture this: your observability tool already nails the basics like request rates, latency, and memory usage, but you need more insight. Think user churn rates, engagement spikes, or even how many carts get abandoned mid-checkout. That’s where OpenTelemetry steps in, providing a way to track those critical custom metrics with ease.
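
Here is a minimal sketch of what such a custom business metric looks like with the OpenTelemetry Python metrics API (the metric name, unit, and attributes are invented for illustration):

```python
from opentelemetry import metrics

# Uses whatever MeterProvider has been configured globally by the SDK.
meter = metrics.get_meter("checkout-service")

abandoned_carts = meter.create_counter(
    "checkout.carts.abandoned",
    unit="{cart}",
    description="Carts abandoned before payment completed",
)

def on_cart_abandoned(user_tier: str) -> None:
    # Attributes let you slice the count later, e.g. by customer segment.
    abandoned_carts.add(1, {"user.tier": user_tier})
```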

Top OpenTelemetry Backends for Storage & Visualization

OpenTelemetry backends provide storage, analysis, and visualization for telemetry data (traces, metrics, logs). This guide lists available OpenTelemetry-compliant backend options, categorized by use case: APM platforms, storage backends, visualization tools, and distributed tracing systems. For a detailed comparison, see OpenTelemetry Backend Comparison.

OTel Updates: OpenTelemetry Proposes Changes to Stability, Releases, and Semantic Conventions

Over the past year, the OpenTelemetry Governance Committee ran user interviews and surveys with organizations deploying OpenTelemetry at scale. A few patterns came up consistently. Stability levels aren't always obvious: when you install an OTel distribution, some components might be experimental or alpha without clear markers, which makes it harder to evaluate what's production-ready. And instrumentation libraries sometimes wait on semantic conventions.

Fixing Performance Issues Fast with Logs & Tracing

Learn how to quickly track down performance bottlenecks using Sentry Logs and Tracing. In this video, we walk through identifying a slow screen, jumping into the connected trace, and pinpointing slow backend steps, database calls, and AI/LLM operations. See how logs, issues, and traces work together to show the full picture of what happened in a single session.

How to Track Down the Real Cause of Sudden Latency Spikes

Start with distributed tracing to find which service is slow, then use continuous profiling to see why the code is slow, and finally apply high-cardinality analysis to identify which users or conditions trigger the problem. It's 2 AM. Your phone buzzes. Users are reporting timeouts. The metrics dashboard shows p99 latency spiking from 200ms to 4 seconds, but everything looks normal—CPU at 60%, memory stable, no error spikes. A quick pod restart helps briefly, then latency climbs right back up.
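
The high-cardinality step depends on spans carrying the fields you want to slice by. Here is a small sketch, using the OpenTelemetry Python API, of attaching such attributes (the attribute keys and handler function are hypothetical):

```python
from opentelemetry import trace

tracer = trace.get_tracer("orders")

def handle_order(user_id: str, tenant_id: str, region: str):
    with tracer.start_as_current_span("handle_order") as span:
        # High-cardinality attributes: grouping slow traces by these values
        # shows whether the spike is confined to one tenant, user, or region.
        span.set_attribute("user.id", user_id)
        span.set_attribute("tenant.id", tenant_id)
        span.set_attribute("cloud.region", region)
        # ... actual work ...
```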

Bindplane Onboarding | Install Your First OTel Collector & Send Windows Events to Google SecOps

In this 10-minute step-by-step walkthrough, Chelsea from the Bindplane Customer Success team shows you how to install your first Bindplane OpenTelemetry Collector and start sending Windows Event telemetry from a Windows VM directly into Google SecOps.