
The latest News and Information on Distributed Tracing and related technologies.

KubeCon Europe 2026: OpenTelemetry Recap from Amsterdam

I like writing recap articles because AIs don't have enough context to write them for us. You have to be there in person, listen to sessions, interact with the community in the hallways, and absorb as much new knowledge as possible. That's what I did last week in Amsterdam at KubeCon + CloudNativeCon Europe '26. Well, at least I tried to. Let me break down what I consider to be the most interesting topics from last week.

Distributed Tracing | Debugging your Next.js applications with Sentry

Sometimes a simple stack trace won't give you enough information to debug the issue at hand. Some issues require you to know what happened leading up to the exception. In those cases, reach for tracing. Distributed tracing gives you an overview of every operation that happened while a given piece of functionality executed, across your whole stack. Aside from being an awesome debugging tool, it also lets you identify performance bottlenecks in your application. In this video you'll learn how to view traces in Sentry and how to implement them in your Next.js application.
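
The video walks through the Next.js specifics; as a rough, language-neutral sketch of the same idea, here is what minimal tracing looks like with the Sentry Python SDK. The DSN, operation names, and the `charge` helper below are placeholders, not anything from the video:

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,  # sample every transaction while experimenting
)

def charge(items):
    """Placeholder for a real payment call."""
    return len(items)

def handle_checkout(cart):
    # One transaction per operation; child spans record the steps leading up
    # to a potential exception, so the trace tells you more than a stack trace.
    with sentry_sdk.start_transaction(op="function", name="handle_checkout"):
        with sentry_sdk.start_span(op="cart.load", description="load cart items"):
            items = list(cart)
        with sentry_sdk.start_span(op="payment.charge", description="charge card"):
            return charge(items)
```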

Send your existing OpenTelemetry traces to Sentry

You've spent months instrumenting your app with OpenTelemetry, and ripping that work out to adopt a new observability backend is not an option. With Sentry's OTLP endpoint, you don't have to: two environment variables are all it takes for your existing traces to start showing up in Sentry's trace explorer. Sentry's OTLP support is currently in open beta, which means you can start using it today, but there are some known limitations we'll cover later.
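
As a hedged sketch of what that looks like in practice with a standard OpenTelemetry Python setup: the OTLP exporter reads the usual OTel environment variables, so pointing them at Sentry is the only change to your application. The endpoint URL and header below are placeholders, not Sentry's documented values; check Sentry's OTLP docs for the real ones:

```python
# Set before starting the app (placeholder values, not Sentry's actual endpoint):
#   export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-sentry-otlp-endpoint>"
#   export OTEL_EXPORTER_OTLP_HEADERS="x-example-auth=<your-key>"

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# The exporter picks up its endpoint and headers from the environment,
# so existing instrumentation keeps emitting spans; only the destination changes.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("demo")
with tracer.start_as_current_span("checkout"):
    pass  # your existing spans now flow to the configured OTLP endpoint
```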

Agno Monitoring & Observability with OpenTelemetry and SigNoz

Learn how to implement end-to-end monitoring and observability for Agno-based AI systems using OpenTelemetry and SigNoz. In this video, we walk through instrumenting your Agno workflows, collecting traces, metrics, and logs, and visualizing everything in SigNoz to gain real-time visibility into performance, failures, and bottlenecks. You'll see how to move from basic logging to production-grade observability—so you can debug faster, optimize latency, and confidently run AI systems at scale.
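
As a rough sketch of the plumbing involved (the video covers the Agno-specific instrumentation), here is a manual OpenTelemetry span around a hypothetical agent call, exported to a locally running SigNoz collector over OTLP on its default gRPC port. The `run_agent` function and attributes are illustrative, not Agno's actual API:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Export spans to a local SigNoz deployment (default OTLP gRPC port 4317).
provider = TracerProvider(resource=Resource.create({"service.name": "agno-agent"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agno-demo")

def run_agent(prompt: str) -> str:
    # Wrap the agent run in a span so latency, failures, and prompt metadata
    # show up in SigNoz alongside the rest of your telemetry.
    with tracer.start_as_current_span("agent.run") as span:
        span.set_attribute("agent.prompt", prompt)
        answer = "..."  # placeholder for the actual Agno agent call
        span.set_attribute("agent.answer_length", len(answer))
        return answer
```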

Is OpenTelemetry overkill? There's a lazier (and better) way. #speedscale #sre #ebpf #kubernetes

If you "aspire to be lazy" like we do, you know that building staging environments and mocking complex back-ends (like MySQL, AI models, and 3rd party APIs) is a massive time sink. In this demo, we show you how to use Internet Magic (aka eBPF) to capture that traffic as recordings. Stay tuned for Part 2, where we take those recordings and spin up a staging environment automatically.

OpenClaw Monitoring & Observability with OpenTelemetry and SigNoz

Learn how to implement monitoring and observability for OpenClaw systems using OpenTelemetry and SigNoz. In this video, we cover how to instrument OpenClaw, collect traces, metrics, and logs, and visualize everything in SigNoz for real-time insights into performance and reliability. You’ll see how to quickly identify bottlenecks, debug issues, and improve system stability in production.

From raw data to flame graphs: A deep dive into how the OpenTelemetry eBPF profiler symbolizes Go

Imagine you're troubleshooting a production issue: your application is slow, the CPU is spiking, and users are complaining. You turn to your profiler for answers—after all, this is exactly what it's built for. The profiler runs, collecting thousands of stack samples. eBPF profilers, including the OpenTelemetry eBPF profiler, operate at the kernel level, so they capture raw program counters: memory addresses pointing into your binary.
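
The core idea behind symbolization can be sketched in a few lines: given a table of function start addresses, a raw program counter resolves to the function whose address range contains it. The real profiler has to deal with Go's runtime metadata, inlined frames, and stripped binaries; the symbol table below is made up purely for illustration:

```python
import bisect

# (start_address, function_name), sorted by start address -- illustrative values only.
SYMBOLS = [
    (0x401000, "runtime.main"),
    (0x4021A0, "main.handleRequest"),
    (0x402F80, "main.queryDatabase"),
]
STARTS = [addr for addr, _ in SYMBOLS]

def symbolize(pc: int) -> str:
    """Map a raw program counter to the function whose range contains it."""
    i = bisect.bisect_right(STARTS, pc) - 1
    return SYMBOLS[i][1] if i >= 0 else "<unknown>"

print(symbolize(0x4021B4))  # -> main.handleRequest
```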

Explore Kubernetes with native OpenTelemetry data

Kubernetes environments generate a constant stream of signals across clusters, nodes, pods, and workloads. For teams that have standardized on OpenTelemetry (OTel), maintaining ownership of that data is critical. But in practice, many observability platforms require translation into vendor-specific data formats, leading to fragmented product experiences, blank dashboards, and uncertainty about data integrity.

Annotate traces to improve LLM quality with Datadog LLM Observability

LLM applications rarely crash. They degrade quietly. Once these applications are shipped to production, subtle quality failures become harder to catch with traditional signals. Tone shifts, hallucinated details, off-topic responses, and incomplete reasoning can emerge while latency and token usage look stable.
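
As a hedged sketch of programmatic annotation, assuming the ddtrace LLM Observability Python SDK (names and arguments may differ between versions, so treat this as an outline rather than the documented API): wrap the LLM call in a workflow span and attach inputs, outputs, and quality tags so those quiet degradations become queryable even when latency and token usage look stable:

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

LLMObs.enable(ml_app="support-bot")  # assumes Datadog credentials in the environment

def call_model(question: str) -> str:
    """Placeholder for the real LLM call."""
    return "..."

@workflow
def answer_ticket(question: str) -> str:
    answer = call_model(question)
    # Attach the prompt, the response, and review tags to the active span so
    # quality issues can be filtered and evaluated later in LLM Observability.
    LLMObs.annotate(
        input_data=question,
        output_data=answer,
        tags={"reviewed": "pending", "topic": "billing"},
    )
    return answer
```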