
The latest News and Information on Distributed Tracing and related technologies.

How to Integrate OpenTelemetry Collector with Prometheus

Pulling observability data together is rarely clean. Metrics come from everywhere, formats vary, and making sense of them takes work. This is where the OpenTelemetry Collector and Prometheus fit together well: the Collector handles ingestion and processing from different sources, while Prometheus stores and queries the data. Simple, effective, and no vendor lock-in. In this blog, we cover how to integrate the Collector with Prometheus, common pitfalls, and ways to control costs.
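As a sketch of what this integration can look like (the endpoints and port numbers here are illustrative placeholders, not from the article), a minimal Collector config receives OTLP metrics and exposes them on an endpoint Prometheus can scrape:

```yaml
# Hypothetical minimal Collector pipeline: receive OTLP metrics,
# batch them, and expose them for Prometheus to scrape.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889   # Prometheus scrapes this port

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

On the Prometheus side, a matching scrape job would simply point at the Collector's `:8889` endpoint.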

Database observability: How OpenTelemetry semantic conventions improve consistency across signals

Databases are a crucial part of modern systems, which makes database observability just as important. However, database telemetry can be complex, varies by engine, and is tricky to instrument consistently. OpenTelemetry is helping to change that, and one of the most important pieces in making it work is a set of shared rules called semantic conventions.
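To make that concrete, here is an illustrative sketch: the attribute keys follow OpenTelemetry's database semantic conventions, while the values (database name, table, host) are invented for the example. The point is that the same keys apply to every signal, whatever the engine or driver:

```python
# Illustrative attributes for a database span, keyed by OpenTelemetry's
# database semantic conventions. The same keys would appear on metric
# data points and log records, which is what keeps signals consistent.
db_span_attributes = {
    "db.system.name": "postgresql",      # which database engine
    "db.namespace": "orders",            # database/schema being addressed
    "db.operation.name": "SELECT",       # the operation performed
    "db.collection.name": "customers",   # table (or collection) touched
    "server.address": "db.example.com",  # where the database lives
    "server.port": 5432,
}

# Because every instrumentation emits the same keys, a query like
# "group by db.system.name" works across drivers and languages.
for key, value in sorted(db_span_attributes.items()):
    print(f"{key} = {value}")
```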

Scaling Observability: How We Designed Bindplane to Manage 1,000,000 OpenTelemetry Collectors

Join the live stream at 11 am ET. Platform teams tend to start with just one, or in some cases a handful of, OpenTelemetry (OTel) Collectors, usually running in gateway mode. From there, they embrace the benefits of a vendor-neutral, standardized telemetry collector for unified logs, metrics, and traces.

A Developer's Framework for Selecting the Right Tracing Vendor

Distributed tracing tracks requests as they flow through microservices, revealing bottlenecks, failures, and performance patterns. Without proper tracing, debugging production issues becomes guesswork—especially in complex architectures with dozens of services. Modern applications generate millions of traces daily. The right vendor helps you extract actionable insights without drowning in data or breaking your budget.

Monitor OpenTelemetry-native metrics with Datadog

OpenTelemetry (OTel) is emerging as the industry standard for collecting and transmitting observability data. Datadog supports several ways to send and accept OTel-native data, while also continuing to support its own native telemetry format. To provide a consistent monitoring experience, Datadog now supports using OTel-native metrics alongside Datadog-native metrics across dashboards, queries, and core visualizations in the Datadog platform.

Your Collector, Your Rules: Introducing BYOC and the OpenTelemetry Distribution Builder

OpenTelemetry’s superpower has always been choice. Yet most observability vendors still insist you run their collector. Today we’re removing that last point of friction. With Bring Your Own Collector (BYOC), Bindplane now accepts any upstream-compatible build, recognizes exactly which receivers, processors, and exporters it contains, and adapts the UI and configuration workflow on the fly.

How to Set Up Tracing for Elixir Apps Using AppSignal

Over time, web applications have evolved from simple request/response-based systems into complex, distributed ones with lots of moving parts. If something goes wrong (and you can be sure it will), finding the cause can be nearly impossible. But this need not be the case: enter tracing. Tracing refers to the process of collecting detailed information about the execution of requests within an application, including function calls, execution time, and other relevant data.
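The article's examples are in Elixir; as a language-agnostic sketch of the underlying idea (all names here are invented), a trace is essentially a tree of timed spans wrapped around the function calls that handle a request:

```python
import time
from contextlib import contextmanager

# A toy tracer: records (name, duration_seconds, depth) for each span.
# Real tracers like AppSignal or OpenTelemetry add IDs, context
# propagation, and export; the core bookkeeping is the same.
spans = []
_depth = 0

@contextmanager
def span(name):
    global _depth
    start = time.perf_counter()
    _depth += 1
    try:
        yield
    finally:
        _depth -= 1
        spans.append((name, time.perf_counter() - start, _depth))

def handle_request():
    with span("request"):
        with span("db.query"):
            time.sleep(0.01)   # stand-in for a database call
        with span("render"):
            time.sleep(0.005)  # stand-in for template rendering

handle_request()
for name, duration, depth in spans:
    print(f"{'  ' * depth}{name}: {duration * 1000:.1f} ms")
```

Children finish before their parent, so they are recorded first; the outer "request" span's duration covers both inner spans, which is exactly the breakdown a tracing UI visualizes.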

Jaeger vs Zipkin: Which is Right for Your Distributed Tracing

When requests slow down across your microservices, tracing helps you understand where time is spent. Jaeger and Zipkin are two popular tools for distributed tracing, built to answer a simple question: where did the request go? If you're choosing between them or just exploring options, this guide breaks down the differences and when each one might be a better fit.

Traceparent: How OpenTelemetry Connects Your Microservices

In a microservices setup, tracking a single request across services quickly gets complex. One service calls another, then a third, and your logs don’t line up. The traceparent header carries context between services, so all parts of a request connect back to the start. For example, when a frontend sends a request to an API, which then calls a database service, traceparent links those calls into a single trace. Without it, you’re left guessing how requests flow.
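For illustration, the header's layout comes from the W3C Trace Context specification: four dash-separated fields (version, trace ID, parent span ID, flags). The parsing helper below is a sketch, and the sample IDs are taken from the spec's own example:

```python
# Parse a W3C traceparent header: version-traceid-spanid-flags.
# The trace ID stays constant across every hop of the request;
# each service then generates a fresh span ID for its own work.
def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    assert len(trace_id) == 32 and len(span_id) == 16
    return {
        "version": version,
        "trace_id": trace_id,          # shared by frontend, API, and DB spans
        "parent_span_id": span_id,     # the caller's span
        "sampled": flags == "01",      # whether this trace is being recorded
    }

# Example header as it might arrive at the database service:
ctx = parse_traceparent(
    "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
)
print(ctx["trace_id"])  # the same ID the frontend started with
```

A backend receiving this header starts its span with `parent_span_id` as the parent, so the tracing system can stitch all three services' spans into one trace.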