
5 Best Practices for Incorporating AI Into Your Team

Honeycomb’s Jessica Kerr and Fred Hebert recently hosted a webinar with Courtney Nash of The VOID where they dug into one of the biggest questions in tech right now: How do we build systems (and teams) that actually learn with AI, not just use it? The conversation was surprisingly optimistic about what happens when we stop treating AI as a productivity tool and start seeing it as a teammate. You can watch the full webinar here, or read on below for a quick recap.

How to Replace Synthetics with the httpcheck Receiver

A 200 OK doesn't always mean everything is okay. You've probably seen it: your health check endpoint returns success, but your users are staring at an error page. Maybe the database connection pool is exhausted, or a critical downstream service is timing out, but your API dutifully returns 200 because technically it responded. This is the reality of monitoring HTTP endpoints in production—status codes alone don't tell the whole story.
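One way to go beyond a bare status check is the OpenTelemetry Collector's httpcheck receiver, which probes endpoints on an interval and emits metrics you can alert on. Here is a minimal sketch of a Collector config; the endpoint URL and the OTLP exporter target are placeholders you would swap for your own.

```yaml
receivers:
  httpcheck:
    targets:
      # Placeholder endpoint: point this at your real health check.
      - endpoint: https://api.example.com/health
        method: GET
    collection_interval: 30s

exporters:
  otlp:
    # Placeholder backend endpoint.
    endpoint: collector.example.com:4317

service:
  pipelines:
    metrics:
      receivers: [httpcheck]
      exporters: [otlp]
```

The receiver records metrics such as `httpcheck.status` and request duration, so you can alert on slow or failing probes rather than trusting the status code alone.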

How We Saved 70% of CPU and 60% of Memory in Refinery's Go Code, No Rust Required

We've just released Refinery 3.0, a performance-focused update that significantly improves Refinery's CPU and memory efficiency. Refinery has a big job: it performs dynamic, consistent tail-based sampling that maintains proportions across key fields, adjusts to changes in throughput, and reports accurate sampling rates.
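To make "consistent sampling" concrete, here is an illustrative Go sketch of the core idea, not Refinery's actual implementation: hash the trace ID and compare it against the sample rate, so every process that sees the same trace reaches the same keep-or-drop decision. The function name and rate handling are hypothetical.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shouldSample makes a deterministic sampling decision from the trace ID.
// Because the decision depends only on the ID, all spans of a trace are
// kept or dropped together, no matter which node handles them.
// A sampleRate of N means "keep roughly 1 in N traces".
func shouldSample(traceID string, sampleRate uint64) bool {
	if sampleRate <= 1 {
		return true // a rate of 1 keeps every trace
	}
	h := fnv.New64a()
	h.Write([]byte(traceID))
	// Bucket the hash into [0, sampleRate) and keep only bucket 0.
	return h.Sum64()%sampleRate == 0
}

func main() {
	for _, id := range []string{"trace-a", "trace-b", "trace-c"} {
		fmt.Printf("%s kept=%v\n", id, shouldSample(id, 10))
	}
}
```

Reporting the rate alongside each kept trace (here, 10) is what lets the backend re-weight counts so aggregates stay accurate after sampling.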

Announcing Honeycomb for Frontend Observability React Native Beta

React Native apps straddle two worlds: JavaScript powering your UI and native modules running underneath. Add in backend services, and when something goes wrong, there are many possible culprits. Was it JS logic, the native bridge, the native API call, or a downstream API call? Most tools give you parts of the picture. A crash tool can tell you where the app failed but not what else happened in a session.

What the 2025 DORA Report Teaches Us About Observability and Platform Quality

The 2025 DORA State of AI-Assisted Software Development Report delivers a critical insight for technology leaders: AI is fundamentally an amplifier, not a solution. It magnifies the strengths of high-performing organizations with robust observability while exposing the dysfunctions of struggling ones. For organizations that rushed to adopt AI coding assistants while expecting immediate productivity gains, this finding demands a strategic pivot.

How to Responsibly and Effectively Contribute to Open Source Using AI

With the influx of AI tooling, it’s never been easier to contribute to open source communities. These tools can gather context quickly, “understanding” repositories faster than ever before. They provide instant summaries of repositories that, previously, would have meant reading lines and lines of code. They can fix bugs in programming languages you don’t know, and ultimately allow more contributors to get involved, which (almost) every open source project wants.

Integrating JMX and OpenTelemetry

The OpenTelemetry community and the contributors to the Java Special Interest Group (SIG) have spent a great deal of time integrating core Java technologies into the project. One particularly useful integration is Java Management Extensions (JMX), which has been around since J2SE 5 and has been mature for some time. Many of the most widely used Java applications have adopted it and support this extension.
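One common way to bridge the two is the OpenTelemetry Collector's jmx receiver, which drives the JMX Metric Gatherer to pull MBean metrics into OTLP. A minimal sketch follows; the jar path, endpoint, and interval are placeholders for your environment.

```yaml
receivers:
  jmx:
    # Path to the OpenTelemetry JMX Metric Gatherer jar (placeholder).
    jar_path: /opt/opentelemetry-jmx-metrics.jar
    # JMX endpoint of the target JVM (placeholder host:port).
    endpoint: localhost:9010
    # Built-in metric set to collect, e.g. jvm, kafka, tomcat.
    target_system: jvm
    collection_interval: 60s
```

With `target_system: jvm`, the gatherer collects standard JVM MBean metrics (heap, threads, GC) without any application code changes, which is much of JMX's appeal here.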

Observability Day San Francisco: The Future of AI and Observability Is Bright

AI and observability are no longer separate conversations—they’re deeply intertwined. Across keynotes, panels, and demos, speakers at Honeycomb's Observability Day San Francisco unpacked what that means for engineering teams today: faster insights, smarter tools, and new challenges to solve.

Meet Canvas: Your AI-guided Workspace Within Honeycomb

Modern systems are wonderfully capable, but relentlessly complex. Debugging across microservices, frontends, and cloud edges often means switching between five or more tools, trying to stitch together “what changed” and “why it broke.” Honeycomb’s wide events model has proven to be a superpower for taming that complexity, by allowing you to easily observe and query end-to-end traces without worrying about how much granular data you attach to your events.