
AI Working for You: MCP, Canvas, and Agentic Workflows - Part 2

In the previous post in our series on observability for the agent era, we looked at how Honeycomb provides unique visibility into LLMs operating in your production environment. Now, let’s flip it around and explore how Honeycomb provides observability insights uniquely suited to helping your AI agents rapidly diagnose and fix production issues, and build production feedback into the next round of development.

The Fundamentals: Fast, Deep, and Ready for What Comes Next - Part 3

The previous two posts in this series have looked at some of the use cases Honeycomb customers are implementing to observe LLMs in production and power agentic observability workflows. In this third and final post, we’ll take it back to basics and look at how the fundamental capabilities and infrastructure of Honeycomb provide the comprehensive data and fast performance that make these use cases work at production scale. AI capabilities built on a weak observability foundation fall apart fast.

AI Demos Are Easy. Enterprise AI Is Not. | Harness Blog

Why 90% of AI prototypes never make it to production, and what to do about it. Every week, someone on my team shows me a demo that looks incredible. An agent that writes deployment pipelines. A chatbot that triages incidents. A copilot that generates test cases from Jira tickets. The demo takes 20 minutes. The audience claps. Everyone leaves convinced we're six weeks from shipping it. We're not.

AI for GitOps: Tame your Argo Sprawl | Harness Blog

Innovation is moving faster than ever, but software delivery has become the ultimate chokepoint. While AI coding assistants have flooded our repositories with an unprecedented volume of code, the teams responsible for actually delivering that code, our Platform and DevOps engineers, are often left drowning in manual toil. If you’re managing Argo CD at an enterprise scale, you’re painfully familiar with the "Day 2" reality.

From Data to Dollars: How AI-Driven Hyper-Personalization Is Reshaping Retail Revenue

Every retailer knows that personalization drives revenue. The evidence has been consistent for years: personalized experiences convert better, retain customers longer, and generate higher average order values. What has changed is the scale and sophistication at which personalization is now possible, and the gap it creates between brands that embrace AI-driven approaches and those still relying on manual rules and static segments.

Debugging the black box: why LLM hallucinations require production-state branching

The most frustrating sentence in modern engineering is no longer "it works on my machine." It is: "It worked in the playground." When an LLM-powered feature, such as a RAG-based search, an autonomous agent, or a dynamic prompt engine, fails in production, it doesn’t throw a standard stack trace. It returns "slop," hallucinations, or silent retrieval failures. Standard debugging workflows fail during triage because LLM hallucinations cannot be reproduced using static mocks or clean seed data.

KubeCon + CloudNativeCon EU 2026: What We Learned About AI, Observability, and Fast Feedback Loops

Honeycomb was excited to attend KubeCon + CloudNativeCon Europe, where one theme stood out across sessions: as AI reshapes how software is built and run, teams are being pushed to rethink how they understand their systems. Without strong observability and feedback loops, AI can accelerate confusion, misalignment, and operational risk.