Operations | Monitoring | ITSM | DevOps | Cloud

A Leader's Guide to Upskilling Teams for the AI Era

Every week, we hear about new AI breakthroughs. AI models write code, create videos, or analyze data in ways we couldn’t imagine just months ago. But there’s a gap: while most companies have adopted AI tools, the majority of employees still don’t use AI in their everyday work. As a manager, you see AI’s potential to change how your team works. Yet your employees struggle to figure out how AI fits into their daily tasks.

Agentic AI in Customer Support: Is It Ready to Resolve 80% of Issues Autonomously?

Daniel O’Sullivan, Senior Director Analyst in Gartner’s Customer Service & Support Practice, recently said in an article, “Agentic AI has emerged as a game-changer for customer service, paving the way for autonomous and low-effort customer experiences.”

Debugging issues with Sentry's MCP

Turns out, this MCP thing is pretty solid. We've built the MCP server to tap into all the different areas of context within Sentry and make it easy to bring these into your editor client to help debug your application. Want to know the most fixable issues in your environment? Easy. Want to see your query performance for your backend? Just ask it.
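Under the hood, an MCP server like Sentry’s speaks JSON-RPC 2.0, so a “just ask it” request from your editor client ultimately becomes a `tools/call` message. A minimal sketch of what that envelope looks like, with a hypothetical tool name and arguments (Sentry’s actual tool names and parameters may differ):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_issues",
    "arguments": {
      "query": "is:unresolved",
      "sort": "freq"
    }
  }
}
```

The editor client handles this plumbing for you; the point is that every natural-language ask maps to a discrete, inspectable tool invocation.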

AI Reliability Insights: How to Build a Gremlin MCP Server

Gremlin’s Reliability Intelligence helps teams uncover the cause behind failure modes so they can move faster and improve reliability without sacrificing velocity. The new Gremlin MCP Server, part of Reliability Intelligence, gives you new ways to explore your data, with access to insights and recommendations to improve reliability and better run your systems using Gremlin. In this webinar, Gremlin CTO Sam Rossoff shows you how to integrate your favorite LLM and use plain language to query data, uncover insights, create dynamic dashboards, and more.

Introducing Honeycomb Intelligence Canvas

Canvas is an AI-guided workspace inside Honeycomb that combines an AI assistant with an interactive notebook for visualizing query results and traces. You can ask a natural language question about your data and Canvas will immediately start exploring your traces, through multiple queries and other tools, to find the right next steps. Instead of having to write each query yourself, Canvas automatically proposes relational queries, comparisons, and visualizations that explain why an SLO fired or what changed after a deploy.

FireHydrant 4-Minute Demo

Get a quick walkthrough of the FireHydrant platform. FireHydrant is the all-in-one incident management platform that helps teams resolve incidents up to 90% faster — and prevent them from happening again. From flexible alerting and powerful automation to retros and AI insights, it brings clarity and control to every step of your response.

Debug, query, and build faster with AI: How we use Grafana Assistant at Grafana Labs

We recently released Grafana Assistant into public preview for Grafana Cloud, and we’ve been excited to see how our customers have already made it part of their daily observability routines. At the same time, Assistant is becoming a go-to companion for developers right here at Grafana Labs, whether they’re debugging on-call issues, helping customers, or trying to remember tricky PromQL syntax.
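As an example of the kind of “tricky PromQL syntax” an assistant can help recall, a per-instance error-rate ratio is easy to get subtly wrong (metric and label names here are illustrative, assuming standard Prometheus-style counters):

```promql
# Fraction of requests returning 5xx, per instance, over the last 5 minutes
sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))
/
sum by (instance) (rate(http_requests_total[5m]))
```

Forgetting the matching `sum by (...)` on both sides, or using `increase` on one side and `rate` on the other, are the classic mistakes a query assistant can catch.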

Meet Canvas: Your AI-guided Workspace Within Honeycomb

Modern systems are wonderfully capable, but relentlessly complex. Debugging across microservices, frontends, and cloud edges often means switching between five or more tools, trying to stitch together “what changed” and “why it broke.” Honeycomb’s wide events model has proven to be a superpower for taming that complexity, by allowing you to easily observe and query end-to-end traces without worrying about how much granular data you attach to your events.

Behind the Dashboard: How to monitor your LLM integrations

Behind the Dashboard is an ongoing series where we look under the hood of a specific Catchpoint feature. Each episode breaks down the technology itself, what’s challenging about using it for monitoring, and how we removed friction and toil to make it a valuable part of the Catchpoint platform. In this episode, Leon, Mursi, and Rahul take a look at Catchpoint’s LLM monitoring capabilities: ensuring your integrated LLMs are up and performing optimally, and knowing whether you’re using the most effective (accurate) and economical (cheapest per query) option in your suite.
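The accuracy-versus-cost comparison described above can be sketched as a simple selection rule: among models that clear an accuracy floor, pick the cheapest per query. A minimal illustration with made-up numbers, not Catchpoint’s actual implementation:

```python
# Pick the cheapest model that meets a minimum accuracy threshold.
# Model names and metrics are illustrative, not real benchmark results.
models = {
    "model-a": {"accuracy": 0.92, "cost_per_query": 0.004},
    "model-b": {"accuracy": 0.88, "cost_per_query": 0.001},
    "model-c": {"accuracy": 0.95, "cost_per_query": 0.012},
}

def best_model(models, min_accuracy=0.90):
    """Return the cheapest model name meeting the accuracy floor, or None."""
    eligible = {name: m for name, m in models.items()
                if m["accuracy"] >= min_accuracy}
    if not eligible:
        return None
    return min(eligible, key=lambda name: eligible[name]["cost_per_query"])

print(best_model(models))  # model-a clears the 0.90 floor more cheaply than model-c
```

A real monitoring pipeline would feed this from live uptime, latency, and evaluation data rather than static numbers, but the trade-off being surfaced is the same.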