The 54% Improvement Playbook: How Top Performers Integrate GenAI into ITSM

Don't just read the report—learn how to replicate its most impressive results. In our 2025 State of ITSM Report, a select group of top-performing organizations achieved a staggering 54.3% reduction in resolution time by strategically integrating GenAI. This live session moves beyond the data to share their playbook. We'll provide a step-by-step guide on how to pair GenAI with foundational ITSM practices and demonstrate how to weave these tools into your team's daily workflows to achieve maximum efficiency.

Agentic AI Essentials: Examining the Hype Around Agentic AI

In the first article of our Agentic AI Essentials series, we’ll establish what makes agentic AI distinct. We’ll look at the process of tool calling and examine how agentic systems convert intelligence into action. We’ll also explore the human fears, pressures, and ambitions that fuel the hype around agentic systems. By sorting the signal from the noise, IT decision-makers can take the first step toward making sound decisions around agentic AI adoption.
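The tool-calling loop mentioned above can be sketched in a few lines: the model emits a structured call, and the agent runtime looks up and executes the named tool. This is an illustrative, framework-free sketch — the names (`get_uptime`, `TOOLS`, `run_agent_step`) are hypothetical, not from any specific agentic AI product.

```python
# Minimal sketch of a tool-calling loop (all names are illustrative).
import json

def get_uptime(host: str) -> str:
    """Pretend tool: report uptime for a host (stubbed for this sketch)."""
    return f"{host}: up 12 days"

# Registry mapping tool names to callables the agent is allowed to run.
TOOLS = {"get_uptime": get_uptime}

def run_agent_step(model_output: str) -> str:
    """Parse the model's JSON tool call and execute the named tool."""
    call = json.loads(model_output)   # e.g. {"tool": "get_uptime", "args": {...}}
    fn = TOOLS[call["tool"]]          # look up the tool by name
    return fn(**call["args"])         # convert intent into action

result = run_agent_step('{"tool": "get_uptime", "args": {"host": "web-01"}}')
print(result)  # web-01: up 12 days
```

The key distinction from a plain chatbot is that last line: the system doesn't just describe an action, it performs one, which is exactly why the governance questions around agentic AI matter.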

Operational Risk Management in High-Stakes Decision Environments

In high-stakes environments, every choice carries weight. Whether it is a complex financial process, a real-time cybersecurity response, or a tightly regulated operational workflow, small missteps can rapidly evolve into major failures. Organizations increasingly rely on integrated risk management strategies that blend human judgment with technology. The goal is simple: reduce uncertainty before it becomes costly. But the path to that goal is rarely straightforward.

Let Your LLM Debug Using Production Recordings

Modern LLM coding agents are great at reading code, but they still make assumptions. When something breaks in production, those assumptions can slow you down—especially when the real issue lives in live traffic, API responses, or database behavior. In this post, I’ll walk through how to connect an MCP server to your LLM coding assistant so it can pull real production data on demand, validate its assumptions, and help you debug faster.
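To make the idea concrete, here is a toy, stdlib-only sketch of what such a server conceptually does: expose recorded production responses that the assistant can query on demand instead of guessing. This is not the real MCP SDK or Lightrun's API — the request shape, field names, and the `PRODUCTION_RECORDINGS` store are all hypothetical.

```python
# Toy sketch: serve recorded production data to a coding assistant.
# Everything here (keys, request shape) is illustrative, not real MCP.
import json

# Stand-in for recordings of live traffic (request -> observed response).
PRODUCTION_RECORDINGS = {
    "GET /api/orders/42": {"status": 500, "body": {"error": "timeout"}},
}

def handle_request(raw: str) -> str:
    """Return the recorded production response for a requested key."""
    req = json.loads(raw)
    recording = PRODUCTION_RECORDINGS.get(req["key"])
    if recording is None:
        return json.dumps({"error": "no recording for that request"})
    return json.dumps(recording)

# The assistant checks what production actually returned before theorizing:
reply = handle_request('{"key": "GET /api/orders/42"}')
print(reply)
```

In a real setup the assistant would make this call through its MCP client rather than directly, but the payoff is the same: assumptions get validated against observed behavior instead of the code alone.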

AI SRE in Practice: Resolving GPU Hardware Failures in Seconds

When a pod fails during a TensorFlow training job, the investigation usually starts with the obvious questions. The answers rarely come quickly, especially when the failure involves GPU hardware that most engineers don’t troubleshoot regularly. This scenario walks through an actual GPU hardware failure and shows how AI-augmented investigation changes both the time to resolution and the expertise required to handle it.

Cloud Strategy for 2026: The Year of Repatriation, Resilience, and Regional Rebalancing

2026 is set to be a pivotal year for cloud strategy, with repatriation gaining momentum under shifting legislative, geopolitical, and technological pressures and a growing focus on data sovereignty. Together, these forces have set the stage for a year of repatriation, resilience, and regional rebalancing. Here, Rob Coupland, Chief Executive Officer at Pulsant, offers his insights.

Observability for Vibe Coding Tools with the VictoriaMetrics Stack and OpenTelemetry

AI-powered coding assistants have transformed how developers write software. Tools like Claude Code, OpenAI Codex, Gemini CLI, Qwen Code, and OpenCode have introduced what many call “vibe coding” — a new paradigm where users describe their intent and AI agents handle the implementation details. But as these tools become integral to development workflows, a critical question emerges: how do we understand what’s happening under the hood?
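One answer is to trace each agent operation the way OpenTelemetry does: wrap it in a span carrying a name, duration, and attributes. The stdlib-only sketch below illustrates that span shape without any dependencies; a real deployment would use the OpenTelemetry SDK and export to a backend such as VictoriaMetrics, and the names here (`span`, `SPANS`, `agent.tool_call`) are our own, not OTel API calls.

```python
# Stdlib sketch of span-style tracing for agent steps (illustrative only;
# real instrumentation would use the OpenTelemetry SDK and an exporter).
import time
from contextlib import contextmanager

SPANS = []  # stand-in for an exporter / backend

@contextmanager
def span(name: str, **attributes):
    """Record name, wall-clock duration, and attributes for a unit of work."""
    start = time.monotonic()
    try:
        yield
    finally:
        SPANS.append({
            "name": name,
            "duration_s": time.monotonic() - start,
            "attributes": attributes,
        })

# Wrap the operation we want visibility into:
with span("agent.tool_call", tool="read_file", model="example-model"):
    time.sleep(0.01)  # the agent doing work

print(SPANS[0]["name"])  # agent.tool_call
```

With spans like these flowing to a metrics backend, "what is the agent doing under the hood" becomes a query rather than a guess.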

Lightrun MCP: Your AI Assistant Now Debugs and Validates Production Code

Intermittent production bugs are hard to debug and rarely reproduce locally. Teams fall into a loop of adding logs, and every rollback slows them down. In this demo, R&D team leads Maor Yaffe and Or Golan show how an AI assistant can verify production issues using real runtime data, without redeploying. By connecting Cursor to Lightrun MCP, the agent inspects live production behavior, collects real variable values, and confirms the root cause with evidence instead of assumptions.