Operations | Monitoring | ITSM | DevOps | Cloud

What is ServiceNow's AI Control Tower?

What happens when AI agents stop being scattered and start being steered? Customer service queues shrink, teams get time back for high-value work, and everyone finally works off the same data. That’s the power of the ServiceNow AI Control Tower—all your AI, all under control. No more fragmentation. No more busywork. Just visibility, control, and workflows that scale across the entire business.

Observability for GenAI Applications (Grafana OpenTelemetry Community Call)

In this episode, we’re diving into observability for Generative AI apps. AI helps us write code and monitor applications in production — but how do we observe the AI itself? And how do we make sense of complex, non-deterministic AI systems? We’re joined by two great guests: Ishan Jain, working on GenAI observability, and Luccas Quadros, working on Grafana Assistant. Together, they bring both platform-level insights and real-world perspectives.

From idea to agent: Building AI workflows with relaxAI and n8n

Join us for this live online webinar as we explore how to design, build, and deploy practical AI agents using n8n’s workflow automation platform powered by relaxAI’s UK sovereign infrastructure. Our speaker, Ben Norris, AI Engineer at Civo, will guide you through the real-world process of creating intelligent agents that automate tasks across tools and services, all without deep coding expertise.

[Webinar] Building Quality-Driven Agentic AI in Noisy Big Data Environments

Watch as Itiel Shwartz, Komodor CTO and Co-Founder, shares hard-won lessons from developing an AI SRE agent that processes millions of K8s events daily to deliver autonomous troubleshooting, reaching 95%+ accuracy in benchmarking. This webinar covers building production-ready systems that maintain reliability when 90% of your data is noise.

An introduction to GPU time-slicing

GPUs are no longer a niche component. Gamers know them for immersive graphics, workstation users rely on them for balanced performance, and in the age of AI, GPUs have become one of the most in-demand resources in modern infrastructure. They are also expensive. That reality creates two immediate constraints, for individuals and enterprises alike: GPU-backed instances should be provisioned deliberately, and once provisioned, they should be used efficiently.
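The article itself isn't excerpted beyond this intro, but as one concrete example of time-slicing in practice: NVIDIA's Kubernetes device plugin can be configured to advertise each physical GPU as several schedulable replicas, so multiple pods share one device in time slices. This is a hedged sketch based on the plugin's documented sharing config; the replica count is illustrative, not a recommendation.

```yaml
# Example config for the NVIDIA k8s-device-plugin (sketch).
# Each physical GPU is advertised as 4 "nvidia.com/gpu" resources,
# so up to 4 pods can time-share one device. Tune replicas to your workload.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 4
```

Note that time-slicing provides no memory isolation between sharers, which is why it suits bursty or light inference workloads better than memory-hungry training jobs.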

AI Anomaly Detection: Catch AI Cost Surprises Before They Kill Margins

Consider this: traditional cloud cost monitoring was like checking your fuel gauge once a month — after the trip was already over. That model worked when infrastructure scaled slowly. You provisioned resources predictably and paid for stable, linear usage. AI breaks that model. Today, AI costs behave like a high-performance engine with a hypersensitive throttle. A small input, like a prompt change or a single power user, can dramatically increase your fuel burn in seconds.

Measuring Claude Code ROI and Adoption in Honeycomb

At Honeycomb, we’ve been using Claude Code across our engineering team for a while. Anecdotally, I had a sense of who the power users were, and I had seen some examples of complex usage. But I wanted to answer questions about adoption and ROI with data rather than anecdotes. Claude Code supports OpenTelemetry out of the box, which means sending telemetry to Honeycomb takes just a few minutes of configuration.
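For flavor, that configuration is largely just environment variables — Claude Code's telemetry switch plus the standard OTLP exporter settings pointed at Honeycomb. This is a sketch; verify variable names against the current Claude Code monitoring docs before relying on it.

```shell
# Sketch: enable Claude Code telemetry and export via OTLP to Honeycomb.
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
# YOUR_API_KEY is a placeholder for a real Honeycomb ingest key.
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```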

ChatOps that actually works: Grafana Cloud, Slack, and AI-powered observability

Context switching isn’t just inefficient—under pressure, it’s exhausting. It slows decision-making, increases the risk of mistakes, and makes even experienced engineers feel like they’re always a step behind the system they’re responsible for. At Grafana Labs, we want to build tools that meet you where you are. That's why we embedded Grafana Assistant, our context-aware AI assistant, directly in Grafana Cloud.

How to Troubleshoot BGP Faster with Kentik AI Advisor

A BGP session goes down because a transit provider exceeded the maximum prefix limit. How do you find the root cause — fast? In this 10-minute demo, we walk through two approaches using Kentik AI Advisor. First, we troubleshoot step by step using natural language: asking AI Advisor to identify the affected interface, check for interface flapping, and review syslog messages until we find the maximum-prefix violation. Then we show how custom network context and natural language runbooks let AI Advisor do the entire investigation autonomously — following the same four steps a senior engineer would.