Operations | Monitoring | ITSM | DevOps | Cloud

Sponsored Post

Transform your workflow with Raygun's remote MCP

We're happy to announce Raygun's new remote MCP server, giving AI tools direct access to live error data so they can investigate issues, surface root causes, and take action with real context, not guesses. It's been nearly a year since Anthropic released the Model Context Protocol (MCP), and a lot has changed in the AI space. Since then, almost every major player has added MCP support, allowing their tools to tap into the massive and ever-expanding catalogue of MCP servers. When MCP first launched, we shipped our own Raygun MCP within 48 hours of the spec dropping, an early step toward giving LLMs visibility into Raygun data.
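Under the hood, MCP clients and servers speak JSON-RPC 2.0: a client discovers a server's tools with `tools/list`, then invokes one with `tools/call`. A minimal sketch of those request payloads is below; the tool name `list_error_groups` and its arguments are hypothetical placeholders, not Raygun's actual API.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the wire format MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Step 1: an MCP client asks the server which tools it exposes.
list_tools = jsonrpc_request(1, "tools/list")

# Step 2: it invokes one. The tool name and arguments here are
# invented for illustration; a real server advertises its own.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "list_error_groups",
    "arguments": {"application": "my-app", "period": "24h"},
})

print(json.dumps(call_tool, indent=2))
```

The AI assistant never sees raw HTTP details; it just picks a tool from the `tools/list` response and supplies arguments matching that tool's declared schema.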

Breaking down AI adoption barriers feat. Ivanti's Scott Hughes

ivanti.com/itsm-automation: Unlock the secrets to successful Agentic AI deployment and widespread AI adoption in your organization with insights from Scott Hughes, SVP of Revenue Operations and Corporate IT at Ivanti. This video explores why IT-business alignment is critical, the importance of high-quality data, and how legacy infrastructure poses challenges for effective AI integration.

Building Smarter AI Products #Datadog #DASH #AI

AI capabilities are advancing faster than ever — transforming how teams design, build, and ship intelligent products. In this teaser from Building Successful AI-powered Products at Datadog DASH, experts discuss the rise of agent-based systems, evolving model capabilities, and how to stay ahead in the new era of automation.

Coffee and Claude: How Honeycomb MCP Makes AI Work for You

If you caught our recent Introducing Honeycomb MCP: Your AI Agent’s New Superpower webinar, you know it was a lively mix of big ideas, demos, and a few laughs about the messy, fast-moving world of AI. Hosted by Austin Parker, Morgante Pell, and James Bland from AWS, the conversation explored how Honeycomb’s new Model Context Protocol (MCP) is changing the way developers and AI agents interact with data.

How to Optimize GPU

The Problem: AI workloads are dynamic, unpredictable, and expensive. Data prep can choke your pipeline, training jobs hog GPUs with no awareness of other workloads, and inference, the most latency-sensitive phase, is notoriously hard to scale efficiently. Worse, traditional infrastructure tools treat GPUs as a static commodity, ignoring model intent, workload shape, and sharing capabilities.

Orbital Materials: World-Class AI Models Built on CivoStack

Daniel Miodovnik, COO of Orbital Materials, explains how the CivoStack enables world‑class AI models that outperform the big‑tech giants. He outlines the power draw and cooling demands of megawatt‑scale GPU racks, the water and CO₂ intensity of today's data centres, and why a sovereign, Civo‑based solution is the key to speed and predictable costs.

Bridging the Gap Between AI Writing and Human Expression

AI has never dominated the content we read as much as it does today. With each passing day, the online and offline worlds fill with more AI writing, and soon it may become difficult to find the human touch in any content. AI's prevalence raises an important question: will the human essence in writing simply disappear as we let AI generate more and more of it? Does it really have to be an ongoing fight between human creativity and machine algorithms?

AI Agent for Proactive Problem Management: A Shift Toward a Ticketless Future

As organizations rely on increasingly complex IT infrastructures, incident management often turns into a constant cycle of alerts, escalations, and fixes. While reactive responses may keep operations running, they rarely address the deeper systemic issues that slowly erode performance. Recurring incidents, silent failures, and hidden patterns are usually symptoms of unresolved root causes that traditional approaches struggle to uncover.

AI And Sustainability: Measuring The Impact Of The Generative AI Boom

Before 2022, Alex Hanna worked on Google’s Ethical AI team. Today, she’s the director of research at the Distributed AI Research Institute, a transition sparked by Google’s handling of a paper exposing AI’s growing environmental footprint. So, how bad is it, really? That depends on who you ask. Take Jesse Dodge, a senior research analyst at the Allen Institute for AI. Jesse told NPR that a single ChatGPT query can use as much electricity as keeping a light bulb on for 20 minutes.
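Dodge's light-bulb comparison is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are assumptions for illustration, not measurements: roughly 3 Wh per query is a commonly cited ballpark, and a 9 W LED stands in for "a light bulb".

```python
# Back-of-the-envelope check of the light-bulb comparison.
# Both values are assumed, not measured.
query_energy_wh = 3.0   # assumed energy per ChatGPT query, in watt-hours
bulb_watts = 9.0        # assumed LED bulb power draw, in watts

hours = query_energy_wh / bulb_watts   # how long the bulb could run
minutes = hours * 60
print(f"{minutes:.0f} minutes")        # -> 20 minutes
```

Note the comparison is sensitive to the assumptions: swap in a 60 W incandescent bulb and the same 3 Wh runs it for only 3 minutes, which is part of why estimates of AI's energy cost vary so widely depending on who you ask.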

Rovo AI: Create Work Items from Loom | Demo Den | Atlassian

Ever wish you could turn a quick Loom recording into Jira work items without all the manual typing? Now you can! In this Demo Den episode, Pierre walks through a new Rovo AI feature that automatically converts your Loom videos into actionable Jira work items. Whether you're recording bug reports, feature requests, or project updates, Rovo handles the data entry for you. What Pierre covers:
- Turning Loom videos into work items with Rovo
- How it works in your AI-enabled Jira instance