
Introducing AI Agent Monitoring in Sentry

Monitoring agents and LLM applications is... different. You're managing everything from tool calls to model configurations to token usage, and AI systems do their best to solve problems on their own, so errors aren't always clear. Sentry's agent monitoring focuses on making it easy to dive into your AI applications and understand what's breaking and where, so you can fix it faster.
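To make the idea concrete, here is a minimal pure-Python stand-in for the kind of telemetry agent monitoring captures around each tool call: name, duration, and outcome. The `records` list, `monitor_tool` decorator, and `search_docs` tool are illustrative assumptions, not Sentry APIs; Sentry's SDKs capture this automatically.

```python
import functools
import time

# Illustrative record store; in a real setup the SDK would send this to Sentry.
records = []

def monitor_tool(fn):
    """Record each tool call's name, duration, and status (ok/error)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "error"
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            records.append({
                "tool": fn.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
                "status": status,
            })
    return wrapper

@monitor_tool
def search_docs(query: str) -> str:
    # Hypothetical tool an agent might call.
    return f"results for {query!r}"

search_docs("token limits")
print(records[0]["tool"], records[0]["status"])  # search_docs ok
```

With every tool call recorded this way, a failing agent run stops being a black box: you can see which call broke and how long each step took.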

Introducing Seer: Sentry's AI Debugging Agent

There's a lot more context to an error than the message blinking in red on your screen. Seer understands the context of your application and everything behind that error. It collects information from the stack trace, logs, traces and spans, profiles, and the code in your GitHub repo, and uses it to understand what's causing your issues and propose fixes.
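As a rough sketch of the idea (not Seer's actual pipeline), the sources listed above can be bundled into a single payload an AI debugger could reason over. The `build_debug_context` helper and its fields are hypothetical:

```python
import traceback

def build_debug_context(exc: Exception, recent_logs: list[str]) -> dict:
    """Bundle an error with the surrounding context: stack trace and recent logs."""
    return {
        "error": f"{type(exc).__name__}: {exc}",
        "stack_trace": traceback.format_exception(type(exc), exc, exc.__traceback__),
        "logs": recent_logs[-20:],  # keep the last log lines before the failure
    }

try:
    {}["missing"]  # simulate a failure deep in the app
except KeyError as e:
    ctx = build_debug_context(e, ["user login ok", "fetching profile"])

print(ctx["error"])  # KeyError: 'missing'
```

The point is that the error message alone ("KeyError") says little; paired with the trace and the log lines that led up to it, a fix becomes much easier to propose.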

Debugging Errors in Background Jobs

Debugging background jobs is one of those tasks that always sounds easier than it is—until you’re knee-deep in stack traces that offer no real clues. Background jobs love to run in isolated environments, cutting themselves off from all the helpful context you’d normally have. @nikolovlazar shows us how to debug these errors anyway—piecing together the missing context across systems so you can actually fix the problem instead of just guessing.
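The core of the technique is carrying the originating request's trace ID into the job payload so the two ends can be stitched back together. This is a minimal sketch under that assumption; the queue shape and job names are hypothetical, and Sentry's SDKs do this propagation automatically for supported queue libraries:

```python
import uuid

def enqueue(queue: list, job_name: str, payload: dict, trace_id: str) -> None:
    """Attach the current trace ID to the job so the worker inherits it."""
    queue.append({"job": job_name, "payload": payload, "trace_id": trace_id})

def run_worker(queue: list) -> list[str]:
    """Process jobs, tagging everything with the originating trace ID."""
    handled = []
    for job in queue:
        # The isolated worker would normally have no request context;
        # the propagated trace ID restores the link to the web request.
        handled.append(f"{job['job']} trace={job['trace_id']}")
    return handled

trace_id = uuid.uuid4().hex
q: list = []
enqueue(q, "send_welcome_email", {"user_id": 42}, trace_id)
print(run_worker(q)[0].endswith(trace_id))  # True
```

When the job later fails, the error arrives tagged with the same trace ID as the request that enqueued it, so you can follow the failure back to its origin instead of guessing.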

Debugging Microservices

Debugging microservices is tough, especially when you're juggling multiple services and relying only on logs. This video cuts through the complexity by showing you how to implement distributed tracing using Sentry. You'll see a practical demonstration in a food ordering app (built with React and Go) of how tracing can give you a clear view of your entire request flow, from the initial button click to the final operation across all your services.
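A stripped-down sketch of what distributed tracing does under the hood: the frontend starts a trace, and each service records its own span while forwarding the same trace ID (in practice via a header such as `sentry-trace`). The service names echo the food-ordering demo; the data shapes here are simplified assumptions, not the SDK's real structures:

```python
import uuid

def start_trace() -> dict:
    """Begin a new trace at the entry point (the button click)."""
    return {"trace_id": uuid.uuid4().hex, "spans": []}

def handle_request(trace: dict, service: str, operation: str) -> None:
    """Each service records a span tied to the shared trace ID."""
    trace["spans"].append({
        "service": service,
        "op": operation,
        "trace_id": trace["trace_id"],
    })

trace = start_trace()
handle_request(trace, "frontend", "click: place order")
handle_request(trace, "orders-api", "POST /orders")
handle_request(trace, "kitchen-service", "prepare order")

# Every span carries the same trace ID, so the full request flow
# can be reassembled across services instead of read from scattered logs.
print(all(s["trace_id"] == trace["trace_id"] for s in trace["spans"]))  # True
```

Because all three spans share one trace ID, the tracing backend can render the entire flow as a single waterfall, which is exactly the "clear view" the video demonstrates.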