
Vibe coding tools observability with VictoriaMetrics Stack and OpenTelemetry

AI-powered coding assistants have transformed how developers write software. Tools like Claude Code, OpenAI Codex, Gemini CLI, Qwen Code, and OpenCode have introduced what many call “vibe coding” — a new paradigm where users describe their intent and AI agents handle the implementation details. But as these tools become integral to development workflows, a critical question emerges: how do we understand what’s happening under the hood?

Lightrun MCP: Your AI Assistant Now Debugs and Validates Production Code

Intermittent production bugs are hard to debug and rarely reproduce locally. Teams fall into a loop of adding logs, and every rollback slows them down. In this demo, R&D team leads Maor Yaffe and Or Golan show how an AI assistant can verify production issues using real runtime data, without redeploying. By connecting Cursor to Lightrun MCP, the agent inspects live production behavior, collects real variable values, and confirms the root cause with evidence instead of assumptions.

What the Latest Google "AI Mode" Means for Users Who Care about Privacy and Better Experiences

When Google introduced AI highlights above its main search results, we thought that was as far as the company would go in turning traditional Google Search, long praised by businesses for its expansive SEO opportunities, into an AI-powered experience. But if you live in the U.S. and have recently paid attention to the Google homepage, you may have noticed a new button called "AI Mode." It turns out the company is still working hard not to lose its dominance to competitors.

Top tips: RAG isn't the problem, context is. Here are 3 fixes.

Top Tips is a weekly column where we highlight what’s trending in the tech world and list ways to explore these trends. This week, we’ll be talking about how we can improve our retrieval-augmented generation (RAG) systems using contextual engineering. Prompt engineering has gained a lot of attention in the past year, and it’s finally time to move on to a better experience that transforms the way AI results are provided to us.

Context is King: Why Network AI Needs Domain Knowledge to Work

Generic AI fails in network operations because it lacks the “institutional knowledge” of your specific environment and business priorities. Learn how Kentik’s Custom Network Context encodes your unique operational reality into AI Advisor, turning a generic chatbot into a context-aware teammate.

Context Engineering: How Dev Teams 10x Productivity with AI

Context engineering isn't just an AI buzzword. It's how high-performing dev teams are transforming productivity at scale. In this GitKon session, Chris Geoghegan, VP of Product at Zapier, breaks down why individual AI gains don't compound and what your team needs to do instead.

Make Your Engineering Processes Resilient. Not Your Opinions About AI

Why strong reviews, accountability, and monitoring matter more in an AI-assisted world.

Artificial intelligence has become the latest fault line in software development. For some teams, it’s an obvious productivity multiplier. For others, it’s viewed with suspicion: a source of low-quality code, unreviewable pull requests, and latent production risk. One concern we hear frequently is an understandable fear, and also the wrong conclusion.

When is it ok or not ok to trust AI SRE with your production reliability?

There’s a moment every engineer knows. An AI suggests a fix; it looks reasonable, maybe even obvious, but production is on the line and you hesitate before clicking execute. There’s a big difference between an AI that can recommend an action and one you’re willing to let take that action. All it takes is one bad call, one kubectl command that makes things worse, and suddenly every automated suggestion is a potential liability instead of a help.