
From Vibes to Signals: Observing Your AI Coding Workflow

Agentic coding tools like Claude Code and Codex have taken centre stage and inserted themselves into the critical path of software development. This shift has happened fast, and for most teams, visibility hasn’t caught up. Until now, we’ve been evaluating our vibe coding the same way: on vibes. You might say “this feels faster” or “that seems like a better approach”. That’s not going to scale.

How Kotak811 Revolutionized Digital Banking Observability with Coralogix

Kotak811, the digital-first engine of Kotak Mahindra Bank, is a banking platform serving over 23 million users across India. Since its launch in 2017, Kotak811 has transformed into the bank’s primary growth driver, now accounting for 70% of all new customer acquisitions. The platform is widely recognized for offering a paperless, mobile-first experience, providing everything from instant zero-balance accounts to seamless UPI payments and investment tools.

What "AI-Ready Data" actually means for observability teams

Many organizations deploying AI are learning the same lesson right now: the challenge isn’t this or that AI model, it’s the data. According to Gartner, 60% of AI projects will be abandoned because organizations fail to support them with AI-ready data, and 63% of organizations either lack the right data management practices to get there or aren’t sure they have them.

Code Agents Need Observability

Those of us using tools like Claude Code, Codex, or Gemini already know they’re powerful. They can write code, refactor functions, open PRs, even run commands. For a lot of developers, they’re already part of the daily workflow. But once you zoom out beyond the individual developer, the biggest problem isn’t productivity. It’s control. These tools introduce a new, unpredictable cost layer that most teams don’t fully understand.

AI agents are only as smart as the data you feed them

AI is only as useful as the context you give it. An autonomous observability agent can unlock serious value from your telemetry, but only when the foundation is right: good telemetry, a strong data layer, and efficient access to the data. Annie Freeman and Lewis Isaac had a lot to say about this at AWS Summit London this week! #Observability #AI #AWSSummitLondon #DevOps #OpenTelemetry

DataPrime at Ingest: Fine-Grained TCO Routing with DPXL

The real economic decision for observability happens at ingest, before storage, billing, and retention choices are locked in. Until now, the logic governing that decision could only see three broad fields: application, subsystem, and severity. That just changed. TCO routing now matches on any field in the event payload, including nested keys, custom fields, and event body content, using DPXL, the DataPrime Expression Language.
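
To make that concrete, here is a minimal sketch of what such a routing match could look like. The field names are hypothetical, and the accessor syntax assumes DataPrime’s convention of $d for the event payload and $m for event metadata, so treat it as illustrative rather than copy-paste ready:

    $d.payload.transaction.type == 'heartbeat' && $m.severity == 'Debug'

A rule matching on an expression like this could, for example, send noisy debug-level heartbeat events to a low-cost tier at ingest while everything else keeps its default routing, which is exactly the kind of fine-grained decision the old three-field model couldn’t express.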

Building Audit-Ready Observability for Digital Banking

Most observability platforms are built to answer one question: what’s broken right now? Regulators are asking a different one: what happened, exactly, and can you prove it? Digital banking operates under constant regulatory scrutiny, where frameworks like DORA, PCI DSS, and GDPR require every incident to be fully reconstructed across systems, timelines, and access. Systems can recover quickly, but the ability to explain what happened often remains fragmented across tools and teams.

Observability is a design problem: Live Laugh Logs ep. 1 - KubeCon Amsterdam 2026

What happens when 20,000 engineers descend on Amsterdam to talk about Kubernetes and AI? Welcome to Episode 1 of Live Laugh Logs, the podcast where Annie, Lewis, and Andre from the Coralogix Developer Relations team get together and recap everything going on in our worlds! We had an amazing time at KubeCon in Amsterdam and came away with loads of insights from the talks we attended: designing observability systems, the wave of AI tools being created and how to observe them, and using agent-generated code.

Debug frontend issues with AI: Real user monitoring meets the Coralogix MCP server

It is 2 AM. Someone on-call gets paged. Conversion rates on the checkout page dropped 30 percent in the last hour. The immediate questions are familiar. Is this a JavaScript error? A slow API call? A broken third-party script? A performance regression that never throws an exception but quietly drives users away? In most teams, answering those questions is not hard because the data is missing. It is hard because the investigation is split across too many places.