
The latest News and Information on Log Management, Log Analytics and related technologies.

Benchmarking Kubernetes Log Collectors: vlagent, Vector, Fluent Bit, OpenTelemetry Collector, and more

At VictoriaMetrics, we built vlagent as a high-performance log collector for VictoriaLogs. To validate its performance and correctness under a realistic production-like load, we developed a benchmark suite and ran it against 8 popular log collectors, including vlagent, Vector, Fluent Bit, and the OpenTelemetry Collector. This post covers the methodology, throughput results, resource usage, and delivery correctness. We’ve made all benchmark configurations and source code public, so you can reproduce and verify the results independently.
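To give a flavor of what a delivery-correctness check involves, here is a minimal sketch (not the published benchmark suite; all function names are illustrative): tag each log record with a unique sequence ID on the sending side, then compare the IDs that reach the sink against the IDs sent to spot lost and duplicated records.

```python
# Minimal sketch of a delivery-correctness check for a log pipeline.
# Function names are hypothetical; the real benchmark suite is published separately.
import json

def generate_logs(n):
    """Produce n log records, each tagged with a unique sequence id."""
    return [json.dumps({"seq": i, "msg": f"event {i}"}) for i in range(n)]

def check_delivery(sent_lines, received_lines):
    """Compare sent vs received records: count lost and duplicated ids."""
    sent = {json.loads(line)["seq"] for line in sent_lines}
    received = [json.loads(line)["seq"] for line in received_lines]
    received_set = set(received)
    return {
        "lost": len(sent - received_set),          # ids sent but never delivered
        "duplicates": len(received) - len(received_set),  # ids delivered more than once
    }

sent = generate_logs(1000)
# Simulate a collector that drops one record and duplicates another.
received = sent[:500] + sent[501:] + [sent[42]]
print(check_delivery(sent, received))  # {'lost': 1, 'duplicates': 1}
```

A real benchmark additionally has to account for in-flight records at shutdown and retry-induced duplicates, which is why correctness is measured alongside throughput rather than separately.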

Architecting Log Management for Privacy and Scale without the Headache

As companies grow, they inevitably hit a wall: observability data explodes while privacy requirements become stricter. For years, engineers have faced a painful tradeoff—either ship petabytes of sensitive data to a central cloud (incurring egress costs and compliance risks) or manage a complex self-hosted stack that is painful to scale.

Claude Code is running bash commands on your infrastructure. Here's how to watch it.

I’ve been staring at Claude Code telemetry for the past few weeks, and I keep noticing the same thing: most teams drop it into their environment, say “it’s amazing,” and have absolutely no idea what it’s actually doing at the system level. That’s fine for a personal dev tool. It’s not fine when you’ve rolled it out to 50 engineers.

Monitor schema health with engine.schema_fields: Structure, Drift, and Volatility

If you’ve worked with an observability pipeline, you’ve probably experienced schema problems: a field disappears, a type shifts from string to number, or a new label quietly appears. The causes are everywhere. Different teams adopt different naming conventions. A dependency upgrade changes the shape of a library’s log output. Over time, these small, reasonable decisions compound into schema sprawl: dashboards break, alerts misfire, and teams scramble to find out what happened.
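The failure modes above (a field vanishing, a type flipping, a new label sneaking in) can all be caught by diffing per-batch field-to-type maps. Here is an illustrative sketch of that idea in Python; it is not the `engine.schema_fields` implementation, and the function names are assumptions for the example.

```python
# Illustrative schema-drift detector for JSON logs. This is NOT the
# engine.schema_fields implementation; names here are hypothetical.
import json

def schema_of(record):
    """Map each field name in a parsed log record to its Python type name."""
    return {k: type(v).__name__ for k, v in record.items()}

def diff_schemas(old, new):
    """Report added fields, removed fields, and type changes between two batches."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(k for k in set(old) & set(new) if old[k] != new[k])
    return {"added": added, "removed": removed, "type_changed": changed}

yesterday = schema_of(json.loads('{"level": "info", "latency_ms": "12", "user": "a"}'))
today = schema_of(json.loads('{"level": "info", "latency_ms": 12, "region": "eu"}'))
print(diff_schemas(yesterday, today))
# {'added': ['region'], 'removed': ['user'], 'type_changed': ['latency_ms']}
```

Run continuously over rolling windows, a diff like this turns silent schema sprawl into an explicit, alertable signal before dashboards break.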

From Data Chaos to Results: The New Data Strategy for the Agentic Era

The world is generating data faster than humans can comprehend it, let alone draw insight from it. By 2028, we’ll reach almost 400 zettabytes of global data, with over 55% of it coming from machines talking to machines. For enterprises, this isn’t just a storage problem; it’s an existential challenge.