Why Use a Purpose-Built Time Series Database

A time series database has a straightforward definition: it's a database purpose-built for efficiently ingesting, storing, and querying time series data. Time series data is any timestamped data, collected at regular or irregular intervals, that you'll often visualize on graphs with time on the X-axis. That definition alone doesn't tell you what sets it apart from other types of databases, though.
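
To make that concrete, here is a minimal Python sketch of the shape a single time series record typically takes; the measurement, tag, and field names are invented for the example.

```python
# Illustrative only: one time series record is a timestamp, tags that
# identify the series, and the measured values themselves.
from datetime import datetime, timezone

point = {
    "measurement": "cpu_usage",                         # what is measured
    "tags": {"host": "web-01", "region": "eu-west-1"},  # series identity
    "fields": {"usage_percent": 87.3},                  # the values
    "timestamp": datetime.now(timezone.utc),            # the X-axis
}
```

A purpose-built time series database organizes, indexes, and compresses records like this by time, which is what makes high-volume ingest and time-bounded queries cheap.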

Get Kafka-Nated Special Episode: A Kristmas Kafka

Join us for “A Kristmas Kafka,” an informal and deeply technical roundtable with Apache Kafka committers, contributors, and community leaders. This conversation brings together the people closest to the Kafka codebase to reflect on where the project started, how it has evolved, and what lies ahead for streaming systems.

How AI-Native Data Pipelines Help Create a Security Data Lake

Security teams are generating and storing more telemetry than ever before. Logs, metrics, traces, and events come from cloud services, applications, identities, and infrastructure across many environments. Retention requirements continue to grow, yet the cost of storing all of this data in traditional hot storage can quickly exceed annual budgets. At the same time, investigations and audits rely on fast access to historical data, and any delay can slow response time or limit visibility.

Five Missed Opportunities Hidden Inside Every Denial Appeal File

Denial appeal files contain operational details that reveal patterns most teams overlook. Within these records, recurring documentation errors, workflow delays, and inconsistent payer communication often remain unaddressed, reducing recovery and extending payment cycles. As payers apply stricter clinical validation and medical necessity standards, these hidden gaps become costly and time-intensive to correct.

Get Kafka-Nated, Bonus Episode: A tale of Kafka Past, Current, and Future

Join us for “A Kristmas Kafka,” an informal technical roundtable with Apache Kafka committers, contributors, and community leaders. We’re gathering the people who build and shape Kafka to explore where the project has been, where it is today, and where it’s heading next. No marketing, no product pitches—just a dense, technical conversation about architecture, innovation, and the future of streaming systems.

Valkey JSON module now available on Aiven for Valkey

The Valkey JSON module adds native JSON data type support to Valkey, letting users efficiently store, query, and modify complex, nested JSON data structures directly. This removes earlier workarounds, such as serializing entire documents as strings or flattening data into hashes, by handling nested data models natively.
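
As a quick illustration, here's a minimal sketch using the valkey-py client; the connection details, key name, and document contents are made up, and it assumes a server with the JSON module loaded.

```python
# A minimal sketch, assuming a Valkey server with the JSON module loaded
# and the valkey-py client; host, key, and document are illustrative.
import valkey

client = valkey.Valkey(host="localhost", port=6379, decode_responses=True)

# Store a nested document under a single key -- no manual serialization.
client.execute_command(
    "JSON.SET", "user:1001", "$",
    '{"name": "Ada", "address": {"city": "London"}, "tags": ["admin"]}',
)

# Read back just one nested field with a JSONPath expression.
print(client.execute_command("JSON.GET", "user:1001", "$.address.city"))
# -> ["London"]

# Modify one nested element in place instead of rewriting the whole blob.
client.execute_command("JSON.ARRAPPEND", "user:1001", "$.tags", '"editor"')
```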

Aiven AI Editor

We dive into the game-changing Aiven AI Editor, a powerful tool that lets you interact with your data using natural language. No more syntax headaches. Instantly generate a visual map of your database to understand your tables and their relationships at a glance. Watch how easy it is to ask questions like "How many sales did we have in 2025?" and get perfectly formatted SQL queries in seconds.

What's New in InfluxDB 3.8: Linux Service Management, Kubernetes Helm Chart, and Smarter Ask AI

InfluxDB 3.8 is now available for both Core and Enterprise, alongside the 1.6 release of the InfluxDB 3 Explorer UI. This release focuses on operational maturity, making InfluxDB easier to deploy, manage, and run reliably in production. InfluxDB 3 Core remains free and open source under the MIT and Apache 2.0 licenses, optimized for recent data. InfluxDB 3 Enterprise builds on that foundation with long-range querying, clustering, security, and full operational tooling.
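
For a flavor of everyday usage (not specific to the 3.8 features above), here's a minimal sketch with the influxdb3-python client against a local Core instance; the host, token, database, and measurement names are all placeholders.

```python
# A minimal sketch, assuming the influxdb3-python client and a local
# InfluxDB 3 Core instance; host, token, and database are placeholders.
from influxdb_client_3 import InfluxDBClient3, Point

client = InfluxDBClient3(
    host="http://localhost:8181",
    token="my-token",
    database="metrics",
)

# Write one point of recent data.
client.write(Point("cpu").tag("host", "web-01").field("usage_percent", 87.3))

# Query it back with SQL (returns a PyArrow table).
table = client.query(
    "SELECT * FROM cpu WHERE time >= now() - INTERVAL '1 hour'"
)
print(table)
```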

The Hidden Costs and Concerns of Iceberg Maintenance

Everyone talks about how great Apache Iceberg is, but nobody warns you about this: without proper maintenance, your tables will bloat, queries will slow down, and your catalog will run out of memory. Here are the four critical operations you must run regularly. Expiring snapshots prevents metadata bloat (Datadog learned this the hard way with catalog memory pressure). Deleting orphan files cleans up failed writes. Compacting data files keeps streaming workloads fast. Compacting manifests optimizes query planning.
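
Here's a hedged sketch of those four operations via Iceberg's built-in Spark procedures; the catalog name my_catalog, the table db.events, and the retention timestamp are placeholders to adapt to your own setup.

```python
# A sketch of the four Iceberg maintenance operations using Spark SQL
# procedures; catalog/table names and thresholds are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-maintenance").getOrCreate()

# 1. Expire old snapshots to keep metadata (and catalog memory) in check.
spark.sql("""
    CALL my_catalog.system.expire_snapshots(
        table => 'db.events',
        older_than => TIMESTAMP '2025-01-01 00:00:00')
""")

# 2. Remove orphan files left behind by failed or aborted writes.
spark.sql("CALL my_catalog.system.remove_orphan_files(table => 'db.events')")

# 3. Compact small data files produced by streaming ingest.
spark.sql("CALL my_catalog.system.rewrite_data_files(table => 'db.events')")

# 4. Compact manifests so query planning stays fast.
spark.sql("CALL my_catalog.system.rewrite_manifests(table => 'db.events')")
```

Scheduling these on a regular cadence (for example, from an orchestrator) is what keeps the costs described above from creeping up on you.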