
Benchmarking Hyperscalers: Aiven for ClickHouse Delivers More for Less

Aiven is proud to introduce our new pricing plans for Aiven for ClickHouse! Depending on your chosen region and plan, you can now double your compute power for the same price. Or, if reducing costs is your priority, you can lower your total cost by up to 30% for the same compute power, depending on how you use the service. So how did we do it? This article explains, and shows why Aiven's nodes are the best fit for your ClickHouse workloads.

Getting Started with Diskless Kafka: A Beginner's Guide

I joined Aiven as a Developer Advocate in May, shortly after the announcement of KIP-1150: Diskless Topics, a Kafka Improvement Proposal that can reduce the total cost of ownership of Kafka by up to 80%! Diskless topics are currently under community review as part of KIP-1150; the examples in this article use "Inkless", Aiven's implementation of KIP-1150 that lets you run it in production.

Your ClickHouse, optimized: Experience more flexible, price-performant plans

At Aiven, we're constantly striving to provide our users with the most efficient, powerful, and cost-effective solutions for their data needs. That's why we're thrilled to announce the launch of significantly enhanced plans for Aiven for ClickHouse across AWS, Google Cloud, and Microsoft Azure. Today, we are launching new plan options within our startup and business tiers, as well as the ability to fine-tune local storage independently.

VictoriaLogs Unleashed: Cluster Version Now Available for Exceptional, Linear Scaling

You asked, and we listened! We’re thrilled to announce the release of the VictoriaLogs Cluster version – one of the most requested and anticipated updates from our user community. This marks a significant leap forward for VictoriaLogs, empowering users to handle log volumes and ingestion rates far beyond the limits of a single node.

Smarter Data Center Capacity Planning for AI Innovation

Global demand for data center capacity is skyrocketing. From 2023 to 2030, power consumption across data centers is expected to grow by up to 22% annually, driven primarily by generative AI (GenAI) workloads. By 2030, AI workloads are predicted to account for 70% of total demand. This demand doesn't just mean more hardware; it requires high-density computing environments to support both the training of large language models like GPT and real-time inference systems.

What is Internet Jitter & How to Test It

If you’ve ever had a user complain that their video call was choppy or their VoIP call had weird delays, even though the Internet speed looked fine, you’ve probably run into Internet jitter. It’s one of those issues that doesn’t always show up on a speed test, but it can absolutely wreck real-time communication. And if you’re managing networks across remote offices, home setups, or hybrid work environments, you’ll want to keep an eye on it.
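Jitter is commonly measured as the variation between consecutive latency samples rather than the latency itself, which is why a link with a fine average ping can still wreck a VoIP call. As a rough illustration (not the article's tool, just a sketch of the idea), the snippet below averages the absolute differences between successive ping times:

```python
def jitter(latencies_ms):
    """Mean absolute difference between consecutive latency samples, in ms.

    A simple jitter estimate: low when samples are steady, high when
    they swing, even if the average latency is identical.
    """
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# Stable link: samples hover around 20 ms, jitter is tiny
stable = jitter([20.1, 20.3, 19.9, 20.2])
# Unstable link: similar average latency, wildly varying samples
choppy = jitter([5.0, 35.0, 8.0, 40.0])
print(stable, choppy)
```

A speed test reports throughput and average latency, so both of these links could look "fine" on one; only the sample-to-sample variation exposes the problem.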

In Case You Missed It: DX NetOps Active Experience Launched

There’s no doubt that managing networks today is a whole different ballgame than it used to be. Complexity is growing, environments are more fragmented, and user expectations have never been higher. One of the biggest challenges for network operations teams? Visibility, or the lack of it. Not long ago, traffic flowed through your own data center, and you had the visibility and control needed to manage performance and troubleshoot issues.

10 Best Ticketing Tools of 2025

Whether you’re dealing with IT issues, customer questions, or just trying to keep track of who’s supposed to fix what and when, ticketing tools are the unsung heroes of organized chaos. They help teams stay on top of requests, assign responsibility (no more “I thought you were going to handle it”), and actually close the loop on problems instead of letting them collect dust in someone’s inbox.

The Cost of Bad Data: Why Time Series Integrity Matters More Than You Think

Data plays a critical role in shaping operational decisions. From sensor streams in factories to API response times in cloud environments, organizations rely on time-stamped metrics to understand what’s happening and determine what to do next. But when that data is inaccurate or incomplete, systems make the wrong call. Teams waste time chasing false alerts, miss critical anomalies, and make high-stakes decisions based on flawed assumptions.
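One of the simplest integrity problems in a time-stamped metric stream is a silent gap: samples stop arriving, and nothing downstream notices. As a minimal sketch of the idea (the function name and threshold are illustrative, not from the article), the check below flags any pair of consecutive timestamps further apart than the expected sampling interval allows:

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, expected_interval, tolerance=1.5):
    """Return (start, end) pairs where consecutive samples are further
    apart than tolerance * expected_interval, i.e. likely missing data."""
    gaps = []
    for a, b in zip(timestamps, timestamps[1:]):
        if (b - a) > expected_interval * tolerance:
            gaps.append((a, b))
    return gaps

t0 = datetime(2025, 1, 1)
# A 10-second metric stream with one missing stretch (30s-50s dropped)
ts = [t0 + timedelta(seconds=s) for s in (0, 10, 20, 60, 70)]
print(find_gaps(ts, timedelta(seconds=10)))
```

A check like this catches incompleteness; accuracy problems (stale or out-of-order values) need separate validation, but both feed the same failure mode the article describes: confident decisions made on flawed data.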

Rollbacks, Red Eyes And Unreliable Deployments

We spoke to data professionals from a range of industries about the impact of unreliable database deployments — not just on their systems, but on their workload, time, and well-being. From delayed releases to weekend firefighting, and the fallout for teams and customers, they share the day-to-day pressures they face and the small changes that help make deployments, and life, a little less stressful. What stood out from these conversations?