
Running AI without blowing up your storage

Storage is often underestimated: in infrastructure discussions, compute and networking get most of the attention, while storage is treated as secondary. For AI workloads, that can be a costly oversight.

Data throughput for specialized hardware: AI infrastructure powered by GPUs can process massive volumes of data at unprecedented speeds, putting immense pressure on the storage system to keep up.

Scale-out performance: an on-prem, scale-out, software-defined storage setup allows you to meet high performance demands, grow capacity as needed, and stay in control of infrastructure costs.

Applying AI/ML in Observability - Tech Talk #7

Ready to master anomaly detection? Join us for Part 2 of our "Applying AI/ML in Observability" series, where we do a deep dive into vmanomaly! In this live stream, Mathis and Marc will be joined by a very special guest: Fred Navruzov, the lead developer and mastermind behind VictoriaMetrics' vmanomaly. If you want to move beyond the basics and unlock the full potential of AI-driven observability, this is a session you can't afford to miss.

Securing the Invisible: Why Ambient AI Needs Next-Gen Security

If, like me, you’re continuously striving to keep pace with the ever-evolving world of artificial intelligence, you’re probably hearing a lot about how Ambient AI is poised to dominate discussions and developments throughout the second half of 2025. Ambient AI refers to artificial intelligence systems that operate unobtrusively in the background of our daily environments, constantly sensing, analyzing, and responding to various inputs without explicit human interaction.

EU AI Act: what changes in August 2025 and how to prepare

On August 2, 2025, a key part of the EU AI Act comes into force. It has serious implications for how you manage incidents related to artificial intelligence. While the full regulation will not apply until 2026, new obligations for providers of general-purpose AI (GPAI) models begin this summer. If you are building or deploying AI-powered services in Europe, the clock is ticking.

CapCut for Real Estate: AI Voice Narration for Property Tours

Listing videos have become a powerful way to showcase properties online, but not every well-shot video cuts through the market. The CapCut Desktop Video Editor is an all-in-one editing tool that lets real estate professionals build a property tour with AI voiceover, dynamic transitions, and high-definition footage. With CapCut, you can create high-quality, compelling virtual tours even without a professional narrator or a studio to shoot in.
Sponsored Post

When AI Becomes the Judge: Understanding "LLM-as-a-Judge"

Imagine building a chatbot or code generator that not only writes answers - but also grades them. In the past, ensuring AI quality meant recruiting human reviewers or using simple metrics (BLEU, ROUGE) that miss nuance. Today, we can leverage Generative AI itself to evaluate its own work. LLM-as-a-Judge means using one Large Language Model (LLM) - like GPT-4.1 or Claude 4 Sonnet/Opus - to assess the outputs of another. Instead of a human grader, we prompt an LLM to ask questions like "Is this answer correct?" or "Is it on-topic?" and return a score or label. This approach is automated, fast, and surprisingly effective.
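The flow described above can be sketched in a few lines: build a rubric prompt around the question and candidate answer, send it to the judge model, and parse the verdict into a structured label. This is a minimal illustration, not the article's implementation; `call_llm` is a hypothetical stand-in for a real chat-completions call and is stubbed here so the example runs offline, and the prompt wording is an assumption.

```python
# LLM-as-a-Judge sketch: one model grades another model's output.
# `call_llm` is a stub standing in for a real chat-completions API call.

JUDGE_PROMPT = """You are an impartial judge. Given a question and a candidate
answer, reply with exactly one line: VERDICT: correct or VERDICT: incorrect,
followed by one line: REASON: <short justification>.

Question: {question}
Candidate answer: {answer}
"""

def build_judge_prompt(question: str, answer: str) -> str:
    """Fill the rubric template with the item under evaluation."""
    return JUDGE_PROMPT.format(question=question, answer=answer)

def parse_verdict(judge_reply: str) -> dict:
    """Turn the judge model's free-text reply into a structured label."""
    verdict, reason = None, ""
    for line in judge_reply.splitlines():
        if line.startswith("VERDICT:"):
            verdict = line.split(":", 1)[1].strip().lower()
        elif line.startswith("REASON:"):
            reason = line.split(":", 1)[1].strip()
    return {"label": verdict, "reason": reason}

def call_llm(prompt: str) -> str:
    """Stub: in practice, send `prompt` to the judge model
    (e.g. GPT-4.1 or Claude) via your provider's API."""
    return "VERDICT: correct\nREASON: The answer matches the question."

prompt = build_judge_prompt("What is 2 + 2?", "4")
result = parse_verdict(call_llm(prompt))
print(result["label"])  # structured label instead of free text
```

Forcing the judge into a fixed `VERDICT:`/`REASON:` format is what makes the approach automatable: the reply can be parsed into a score or label and aggregated across thousands of outputs without human review.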

Can Agentic AI Fix the Chatbot Fatigue in the CX Industry? A Strategic Analysis for CXOs

Writing recently in the Financial Times, Belinda Parmar, CEO of The Empathy Business, observed that customer service has undergone a significant transformation in recent years. Where success was once measured by resolution speed and cost efficiency, today’s customers expect far more: personalized interactions, contextual awareness, and a genuine human touch, delivered alongside fast, reliable support.

With AI, You're Gonna Have to Manage Your (Massive) Energy Use in SPM

Forget boring spreadsheets. Strategic portfolio management (SPM) isn't just about ticking boxes. It’s the big boss plan that makes sure every penny spent and every project your company starts points towards the main goal. It's your company's smart GPS, guiding you through the AI energy maze. When it comes to AI's power hunger, SPM is a knight in shining armor. It helps leaders get smart, making sure they grab all the fancy tech without trashing the world.

Smarter debugging with Sentry MCP and Cursor

Debugging a production issue with Cursor? Your workflow probably looks like this: Alt-Tab to Sentry, copy error details, switch back to your IDE, paste into Cursor. By the time you’ve context-switched three times, you’ve lost your flow and you’re looking at generic suggestions that don’t show any understanding of your actual production environment or codebase.

Semantic Caching: What We Measured, Why It Matters

Semantic caching promises to make AI systems faster and cheaper by reducing duplicate calls to large language models (LLMs). But what happens when it doesn’t work as expected? We built a test environment to find out, running semantically similar queries through a caching system to see how they behaved. When the cache worked, response times were fast. When it didn’t, things got expensive: a single semantic cache miss increased latency by more than 2.5x.
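The core mechanism under test can be sketched simply: embed each incoming query, compare it against previously cached queries, and return the stored response when similarity clears a threshold. This is an illustrative sketch only, not the test environment from the article; the bag-of-words embedding stands in for a real sentence-embedding model, and the `SIMILARITY_THRESHOLD` value is an assumed tuning knob, not a measured one.

```python
# Minimal semantic-cache sketch: similar queries hit the cache, dissimilar
# queries miss and would fall through to an expensive LLM call.
import math
from collections import Counter

SIMILARITY_THRESHOLD = 0.8  # assumed value; real systems tune this carefully

def embed(text: str) -> Counter:
    """Toy embedding: lower-cased word counts (swap in a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = SIMILARITY_THRESHOLD):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached response)

    def get(self, query: str):
        """Return a cached response if any stored query is similar enough."""
        q = embed(query)
        best, best_sim = None, 0.0
        for emb, resp in self.entries:
            sim = cosine(q, emb)
            if sim > best_sim:
                best, best_sim = resp, sim
        return best if best_sim >= self.threshold else None

    def put(self, query: str, response: str) -> None:
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("how do I reset my password", "Use the 'Forgot password' link.")
print(cache.get("how do i reset my password?"))  # near-duplicate: cache hit
print(cache.get("what is your refund policy"))   # dissimilar: miss -> None
```

The failure mode the article measured lives in that last line: every miss pays both the similarity search and the full LLM round trip, which is how a single miss can cost more than 2.5x the latency of a hit.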