
HAProxyConf 2025 Recap

A lot can change in three years. The world of 2022 was quite a different place. Queen Elizabeth II was the longest-serving living monarch, the world population hadn’t yet cracked eight billion, and many of us were still emerging from the strangeness of the Covid years. Meanwhile, at HAProxyConf 2022, we unveiled HAProxy Fusion Control Plane for the first time.

Observability's Moneyball Moment: How AI Is Changing the Game (Not Ending It)

We're not witnessing the end of observability; we're witnessing its evolution into something far more powerful. The observability industry is having its Moneyball moment. Just as Billy Beane revolutionized baseball by using data analytics to compete with teams that had vastly larger budgets, observability is undergoing a fundamental transformation.

Honeycomb Users Are Living in the Future, Part 1: Sampling

When we talk to new Honeycomb users, a few things stand out as sounding downright magical. Sometimes we’ll hear, “Wow, is that a new feature?” and we’ll say that no, it’s been like that for years. Clearly we need to get the word out! This is the first installment of a blog series I’ll be writing, covering areas of Honeycomb that elicit reactions of awe and disbelief from new users.

Top 5 MSP takeaways from the 2025 IT Trends Report

Earlier this year, Auvik released our annual IT Trends Report, spotlighting some of the key changes for network management, MSP, and IT practitioners. We know the market and its ups and downs can have a huge impact on the success of MSPs, so we’re bringing you a roll-up of key statistics and findings related to MSPs specifically. Read on to see what we found.

How to Get Logs from Docker Containers

When a container misbehaves, logs are the first place to look. Whether you're debugging a crash, tracking API errors, or verifying app behavior, docker logs gives you direct access to what's happening inside. This blog covers the full workflow: how to retrieve logs, filter them by time or service, and set up logging for production environments.
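A minimal sketch of that workflow using the docker CLI (the container name "web" is a placeholder; substitute your own container or Compose service name):

```shell
# Stream logs live from a running container named "web"
docker logs --follow web

# Show only entries newer than a given timestamp (RFC 3339 format)
docker logs --since "2025-01-01T00:00:00" web

# Show the last 100 lines, each prefixed with its timestamp
docker logs --tail 100 --timestamps web
```

Under Docker Compose, `docker compose logs <service>` scopes the same output to a single service.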

Troubleshooting LangChain/LangGraph Traces: Common Issues and Fixes

We’ve covered how to get LangChain traces up and running. But even when everything’s instrumented, traces can still go missing, show up half-broken, or look nothing like what you expected. This guide covers what happens after setup, when traces exist but something’s off.

Troubleshoot root causes with GitHub commit and ownership data in Error Tracking

When an error occurs, developers need to act quickly. But too often, they’re left searching through stack traces without enough context to understand what happened, who owns the code, or what change may have introduced the issue. This slows down triage, creates inefficient handoffs, and takes time away from building new features.

Monitor your LiteLLM AI proxy with Datadog

As organizations rapidly scale their use of large language models (LLMs), many teams are adopting LiteLLM to simplify access to a diverse set of LLM providers and models. LiteLLM provides a unified interface through both an SDK and proxy to speed up development, centralize control, and optimize LLM-powered workflows. But introducing a proxy layer adds abstraction, making it harder to understand how requests are processed.