
The latest News and Information on DevOps, CI/CD, Automation and related technologies.

Applying Feature Flag Context To Your OpenTelemetry Spans | Harness Blog

Integrating feature flag context into OpenTelemetry traces enhances observability by recording flag states as span attributes, making it easier to analyze how specific flags influence application behavior. When you toggle a feature flag, you're changing the behavior of your application, sometimes in subtle ways that are hard to detect through logs or metrics alone. By adding feature flag attributes directly to spans, you can make these changes observable at the trace level.

Easy Guide for Connecting Redis to a Grafana Data Source

Redis is a widely used in-memory data store, commonly deployed as a cache, session store, message broker, or fast key-value database. Because Redis often sits on the critical path of an application, having visibility into its behavior (memory usage, client connections, command throughput, cache efficiency) is essential for troubleshooting and performance tuning.
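One of the metrics a Redis dashboard typically derives is cache efficiency, computed from the `keyspace_hits` and `keyspace_misses` counters in `INFO stats`. The sketch below shows the arithmetic with illustrative sample values; with a live server you would fetch the same dict via redis-py.

```python
# Sketch: the cache hit ratio a Grafana Redis panel derives from
# the INFO `stats` section. Sample values below are illustrative.
def cache_hit_ratio(info: dict) -> float:
    """Hit ratio from Redis keyspace_hits / keyspace_misses counters."""
    hits = info.get("keyspace_hits", 0)
    misses = info.get("keyspace_misses", 0)
    total = hits + misses
    return hits / total if total else 0.0


# Against a real instance you would populate the dict with redis-py:
#   import redis
#   info = redis.Redis(host="localhost", port=6379).info("stats")
sample_info = {"keyspace_hits": 9_500, "keyspace_misses": 500}
print(f"hit ratio: {cache_hit_ratio(sample_info):.1%}")  # 95.0%
```

A falling hit ratio is usually the first sign that the working set has outgrown `maxmemory` or that eviction is misconfigured.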

Why Aging Networks Put Critical Infrastructure at Risk, and What It Means for Us

Everywhere around us, technology is evolving at lightning speed, yet the networks that underpin these capabilities often lag behind. This gap creates vulnerabilities that can impact everything from energy grids to emergency services. Forbes recently explored this urgent issue in an article featuring insights from our CEO Bruce McClelland, who shared an informed perspective on why modernization is essential, not optional. I encourage you to take a few minutes to read the full article.

How To Calculate Your OpenAI Cost Per API Call (And Why It Matters Now)

OpenAI doesn’t bill per feature, per customer, or per transaction. It bills per token, across multiple models, with usage patterns that can change by the hour. As a result, two API calls that support the same feature can have very different costs. Without a clear way to translate token-level pricing into something product, engineering, and finance teams can reason about, AI spend becomes difficult to forecast and harder to control.
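The token-to-dollars translation is simple arithmetic once you have per-model rates. The sketch below uses placeholder model names and prices, not current OpenAI list prices; substitute the per-million-token rates for the models you actually call.

```python
# Sketch: per-call cost from token counts.
# Rates are illustrative placeholders ($ per 1M tokens), NOT real prices.
PRICE_PER_MTOK = {
    # model: (input rate, output rate) -- assumed values
    "model-a": (2.50, 10.00),
    "model-b": (0.15, 0.60),
}


def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the given per-million-token rates."""
    in_rate, out_rate = PRICE_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


# Two calls behind the same feature can differ widely in cost:
short_call = call_cost("model-a", 1_200, 300)     # small prompt, short answer
long_call = call_cost("model-a", 50_000, 2_000)   # long-context call
print(short_call, long_call)
```

Tagging each call with the feature or customer it serves, then summing these per-call costs, is what turns token-level pricing into numbers product and finance teams can reason about.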

Six FinOps Certifications And Courses To Set You Up For Success in 2026

FinOps is evolving fast, and 2026 is shaping up to be a big year for specialization. While these certifications are ranked from beginner to advanced to help you build skills in the right order, one course stands out as the hottest recommendation right now: FinOps for AI. AI spend is accelerating, ownership is getting murky, and teams are scrambling to keep up. That urgency is exactly why FinOps for AI is generating so much interest heading into 2026.

Should you still pay for SSL certificates?

There’s a particular flavor of skepticism that shows up whenever someone suggests using Let’s Encrypt. The security team crosses their arms. “Free certificates? For production? We’re a serious organization. We use Sectigo.” I get it. You’ve been buying certificates from the same vendors for twenty years. They send you invoices, you pay them, certificates appear. It feels responsible, and free feels like a trap. But is it?

Supercharge your LLM Using Production Data Context

Are your LLM coding agents (like Cursor or Claude Code) hallucinating fixes because they don't know what's actually happening in production? In this video, Matt from Speedscale shows you how to bridge the gap between your local IDE and live production traffic using the Model Context Protocol (MCP). Most observability tools just give you telemetry. Speedscale’s MCP server gives your agent the "inner workings" of actual API calls and payloads, so it can check its assumptions against reality. No more "vibe-coding" and hoping it works; let your agent find the 500 errors and rate limits for you.

Introducing Policies: Compliance from day 1, built into Platform Hub

We recently expanded Platform Hub with Policies, giving platform teams a foundation for compliance and consistency from day one. Governance moves out of scripts and spreadsheets and into the pipeline itself: visible, traceable, and automated from the first deployment. Using Rego, you can write custom policy checks based on your requirements, block non-compliant deployments, and access detailed audit logs of policy evaluation events.
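A custom check of the kind described might look like the following Rego fragment. This is a generic sketch, not a Platform Hub-specific policy: the package name and the `input.spec.image_tag` field are hypothetical and would depend on the shape of your deployment input.

```rego
# Sketch of a custom policy check; field names are hypothetical.
package platform.policies

import rego.v1

# Deny deployments that use a mutable "latest" image tag.
deny contains msg if {
    input.spec.image_tag == "latest"
    msg := "deployments must pin an explicit image tag, not 'latest'"
}
```

A deployment whose manifest sets `image_tag: "latest"` would produce a non-empty `deny` set, which is the signal a pipeline gate uses to block the rollout and record the evaluation in the audit log.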