
The latest News and Information on API Development, Management, Monitoring, and related technologies.

New API endpoints: Pause and resume website & ping monitors

We’ve added new API capabilities that give you more control over your monitoring workflows – directly from code. You can now pause and resume website and ping monitors via the StatusGator API, exposing the same pause functionality that’s available in the UI.
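As a rough sketch of what a pause call could look like from code: the base URL, path shape, and auth header below are assumptions for illustration, not StatusGator's documented endpoints — consult the actual API reference for the real paths.

```python
import json
import urllib.request

# Hypothetical base URL; the real one is in the StatusGator API docs.
BASE = "https://statusgator.com/api/v4"

def monitor_action_url(monitor_type: str, monitor_id: str, action: str) -> str:
    """Build a pause/resume URL (path shape is an assumption)."""
    assert monitor_type in {"website_monitors", "ping_monitors"}
    assert action in {"pause", "resume"}
    return f"{BASE}/{monitor_type}/{monitor_id}/{action}"

def pause_monitor(api_key: str, monitor_type: str, monitor_id: str) -> dict:
    """POST the pause request and return the decoded JSON response."""
    req = urllib.request.Request(
        monitor_action_url(monitor_type, monitor_id, "pause"),
        method="POST",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A resume call would be identical apart from the final path segment, which keeps the client code symmetric with the pause/resume toggle in the UI.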

Mock vs Stub: Essential Differences

When discussing API testing, one of the most common pairs of terms you'll encounter is "mocks" and "stubs." The terms are ubiquitous, but understanding exactly how they differ from one another, and when each is the correct choice for a given test, is critical to building an appropriate test and validation framework. In this blog, we're going to talk about the differences and similarities between mocks and stubs.
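A minimal sketch of the distinction, using Python's standard `unittest.mock` (the `fetch_username` function and fake client are illustrative, not from any real codebase): a stub only supplies canned data so the code under test can run, while a mock additionally verifies how the dependency was called.

```python
from unittest.mock import Mock

# Code under test: depends on an HTTP client we don't want to call for real.
def fetch_username(client, user_id):
    resp = client.get(f"/users/{user_id}")
    return resp["name"]

# Stub usage: provide canned state, make no claims about interactions.
stub_client = Mock()
stub_client.get.return_value = {"name": "ada"}
assert fetch_username(stub_client, 1) == "ada"

# Mock usage: same canned data, but we also assert on the interaction itself.
mock_client = Mock()
mock_client.get.return_value = {"name": "ada"}
fetch_username(mock_client, 1)
mock_client.get.assert_called_once_with("/users/1")
```

The practical rule of thumb: stubs support state-based tests (check the return value), mocks support interaction-based tests (check the calls made).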

The CES Hangover: 3 Expensive Hardware Fails That Were Actually Software Problems

The dust has settled on Las Vegas. We saw transparent TVs, cars that drive sideways, and enough “AI-powered” toothbrushes to confuse a dentist. CES is incredible at selling the dream of hardware. The demos are slick, the lighting is perfect, and everything works on the showroom floor. But as engineers, we know the dirty secret of CES: The hardware is the easy part.

The API Metrics Every SaaS Team Must Track In 2026

API metrics have long been a core part of building and operating reliable SaaS products. Teams track the likes of request volume, latency, and uptime to ensure APIs perform as expected under load. But today, the API metrics that matter most go beyond performance. First: API cost intelligence metrics, which measure how API usage translates into cloud, AI, and third-party spend, and attribute that cost to customers, features, workflows, and teams so SaaS businesses can protect margins as usage scales.
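The cost-attribution idea can be sketched in a few lines: given per-request cost records, roll spend up by customer or by feature. The record shape and dollar figures below are invented for illustration; a real pipeline would derive costs from cloud and vendor billing data.

```python
from collections import defaultdict

# Hypothetical per-request records: (customer, feature, cost_usd).
requests_log = [
    ("acme", "search", 0.002),
    ("acme", "llm_summary", 0.090),
    ("globex", "search", 0.002),
    ("acme", "llm_summary", 0.090),
]

def cost_by(dimension_index, log):
    """Attribute total spend to one dimension (0 = customer, 1 = feature)."""
    totals = defaultdict(float)
    for record in log:
        totals[record[dimension_index]] += record[2]
    return dict(totals)

by_customer = cost_by(0, requests_log)  # e.g. spend per account, for margin analysis
by_feature = cost_by(1, requests_log)   # e.g. which features drive AI spend
```

Grouping the same log along different dimensions is what lets a team answer both "which customers are unprofitable?" and "which feature is burning the AI budget?" from one dataset.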

Supercharge your LLM Using Production Data Context

Are your LLM coding agents (like Cursor or Claude Code) hallucinating fixes because they don't know what's actually happening in production? In this video, Matt from Speedscale shows you how to bridge the gap between your local IDE and live production traffic using the Model Context Protocol (MCP). Most observability tools just give you telemetry. Speedscale’s MCP server gives your agent the "inner workings" of actual API calls and payloads, so it can check its assumptions against reality. No more "vibe-coding" and hoping it works; let your agent find the 500 errors and rate limits for you.

Let Your LLM Debug Using Production Recordings

Modern LLM coding agents are great at reading code, but they still make assumptions. When something breaks in production, those assumptions can slow you down—especially when the real issue lives in live traffic, API responses, or database behavior. In this post, I’ll walk through how to connect an MCP server to your LLM coding assistant so it can pull real production data on demand, validate its assumptions, and help you debug faster.
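For the wiring step, MCP-aware clients (Claude Code and others) typically register servers through a JSON config listing a command to launch each server. The server name, package, and env var below are placeholders, not Speedscale's actual distribution; the post itself covers the real setup.

```json
{
  "mcpServers": {
    "production-traffic": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"],
      "env": { "API_KEY": "..." }
    }
  }
}
```

Once registered, the assistant can call the server's tools on demand to fetch real API responses and traffic details instead of guessing at them.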