
How to Monitor Network Performance for Call Centers (Remote & On-Site)

A customer calls to place an urgent order. Your agent's VoIP line cuts out mid-sentence. Is it their home connection? Your network? The ISP? The phone system? You have no visibility, and by the time you figure it out, the customer's gone. This is the reality for modern call centers, whether your agents work from a central office, from home, or split between both. Network issues don't just slow operations; they destroy customer experiences in real time.

Your Opsgenie Migration is the Path to Proactive Reliability

With the Opsgenie end-of-life deadline (April 5, 2027) fast approaching, you're facing a critical choice: Do you truly need to move your dedicated Incident Response workflow into the complexity of Jira Service Management (JSM) or Compass? If your current process is a reactive treadmill—plagued by alert fatigue, lost context, and constant non-critical paging—the mandated move risks replacing one chaotic toolset with another complex ITSM solution. View this not as a burden, but as a chance to build a standardized, human-centric workflow that solves your biggest pain points and transforms your response from chaos to control.

From Zero Tickets to High-ROI: AI + DEX in 2026 (w/ Samuele Gantner and Vedant Sampath)

Kicking off 2026, Tim and Tom welcome Nexthink Chief Product Officer Samuele Gantner and first-time guest CTO Vedant Sampath for a candid “three pillars” deep-dive on enterprise AI. They explore how AI is reshaping product and engineering: new tooling, new development cycles, and the shift from deterministic software to probabilistic agents—plus the critical role of evals, benchmarks, guardrails, and performance. Then they unpack Nexthink’s three-pillar framework.

The Context Engineering Framework: 3 Shifts for AI-Powered Dev Teams

You’ve probably already used AI today. Maybe you asked it to debug a function, generate a test case, or explain a legacy codebase you just inherited. But here’s the thing: you didn’t just type a question and get an answer. You explained your problem, shared background context, pasted code snippets, clarified what you meant, and refined the output until it was actually useful. In other words, you were context engineering.

A Recap of 2025

In the past, our yearly recaps were mostly about numbers: what we shipped, how much Spike grew, and a long list of stats. See past recaps: 2023, 2024. But 2025 felt different to me. It had many moments that shaped how Spike looks today, both as a product and as a company. Some of them were exciting, some were uncomfortable, and all of them changed how I think about building Spike. We’re still bootstrapped and operating lean, with a team of fewer than ten people.

What is OTLP and How It Works Behind the Scenes

If you have worked with observability tools in the last decade, you have likely managed, and been burned by, a fragmented collection of tools and libraries. Each observability signal required its own tool, data formats were incompatible, and signals had little or no correlation: log records would not link to traces, so you had to guess which traces produced which log events. The OpenTelemetry Protocol (OTLP) solves this by decoupling how telemetry is generated from where it is analyzed.
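To make the correlation point concrete, here is a minimal, purely illustrative sketch, not the actual OTLP protobuf wire format: OTLP's shared data model lets spans and log records carry the same `trace_id`, so a backend can join them instead of guessing. All names here (`Span`, `LogRecord`, `correlate`) are hypothetical stand-ins for the real OpenTelemetry data types.

```python
from dataclasses import dataclass
import secrets


def new_trace_id() -> str:
    # OTLP trace IDs are 16 random bytes; hex-encoded here for readability.
    return secrets.token_hex(16)


@dataclass
class Span:
    """Stand-in for an OTLP span: just the fields needed for correlation."""
    trace_id: str
    name: str


@dataclass
class LogRecord:
    """Stand-in for an OTLP log record carrying its originating trace_id."""
    trace_id: str
    body: str


def correlate(spans: list[Span], logs: list[LogRecord]) -> dict:
    """Group log records under the trace that emitted them --
    the join that was guesswork with disconnected tooling."""
    by_trace = {s.trace_id: {"span": s.name, "logs": []} for s in spans}
    for rec in logs:
        if rec.trace_id in by_trace:
            by_trace[rec.trace_id]["logs"].append(rec.body)
    return by_trace


# Usage: a span and a log record minted with the same trace_id line up.
tid = new_trace_id()
result = correlate(
    [Span(tid, "checkout")],
    [LogRecord(tid, "payment declined")],
)
```

With a shared identifier in the data model, the "which trace caused this log?" question becomes a lookup rather than a guess.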