
Latest Posts

Building On-call: Continually testing with smoke tests

With the release of On-call, our system’s reliability had to be solid from the outset. Our customers have high expectations of a paging product, and internally, we would not be comfortable releasing something we weren’t sure would perform under pressure. While our earlier product, Response, sat at the core of a customer’s incident response process after an incident was detected, On-call is now the first notification an engineer gets when something’s wrong.
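
As a flavor of what continual smoke testing can look like, here is a minimal Go sketch. It is an illustration, not our actual implementation: the endpoint, interval, and checks are hypothetical stand-ins for a loop that triggers a test page end-to-end and shouts when delivery fails.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // smokeTest asks the system to send a real test page and reports
    // whether delivery was confirmed. The endpoint is a hypothetical
    // stand-in, not incident.io's real API.
    func smokeTest(client *http.Client) error {
        resp, err := client.Post("https://example.com/internal/smoke/page", "application/json", nil)
        if err != nil {
            return fmt.Errorf("triggering test page: %w", err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("test page not delivered: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        client := &http.Client{Timeout: 30 * time.Second}
        // A paging product has to prove it still works while nothing else
        // is happening, so the check runs continually, not just at deploy.
        for range time.Tick(5 * time.Minute) {
            if err := smokeTest(client); err != nil {
                fmt.Println("smoke test failed:", err) // in production: page a human
            }
        }
    }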

Where does the time go after you resolve an incident?

We were curious: once an incident is over, how long does it take companies to document, review, capture learnings, finish clean-up items, and complete any other follow-up actions? We work with a wide variety of companies, from small start-ups to enterprises with thousands of engineers. But we wanted to know: where is their time spent after they resolve an incident? Here’s what we found!

How our data team handles incidents

Historically, data teams have not been closely involved in the incident management process (at least, not in the traditional “get woken up at 2AM by a SEV0” sense). But with the growing involvement of data (and therefore data teams) in core business processes, decision-making, and user-facing products, data-related incidents are increasingly common and more important than ever.

A tough day for incident responders: lessons from the CrowdStrike update

Today marks a particularly challenging day for incident responders across the globe. As many of you may have noticed, a recent update from CrowdStrike has triggered widespread disruptions, causing chaos in various sectors. The ripple effects have been far-reaching and severe. While the technical specifics of the issue might not be the focus here, and indeed there are experts better suited to dissect the cause, what’s crucial is understanding the impact on those who manage such crises.

Time, timezones, and scheduling

Our On-call product has been in the wild for a few months now, and in this post I want to talk about building a time-sensitive system and what we did to handle some of the challenges. I’ll cover what our scheduler is responsible for and the basics of working with time, and talk a bit about how we tested our system.
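
As a taste of those basics, here is a minimal Go sketch (the schedule itself is hypothetical) of a classic pitfall: advancing an on-call shift by an absolute duration instead of a calendar day gives the wrong handover time whenever the clocks change.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Hypothetical shift times for illustration. Europe/London moves
        // its clocks forward on 31 March 2024 at 01:00.
        loc, err := time.LoadLocation("Europe/London")
        if err != nil {
            panic(err)
        }
        shiftStart := time.Date(2024, time.March, 30, 9, 0, 0, 0, loc)

        // Adding an absolute duration drifts the wall-clock handover
        // across the DST change: 09:00 becomes 10:00.
        byDuration := shiftStart.Add(24 * time.Hour)

        // Adding a calendar day keeps the handover at 09:00 local time,
        // even though only 23 real hours elapse.
        byCalendar := shiftStart.AddDate(0, 0, 1)

        fmt.Println(byDuration) // 2024-03-31 10:00:00 +0100 BST
        fmt.Println(byCalendar) // 2024-03-31 09:00:00 +0100 BST
    }

Only 23 real hours pass between the two 09:00 handovers, which is exactly the kind of subtlety a scheduler has to be deliberate about.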

The complexity of phone networks

Arguably the most important part of an on-call product is knowing that you will be notified when things break, wherever you are. When it comes to SMS and phone call notifications, we have to leave the familiar realm of the internet and JSON responses, and deal with systems that provide limited observability and insight into what’s gone wrong.

Building a multi-platform on-call mobile app

A significant part of being on-call is the ability to respond to pages and handle escalations on the go. In the early stages of developing incident.io On-call, we considered whether a Minimum Viable Product (MVP) could rely solely on SMS and phone calls. However, we quickly realized that a fully featured mobile app was going to be essential to the on-call experience. This led us to the question: how should we build this mobile app?

Dear Customers, we couldn't have done it without you. With love, incident.io

We’re excited and honored (and might even be blushing a little) to share our Summer 2024 accolades from G2, including being ranked #1 in G2’s Relationship Index! Several factors go into determining this ranking. While all of these awards are special to us, Best Relationship means a lot because, well, our customers mean a lot.

Behind the scenes: Launching On-call

March 5th was a big day for incident.io as we released On-call to the world. Nine months of listening to our customers, coding, fixing, testing, and polishing came together for our biggest product launch to date. Releasing On-call was a huge milestone and represented the next step in our journey as a company.

Onboarding yourself as an engineer at incident.io

At incident.io we use infrastructure as code to configure everything we can, and we see no reason to exclude our own product from that. As well as configuring things like Google Cloud Platform, Sentry, and Spacelift via our infrastructure repo, we also configure incident.io itself. On your first day as an engineer here, the first PR you make is to our infrastructure repo.
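
To make that concrete, here is a hypothetical sketch of what a declarative entry in an infrastructure repo can look like; the types and fields are invented for illustration and are not our actual schema.

    package engineers

    // Engineer is a declarative record kept in the infrastructure repo.
    // CI validates the file and applies the change once the PR merges.
    type Engineer struct {
        Name       string
        GitHub     string
        Team       string
        OnCallable bool
    }

    // Your first PR: add yourself to this list.
    var All = []Engineer{
        {Name: "Ada Lovelace", GitHub: "ada", Team: "On-call", OnCallable: true},
    }

Treating people and product configuration as code means onboarding changes get the same review, history, and rollback story as everything else.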