
Latest Posts

Driving a customer-focused incident response process

Deep into an incident, Slack firing, up to your ears in decisions, not sure where to turn next? It’s easy for external communication with your customers to fall far down the list of priorities in these moments. However, these are the exact situations where comms are vital, and where underestimating their importance can have damaging and lasting effects on your organisation.

Tell the story of your incident with timeline curation

It isn’t the first time you’ve heard us say this and it won’t be the last: getting your post-incident process right is a game-changer. Being able to run effective debriefs and create useful postmortems helps us learn from our mistakes, respond better to future incidents and identify how we can build resilience in our product and teams. In short, it’s the thing that shifts the dial from just “fixing” to actually improving.

3 common pitfalls of post-mortems

Small confession: we currently use the term 'post-mortem' in incident.io despite preferring the term 'incident debrief'. Unless you have particularly serious incidents, the link to death here really isn’t helping anyone. However, we're optimising for familiarity, so we're sticking to the term 'post-mortem' here. Ask any engineer and they’ll tell you that a post-mortem is a positive thing (despite the scary name).

We've raised $34M to help organisations be resilient in the face of failure

TL;DR: We’ve raised $34M to bring increased resilience to organisations around the world. With this latest round of investment we’re expanding internationally in the US, accelerating our product plans, and growing our amazing team 🎉 As technology becomes more complicated and runs an ever greater part of our lives, failure becomes more inevitable, and more costly.

Making the wrong choice on build vs buy

A few years ago I’d just moved to London and started my first software job. I was having a great time building things and making new friends, and one evening a friend and I decided there was a new problem we wanted to solve: we really didn’t like the expenses software. We thought it was confusing and over-complex, and decided we could do better.

Tracing Gorm queries with OpenCensus & Google Cloud Tracing

At incident.io we use gorm.io as the ORM library for our Postgres database. It’s a really powerful tool, and one I’m very glad for after years of working with hand-rolled SQL in Go & Postgres apps. You may have seen from our other blog posts that we’re heavily invested in tracing, specifically with Google Cloud Tracing via OpenCensus libraries.
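As a rough illustration of that approach (a minimal sketch, not incident.io’s actual implementation), Gorm v2 callbacks can wrap each query in an OpenCensus span. The callback names, span name, and attribute key below are illustrative.

```go
package tracing

import (
	"go.opencensus.io/trace"
	"gorm.io/gorm"
)

// RegisterTracingCallbacks hooks a "before" and "after" callback around
// Gorm's query processor so every SELECT runs inside an OpenCensus span.
// Names like "gorm.query" and "sql.query" are illustrative choices.
func RegisterTracingCallbacks(db *gorm.DB) error {
	before := func(tx *gorm.DB) {
		// Start a child span of whatever trace is already on the context.
		ctx, span := trace.StartSpan(tx.Statement.Context, "gorm.query")
		tx.Statement.Context = ctx
		// Stash the span on this statement so the after-callback can end it.
		tx.InstanceSet("tracing:span", span)
	}
	after := func(tx *gorm.DB) {
		value, ok := tx.InstanceGet("tracing:span")
		if !ok {
			return
		}
		span := value.(*trace.Span)
		// Record the generated SQL as a span attribute, then close the span.
		span.AddAttributes(trace.StringAttribute("sql.query", tx.Statement.SQL.String()))
		span.End()
	}

	if err := db.Callback().Query().Before("gorm:query").Register("tracing:before_query", before); err != nil {
		return err
	}
	return db.Callback().Query().After("gorm:query").Register("tracing:after_query", after)
}
```

With an OpenCensus exporter for Google Cloud Tracing registered (for example the Stackdriver exporter), these spans then show up nested under the request’s trace alongside the rest of the application’s spans.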

What I learned from leading my first incident

A few weeks ago we had a major incident. We were releasing our Practical Guide to Incident Management, and after posting about it online, an incident.io employee noticed that the page wasn’t loading. Just to set the scene: I’ve been at incident.io for 3 months and had no experience of incidents in my previous role. When the team got paged I expected this to be one of those “follow along and learn how the wizards work their magic” exercises.

Uncovering the mysteries of on-call

For the vast majority of organisations, some form of round-the-clock cover is critical to successful business operations. On-call is an essential part of an effective incident response process, yet there is no commonly accepted playbook on how to most effectively structure and compensate on-callers. We ran a survey to uncover the mysteries of how on-call works in organisations of different shapes and sizes around the world.