
Developer environments should be cattle, not pets

“Cattle, not pets” is a DevOps phrase referring to servers that are disposable and automatically replaced (cattle), as opposed to indispensable and manually managed (pets). Local development environments should be treated the same way, and your tooling should make that as easy as possible. Here, I’ll walk through an example from one of my first projects at incident.io, where I reset my local environment a few times to keep us moving quickly.
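To make the idea concrete, here’s a simplified Go sketch of what a one-command reset could look like for a Docker Compose based setup. The file layout and the ./cmd/setup seed command are invented for illustration rather than our actual tooling.

```go
// reset.go: a sketch of treating the local environment as disposable ("cattle").
// The docker compose flags are standard; the ./cmd/setup seed step is a
// hypothetical placeholder for whatever re-creates schema and fixtures.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes a command, streams its output, and aborts the reset on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", name, args, err)
	}
}

func main() {
	// Tear everything down, volumes included, so no local state is treated as precious.
	run("docker", "compose", "down", "--volumes")
	// Bring fresh containers back up, then re-create the schema and seed data.
	run("docker", "compose", "up", "--detach")
	run("go", "run", "./cmd/setup", "--seed")
}
```

If resetting is this cheap, a corrupted database or a half-applied experiment never costs more than a coffee break.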

What are you learning from your incidents?

Think about this—what was the last incident that challenged you? Did you learn anything from it? It will be shocking to no one to hear that we deal with our fair share of incidents. These run the gamut from tiny bugs to significant outages (thankfully, the latter happen only very rarely 😮‍💨). Either way, we always take the time to learn from them in some way. This might look like changes to our response processes or revisiting systems we’re using.

Embracing the active user paradox

Question—when was the last time you purchased a new product and sat down to read the manual end-to-end before getting started? Put that question to a room of 10 people and you’d likely see one or two hands go up, even though reading first could save you time and preempt many of the questions you’re likely to ask. Herein lies the problem when it comes to creating a SaaS product.

Taking the fear out of migrations

Over the last 18 months at incident.io, we’ve done a lot of migrations. Often, a new feature requires a change to our existing data model. For us to be successful, it’s important that we can seamlessly transition from the old world to the new as quickly as possible. There are few things in software where I’d advocate a ‘one true way,’ but the closest I come is probably migrations. There’s a playbook that we follow to give us the best odds of a smooth switchover.
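To give a flavor of what a smooth switchover can involve, here’s a simplified Go sketch of the widely used expand-and-contract (dual-write) approach. The types and method names are invented for illustration and aren’t lifted from our codebase or playbook.

```go
// Package migration sketches the "expand and contract" pattern for
// zero-downtime data model changes. Every type and method here is a
// hypothetical illustration, not incident.io's actual code or schema.
package migration

import "context"

// LegacyStore writes to the old data model.
type LegacyStore interface {
	SaveOwner(ctx context.Context, incidentID, userID string) error
}

// NewStore writes to the new, more general data model.
type NewStore interface {
	SaveRoleAssignment(ctx context.Context, incidentID, role, userID string) error
}

// DualWriter keeps both models in sync while reads are gradually cut over.
type DualWriter struct {
	Legacy LegacyStore
	New    NewStore
}

// AssignOwner records the same fact in both schemas. Once a backfill has
// copied historic rows and all reads point at the new model, the legacy
// write (and eventually the old table) can be dropped: the "contract" step.
func (w DualWriter) AssignOwner(ctx context.Context, incidentID, userID string) error {
	if err := w.Legacy.SaveOwner(ctx, incidentID, userID); err != nil {
		return err
	}
	return w.New.SaveRoleAssignment(ctx, incidentID, "owner", userID)
}
```

The key property is that the system keeps working at every step, so the cutover can happen gradually and be rolled back if something goes wrong.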

Game Day: Stress-testing our response systems and processes

At incident.io, we deal with small incidents all the time—we auto-create them from PagerDuty on every new error, so we get several of these a day. As a team, we’ve mastered tackling these small incidents since we practice responding to them so often. However, like most companies, we’re less familiar with larger and more severe incidents—like the kind that affect our whole product, or a part of our infrastructure such as our database or event handling.

Making transparency a principle of your company's culture

You’ve probably heard the phrase “transparency is key” more than you can bear at this point—so let’s get this out of the way. Transparency is key. The phrase suddenly became that much more unbearable. But before you drop off, let me also communicate something else: transparency is often not enough. Often, companies make the mistake of leaning on transparency as a catchall solution to many of their internal comms issues.

Your non-technical teams should be using incident management tools, too

For many businesses across the world, incident management is something that’s usually left to engineers. These teams are on the front lines, declaring, managing, and resolving all sorts of incidents across the org, regardless of where they originate or what form they take. But there’s a glaring issue with this approach. Outside of technical teams, people across organizations aren’t accustomed to using, or trained to use, the word “incident” whenever an issue comes up.

Here's what to focus on when reviewing an incident

Incidents can be a bit noisy. Especially when one is of higher severity, there are a lot of moving parts that can make it difficult to come away with the information you want at a glance. And if you’re someone who isn’t necessarily tapped into the day-to-day of incident response, such as a department head or executive, you’ll want to be able to glean the most actionable information in just a few seconds without having to dig through dense documents.

How we approach integrations at incident.io

If you pick a random SaaS company out of a jar and go to their website, chances are they integrate with another tool. Typically, the end goal of integrations is to meet users in the middle by working with other tools they’re already using on a day-to-day basis. Put another way, integrations are a strategic business decision. But the question remains: why don’t companies just build a tool with similar functionality in order to make the product stickier?

Need your own incident post-mortem template? Here's ours

Having a dedicated incident post-mortem process is just as important as having a robust incident response plan. The post-mortem is key to understanding exactly what went wrong, why it happened in the first place, and what you can do to avoid it in the future. It’s an essential document, but many organizations either haphazardly put together post-incident notes that live in disparate places or don’t know where to start when creating their own post-mortems.