
Latest Posts

Troubleshooting Feature Flags with Komodor and Sentry

Komodor is a Kubernetes-native platform we’ve created to streamline troubleshooting. It was born out of the frustration we felt as developers when we were forced to waste hours on troubleshooting instead of focusing on what we really wanted to do: creating and innovating. Komodor sits on top of your K8s cluster and integrates with every existing tool you have, be it CI/CD, source control, monitoring, alerting, or communication.

Sentry's New Mobile App for Managing Releases

Once a year we let our imagination go wild for a whole week during our annual Hackweek event. It’s where we come up with product updates, like dark mode support, design them, and implement prototypes. The mobile engineering team came up with the idea for a Sentry mobile app that focuses on Release Health. We wanted to give developers a concise but comprehensive view of whether a release was healthy, errored, or experiencing abnormal crash sessions across multiple projects.

Instrumenting Our Frontend Test Suite (...and fixing what we found)

Here at Sentry, we like to dogfood our product as much as possible. Sometimes this results in unusual applications of our product, and sometimes those unusual applications pay off in a meaningful way. In this blog post, we’ll examine one such case, in which we used the Sentry JavaScript SDK to instrument Jest (which runs our frontend test suite), and look at how we addressed the issues that we found.
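
The post walks through the real setup; as a rough sketch of the idea, the snippet below shows what per-test instrumentation could look like, assuming the v6-era @sentry/node tracing API (Sentry.startTransaction) and a setup file loaded through Jest’s setupFilesAfterEnv. The file name, DSN source, and sample rate are illustrative placeholders rather than the exact code from our suite.

```ts
// jest.setup.ts -- illustrative; loaded via Jest's setupFilesAfterEnv
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN, // assumption: DSN supplied via an env var
  tracesSampleRate: 1.0,       // capture every test run while experimenting
});

let transaction: ReturnType<typeof Sentry.startTransaction> | undefined;

beforeEach(() => {
  // One transaction per test, named after the test Jest is about to run.
  transaction = Sentry.startTransaction({
    op: 'jest.test',
    name: expect.getState().currentTestName ?? 'unknown test',
  });
});

afterEach(() => {
  transaction?.finish();
});

afterAll(async () => {
  // Flush queued events before the Jest worker process exits.
  await Sentry.flush(2000);
});
```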

Root out the odd operation with Operations Breakdown

Transactions are sent when your service receives a request and sends a response, like an API call or a page load. Within each transaction is a series of operations. We built Operations Breakdown to help you, the developer, quickly see how much time was spent in each operation within a transaction. Why? Simple: so you can address the operations with the longest duration, which are most likely causing annoying performance issues for your customers.
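
As a hedged illustration of what those operations look like at the SDK level (not code taken from the post), the sketch below uses the v6-era @sentry/node tracing API: each child span carries an op value, and the breakdown sums the time spent per op within the finished transaction. The route and helper names are made up for the example.

```ts
import * as Sentry from '@sentry/node';

Sentry.init({ dsn: process.env.SENTRY_DSN, tracesSampleRate: 1.0 });

// Stand-ins for real work, so the example runs on its own.
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
const loadCart = () => sleep(40);    // pretend database query
const chargeCard = () => sleep(120); // pretend outbound HTTP call

async function handleCheckout() {
  // The transaction covers the whole request/response cycle.
  const transaction = Sentry.startTransaction({ op: 'http.server', name: 'POST /checkout' });

  const dbSpan = transaction.startChild({ op: 'db.query', description: 'SELECT cart items' });
  await loadCart();
  dbSpan.finish();

  const httpSpan = transaction.startChild({ op: 'http.client', description: 'charge payment provider' });
  await chargeCard();
  httpSpan.finish();

  // Operations Breakdown groups the finished spans by `op`, so the time
  // spent in db.query vs http.client is visible at a glance.
  transaction.finish();
}

handleCheckout().catch(console.error);
```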

Why Debugging JavaScript Sucks - And What You Can Do About It

What makes JavaScript great is also what makes it frustrating to debug. Its asynchronous nature makes it easy to manipulate the DOM in response to user events, but it also makes it difficult to locate problems. And JavaScript’s ubiquity has resulted in a variety of runtimes (e.g. Chromium’s V8, Safari’s JavaScriptCore, and Firefox’s SpiderMonkey), but having so many platforms can cause dizzying idiosyncrasies — all of which need to be supported equally.
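
To make the asynchronicity point concrete, here is a tiny, made-up example (not from the post): by the time the callback throws, the function that scheduled it has long since returned, so the stack trace starts at the timer callback rather than at the caller you actually care about.

```ts
function loadProfile(userId: string) {
  setTimeout(() => {
    // loadProfile has already returned by now, so this error's stack
    // trace begins here, in the timer callback, with no sign of the
    // click handler or route change that scheduled the work.
    throw new Error(`no profile for user ${userId}`);
  }, 100);
}

loadProfile('42');
```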

Jamstack, Next.js, Netlify, and Sentry: How The Pieces Fit

Jamstack (JavaScript + APIs + Markup) is a web architecture that combines the convenience of pre-built websites with the capacity to handle custom APIs and serverless functions. By separating the frontend UI from backend databases, Jamstack allows developers to structure their applications in ways that deliver dynamic content faster.
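
As a generic sketch of that split (ours, not drawn from any particular post), a Next.js page built this way fetches its data at build time and ships as static markup, while anything dynamic moves behind an API route or serverless function. The endpoint URL and field names below are placeholders.

```tsx
// pages/index.tsx -- the "Markup": pre-rendered at build time, served from a CDN
type Post = { id: string; title: string };

export async function getStaticProps() {
  const res = await fetch('https://api.example.com/posts'); // placeholder API
  const posts: Post[] = await res.json();
  return { props: { posts } };
}

export default function Home({ posts }: { posts: Post[] }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}

// pages/api/subscribe.ts -- the "APIs": a serverless function for the dynamic bits
// export default function handler(req, res) {
//   res.status(200).json({ subscribed: true });
// }
```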

Find the Root Cause Faster with Trace View and Trace Navigator

Like a bratty teenager, traditional monitoring answers your questions, but does so in a terse, unhelpful manner: Why is my page slow? Guess it’s the API call. It’s a 504 thing — you wouldn’t understand. Ok, so why is the API call slow? Ask your DB query. Gosh! You need a better conversation with your code — one which gives you contextual clues about your application’s performance.
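
Getting that better conversation started is mostly SDK configuration. As a hedged sketch using the 2021-era browser SDK (@sentry/browser plus @sentry/tracing), the BrowserTracing integration records a pageload transaction and attaches a sentry-trace header to outgoing requests, which is what lets Trace View stitch the page, the API call, and the DB query into a single trace. The DSN and sample rate below are placeholders.

```ts
import * as Sentry from '@sentry/browser';
import { Integrations } from '@sentry/tracing';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  integrations: [
    // Creates pageload/navigation transactions and propagates the
    // sentry-trace header on fetch/XHR calls, linking frontend and backend.
    new Integrations.BrowserTracing(),
  ],
  tracesSampleRate: 0.2, // sample a fraction of transactions
});
```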

Sentry Application Monitoring for Next.js

As you could probably tell from the title, we shipped an SDK for Next.js. This means you can capture errors, measure performance, manage releases, configure suspect commits, and automatically upload sourcemaps to view unminified JavaScript and TypeScript with zero(-ish) configuration. Why was Next.js next on our list? Well, it’s one of the fastest-growing React frameworks and developers love it.
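
That zero(-ish) configuration roughly boils down to two pieces, shown here as a hedged sketch rather than the exact wizard output: a Sentry.init call in the client and server config files, and a next.config.js wrapped with withSentryConfig so source maps are uploaded at build time. The environment variable name and sample rate are placeholders.

```ts
// sentry.client.config.ts / sentry.server.config.ts -- both reduce to an init call
import * as Sentry from '@sentry/nextjs';

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN, // placeholder env var
  tracesSampleRate: 0.2,                   // sample a fraction of transactions
});

// next.config.js -- wrapping the config hooks source map upload into the build
// const { withSentryConfig } = require('@sentry/nextjs');
// module.exports = withSentryConfig({ /* your existing Next.js config */ }, { silent: true });
```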

Better Alerts [as in, far more specific and just generally way better]

A couple of weeks back, we broke sign-ups. And in the most meta fashion, we learned about this because someone here had the foresight to set up an alert in Sentry to notify us if sign-ups dropped to zero. Getting alerted kicked off our incident response process. A team was formed to tackle “What broke?”, “How do we fix this?”, “How long has this been happening?”, “Are any other services impacted?”, and much more.