By Lance Erickson
Elixir’s “let it crash” philosophy is one of the best ideas in modern software design. Supervisors restart failed processes, the system self-heals, and life goes on. It’s like having a really good immune system. The problem is that a really good immune system can also hide chronic conditions. A GenServer crashing and restarting is working as designed.
By Lance Erickson
Phoenix LiveView is one of those technologies that feels like cheating. You get rich, interactive UIs without writing JavaScript, and the server handles the state. It’s elegant. But that elegance comes with a trade-off that’s easy to forget: all that interactivity runs on your server.
By Sarah Morgan
Rails applications have a specific set of performance challenges that make monitoring genuinely useful rather than just box-checking. ActiveRecord is convenient to use and also convenient to accidentally write N+1 queries with. Memory bloat in long-running processes, particularly when Sidekiq or Action Cable is involved, is a recurring production problem for a lot of teams. Background job performance tends to degrade quietly until it becomes noticeable.
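The N+1 shape is easy to reproduce even without Rails. Here's a minimal sketch with a simulated query counter — `comments_for` and `comments_for_all` are illustrative stand-ins, not ActiveRecord's API, but the access patterns mirror a naive `each` loop versus an eager-loaded `.includes`:

```ruby
# Simulated data layer: count "queries" to make the N+1 pattern visible.
QUERIES = { count: 0 }
COMMENTS = { 1 => %w[a], 2 => %w[b c], 3 => [] }

def comments_for(post_id)        # one query per post: the N+1 shape
  QUERIES[:count] += 1
  COMMENTS.fetch(post_id, [])
end

def comments_for_all(post_ids)   # one batched query, like .includes(:comments)
  QUERIES[:count] += 1
  COMMENTS.slice(*post_ids)
end

post_ids = [1, 2, 3]

QUERIES[:count] = 0
post_ids.each { |id| comments_for(id) }
puts QUERIES[:count]             # => 3 (one query per post, plus the one that loaded the posts)

QUERIES[:count] = 0
comments_for_all(post_ids)
puts QUERIES[:count]             # => 1 (single batched query)
```

The loop version grows linearly with the number of posts, which is exactly why it stays invisible in development (3 posts) and painful in production (30,000 posts).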
By Sarah Morgan
Last updated: March 2026

Python applications built on Django, Flask, FastAPI, and other frameworks have the same monitoring needs as applications built in any other language: you want to know which endpoints are slow, why the database is getting hammered, what errors are firing in production, and ideally all of that in a form that does not require three separate tools to reconstruct a single incident.
By Sarah Morgan
The error monitoring scene has changed a ton over the past few years. We've gone from basic exception tracking to fully integrated platforms that correlate errors with performance metrics and logs. We've even got AI-powered debugging! But in the midst of the AI explosion, some things remain unchanged: most teams are still drowning in data, with very little of it actionable.
By Quinn Milionis
With modern tooling and agentic coding assistants, straightforward bugs are almost a relief. If a test can catch it, or a user can reproduce it, chances are you can squash it quickly. The harder category, and the one worth writing about, is the bugs where everything looks correct: your code runs, no exceptions are thrown, your debug statements confirm the right functions fire at the right times, and yet nothing works.
By Jack Rothrock
A repository for this article can be found here. When most developers think about request tracing, they picture instrumentation hooks inside familiar libraries. These hooks track the familiar metrics you see in application performance monitoring (APM) tools, such as the duration of an HTTP call or how long a database query takes. But what if you could go deeper and instrument your own Ruby code automatically, without sprinkling timing calls everywhere?
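One way to do this is with Ruby's built-in TracePoint API, which fires on every Ruby-level method call and return. A minimal sketch — the `Worker` class and its methods are illustrative, and a production tracer would need to handle exceptions and threads:

```ruby
# Automatic method timing via TracePoint: no timing calls in the target code.
TIMINGS = Hash.new { |h, k| h[k] = [] }
stack = []

trace = TracePoint.new(:call, :return) do |tp|
  if tp.event == :call
    stack.push(Process.clock_gettime(Process::CLOCK_MONOTONIC))
  else
    started = stack.pop
    next unless started
    TIMINGS["#{tp.defined_class}##{tp.method_id}"] <<
      Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  end
end

# Plain application code -- nothing here knows it is being measured.
class Worker
  def slow_work
    sleep(0.05)
  end

  def fast_work
    1 + 1
  end
end

trace.enable do
  w = Worker.new
  w.slow_work
  w.fast_work
end

TIMINGS.each { |name, times| puts format("%s: %.4fs", name, times.sum) }
```

Because `:call`/`:return` events nest like a stack, pairing a return with the most recent start time is enough to recover per-method durations.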
By Sarah Morgan
Errors get a bad rap, but they're just trying to help. Remember, errors aren't the enemy; they're the messenger. Conventional wisdom tells you to think of errors as failures, as things that thwart progress and frustrate developers. In reality, errors are there to help you. They prevent you from shipping broken code to production. They stop your application from continuing to operate incorrectly and costing you money.
By Sarah Morgan
Average response time has become the default metric on many dashboards. It's easy to compute, easy to explain, and provides a single number to track over time. Of all the metrics available in application monitoring, this one feels closest to the actual user experience. But this simplicity can create a trap if you treat the average as a complete picture of system health. In fact, it’s really the starting point for a deeper investigation.
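To see why the average alone misleads, here's a small sketch with hypothetical response times (the numbers are invented for illustration — 95 fast requests and 5 very slow ones):

```ruby
# 95 requests at 100 ms, 5 requests at 3000 ms.
times_ms = Array.new(95, 100) + Array.new(5, 3000)

average = times_ms.sum.to_f / times_ms.size

# Simple nearest-rank percentile over sorted values.
def percentile(values, pct)
  sorted = values.sort
  sorted[((pct / 100.0) * (sorted.size - 1)).round]
end

p50 = percentile(times_ms, 50)
p99 = percentile(times_ms, 99)

puts "avg: #{average.round} ms"  # => 245 ms -- looks tolerable
puts "p50: #{p50} ms"            # => 100 ms -- the median user is fine
puts "p99: #{p99} ms"            # => 3000 ms -- 1 in 100 requests is awful
```

The average (245 ms) is more than double what the typical user experiences, yet it completely hides the 3-second tail — which is exactly where the angry support tickets come from.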
By Aspen Clevenger
Regular monitoring practices can emphasize application response time, but queue time is also often an early and important warning sign. If it rises, you'll quickly see downstream effects: tail latency, timeouts, and error spikes. This means that this metric can give you a head start on tackling app issues before they become user problems. In this post, we'll discuss queue time, how things can go off track, and practical steps to turn things around.
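Queue time is typically derived from a timestamp the front-end proxy or load balancer stamps on the request before it waits for a worker — often an `X-Request-Start` header. A minimal sketch, assuming the common `t=<unix seconds>` format (some proxies send milliseconds without a prefix, so check what yours actually emits):

```ruby
# Queue time = now - when the load balancer first saw the request.
# Header formats vary by proxy; this handles "t=<secs>" and bare epoch values.
def queue_time_ms(header_value, now = Time.now.to_f)
  return nil unless header_value
  stamp = header_value.sub(/\At=/, "").to_f
  stamp /= 1000.0 if stamp > 10_000_000_000  # heuristic: value was in ms
  ((now - stamp) * 1000).round(1)
end

started = Time.now.to_f - 0.250              # request sat in the queue ~250 ms
puts "queue time: #{queue_time_ms("t=#{started}")} ms"
```

Measured this way, queue time captures everything that happens before your application code runs — which is precisely why it spikes first when you're short on worker capacity.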
By Scout
3 key benefits of switching to Scout APM over New Relic: the N+1 Queries and Memory Bloat tabs show you easy performance enhancements.
By Scout
Keeping an eye on our app’s performance through monitoring.
By Scout
A short demo of Scout's database monitoring addon.

Monitoring for the modern development team.

No developer ever said, "I hope I get to spend all day hunting down a performance issue." When the unavoidable happens, the Scout platform is focused on finding the root cause of performance problems as quickly as possible.

Scout is monitoring for fast-moving dev teams like us. We leverage the tools that help us get big things done - GitHub, PaaS services, dynamic languages, frequent releases - to build a tailored monitoring platform for modern teams.

Scout continually tracks down N+1 database queries, sources of memory bloat, performance abnormalities, and more.

Get back to coding with Scout.