Last year we introduced Live Tail — the ability to see a live feed of all the logs in your system, in real time, within Kibana. Seeing a live stream of logs as they are emitted by the different processes in a monitored environment was a highly requested feature, and since its introduction we have received excellent feedback from users that has allowed us to improve Live Tail's core functionality.
Here at Honeycomb, we spend lots of time thinking about how to help our users be more awesome at unearthing insights from their data so they can solve production issues in real time. We think a lot about how to make running a query easy, and how to guide users to wield our Query Builder effectively to find the needles in the haystacks of data that they send us.
Looking to learn about Logstash as quickly as possible? This article is for you: we’ll install Logstash and push some Apache logs to Elasticsearch in less than 5 minutes.
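A minimal pipeline along those lines can be sketched as a Logstash configuration file: read an Apache access log from disk, parse each line with the grok filter's combined-log pattern, and index the result into a local Elasticsearch. The log path and Elasticsearch host here are assumptions for illustration.

```conf
# apache-pipeline.conf — minimal sketch, run with: bin/logstash -f apache-pipeline.conf
input {
  file {
    path => "/var/log/apache2/access.log"   # assumed Apache log location
    start_position => "beginning"
  }
}
filter {
  grok {
    # Parse the standard combined access-log format into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]             # assumed local Elasticsearch
  }
}
```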
Logging is an important part of understanding the behavior of your applications. Your logs contain essential records of application operations including database queries, server requests, and errors. With proper logging, you always have comprehensive, context-rich insights into application usage and performance. In this post, we’ll walk through logging options for Rails applications and look at some best practices for creating informative logs.
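As a taste of what such logging looks like in practice, here is a small sketch using Ruby's standard-library Logger, which `Rails.logger` wraps. The messages are hypothetical; logging goes to a StringIO buffer here so the output is easy to inspect, where a real application would log to stdout or a file.

```ruby
require "logger"
require "stringio"

buffer = StringIO.new
logger = Logger.new(buffer)
logger.level = Logger::INFO   # messages below INFO (e.g. DEBUG) are filtered out

logger.info("Processing by UsersController#show")   # recorded
logger.debug("SQL: SELECT * FROM users")            # suppressed at INFO level

puts buffer.string
```

Setting the level per environment (DEBUG in development, INFO or above in production) is the usual way to keep production logs informative without drowning them in query-level noise.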
In a previous post, we walked through how you can configure logging for Rails applications, create custom logs, and use Lograge to convert the standard Rails log output into a more digestible JSON format. In this post, we will show how you can forward these application logs to Datadog and keep track of application behavior with faceted log search and analytics, custom processing pipelines, and log-based alerting.
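For reference, the Lograge setup from that previous post boils down to a short initializer; this is a sketch assuming the `lograge` gem is in your Gemfile, with the JSON formatter selected so each request becomes a single machine-parseable line:

```ruby
# config/initializers/lograge.rb — sketch, assumes the lograge gem is installed
Rails.application.configure do
  config.lograge.enabled = true
  # Emit one JSON object per request instead of multi-line Rails output
  config.lograge.formatter = Lograge::Formatters::Json.new
end
```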
Today we are releasing Grafana 5.2.3 and Grafana 4.6.4. These patch releases include a very important security fix for all Grafana installations configured to use LDAP or OAuth authentication.