
Monitoring

The latest news and information on monitoring for websites, applications, APIs, infrastructure, and other technologies.

Automate and Virtualize the NOC: A Gannett/USA TODAY Network Case Study

Mission creep occurs when a project, having begun and gained momentum, gradually grows beyond its original, intended scope. One day you wake up and realize that, instead of an efficient, manageable project, you’ve got a monster on your hands. For enterprises in the midst of dynamic growth, IT infrastructure is often beset by mission creep: the incumbent organization acquires smaller operations, integrates their technology, and soon things are out of control.

How to Perform Python Remote Debugging

Debugging is the process of identifying, analyzing, and removing errors in software. It can begin at any stage of development, as soon as code has been written. Sometimes remote debugging is necessary. In the simplest terms, remote debugging means debugging an application running in a remote environment, such as production or staging.
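The article's own recipe isn't reproduced here, but the core idea behind remote debugging can be sketched with nothing beyond the standard library: bind a pdb session's stdin/stdout to a TCP socket so a client on another machine can attach with telnet or netcat. (Class, host, and port names below are illustrative; in practice, purpose-built tools such as debugpy are the more common choice.)

```python
import pdb
import socket
import sys

class RemotePdb(pdb.Pdb):
    """Serve a standard pdb session over TCP (illustrative sketch).

    A client attaches with e.g. `telnet 127.0.0.1 4444` and gets the
    familiar (Pdb) prompt.
    """

    def __init__(self, host="127.0.0.1", port=4444):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind((host, port))
        listener.listen(1)
        conn, _ = listener.accept()      # block until a debugging client attaches
        handle = conn.makefile("rw")
        # nosigint=True: SIGINT handlers cannot be installed off the main thread
        super().__init__(completekey="tab", stdin=handle, stdout=handle,
                         nosigint=True)

def set_trace(host="127.0.0.1", port=4444):
    """Drop into a remote pdb session at the call site."""
    RemotePdb(host, port).set_trace(sys._getframe().f_back)
```

This works because `pdb.Pdb` accepts arbitrary `stdin`/`stdout` file objects, and `socket.makefile()` produces exactly that; the usual commands (`n`, `s`, `c`, `p expr`) then travel over the wire.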

How to Clear Up Alert Storms by 90%

Alerts are notifications from AIOps monitoring tools indicating an anomaly. IT teams receive these alerts on their monitoring dashboards, via email, or through enterprise collaboration tools such as Slack or Teams. Service level agreements require IT teams to analyze every alert within a specific timeframe and take appropriate action.
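One common way alert storms get collapsed, which the headline's "90%" figure hints at, is correlating raw alerts into incidents by fingerprint and time window. The sketch below shows that idea in miniature; field names are illustrative, and real AIOps tools use far richer correlation (topology, ML clustering).

```python
from collections import defaultdict

def correlate(alerts, window=300):
    """Collapse an alert storm into incidents (illustrative sketch).

    Alerts are dicts with 'ts' (epoch seconds), 'source', and 'check'
    keys. Alerts sharing a (source, check) fingerprint and arriving
    within `window` seconds of the previous one fold into one incident.
    """
    per_key = defaultdict(list)            # fingerprint -> list of incidents
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        key = (alert["source"], alert["check"])
        incidents = per_key[key]
        if incidents and alert["ts"] - incidents[-1][-1]["ts"] <= window:
            incidents[-1].append(alert)    # same burst: dedupe into it
        else:
            incidents.append([alert])      # gap too large: new incident
    return [i for groups in per_key.values() for i in groups]
```

A burst of ten identical CPU alerts from one host within a few minutes becomes a single incident, so an on-call engineer triages one ticket instead of ten.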

How to Control Alert Fatigue

Alerts are indispensable to any IT operations system today. Site reliability engineers (SREs) and ITOps executives set up several monitoring tools for their IT landscape. When there is a change, high-risk action, or outage in any of these systems, the monitoring tool triggers an automated alert. This can appear on the monitoring tool’s dashboard itself, arrive via email, or come through enterprise collaboration tools like Slack or Teams.

The Five Data Pillars of Effective Root-Cause Analysis

The most effective way to understand an incident, resolve it, and prevent it from recurring is root-cause analysis. Simply put, root-cause analysis is the study performed by ITOps teams or site reliability engineers (SREs) to pinpoint the exact element or error that caused the unexpected behavior. Based on this, they plan remediation. Accurate and timely root-cause analysis can have a direct impact on the company’s top and bottom line.

Spotting and Avoiding Database Drift

Managing any database ecosystem is difficult enough: taking backups, maintaining statistics, and tuning performance all tax the time of the DBA or database developer. The job is complex even without considering the work required to manage the various schema and data drifts that can occur. Unless you operate in a vacuum or within a single-person organization (and even then, schema drift can occur), drift is going to manifest naturally, and increasingly so as the environment expands.
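Spotting schema drift comes down to diffing what you expect against what is actually deployed. A minimal sketch of that comparison, assuming schemas are modeled as `{table: {column: type}}` dicts (real tooling would read these from the catalog or a migration baseline):

```python
def diff_schemas(expected, actual):
    """Report drift between an expected schema and the live one.

    Both arguments map table name -> {column name: column type}.
    Returns a list of human-readable drift findings (illustrative format).
    """
    drift = []
    for table, cols in expected.items():
        live = actual.get(table)
        if live is None:
            drift.append(f"missing table: {table}")
            continue
        for col, typ in cols.items():
            if col not in live:
                drift.append(f"{table}: missing column {col}")
            elif live[col] != typ:
                drift.append(f"{table}.{col}: type drifted {typ} -> {live[col]}")
        for col in live:
            if col not in cols:
                drift.append(f"{table}: unexpected column {col}")
    for table in actual:
        if table not in expected:
            drift.append(f"unexpected table: {table}")
    return drift
```

Run on a schedule, an empty report means the environment still matches its baseline; anything else is drift to investigate before it bites a deployment.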

Metrics now generally available in Honeycomb

Starting today, Honeycomb Metrics is generally available to all Enterprise customers. You’ve adopted our event-based observability practices, in part to overcome the debugging roadblocks you hit when using custom metrics to identify application issues. But metrics still provide value at the systems level. Now you can easily see and use your metrics data alongside your event data in Honeycomb—all in one interface.

Tracing AWS Lambdas with OpenTelemetry and Elastic Observability

OpenTelemetry is an effort to combine distributed tracing, metrics, and logging into a single set of system components and language-specific libraries. OpenTelemetry recently became a CNCF incubating project, and it already enjoys significant community and vendor support. OpenTelemetry defines itself as “an observability framework for cloud-native software”, although it should be able to cover more than what we know as “cloud-native software”.
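The span model that OpenTelemetry formalizes—a trace is a tree of timed spans sharing a trace ID, with each child recording its parent's span ID—can be illustrated with a stdlib-only toy tracer. To be clear, this is not the OpenTelemetry API (which you would get from the `opentelemetry` packages); the class and method names here are purely illustrative.

```python
import contextlib
import time
import uuid

class Span:
    """One timed operation in a trace (toy model, not the OTel Span API)."""
    def __init__(self, name, trace_id, parent_id=None):
        self.name = name
        self.trace_id = trace_id                 # shared by the whole trace
        self.span_id = uuid.uuid4().hex[:16]     # unique per span
        self.parent_id = parent_id               # links child to parent
        self.start = self.end = None

class Tracer:
    """Toy tracer: nested start_span calls build a parent/child tree."""
    def __init__(self):
        self.finished = []
        self._stack = []

    @contextlib.contextmanager
    def start_span(self, name):
        parent = self._stack[-1] if self._stack else None
        span = Span(
            name,
            trace_id=parent.trace_id if parent else uuid.uuid4().hex,
            parent_id=parent.span_id if parent else None,
        )
        span.start = time.monotonic()
        self._stack.append(span)
        try:
            yield span
        finally:
            span.end = time.monotonic()
            self._stack.pop()
            self.finished.append(span)           # export on completion
```

In a Lambda setting, the handler would open a root span and each downstream call (DynamoDB, S3, another service) a child span; an exporter would then ship the finished spans to a backend such as Elastic. OpenTelemetry's real value is doing exactly this with standardized context propagation across process and service boundaries.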