
Latest News

Data Quality Explained: Why Quality Is Critical to Using Your Data

Much like wine (😉), having data doesn’t mean you have quality data. Today it’s easier than ever to get data on almost anything, but that doesn’t mean the data is inherently good, let alone information or knowledge you can act on. In many cases, bad data is worse than no data, because it can easily lead to false conclusions. So how do you know whether your data is reliable and fit for use? That is the question data quality answers.
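(Illustration, not from the article: below is a minimal sketch of the kind of automated checks teams run to surface quality problems, such as missing values and duplicate rows, before drawing conclusions from a dataset. The DataFrame and column names are invented.)

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame) -> dict:
    """Return a few simple data-quality signals for a DataFrame."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_fraction_by_column": df.isna().mean().round(3).to_dict(),
    }

# A tiny, made-up dataset with obvious quality problems: missing values and a duplicate row.
df = pd.DataFrame({
    "host": ["web-1", "web-2", "web-2", None],
    "cpu_pct": [12.5, None, None, 93.0],
})
print(basic_quality_report(df))
```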

Grafana 9.2: Create, edit queries easier with the new Grafana Loki query variable editor

As part of the Grafana 9.2 release, we’re making it easier to create dynamic and interactive dashboards with a new and improved Grafana Loki query variable editor. Templating is a great option if you don’t want to deal with hard-coding certain elements in your queries, like the names of specific servers or applications. Previously, you had to remember and enter specific syntax in order to run queries on label names or values.
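(Illustration, not from the article: a label-based variable query ultimately asks Loki for its label names and values. The sketch below does that directly against Loki’s HTTP label endpoints, assuming a local instance at localhost:3100; it is not the Grafana variable editor itself.)

```python
import requests

LOKI_URL = "http://localhost:3100"  # assumed local Loki instance

def label_names():
    """Fetch every label name Loki knows about (what a label-name variable lists)."""
    resp = requests.get(f"{LOKI_URL}/loki/api/v1/labels", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]

def label_values(name):
    """Fetch the values of one label (what a label-value variable lists)."""
    resp = requests.get(f"{LOKI_URL}/loki/api/v1/label/{name}/values", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]

if __name__ == "__main__":
    print(label_names())        # e.g. ['app', 'filename', 'job', ...]
    print(label_values("job"))  # e.g. ['nginx', 'api-server', ...]
```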

The Human Element of Tech Development

Opportunities for growth are all around us, but it takes openness and an eager growth mindset to see them. In this episode, David Noblet, Co-Founder + Chief Architect at ChaosSearch, shares how he and his team find innovative ways to improve digital services for their clients by constantly taking inspiration from their daily lives.

Pipeline Profiling: Or How I Learned to Stop Worrying and Isolate the Problem

It’s that time of year again! If you’re not a procrastinator, you’ve probably already blown out your sprinklers for winter and are looking forward to the snow and holidays ahead. Well done, irrigation purists! I, on the other hand, am an Olympic-level procrastinator and will usually wait until the last moment, when NWS forecasts 10″ of snow overnight, and then frantically search for my air compressor.

Reduce Data Costs: Log Sampling with OpenTelemetry and BindPlane OP

Redundant logs are a common nuisance in observability pipelines of all kinds. In large environments, excess logs can drive data costs to unsustainable levels. Log sampling randomly selects a representative subset of those logs, preserving the same valuable insight with dramatically reduced data flow. Configuring agents in a pipeline to sample logs appropriately can be a pain. Pipeline managers like BindPlane OP make that process simple and scalable.
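(Illustration, not from the article: a minimal, tool-agnostic sketch of probabilistic log sampling. It is not BindPlane OP or OpenTelemetry configuration, just a demonstration of keeping a random fraction of a log stream.)

```python
import random

def sample_logs(records, rate, seed=None):
    """Yield roughly `rate` (0.0-1.0) of the incoming log records, chosen at random."""
    rng = random.Random(seed)
    for record in records:
        if rng.random() < rate:
            yield record

# Keep ~10% of a noisy, repetitive log stream.
logs = [f"GET /healthz 200 ({i})" for i in range(1000)]
kept = list(sample_logs(logs, rate=0.10, seed=42))
print(f"kept {len(kept)} of {len(logs)} records")
```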

Edge + AppScope: Unlocking New Insights You Didn't Know Existed Was Never This Easy!

The moment has finally arrived! “Yes, I do!” “Yes, I do!” With great joy, I now introduce to you the newly married Edge and AppScope! As they begin the journey of a lifetime, let’s give it up for this power couple! Together they offer auto-discovery, central management, high scalability, high-fidelity data collection, and rich observability.

Choosing an Observability Pipeline

An observability pipeline is a tool or process that centralizes data ingestion, transformation, correlation, and routing across a business. Production engineers across ITOps, Development, and Security teams use these pipelines to transform their telemetry data more efficiently and cost-effectively and to drive critical decisions. Businesses of all sizes can enjoy several benefits and gain a significant competitive advantage by implementing an observability pipeline.
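(Illustration, not from the article: a conceptual sketch of the ingest → transform → route flow described above. The transforms, team names, and sinks are invented stand-ins, not any product’s API.)

```python
def run_pipeline(events, transforms, routes):
    """Apply every transform to each event, then route it by its 'team' field."""
    for event in events:
        for transform in transforms:
            event = transform(event)
        sink = routes.get(event.get("team"), routes["default"])
        sink(event)

# Transforms: drop a noisy field, then tag the owning team.
strip_debug = lambda e: {k: v for k, v in e.items() if k != "debug"}
tag_team = lambda e: {**e, "team": "security" if e.get("source") == "auth" else "itops"}

# Routes: made-up sinks standing in for a SIEM, a metrics backend, and cold storage.
routes = {
    "security": lambda e: print("-> SIEM:", e),
    "itops":    lambda e: print("-> metrics backend:", e),
    "default":  lambda e: print("-> archive:", e),
}

run_pipeline(
    [{"source": "auth", "msg": "login failed", "debug": "trace..."},
     {"source": "nginx", "msg": "GET / 200"}],
    transforms=[strip_debug, tag_team],
    routes=routes,
)
```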

ELK Review: ELK vs. MetricFire

CPU, memory use, latency, network bandwidth. These are just some of the monitoring metrics businesses analyze for security and performance. But successful data-driven organizations delve deeper than this. These companies probe millions of real-time metrics for unexpected insights and predict outcomes weeks, months, and years into the future. ELK helps them do this. It's a data analytics platform from open-source developer Elastic.