

The Dark Side of Real-Time: Privacy Concerns and Ethical Dilemmas in Retail

The retail sector's increasing use of real-time data analytics is a double-edged sword in our privacy-first world. On one hand, it is a promising new revenue opportunity that leverages what retailers, by default, can already access. On the other, if retailers want to build trust with customers, they must be transparent about data collection and usage. This blog provides actionable guidance on gaining valuable insights from data while protecting customer privacy.

Incident Template Library

We recently announced a new feature to enhance how you communicate with your users during maintenance, incidents, and general service updates. Status Page Templates lets you save and re-use status updates. But how do you know which incidents might happen, or which updates your users will need, before it's too late? We have put together a library of ready-to-use templates designed to keep your users informed with clear, concise, and consistent messaging.
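As an illustration, an incident template might be structured like the following (this example is ours, not taken from the library):

```
Title:   Investigating elevated API error rates
Status:  Investigating
Message: We are aware of elevated error rates affecting [affected service]
         and are actively investigating. We will post our next update
         within 30 minutes.
```

Saving a handful of templates like this one ahead of time means responders only fill in the bracketed specifics during an incident, rather than drafting messaging under pressure.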

The Need for Speed: Highlights from IBM and Catchpoint's Global DNS Performance Study

Despite DNS being the backbone of Internet connectivity, reliable metrics for benchmarking DNS performance are surprisingly scarce. This gap often leaves IT teams navigating in the dark, unable to effectively gauge how their DNS configurations stack up against industry standards. To address this pressing need, Catchpoint worked with IBM NS1 Connect to provide a clear, data-driven picture of DNS performance.

Getting Started With Refinery: Rules File Template

Sampling is a necessity for applications at scale. We at Honeycomb sample our data through the use of our Refinery tool, and we recommend that you do too. But how do you get started? Do you simply set a rate for all data and a handful of drop and keep rules, or is there more to it? What do these rules even mean, and how do you implement them? To answer these questions, let’s look at a rules file template that we give customers when they first try out Refinery.
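To make the idea concrete, here is a minimal sketch in the shape of a Refinery v2 rules file. The dataset name, fields, and rates below are illustrative, not the template from the post; Honeycomb's Refinery documentation is the authoritative reference for the exact keys and operators:

```yaml
RulesVersion: 2
Samplers:
  # Fallback for any dataset without its own sampler: keep 1 in 10 traces.
  __default__:
    DeterministicSampler:
      SampleRate: 10

  # Hypothetical dataset with explicit keep/drop rules.
  production:
    RulesBasedSampler:
      Rules:
        - Name: keep all server errors
          SampleRate: 1
          Conditions:
            - Field: http.status_code
              Operator: ">="
              Value: 500
        - Name: drop health checks
          Drop: true
          Conditions:
            - Field: http.route
              Operator: "="
              Value: /healthz
        - Name: sample everything else heavily
          SampleRate: 100
```

The general pattern is the useful part: keep errors at full fidelity, drop pure noise, and apply an aggressive rate to the high-volume happy path.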

What Small and Medium-sized Businesses Should Look for in a Data Lake

Data is wealth. Extracting insights from data is valuable for any organization—data aids in making informed decisions, optimizing operations and costs, and understanding how customers behave. However, reaping the benefits of data requires an investment in the right tools, resources, and people — something smaller organizations may not have the means to do.

The 4 Types Of Cloud Computing: Choosing The Best Model

There are four main cloud delivery models (cloud computing services) and four major cloud deployment models (types of cloud computing). Each model is unique and differs in its impact on cloud management, data security, and cost management, among other considerations. Here’s exactly what each cloud computing model provides and how to choose the best option for your needs.

Log Shipper - What Is It and Top 7 Tools

Centralizing logs (collecting all records in one place) is often challenging, because you need to decide whether to use a log shipper or to log directly from the application. If you are not familiar with log shippers, logging directly from your logging library might be a suitable option for development, since it is easy to configure. In production, however, you'll likely want to use one of the available log shippers, mainly because of their buffers: blocking the application or immediately dropping data may not be an option.
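To illustrate why buffering matters, here is a minimal sketch of the batching pattern that log shippers use (a hypothetical class, not the API of any real shipper; production tools like Fluentd or Vector add retries, backpressure, and disk-backed buffers on top of this):

```typescript
// A non-blocking, batched log sender: the application only ever
// appends to an in-memory buffer, so it is never blocked on I/O.
class BufferedLogShipper {
  private buffer: string[] = [];

  constructor(
    // send() represents the slow network call to the log backend.
    private readonly send: (batch: string[]) => void,
    private readonly maxBatch = 3,
  ) {}

  // log() appends and flushes only when a full batch has accumulated.
  log(line: string): void {
    this.buffer.push(line);
    if (this.buffer.length >= this.maxBatch) this.flush();
  }

  // flush() hands the whole batch to the sender and clears the buffer.
  flush(): void {
    if (this.buffer.length === 0) return;
    this.send(this.buffer);
    this.buffer = [];
  }
}

// Collect batches instead of sending them, to show the behavior.
const batches: string[][] = [];
const shipper = new BufferedLogShipper((b) => batches.push([...b]));
shipper.log('a');
shipper.log('b');
shipper.log('c'); // third line fills the batch and triggers a flush
shipper.log('d');
shipper.flush(); // drain the remainder, e.g. on shutdown
```

Logging directly from the application would instead make that slow `send()` call on every log line, which is exactly the blocking (or data-dropping) behavior the teaser warns about.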

Implementing OpenTelemetry in React Applications

OpenTelemetry is an open-source project under the Cloud Native Computing Foundation (CNCF) that aims to standardize the generation and collection of telemetry data. It can be used to trace React applications for performance issues and bugs: you can trace user requests from your frontend web application to your downstream services. Using the OpenTelemetry Web libraries, you can instrument your React apps to generate tracing data.