Latest News

A Product Manager's Insights from KubeCon + CloudNativeCon Europe 2024

I recently had the privilege of attending KubeCon + CloudNativeCon Europe 2024 in Paris. The conference, hosted by the Linux Foundation, marked the 10th anniversary of Kubernetes. Here are the key takeaways and highlights from the conference.

Observability for Everyone

What do you need to achieve observability? Who you ask and the role they hold will influence the answer, but the answer likely follows this pattern: “All you need is X,” where X is how you define observability. I cannot disagree with this logic. A specific use case may only need a specific type of telemetry. Experience and expertise allow engineers to quickly answer questions about a system without expanding into adjacent data types.

Hybrid Observability for health and life sciences: Top 6 challenges and how monitoring can help

As the healthcare industry has introduced more complex IT infrastructure, it faces many challenges in delivering high-quality services to patients. From remote work and telemedicine to resource constraints, healthcare organizations must continually adapt to new technologies. Nascent technologies such as remote patient triage, telemedicine, and IoT have seen accelerating innovation as the industry pivots to treating patients remotely.

How Lack of Knowledge Among Teams Impacts Observability

Without a doubt, you’ve heard about the persistent talent gap that has troubled the technology sector in recent years. It’s a problem that isn’t going away, plaguing everyone from engineering teams to IT security pros, and if you work in the industry today you’ve likely experienced it somewhere within your own teams. Despite major changes in the tech landscape, it is clear that organizations are still having significant difficulty keeping their technical talent in-house.

Mastering Observability with OpenSearch: A Comprehensive Guide

Observability is the ability to understand the internal workings of a system by measuring and tracking its external outputs. In technical terms, it entails collecting and examining data from numerous sources within a system to attain insights into its behavior, performance, and health. All organizations are now familiar with how essential observability is to ensure optimal performance and availability of their IT infrastructure.
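As a minimal sketch of the "collecting data from numerous sources" step, the snippet below shapes one telemetry event for indexing. The index name, field layout, and service names are examples, not prescribed by OpenSearch; with a live cluster you would typically use the official `opensearch-py` client.

```python
import json
from datetime import datetime, timezone

def build_log_event(service: str, level: str, message: str) -> dict:
    """Shape one observability event for indexing into a search backend."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service,
        "level": level,
        "message": message,
    }

# With opensearch-py this would be roughly (hosts/index are placeholders):
#   from opensearchpy import OpenSearch
#   client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
#   client.index(index="logs-2024.04", body=event)

event = build_log_event("checkout", "ERROR", "payment gateway timeout")
payload = json.dumps(event)
```

Keeping the event-shaping step as a plain function makes it easy to unit test before any cluster is involved.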

Introducing Relational Fields

We’re excited to bring you relational fields, a new feature that allows you to query spans based on their relationship to each other within a trace. Previously, queries considered spans in isolation: You could ask about field values on spans and aggregate them based on matching criteria, but you couldn’t use any qualifying relationships about where and how the spans appear in a trace.
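As a rough illustration of the idea (not the product's actual query syntax), a relational query filters spans by properties of a *related* span in the same trace rather than by the span's own fields alone. The toy model below finds spans whose parent matches a predicate:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Span:
    span_id: str
    parent_id: Optional[str]
    name: str
    duration_ms: float

def child_spans_of(spans, parent_predicate):
    """Return spans whose parent span (in the same trace) matches a predicate.

    A toy sketch of a 'relational' query: spans qualify by their relationship
    to another span, not by their own fields in isolation.
    """
    by_id = {s.span_id: s for s in spans}
    return [
        s for s in spans
        if s.parent_id is not None
        and s.parent_id in by_id
        and parent_predicate(by_id[s.parent_id])
    ]

trace = [
    Span("a", None, "http.request", 120.0),
    Span("b", "a", "db.query", 80.0),
    Span("c", "b", "serialize_row", 5.0),
]

# "Find spans whose parent is a database call" -- a question a span-in-isolation
# query cannot express.
db_children = child_spans_of(trace, lambda p: p.name == "db.query")
```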

Redis Monitoring Performance Metrics

Monitoring Redis, an in-memory data structure store, is crucial to ensure its performance, availability, and efficient resource utilization. By tracking metrics such as command latency, memory usage, CPU utilization, and throughput, you can identify areas for optimization and fine-tune your Redis configuration for optimal performance.
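The metrics named above can be derived from Redis's `INFO` output. Below is a sketch that summarizes an `INFO` snapshot; the connection details in the comment are examples, and only standard `INFO` field names are assumed:

```python
def summarize_redis_info(info: dict) -> dict:
    """Derive headline health metrics from a Redis INFO snapshot."""
    hits = info.get("keyspace_hits", 0)
    misses = info.get("keyspace_misses", 0)
    lookups = hits + misses
    return {
        "used_memory_mb": info.get("used_memory", 0) / (1024 * 1024),
        "ops_per_sec": info.get("instantaneous_ops_per_sec", 0),  # throughput
        "hit_ratio": hits / lookups if lookups else None,
        "connected_clients": info.get("connected_clients", 0),
    }

# Against a live server, with the redis-py client, this would be roughly:
#   import redis
#   r = redis.Redis(host="localhost", port=6379)
#   summary = summarize_redis_info(r.info())

sample = {
    "used_memory": 52_428_800,          # 50 MiB
    "instantaneous_ops_per_sec": 1200,
    "keyspace_hits": 900,
    "keyspace_misses": 100,
    "connected_clients": 8,
}
summary = summarize_redis_info(sample)
```

A falling hit ratio or climbing `used_memory_mb` is exactly the kind of signal that prompts the configuration fine-tuning described above.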

Control your log volumes with Datadog Observability Pipelines

Modern organizations face a challenge in handling the massive volumes of log data—often scaling to terabytes—that they generate across their environments every day. Teams rely on this data to help them identify, diagnose, and resolve issues more quickly, but how and where should they store logs to best suit this purpose? For many organizations, the immediate answer is to consolidate all logs remotely in higher-cost indexed storage to ready them for searching and analysis.
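The core decision a log pipeline makes can be sketched as a routing function: keep high-value logs in searchable indexed storage and send the rest to cheaper archive storage. This is an illustration of the concept, not Datadog's actual API, and the levels and sampling rule are example policy:

```python
# Severity levels that always warrant indexed (searchable) storage -- example policy.
ERROR_LEVELS = {"ERROR", "CRITICAL"}

def route_log(record: dict) -> str:
    """Decide where a log record should be stored."""
    if record.get("level") in ERROR_LEVELS:
        return "indexed"          # searchable, higher-cost storage
    if record.get("sampled", False):
        return "indexed"          # keep a sample of routine logs searchable
    return "archive"              # low-cost object storage

logs = [
    {"level": "INFO", "msg": "health check ok"},
    {"level": "ERROR", "msg": "payment failed"},
    {"level": "DEBUG", "msg": "cache miss", "sampled": True},
]
destinations = [route_log(r) for r in logs]
```

Routing at the pipeline, before ingestion, is what lets teams avoid consolidating every log into high-cost indexed storage by default.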

Aggregate, process, and route logs easily with Datadog Observability Pipelines

The volume of logs generated from modern environments can overwhelm teams, making it difficult to manage, process, and derive measurable value from them. As organizations seek to manage this influx of data with log management systems, SIEM providers, or storage solutions, they can inadvertently become locked into vendor ecosystems, face substantial network costs and processing fees, and run the risk of sensitive data leakage.
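One processing step that addresses the sensitive-data-leakage risk is scrubbing logs before they leave your infrastructure. The sketch below redacts email addresses; the pattern and placeholder text are illustrative, not a vendor's built-in rule:

```python
import re

# Example pattern: email addresses. Real pipelines apply a library of such
# rules (API keys, card numbers, etc.) before forwarding logs downstream.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(message: str) -> str:
    """Redact email addresses from a log message before it is shipped."""
    return EMAIL_RE.sub("[REDACTED_EMAIL]", message)

clean = scrub("login failed for jane.doe@example.com")
```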

Dual ship logs with Datadog Observability Pipelines

Organizations often adjust their logging strategy to meet their changing observability needs for use cases such as security, auditing, log management, and long-term storage. This process involves trialing and eventually migrating to new solutions without disrupting existing workflows. However, configuring and maintaining multiple log pipelines can be complex. Enabling new solutions across your infrastructure and migrating everyone to a shared platform requires significant time and engineering effort.
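The mechanics of dual shipping reduce to fanning each record out to every configured destination, so a candidate platform can be trialed alongside the incumbent without disrupting existing workflows. A toy sketch, with illustrative destination names and in-memory senders standing in for real backends:

```python
class Destination:
    """Stand-in for a log backend; a real sender would ship over the network."""
    def __init__(self, name: str):
        self.name = name
        self.received = []

    def send(self, record: dict) -> None:
        self.received.append(record)

def dual_ship(record: dict, destinations) -> None:
    """Forward one log record to all destinations (existing + trial)."""
    for dest in destinations:
        dest.send(record)

current = Destination("incumbent-log-platform")
trial = Destination("candidate-log-platform")

for rec in [{"msg": "user login"}, {"msg": "disk full"}]:
    dual_ship(rec, [current, trial])
```

Because both backends receive identical streams, teams can compare search results and costs side by side before committing to a migration.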