
Latest News

Elastic Common Schema and OpenTelemetry - A path to better observability and security with no vendor lock-in

At KubeCon Europe, it was announced that Elastic Common Schema (ECS) has been accepted by OpenTelemetry (OTel) as a contribution to the project. The goal is to achieve convergence of ECS and OpenTelemetry’s Semantic Conventions (SemConv) into a single open schema that is maintained by OpenTelemetry. This FAQ details Elastic’s contribution of Elastic Common Schema to OpenTelemetry, how it will help drive the industry to a common schema, and its impact on observability and security.

Increasing Implications: Adding Security Analysis to Kubernetes 360 Platform

A quick look at the headlines coming out of this year’s sold-out KubeCon + CloudNativeCon Europe underlines that Kubernetes security has risen to the fore among practitioners and vendors alike. As is typically the case with our favorite technologies, we’ve reached the point where people are determined to ensure security measures aren’t “tacked on after the fact” to the wildly popular container orchestration system.

DevOps Pulse 2023: Increased MTTR and Cloud Complexity

Evolving DevOps maturity, mounting Mean-Time-to-Recovery (MTTR), and increasingly complex cloud environments – all these factors are shaping modern observability practices, according to approximately 500 observability practitioners. While every organization faces its own unique challenges, broadly impactful trends emerge.

OpenTelemetry-powered infrastructure monitoring: isolate and fix issues in minutes

Building and maintaining modern, cloud-based applications requires a new approach to infrastructure monitoring. Traditionally, engineers would try to isolate the specific infrastructure component causing an issue — and fix it in isolation, without diving into code. Today, DevOps engineers must understand how application performance relates to their infrastructure. For DevOps engineers, infrastructure is an enabler for deploying code.

Plan better and preempt bottlenecks with predict for metrics

Nothing is certain in this world except for death, taxes, and that you will eventually run out of disk space. You may have used our unique predict operator to query logs and forecast future values (we’ve even heard of customers predicting their ingest volume for Sumo Logic log data to better forecast their usage and budget!) — and wanted to do the same with metrics. With the recent general availability of the predict for metrics operator, you can.
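For log data, the forecasting described above can be sketched as a Sumo Logic search that predicts free disk space. The source category, parse expression, and field names below are hypothetical placeholders, and the exact `predict` options may vary by version:

```
_sourceCategory=prod/host-metrics disk
| parse "diskfree = *" as disk_free
| timeslice 15m
| min(disk_free) as min_disk by _timeslice
| predict min_disk by 15m model=ar forecast=10
```

The query buckets minimum free disk space into 15-minute slices, then uses an autoregressive model to forecast the next 10 slices — the same pattern the new operator brings to metrics.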

Now you can forward logs to external endpoints from within the Console!

Our aim, as always, is to help users thrive. We want them to receive real value from everything we deliver through our various features. It’s equally important to offer flexibility by providing different ways to use those features, so you’re free to use each one in whatever way is most convenient. Driving this vision of ours, well, forward, we have now extended our Logs Forwarding experience from the CLI to the Console.

Optimizing Your Splunk Experience with Telemetry Pipelines

When it comes to handling and deriving insights from massive volumes of data, Splunk is a force to be reckoned with. Its ability to index, search, and analyze machine-generated data has made it an essential tool for organizations seeking actionable intelligence. However, as the volume and complexity of data continue to grow, optimizing the Splunk experience becomes increasingly important. This is where the power of telemetry pipelines, like Mezmo, comes into play.

Reducing Your Splunk Bill With Telemetry Pipelines

With 85 of the Fortune 100 companies among its customers, Splunk is undoubtedly one of the leading machine data platforms on the market. In addition to its core capability of consuming unstructured data, Splunk is one of the top SIEMs on the market. Splunk, however, costs a fortune to operate – and those costs will only increase as data volumes grow over the years. Due to these growing pains, technologies have emerged to control the increasing costs of using Splunk.

How to Build a Culture of Data-Driven Product Management

Product-led growth (PLG) is on the rise. The discipline relies on the product itself to drive user acquisition, expansion, conversion, and retention. Today, 60% of Cloud 100 companies embrace a PLG strategy, because it’s an efficient growth method with low customer acquisition costs. Plus, cloud-native companies have the unique opportunity to collect more data that shows exactly how customers are using their products, and where potential friction occurs.

Log Less, Achieve More: A Guide to Streamlining Your Logs

Businesses are generating vast amounts of data from various sources, including applications, servers, and networks. As the volume and complexity of this data continue to grow, it becomes increasingly challenging to manage and analyze it effectively. Centralized logging is a powerful solution to this problem, providing a single, unified location for collecting, storing, and analyzing log data from across an organization’s IT infrastructure.
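As a minimal sketch of the collection side of centralized logging, the Python snippet below forwards application logs to a central syslog collector using only the standard library. The collector address and service name are hypothetical placeholders; a real pipeline would also escape the message before templating it into JSON:

```python
import json
import logging
import logging.handlers

# Hypothetical collector address; replace with your aggregator's
# syslog endpoint (e.g. rsyslog, Fluentd, or a SaaS ingest host).
COLLECTOR_HOST, COLLECTOR_PORT = "127.0.0.1", 514

def make_forwarding_logger(name: str) -> logging.Logger:
    """Build a logger that ships records to a central syslog collector over UDP."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(
        address=(COLLECTOR_HOST, COLLECTOR_PORT)
    )
    # Emit JSON so the central store can index fields without extra parsing.
    # (Naive templating: a message containing quotes would break the JSON.)
    handler.setFormatter(logging.Formatter(
        json.dumps({"service": name, "level": "%(levelname)s", "msg": "%(message)s"})
    ))
    logger.addHandler(handler)
    return logger

log = make_forwarding_logger("checkout")
log.info("order placed")  # sent to the collector as a structured JSON record
```

Pointing every service at one collector like this is what gives the “single, unified location” described above: one place to search, correlate, and retain logs from the whole fleet.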