
Latest News

DevOps Pulse 2023: Increased MTTR and Cloud Complexity

Evolving DevOps maturity, mounting Mean Time to Recovery (MTTR), and increasingly complex cloud environments are all shaping modern observability practices, according to roughly 500 observability practitioners. While every organization faces its own challenges, several broadly impactful trends emerge.

OpenTelemetry-powered infrastructure monitoring: isolate and fix issues in minutes

Building and maintaining modern, cloud-based applications requires a new approach to infrastructure monitoring. Traditionally, engineers would try to isolate the specific infrastructure component causing an issue and fix it in isolation, without diving into code. Today, DevOps engineers must understand how application performance relates to the infrastructure beneath it: for them, infrastructure is an enabler for deploying code.

Plan better and preempt bottlenecks with predict for metrics

Nothing is certain in this world except for death, taxes, and that you will eventually run out of disk space. You may have used our unique predict operator to query logs and forecast future values (we’ve even heard of customers predicting their ingest volume for Sumo Logic log data to better forecast their usage and budget!) — and wanted to do the same with metrics. With the recent general availability of the predict for metrics operator, you can.
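As a rough illustration, a logs query using the predict operator looks something like the sketch below (the source category, timeslice, and model parameters are placeholder assumptions; consult the Sumo Logic docs for the exact syntax of the new metrics form):

```
_sourceCategory=prod/disk
| timeslice 15m
| count by _timeslice
| predict _count by _timeslice model=ar forecast=20
```

The same forecasting idea now applies directly to metrics queries, so a disk-usage time series can be projected forward without first converting it to log counts.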

Now you can forward logs to external endpoints from within the Console!

Our aim, as always, is to help users thrive and get real value from everything we deliver. Equally important is flexibility: offering different ways to use those features so you can pick whichever is most convenient. Driving this vision of ours, well, forward, we have now extended our Logs Forwarding experience from the CLI to within the Console.

Optimizing Your Splunk Experience with Telemetry Pipelines

When it comes to handling and deriving insights from massive volumes of data, Splunk is a force to be reckoned with. Its ability to index, search, and analyze machine-generated data has made it an essential tool for organizations seeking actionable intelligence. However, as the volume and complexity of data continue to grow, optimizing the Splunk experience becomes increasingly important. This is where the power of telemetry pipelines, like Mezmo, comes into play.

Reducing Your Splunk Bill With Telemetry Pipelines

With 85 of the Fortune 100 companies among its customers, Splunk is undoubtedly one of the leading machine data platforms on the market. In addition to its core capability of consuming unstructured data, it is also one of the top SIEMs available. Splunk, however, costs a fortune to operate, and those costs will only increase as data volumes grow over the years. In response to these growing pains, technologies have emerged to control the rising cost of using Splunk.
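One common cost-control tactic is reducing event volume before it ever reaches the indexer. Below is a minimal sketch of such a reduction stage, in the spirit of what a telemetry pipeline might apply; the event shape, level names, and sampling policy are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical volume-reduction stage: drop noisy DEBUG events and
# sample INFO events before forwarding to a downstream indexer.
# WARN/ERROR/FATAL events always pass through untouched.
from typing import Iterable, Iterator


def reduce_volume(events: Iterable[dict], info_keep_every: int = 10) -> Iterator[dict]:
    """Yield only the events worth indexing: no DEBUG, 1-in-N INFO, all warnings+."""
    info_seen = 0
    for event in events:
        level = event.get("level", "INFO")
        if level == "DEBUG":
            continue                      # pure noise: never index
        if level == "INFO":
            info_seen += 1
            if info_seen % info_keep_every != 0:
                continue                  # keep only every Nth informational event
        yield event                       # WARN/ERROR/FATAL always kept


events = [{"level": "DEBUG"}] * 5 + [{"level": "INFO"}] * 20 + [
    {"level": "ERROR", "msg": "disk full"}
]
kept = list(reduce_volume(events))
# 20 INFO events sampled 1-in-10 leave 2, plus the ERROR: 3 events indexed instead of 26
```

Even a crude policy like this can cut indexed volume by an order of magnitude; real pipelines layer on routing, enrichment, and per-source rules.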

How to Build a Culture of Data-Driven Product Management

Product-led growth (PLG), a discipline that relies on the product itself to drive user acquisition, expansion, conversion, and retention, is on the rise. Today, 60% of Cloud 100 companies embrace a PLG strategy because it is an efficient growth method with low customer acquisition costs. Cloud-native companies also have a unique opportunity to collect data that shows exactly how customers use their products and where potential friction occurs.

Log Less, Achieve More: A Guide to Streamlining Your Logs

Businesses are generating vast amounts of data from various sources, including applications, servers, and networks. As the volume and complexity of this data continue to grow, it becomes increasingly challenging to manage and analyze it effectively. Centralized logging is a powerful solution to this problem, providing a single, unified location for collecting, storing, and analyzing log data from across an organization’s IT infrastructure.
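A prerequisite for useful centralized logging is that every service emits records in one machine-parseable shape. The sketch below shows one common approach, structured JSON logging with Python's standard `logging` module; the field names are an illustrative convention, not a specific vendor's schema:

```python
# Minimal structured-logging sketch: every record becomes one JSON line,
# so a single central aggregator can parse logs from many services uniformly.
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


handler = logging.StreamHandler()      # swap for a socket/HTTP handler to ship remotely
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")            # emits one parseable JSON line
```

Because each line is self-describing JSON, the central store can filter, aggregate, and alert on fields like `level` and `logger` without per-application parsing rules.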

Securely Connecting an Amazon S3 Destination to Cribl.Cloud and Hybrid Workers

There are several reasons you may want to route data to Amazon S3 destinations: archiving to object storage, populating S3 buckets for Cribl.Cloud's Search feature, and retaining data that can be replayed later. When setting up Amazon S3 destinations in Cribl, there are three authentication methods: Auto, Manual, and Secret. Pairing the Auto authentication method with Assume Role is the most secure way to connect Amazon S3 to Cribl.
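Assume Role works by attaching a trust policy to the role in your AWS account. A sketch of such a policy might look like the following, where the account ID, external ID, and trusted principal are placeholders; use the exact values your Cribl.Cloud workspace provides:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "your-external-id" }
      }
    }
  ]
}
```

The `sts:ExternalId` condition is what prevents the confused-deputy problem: only a caller that presents the agreed external ID can assume the role, so no long-lived access keys ever leave your account.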

How to collect and query Kubernetes logs with Grafana Loki, Grafana, and Grafana Agent

Logging in Kubernetes helps you track the health of your cluster and its applications: logs let you identify and debug issues as they occur and give you insight into application and system performance. Collecting and analyzing application and cluster logs can also reveal bottlenecks so you can optimize your deployment.
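Once the Grafana Agent is shipping pod logs into Loki, they can be queried from Grafana with LogQL. The labels below (`namespace`, `app`) are assumptions that depend on how your agent's relabeling rules are configured:

```logql
# All logs from pods labeled app=checkout in the production namespace
{namespace="production", app="checkout"}

# Per-second rate of lines containing "error", over 1-minute windows
rate({namespace="production", app="checkout"} |= "error" [1m])
```

The second form turns raw logs into a metric, which is handy for dashboards and alerting on error spikes without instrumenting the application itself.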