
Logging

The latest News and Information on Log Management, Log Analytics and related technologies.

Cloud Monitoring Console's Health Dashboard: Maximize Your Monitoring Efficiency

Are you a Splunk Cloud admin tired of sifting through various tools and dashboards to monitor the health of your Splunk Cloud deployment? Do you often find yourself wondering what actions you can take to keep your Splunk Cloud deployment running smoothly? Are you looking for ways to be alerted before something impacts your deployment performance? Look no further than the Cloud Monitoring Console's Health Dashboard!

How the All-In Comprehensive Design Fits into the Cribl Stream Reference Architecture

Join Cribl's Ed Bailey and Ahmed Kira as they provide more detail about the Cribl Stream Reference Architecture, which is designed to help observability admins achieve faster and more valuable stream deployment. During this live stream discussion, Ed and Ahmed will explain the guidelines for deploying the comprehensive reference architecture to meet the needs of large customers with diverse, high-volume data flows. They will also share different use cases and discuss the pros and cons of using the comprehensive reference architecture.

How to Mask Sensitive Data in Logs with BindPlane OP Enterprise

Logs often contain sensitive data, including personally identifiable information (PII) such as names, email addresses, and phone numbers. To maintain security and comply with data protection regulations, it’s crucial to mask this data before storing it in your log analytics tool. BindPlane OP streamlines this process with the Mask Sensitive Data processor, ensuring your logs are safe and compliant.
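The masking idea can be sketched in a few lines of Python. This is not BindPlane OP itself (its Mask Sensitive Data processor is configured in the agent, not written as code), and the regular expressions below are illustrative assumptions, but it shows the core transformation: replacing PII with placeholders before a log line is stored.

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage
# (international phone formats, names, addresses, and so on).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def mask_pii(line: str) -> str:
    """Replace email addresses and phone numbers with placeholders."""
    line = EMAIL_RE.sub("[EMAIL]", line)
    line = PHONE_RE.sub("[PHONE]", line)
    return line

print(mask_pii("user bob@example.com called from 555-123-4567"))
# user [EMAIL] called from [PHONE]
```

Running this kind of masking in the pipeline, before logs reach the analytics backend, means the sensitive values are never persisted downstream.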

Adding a Log Record Attribute

Check out how to standardize telemetry by adding metadata to the log record. Tagging records appropriately not only enriches them but also gives you the flexibility to route data anywhere, avoiding vendor lock-in. About observIQ: observIQ is developing a unified telemetry platform: a fast, powerful, and intuitive next-generation platform built for the modern observability team. Rooted in OpenTelemetry, the platform is designed to help teams reduce, simplify, and standardize their observability data.
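To make the idea concrete, here is a minimal sketch using Python's standard logging module rather than any vendor agent; the attribute names (service, env) are illustrative assumptions, not observIQ-specific fields. A filter injects the same metadata onto every log record, so any downstream formatter or exporter can use it for routing.

```python
import logging

class MetadataFilter(logging.Filter):
    """Attach standard metadata attributes to every log record."""

    def __init__(self, **metadata):
        super().__init__()
        self.metadata = metadata

    def filter(self, record):
        # Copy each metadata key onto the record as an attribute.
        for key, value in self.metadata.items():
            setattr(record, key, value)
        return True  # keep the record

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(service)s %(env)s %(levelname)s %(message)s"))

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(MetadataFilter(service="checkout", env="prod"))
logger.setLevel(logging.INFO)
logger.info("order placed")  # record now carries service/env attributes
```

Because the enrichment happens on the record itself rather than in the message text, the same tagged records can be shipped to any backend that understands structured attributes.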

Search your logs efficiently with Datadog Log Management

In any type of organization and at any scale, logs are essential to a comprehensive monitoring stack. They provide granular, point-in-time insights into the health, security, and performance of your whole environment, making them critical for key workflows such as incident response, security investigations, auditing, and performance analysis. Many organizations generate millions (or even billions) of log events across their tech stack every day.

Retrace Logging Benefits

Retrace is more than just APM: it shows logs inside trace requests. Its key logging features include:

- Centralized logging: collect logs from many sources, such as servers, files, and applications, into Retrace.
- Log-to-trace navigation: search for a log entry and jump straight into the corresponding trace.
- Tagging: group logs by client, developer, and so on.
- Search and filter by any text, tag, or regular expression, and save searches.
- Unlimited users, all of whom can use saved searches.
- Live tailing: see what's happening on your servers at any moment.

New Logs Interface: Enhancing Debugging and Deployment Experience

I am excited to announce the release of our new logs interface inside Qovery. This feature is a crucial milestone in our journey to improve the debugging experience and provide better insights into deployment failures. As we are about to release parallel deployment, we revamped the interface to accommodate the concept of the Deployment Pipeline, ensuring a seamless experience when deploying your applications.

How an Observability Pipeline Can Help With Cloud Migration

Do you want to confidently move workloads to the cloud without dropping or losing data? Of course, everyone does. But easier said than done. Cloud migration is tricky. There’s so much to think through and so much to worry about — how can you reconfigure architectures and data flows to ensure parity and visibility? How do you know the data in transit is safe and secure? How can you get your job done without getting in trouble with procurement?