
Logging

The latest News and Information on Log Management, Log Analytics and related technologies.

Tips and tricks for using new RegEx support in Cloud Logging

One of the most frequent questions customers ask is “how do I find this in my logs?”, often followed by a request to use regular expressions in addition to our logging query language. We’re delighted to announce that we recently added support for regular expressions to our query language, so you can now search through your logs using the same powerful pattern-matching syntax you already use in your other tools and software.
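
As a quick illustration (not taken from the announcement itself), a Logs Explorer query using the new =~ regular expression operator might look roughly like this; the resource type and jsonPayload.message field are placeholders for your own log structure:

    resource.type="k8s_container"
    jsonPayload.message =~ "failed login for user \d+"
    severity>=WARNING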

Shipping Metrics from HashiCorp Consul with ELK and Logz.io

Microservices interact in so many ways. Load balancers, security authentication, and service discovery are just the tip of the iceberg. It can get confusing, if not outright messy. But why be messy when you can be meshy? This is where service meshes come into play, weaving the roles these tools play into a common ‘net’ that ties the whole architecture together. HashiCorp has produced one of the most popular of these organizational assets: Consul Connect.
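
As a rough sketch of where those metrics come from (not taken from the article), Consul’s agent configuration includes a telemetry stanza; the values below are illustrative:

    telemetry {
      prometheus_retention_time = "60s"
      disable_hostname          = true
    }

With Prometheus retention enabled, the agent can serve metrics in Prometheus format from its /v1/agent/metrics endpoint, which a shipper can then scrape and forward to ELK or Logz.io.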

Splunking Azure: NSG Flow Logs

Azure Network Security Groups (NSGs) are used to filter network traffic to and from resources in an Azure Virtual Network. If you’re coming from AWS-land, NSGs combine Security Groups and NACLs. Splunking NSG flow log data gives you access to detailed telemetry and analytics around network activity to and from your NSGs. If that doesn’t sound appealing to you yet, here are some of the many things you could Splunk with your network traffic logs from Azure.
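
To make that concrete, here is a sketch of a first search you might run once the flow logs are indexed; the sourcetype and field names (mscs:nsg:flow, src_ip, dest_ip, dest_port) are assumptions that depend on the Azure add-on you use to ingest the data:

    sourcetype="mscs:nsg:flow"
    | stats count AS flows BY src_ip, dest_ip, dest_port
    | sort - flows

This surfaces the noisiest talkers in your virtual network as a starting point for deeper analysis.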

Is your team spending too much time on log maintenance?

Log maintenance has a hidden cost. Engineers optimize their instance types, storage, networking, dependencies, and much more, yet we rarely account for the engineers’ own time. A DevOps culture encourages engineers to own the solutions they build. While this increases team autonomy, it risks fragmenting the team’s limited bandwidth. Automation is what makes the DevOps cycle work, and it has to cover log analysis to do a thorough job of catching issues.

How we're making it easier to use the Loki logging system with AWS Lambda and other short-lived services

There are so many great things that can be said about Loki – I recently wrote about them here. But today, I want to talk about something technical that has been difficult for Loki users, and how we might make it easier: using Loki for short-lived services. Historically, one of Loki’s blind spots has been ingesting logs from infrastructure you don’t control, because you can’t co-locate a forwarding agent like promtail with your application logs.
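
The article’s own approach may differ; one generic workaround, sketched here in Python, is to have the short-lived function push its log lines straight to Loki’s HTTP push API (/loki/api/v1/push) instead of relying on a co-located agent. The endpoint URL and labels are placeholders, and a real Lambda would batch lines rather than post them one at a time:

    import json
    import time
    import urllib.request

    LOKI_PUSH_URL = "http://loki.example.internal:3100/loki/api/v1/push"  # placeholder

    def push_line(message, labels=None):
        """Send a single log line to Loki's push API."""
        # Loki expects nanosecond-precision timestamps encoded as strings.
        ts = str(time.time_ns())
        payload = {
            "streams": [
                {
                    "stream": labels or {"job": "my-lambda"},  # illustrative labels
                    "values": [[ts, message]],
                }
            ]
        }
        req = urllib.request.Request(
            LOKI_PUSH_URL,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status  # Loki returns 204 on a successful push

    # Example usage (requires a reachable Loki endpoint):
    # push_line("handler finished", {"job": "my-lambda", "env": "prod"})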

Manage Your Splunk Infrastructure as Code Using Terraform

Splunk is happy to announce that we now have a HashiCorp-verified Terraform provider for Splunk. The provider is publicly available in the Terraform Registry and can be used by referencing it in your Terraform configuration file and running terraform init. If you’re new to Terraform and providers, the latest version of Terraform is available here; you will need to download the appropriate binary and have Terraform installed before using the provider.
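
A minimal configuration sketch, assuming the provider’s registry source is splunk/splunk; the connection settings are placeholders, and the exact argument names should be checked against the provider’s page in the Registry:

    terraform {
      required_providers {
        splunk = {
          source = "splunk/splunk"
        }
      }
    }

    provider "splunk" {
      url                  = "localhost:8089"  # Splunk management endpoint
      username             = "admin"
      password             = "changeme"
      insecure_skip_verify = true
    }

Running terraform init in the same directory downloads the provider, after which Splunk objects such as indexes and inputs can be declared as resources.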

Monitor Alcide kAudit logs with Datadog

Kubernetes audit logs contain detailed information about every request to the Kubernetes API server and are critical to detecting misconfigurations and vulnerabilities in your clusters. But because even a small Kubernetes environment can rapidly generate lots of audit logs, it’s very difficult to manually analyze them.
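
For context on where those logs come from: the API server only emits audit events according to a policy file you pass it (via --audit-policy-file). A minimal, illustrative policy that records request metadata for everything looks roughly like this:

    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
      - level: Metadata

Even this coarse policy produces a large stream of events on a busy cluster, which is exactly why automated analysis of the kind Alcide kAudit and Datadog provide is attractive.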

Enriching data with GeoIPs from internal, private IP addresses

For public IPs, it is possible to build lookup tables that map specific ranges of IPs to the cities they belong to. A big portion of traffic, however, is different. Company-private networks using addresses from 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 are scattered across every country in the world, and these addresses carry no inherent geographic information.
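
One common workaround, sketched below in Python, is to maintain your own lookup table that maps internal subnets to known sites or offices and consult it instead of a GeoIP database for private addresses; the subnets and locations here are invented for illustration:

    import ipaddress

    # Illustrative mapping of internal subnets to locations; in practice this
    # would come from your IPAM system or network inventory.
    PRIVATE_SUBNET_LOCATIONS = {
        ipaddress.ip_network("10.1.0.0/16"): {"city": "Berlin", "site": "office-ber"},
        ipaddress.ip_network("10.2.0.0/16"): {"city": "Austin", "site": "office-aus"},
        ipaddress.ip_network("192.168.50.0/24"): {"city": "Tokyo", "site": "lab-tyo"},
    }

    def enrich_private_ip(ip_str):
        """Return location metadata for a private IP, or None if unknown or public."""
        ip = ipaddress.ip_address(ip_str)
        if not ip.is_private:
            return None  # public IPs go through a normal GeoIP database instead
        for subnet, location in PRIVATE_SUBNET_LOCATIONS.items():
            if ip in subnet:
                return location
        return None

    print(enrich_private_ip("10.1.42.7"))  # {'city': 'Berlin', 'site': 'office-ber'}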