
The Go client for Elasticsearch: Working with data

In our previous two blogs, we provided an overview of the architecture and design of the Elasticsearch Go client and explored how to configure and customize the client. In doing so, we pointed to a number of examples available in the GitHub repository. The goal of these examples is to provide executable "scripts" for common operations, so it's a good idea to look there whenever you're trying to solve a specific problem with the client.
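To give a flavor of the kind of "script" you'll find in those examples, here is a minimal, hypothetical sketch of building a search query body with the Go standard library: the same JSON structure you would pass to the Go client's Search API. The function and field names here are illustrative, not taken from the repository.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// buildSearchBody assembles a simple match query as JSON, the shape
// expected by the Elasticsearch Search API. The Go client accepts any
// io.Reader as a request body, so a bytes.Buffer works well here.
func buildSearchBody(field, term string) (*bytes.Buffer, error) {
	query := map[string]interface{}{
		"query": map[string]interface{}{
			"match": map[string]interface{}{
				field: term,
			},
		},
	}
	var buf bytes.Buffer
	if err := json.NewEncoder(&buf).Encode(query); err != nil {
		return nil, err
	}
	return &buf, nil
}

func main() {
	buf, err := buildSearchBody("title", "data")
	if err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
```

With a configured client, the resulting buffer would be passed as the request body of a search call against your index.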

9 Key Areas to Cover in Your Anomaly Detection RFP

Evaluating a new, unknown technology is a complicated task. Although you can articulate the goals you’re trying to achieve, you’re probably faced with multiple solutions that approach the problem in different ways and highlight varying features. To cut through the clutter, you need to figure out what questions to ask in order to evaluate which technology has the optimal capabilities to get the job done in your unique setting.

How Correlation Analysis Boosts the Efficacy of eCommerce Promotions

In the first part of this blog series, we discussed how correlation analysis can be leveraged to reduce time to detection (TTD) and time to remediation (TTR) by guiding mitigation efforts early. Correlation analysis also reduces alert fatigue by filtering out irrelevant anomalies and grouping multiple anomalies stemming from a single incident into one alert. In this part, we shed light on how correlation analysis applies to eCommerce, specifically to promotions.

Navigating CloudWatch Logs Effectively With Dashbird

To get some serious work done, we usually need to prepare for it. “Baby steps first,” they say. In our niche, these “baby steps” are the countless small jobs that need to be done before we can start on our main project. Proper preparation (try saying that five times fast) is the key to success, but even after we’ve achieved our primary goal, there will always be something to do to keep things steady and flowing.

Financial Services companies are well positioned to embrace the Data Age

What exactly is the Data Age? Well, there is no single definition of the term, but my interpretation is that it refers to the fact that data can now serve as a foundation for decision making in every department of every business. And with IDC forecasting that the volume of data generated will continue to grow exponentially through 2025, the possibilities for using data to drive informed decision making are only going to increase.

Splunking Azure: NSG Flow Logs

Azure Network Security Groups (NSGs) are used to filter network traffic to and from resources in an Azure Virtual Network. If you’re coming from AWS-land, NSGs combine Security Groups and NACLs. Splunking NSG flow log data gives you access to detailed telemetry and analytics around network activity to and from your NSGs. If that doesn’t sound appealing yet, here are some of the many things you could Splunk with your network traffic logs from Azure.

Add Datadog alerts to your xMatters incident workflows

xMatters provides flexible, smart tools for incident response and management. With configurable workflows that bring together data from sources like GitHub, Jenkins, and Zendesk, you can automate crucial tasks and send enriched notifications to streamline team communications.

Introducing Boolean-filtered metric queries

Health and performance issues are easier to understand—and to troubleshoot—when you can use tags to aggregate your data across many overlapping scopes. But while some scopes come directly from your infrastructure, others are constantly evolving to reflect the needs of your product or organization. You can only track your data effectively if you can define—and redefine—your scopes on the fly.
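To make "defining scopes on the fly" concrete, a Boolean-filtered metric query lets you combine tag filters with AND, OR, and NOT directly in the query scope. A hypothetical example (the metric and tag names are illustrative):

```
avg:web.request.latency{service:checkout AND (env:prod OR env:staging) AND NOT region:us-east-1}
```

This scopes the average request latency to the checkout service in production or staging, while excluding one region, without requiring any of those combinations to exist as a pre-built tag.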