
Latest News

How to Stay Ahead of Data Retention Requirements - Part 2

In part 1 of this series, we outlined what data retention is and why it matters for meeting the growing requirements of various regulatory standards. As detailed there, organizations can follow some clear guidelines to adopt what we called a “data retention approach for compliance”. In this follow-up post, we outline some specific technological and procedural challenges you might face, along with practical guidelines and strategies to overcome them.

Light Monitoring with Elastic Metricbeat

Since Pandora FMS version 7.0 NG 712, the integration with Elastic has drastically improved how we store and visualize records and logs, allowing the system to scale in both volume and the speed at which information is presented. But wait, there is still more to talk about before I go on to describe Light Monitoring…
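To make that concrete, here is a minimal sketch (not Pandora FMS's own code) of reading data that Metricbeat has shipped into Elasticsearch, using the official Python client. The endpoint, index pattern, and client version (8.x keyword arguments) are assumptions for illustration.

```python
# Sketch: query recent CPU metrics that Metricbeat's system module has
# indexed into Elasticsearch. Assumes elasticsearch-py 8.x and a local node;
# the index pattern "metricbeat-*" is the Metricbeat default, adjust as needed.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed endpoint

resp = es.search(
    index="metricbeat-*",
    query={
        "bool": {
            "filter": [
                {"term": {"event.module": "system"}},
                {"term": {"metricset.name": "cpu"}},
                {"range": {"@timestamp": {"gte": "now-5m"}}},
            ]
        }
    },
    sort=[{"@timestamp": "desc"}],
    size=10,
)

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src["host"]["name"])
```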

Centralized Logging - Knowing When Less is More

Many firms collect massive amounts of data every day (up to billions of events) to improve their security efforts, enhance their business intelligence, and refine their marketing strategies. Their log storage drives are so big that some of them even brag about the size, to show the public and their clients how advanced their technology is.
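A simple way to put “less is more” into practice is to filter out low-value events before they ever reach storage. The sketch below is purely illustrative; the severity set and source names are hypothetical, not drawn from any specific product.

```python
# Illustrative pre-ingestion filter: keep only events with security or
# business value, drop known noise. SEVERITY_KEEP and NOISY_SOURCES are
# hypothetical names for this sketch.
import json

SEVERITY_KEEP = {"ERROR", "WARN", "CRITICAL"}          # severities worth retaining
NOISY_SOURCES = {"healthcheck", "loadbalancer-ping"}   # known noise generators

def should_store(event: dict) -> bool:
    """Return True only for events worth paying to store."""
    if event.get("source") in NOISY_SOURCES:
        return False
    return event.get("severity", "INFO") in SEVERITY_KEEP

raw_events = [
    {"severity": "INFO", "source": "healthcheck", "msg": "ok"},
    {"severity": "ERROR", "source": "payments", "msg": "charge failed"},
]

stored = [e for e in raw_events if should_store(e)]
print(json.dumps(stored, indent=2))  # only the payments error survives
```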

How Graylog's Advanced Functionalities Help You Make Sense of All Your Data

The inherent limitations of most log managers, combined with the need to work within the constraints of your current hardware, may force your enterprise to make some hard choices: less useful data is left unchecked, old information eventually gets deleted, and the amount of data accessible in real time is sacrificed to reduce excess workload.
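If you do end up pruning what lives in your log manager, programmatic access helps you verify what is still queryable. As a hedged example, the sketch below calls Graylog's legacy REST search API with Python's requests library; the host, token, and query are placeholders, and your Graylog version's API browser is the authority on which endpoints it actually exposes.

```python
# Sketch: relative-time search against Graylog's REST API.
# Host and credentials are placeholders for this example.
import requests

GRAYLOG = "http://graylog.example.com:9000"  # assumed host

resp = requests.get(
    f"{GRAYLOG}/api/search/universal/relative",
    params={
        "query": "level:3",   # syslog "error" level in Graylog's level field
        "range": 3600,        # look back one hour
        "limit": 50,
    },
    auth=("YOUR_API_TOKEN", "token"),  # Graylog accepts an API token as the username
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

for message in resp.json().get("messages", []):
    print(message["message"]["timestamp"], message["message"]["message"])
```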

Case Study: How the Largest Nordic Bank Improved Compliance & Ensured Comprehensive Data Protection

This bank needed to upgrade its ability to analyze and troubleshoot customer communications recordings in order to comply with the required regulations. It was also important for the bank to identify and resolve problems proactively. By implementing XpoLog, it managed to significantly shorten ‘loss-of-recording’ durations, troubleshoot quickly, and get to the root cause fast. Analyzing and monitoring its environments became much simpler and more efficient.

Case Study: How a Leading Ad-Tech Firm Increased Application Quality & Lowered Response Times and AWS Costs

The firm runs hundreds of services that optimize online advertising, utilizing large amounts of data located both on-premises and on AWS. It wanted to increase application quality while lowering response times and AWS costs. By using XpoLog, the company created a single location that manages all the information from all of its sources. The information is shipped to the XpoLog cluster and tagged to the relevant service or team. XpoLog is deployed and managed on AWS spot instances, cutting the required hardware costs by approximately 90%! Try XpoLog free.
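For readers curious about the Spot-instance side of that saving, here is a hypothetical sketch of launching an EC2 Spot instance with boto3. The AMI ID, instance type, and region are placeholders, not values from the case study.

```python
# Sketch: launch a one-time EC2 Spot instance via boto3.
# ImageId, InstanceType, and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # One-time request: AWS may reclaim the instance when it needs
            # the capacity back, which is what makes Spot pricing so cheap.
            "SpotInstanceType": "one-time",
        },
    },
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Spot instance {instance_id}")
```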

Announcing Graylog v3.0 Beta 1

Today we are releasing the first public beta of Graylog v3.0. This release includes a whole new content pack system, an overhauled collector sidecar, new reporting capabilities, improved alerting with greater flexibility, support for Elasticsearch 6.x, a preview version of an awesome new search page called Views, and tons of other improvements and bug fixes.

Scalability Worst Practices - How Not to Design Applications

Scalability is a core requirement of modern applications: they need to handle sudden changes in demand without losing resilience or performance. With the popularity of cloud computing and microservices, DevOps teams have countless platforms and tools for deploying scalable applications. However, true scalability involves much more than just migrating an application to the cloud.
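As a taste of the kind of worst practice the article covers, the sketch below shows one classic mistake: keeping session state in process memory, which stops working the moment a second replica runs behind a load balancer. The names and the Redis-based alternative are illustrative assumptions, not prescriptions from the article.

```python
# Illustrative anti-pattern vs. a scalable alternative. Requires Python 3.9+
# and the redis-py package for the "good" variant.
import json
import redis

# Anti-pattern: state lives in this process; a second replica never sees it,
# so a load balancer routing the next request elsewhere logs the user out.
_sessions: dict[str, dict] = {}

def login_bad(session_id: str, user: str) -> None:
    _sessions[session_id] = {"user": user}

# Scalable alternative: externalize state to a shared store (here Redis,
# assumed to run on localhost) so any replica can serve any request.
store = redis.Redis(host="localhost", port=6379)

def login_good(session_id: str, user: str) -> None:
    store.set(f"session:{session_id}", json.dumps({"user": user}), ex=3600)
```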