
Latest Posts

Creating a Free Data Lake with Coralogix

Like many cool tools out there, this project started from a request made by a customer of ours. Having recently migrated to our service, this customer had ~30TB of historical logging data. This is a considerable amount of operational data to leave behind when moving from one SaaS platform to another. Unfortunately, most observability solutions are built around the working assumption that data flows are future-facing.

We're Thrilled To Share - Coralogix has Received AWS DevOps Competency

At Coralogix, we believe in giving companies the best of the best – that’s what we strive for with everything we do. With that, we are happy to share that Coralogix has received AWS DevOps Competency! Coralogix started working with AWS in 2017, and our partnership has grown immensely in the years since. So, what is our new AWS DevOps Competency status, and what does it mean for you?

Istio Log Analysis Guide

Istio has quickly become a cornerstone of most Kubernetes clusters. As your container orchestration platform scales, Istio embeds functionality into the fabric of your cluster that makes monitoring, observability, and flexibility much more straightforward. However, it leaves us with our next question – how do we monitor Istio? This Istio log analysis guide will help you get to the bottom of what your Istio platform is doing.

Stay Alert! Building the Coralogix-Nagios Connector

Ask any DevOps engineer, and they will tell you about all the alerts they enable so they can stay informed about their code. These alerts are the first line of defense in the fight for a perfect uptime SLA. For every good solution out there, you can find plenty of methods for alerting on and monitoring events in the code. Each method has its own reasons and logic for how it works and why it’s the best option. But what can you do when you need to connect two opposing methodologies? You innovate!

Limit Coralogix usage per account using Azure Functions

At Payoneer, we use Coralogix to collect logs from all our environments, from QA to PROD. Each environment has its own account in Coralogix and thus its own limit, since Coralogix pricing is calculated per account. As a company, we have a budget per account and we know how much we pay for each one. If you exceed the number of logs assigned to an account, you pay for the “extra” logs. You can see the exact calculation in this link.

Introducing Log Observability for Microservices

Two popular deployment architectures exist in software: the out-of-favor monolithic architecture and the newly popular microservices architecture. Monolithic architectures were quite popular in the past, with almost all companies adopting them. As time went on, the drawbacks of these systems drove companies to rework entire systems to use microservices instead.

Introducing Cloud Native Observability

The term ‘cloud native’ has become a much-used buzz phrase in the software industry over the last decade. But what does cloud-native mean? The Cloud Native Computing Foundation’s official definition is: From this definition, we can differentiate cloud-native systems from monoliths, which run as a single service on a continuously available server. Large cloud providers like Amazon’s AWS or Microsoft’s Azure can run serverless and cloud-native systems.

7 JSON Logging Tips That You Can Implement

When teams begin to analyze their logs, they almost immediately run into problems, and they’ll need some JSON logging tips to overcome them. Logs are naturally unstructured. This means that if you want to visualize or analyze your logs, you are forced to deal with many potential variations. You can eliminate this problem by logging out valid JSON and setting the foundation for log-driven observability across your applications.
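To illustrate the idea of logging out valid JSON, here is a minimal sketch using only Python's standard library. The `JsonFormatter` class and the `"checkout"` logger name are illustrative assumptions, not part of the article above; the point is simply that each log line becomes a parseable JSON object instead of free-form text.

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


# Attach the formatter to a handler so every record is emitted as JSON.
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")
```

Because every line is structured the same way, downstream tools can filter on fields like `level` or `logger` without regex guesswork.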

Configuring Kibana for OAuth

Kibana is the most popular open-source analytics and visualization platform, designed to offer faster and better insights into your data. It is a visual interface tool that allows you to explore, visualize, and build dashboards over the log data amassed in Elasticsearch clusters. An Elasticsearch cluster contains many moving parts. These clusters need modern authentication mechanisms, and they require security controls to be configured to prevent unauthorized access.