With large tech companies and enterprises moving toward serverless architecture, the demand for scalable applications has grown significantly. It’s therefore no surprise that companies worldwide have adopted, or are planning to migrate to, solutions built on Kubernetes and AWS Lambda to take their serverless applications to the next level.
In our previous blog post, we shared our firsthand experience of implementing a tracing collector API using serverless components. Drawing parallels with Amazon Prime Video’s architectural redesign, we discussed the challenges we encountered, such as cold-start delays and increased costs, which prompted us to transition to a non-serverless architecture in search of a more efficient solution.
As developers, we understand the immense value of real-time access to live traces. It significantly enhances our ability to identify, debug, and troubleshoot potential issues within applications, streamlining the development and deployment process. Today, we are excited to introduce the new and improved Live Tail feature at Lumigo, which takes your observability experience to a whole new level.
Amazon Elastic Container Service (ECS) is a versatile platform that enables developers to build scalable and resilient applications using containers. However, containerized services such as Node.js applications can suffer from issues like memory leaks, which can result in container crashes. In this blog post, we’ll walk through the process of identifying and addressing memory leaks in Node.js containers running on ECS. First, let’s take a closer look at what a memory leak is.
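To make this concrete, here is a minimal sketch of the kind of leak we’ll be hunting for: a module-level cache in a Node.js HTTP handler that grows on every request and is never evicted. The server, route, and cache here are hypothetical and exist only to illustrate the pattern, not to reflect any real service.

```typescript
import http from "node:http";

// Hypothetical module-level cache: entries are added on every request
// but never evicted, so the heap grows for the lifetime of the container.
const responseCache: { url: string; body: string; seenAt: Date }[] = [];

const server = http.createServer((req, res) => {
  // Each request appends to the cache; nothing ever removes old entries.
  responseCache.push({
    url: req.url ?? "/",
    body: "x".repeat(10_000), // stand-in for a real response payload
    seenAt: new Date(),
  });

  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ cached: responseCache.length }));
});

// Under steady traffic, memory climbs until the ECS task hits its limit
// and the container is OOM-killed and restarted.
server.listen(3000);
```

A pattern like this rarely jumps out in code review; it shows up as a memory graph that climbs steadily until the task is killed, which is exactly the symptom the rest of this post focuses on diagnosing.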