December 2024

Gain comprehensive visibility into your ECS applications with the ECS Explorer

Amazon Elastic Container Service (ECS) is a container orchestration service that enables you to efficiently deploy new applications or modernize existing ones by migrating them to a containerized environment. Building on ECS gives you the flexibility, scalability, and security that containers offer, but also presents challenges in monitoring and troubleshooting your applications and infrastructure.

Introducing Datadog's Next-Generation Rust-based Lambda Extension

In 2021, we announced the release of the Datadog Lambda extension, a simplified, cost-effective way for customers to collect monitoring data from their AWS Lambda functions. This extension was a specialized build of our main Datadog Agent designed to monitor Lambda executions.

State of Cloud Costs

Cloud spending continues to grow, but managing costs effectively remains a challenge for many organizations. In this video, Datadog Senior Product Manager Kayla Taylor dives into our recent State of Cloud Costs report—which analyzed AWS cloud cost data from hundreds of organizations—to understand the key factors driving cloud expenses. We explore the impact of adopting emerging compute technologies like Arm-based processors, GPUs, and AI capabilities, how usage patterns and previous-generation technologies affect cloud costs, and the role of AWS discount programs in cost management.

How Datadog migrated its Kubernetes fleet on AWS to Arm at scale

Over the past few years, Arm has surged to the forefront of computing. For decades, Arm processors were mainly associated with a handful of specific use cases, such as smartphones, IoT devices, and the Raspberry Pi. But the introduction of AWS Graviton2 in 2019 and the adoption of Arm-based hardware platforms by Apple and others helped bring about a dramatic shift, and Arm is now the most widely used processor architecture in the world.

Achieve total app visibility in minutes with Single Step Instrumentation

Datadog APM and distributed tracing provide teams with an end-to-end view of requests across services, uncovering dependencies and performance bottlenecks to enable real-time troubleshooting and optimization. However, traditional manual instrumentation, while customizable, is often time-consuming, error-prone, and resource-intensive, requiring developers to configure each service individually and collaborate closely with SRE teams.
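To see why manual instrumentation scales poorly, consider a minimal toy sketch of what wrapping a single function for tracing involves. This is illustrative pure Python, not Datadog's tracer API: a real APM library would also propagate service, resource, and parent-span context, and every function a team wants visibility into would need its own wrapper like this.

```python
import time
import functools

def traced(operation: str):
    """Toy stand-in for a tracing span: records only wall-clock duration.

    Illustrative only -- the names here are not Datadog's API. The point
    is that each function must be wrapped by hand, one at a time.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"span {operation!r} finished in {elapsed_ms:.2f} ms")
        return wrapper
    return decorator

@traced("checkout.validate")  # every traced function needs its own decorator
def validate_order(order: dict) -> bool:
    return bool(order.get("items"))
```

Multiply that per-function effort across hundreds of services and the appeal of instrumenting everything in a single step becomes clear.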

Monitor your OpenAI LLM spend with cost insights from Datadog

Managing LLM provider costs has become a chief concern for organizations building and deploying custom applications that consume services like OpenAI. These applications often rely on multiple backend LLM calls to handle a single initial prompt, leading to rapid token consumption—and consequently, rising costs. But shortening prompts or chunking documents to reduce token consumption can be difficult and introduce performance trade-offs, including an increased risk of hallucinations.
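The cost dynamic described above can be sketched with a little arithmetic: one user prompt fans out into several backend calls, and each call bills input and output tokens separately. The per-1K-token prices below are hypothetical placeholders, not actual OpenAI pricing; check the provider's current rate card.

```python
# Assumed (hypothetical) prices, USD per 1,000 tokens:
PRICE_PER_1K_INPUT = 0.0025
PRICE_PER_1K_OUTPUT = 0.0100

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single backend LLM call under the assumed prices."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# One initial prompt triggering three backend calls
# (e.g. query rewrite, tool selection, final answer):
chain = [(1200, 300), (2500, 150), (4000, 800)]

total_tokens = sum(i + o for i, o in chain)
total_cost = sum(call_cost(i, o) for i, o in chain)
print(f"tokens consumed: {total_tokens}, estimated cost: ${total_cost:.4f}")
```

Even at these small per-call figures, a chain that consumes several thousand tokens per user prompt compounds quickly at production request volumes, which is why per-call cost visibility matters.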