
Latest News

Leveraging observability to improve digital resilience

With increasing competition and a digitizing landscape, small and medium enterprises (SMEs) in Australia are being pushed to level up their game through AI and modernization. This ultimately means relying on cloud and AI integration to ensure agility and responsiveness. Yet the diversity of applications and the complexity of tech architectures bring their own challenges: rising costs, security risks, and limits on scalability.

What Developers Should Know about Observability

Peter is a serial entrepreneur and co-founder of Percona, FerretDB, and other tech companies. As a leading expert in open-source strategy and database optimization, Peter has applied his technical knowledge and entrepreneurial drive to contribute as a board member and advisor to several open-source startups. His insights into performance optimization and system reliability play a crucial role in shaping Coroot’s functionality.

Green Data: The Role of Observability in Shaping a Sustainable Future

Systems speak in data. Widespread digitization means systems communicate more than ever, while increasingly refined means of recording and interpreting their messages are revolutionizing IT management. Meanwhile, beyond the engine rooms of enterprises, our planet is trying to tell us something, too. In changing temperatures and rising sea levels, we see signs that our relationship with the natural world must change.

Overcoming Barriers to Achieving ZeroSec Observability

Achieving ZeroSec observability has long been the ultimate goal, yet it remains elusive despite countless hours and sleepless nights dedicated to the cause. A recent discussion with a client underscored the persistent challenges many organizations continue to struggle with in this pursuit. They had all the right tools in place, yet significant issues still prevented their applications from running smoothly.

Optimizing observability costs with a DIY framework

Observability costs are exploding as businesses strive to deliver maximum customer satisfaction with high performance and 24/7 availability. Global annual spending on observability in 2024 is well over 2.4 billion USD and is expected to reach 4.1 billion USD by 2028. For an individual company, this typically translates to observability costs of 10-30% of overall infrastructure spend. These costs will undoubtedly rise as digital environments expand and grow ever more complex.
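As a rough, back-of-the-envelope illustration of that 10-30% share (the infrastructure budget below is a hypothetical figure, not from the article), a few lines of Python:

# Back-of-the-envelope estimate of observability spend as a share of
# infrastructure costs. The 10-30% range comes from the article above;
# the $2M annual infrastructure budget is a hypothetical example.
annual_infra_spend_usd = 2_000_000

low, high = 0.10, 0.30  # observability as a fraction of infra spend
print(f"Estimated observability spend: "
      f"${annual_infra_spend_usd * low:,.0f} - ${annual_infra_spend_usd * high:,.0f} per year")
# -> Estimated observability spend: $200,000 - $600,000 per year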

Observability and incident response need resilience testing

There’s a reason why observability and incident response practices have become standard across modern software development. Anyone wanting to minimize downtime and deliver reliable, available applications needs to have fully instrumented systems and playbooks so they can respond quickly and effectively to outages or incidents. But there’s another piece to the reliability puzzle: resilience testing.
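To make "resilience testing" concrete, here is a minimal, hypothetical Python sketch: it injects transient failures into a fake dependency and checks that the calling code recovers. The retry helper and flaky dependency are illustrative assumptions, not taken from the article.

# Toy resilience test: inject transient failures into a dependency and
# verify the calling code still recovers. All names here are hypothetical.
import unittest


def fetch_with_retry(call, attempts=3):
    """Call a dependency, retrying on ConnectionError up to `attempts` times."""
    last_error = None
    for _ in range(attempts):
        try:
            return call()
        except ConnectionError as err:
            last_error = err
    raise last_error


class FlakyDependency:
    """Simulates a service that fails the first `failures` calls."""
    def __init__(self, failures):
        self.failures = failures

    def __call__(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("injected transient failure")
        return "ok"


class ResilienceTest(unittest.TestCase):
    def test_recovers_from_two_transient_failures(self):
        self.assertEqual(fetch_with_retry(FlakyDependency(failures=2)), "ok")

    def test_gives_up_after_exhausting_retries(self):
        with self.assertRaises(ConnectionError):
            fetch_with_retry(FlakyDependency(failures=5))


if __name__ == "__main__":
    unittest.main()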

Understanding Traces and Spans: Span Filtering With ObserveNow and Grafana 10.4

ObserveNow, the leading open source-based observability stack, has recently enhanced its capabilities with the introduction of Span Filtering – a key feature in its latest upgrade to Grafana 10.4. This advancement significantly improves the platform’s ability to dissect and analyze traces, which are crucial for understanding the behavior and performance of distributed systems.
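For readers less familiar with the trace/span terminology, the sketch below uses the vendor-neutral OpenTelemetry Python SDK (not ObserveNow or Grafana itself) to emit one trace containing a parent span and two child spans with attributes; these span names and attributes are exactly the kind of data a span-filtering feature matches against. The service and attribute names are hypothetical.

# Minimal OpenTelemetry example: one trace containing a parent span and
# two child spans with attributes. Not specific to ObserveNow or Grafana;
# it only illustrates the data that span filters operate on.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("handle_order") as parent:
    parent.set_attribute("http.method", "POST")
    with tracer.start_as_current_span("query_inventory") as child:
        child.set_attribute("db.system", "postgresql")
    with tracer.start_as_current_span("charge_card") as child:
        child.set_attribute("payment.provider", "stripe")  # hypothetical attribute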

Navigating Software Engineering Complexity With Observability

In the not-too-distant past, building software was relatively straightforward. The simplicity of LAMP stacks, Rails, and other well-defined web frameworks provided a stable foundation. Issues were isolated, systems failed in predictable ways, and engineers had time to innovate on new features for the business. And it was good.

Free the data: Why US federal agencies should standardize on OpenTelemetry

In today's digital age, data is the lifeblood of modern organizations — and the US government is no exception. As agencies grapple with the ever-increasing volume and complexity of data, it is imperative to adopt a standardized approach to monitoring, analyzing, and understanding the behavior of complex IT systems. This is where OpenTelemetry, an open-source observability framework, comes into play.
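As a small, hedged sketch of what standardizing on OpenTelemetry looks like in practice, the example below uses the OpenTelemetry Python SDK to record a vendor-neutral metric; swapping the console exporter for an OTLP exporter would send the same data to any compliant backend. The meter and route names are hypothetical.

# Minimal OpenTelemetry metrics sketch: the instrumentation is vendor-neutral,
# so the same counter can be exported to any OTLP-compatible backend.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    PeriodicExportingMetricReader,
    ConsoleMetricExporter,
)

reader = PeriodicExportingMetricReader(ConsoleMetricExporter(), export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))
meter = metrics.get_meter("records-service")  # hypothetical agency service name

requests_counter = meter.create_counter(
    "http.server.requests", description="Count of handled requests"
)
requests_counter.add(1, {"http.route": "/benefits/status"})  # hypothetical route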