

AWS Partners with InfluxData to Bring InfluxDB Open Source to Developers Around the World

Today, AWS announced Amazon Timestream for InfluxDB, a new managed offering that lets AWS customers run single-instance, open source InfluxDB natively within the AWS console. This partnership represents a significant multi-year commitment by AWS to combine its global reach and accessibility with our industry-leading time series database, InfluxDB. AWS adding InfluxDB as a preferred time series database reflects the demand from AWS customers for InfluxDB and is further evidence of the accelerating time series market.
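For readers new to InfluxDB, here is a minimal sketch of what writing a single data point to such an instance might look like using the influxdb-client Python package; the URL, token, org, and bucket values are placeholders rather than details from the announcement.

```python
# Minimal sketch: writing one time series point to an InfluxDB instance
# with the influxdb-client Python package. The URL, token, org, and bucket
# below are placeholder values, not details from the announcement.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

point = (
    Point("cpu_usage")           # measurement name
    .tag("host", "server-01")    # indexed metadata
    .field("value", 63.2)        # the observed value
)
write_api.write(bucket="my-bucket", record=point)
client.close()
```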

The Coexistence of Open Source and Proprietary Software: Striking the Balance

Discover how to build a technology infrastructure that gets the best of both open source and proprietary software. The debate over the cohabitation of open source software (OSS) and proprietary software has persisted as long as both have existed. OSS, designed for unrestricted access and usage, and proprietary software, its opposite, have often been positioned as opponents in the technology arena. However, the reality is far from this either/or dynamic.

Helping Enterprises Make Sense of Sustainability in an Ocean of Engineering Data

If I were to ask you what marine biologists and database engineers have in common, you’d think it was the beginning of a nerdy joke. There probably is one in there somewhere, but as someone who has sat on both sides of the fence – or operated on both sides of the water’s surface, to continue the analogy – I can say with certainty that the parallels are more common than coral on the Great Barrier Reef.

Tale of the Tape: Data Historians vs Time Series Databases

It’s easy to pitch technology buying decisions as black or white, where one camp is the promised land and the other is a dystopian wasteland where companies and profits go to die. But that doesn’t match reality. Instead, organizations need to balance technical trade-offs with their needs. So, while it’s easy to stand atop the “rip and replace” mountain and shout the virtues of your new technology, that’s not something that most organizations are willing to do.

You Can Solve the Application Waste Problem

If you’re like most companies running large-scale, data-intensive workloads in the cloud, you’ve realized that you have significant quantities of waste in your environment. Smart organizations implement a host of FinOps activities to address this waste and the cost it incurs, things such as: … and the list goes on. These are infrastructure-level optimizations.

Understanding FinOps: Principles, Tools, and Measuring Success

FinOps is a cultural practice that brings financial accountability to the world of cloud computing. It’s a strategic approach that aids organizations in understanding their cloud costs and making informed business decisions. FinOps is a new way of managing costs in an IT environment that is increasingly shifting towards the variable cost model of cloud services. It combines the best of the technical and financial worlds, resulting in an effective model for managing cloud costs.
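One way teams commonly make “measuring success” concrete is unit economics: dividing cloud spend by a business metric such as requests served. A rough sketch in Python, with all figures invented purely for illustration:

```python
# Illustrative sketch of a basic FinOps "unit economics" calculation:
# cost per unit of business value (here, per million requests).
# All figures are invented for illustration, not taken from the article.
monthly_cloud_spend = 48_000.00        # USD billed for the month
requests_served = 1_200_000_000        # requests handled in the same month

cost_per_million_requests = monthly_cloud_spend / (requests_served / 1_000_000)
print(f"Cost per million requests: ${cost_per_million_requests:.2f}")
# -> Cost per million requests: $40.00
```

Tracking a ratio like this over time, rather than raw spend, is one way to tell whether costs are growing faster or slower than the business value they support.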

Continuous Data: The Complete Guide

Data is never just data. There are structured and unstructured data, qualitative and quantitative data. Among these varied types, continuous data stands out as a key player, especially in the quantitative realm. Continuous data, with its infinite possibilities and precision, captures the fluidity of the real world — from the microseconds of a website’s load time to the fluctuating bandwidth usage on a network.
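A small sketch of the distinction, using fabricated values: continuous measurements can take any value within a range (such as a page load time), while discrete data comes in countable whole numbers (such as requests per minute).

```python
# Illustrative sketch: continuous vs. discrete data.
# Continuous values can fall anywhere in a range (e.g., load time in seconds),
# while discrete values are countable whole numbers (e.g., requests per minute).
# All numbers below are fabricated examples.
page_load_times_s = [0.4312, 0.3977, 0.5121, 0.4468]   # continuous: arbitrary precision
requests_per_minute = [118, 124, 97, 131]              # discrete: whole counts

mean_load = sum(page_load_times_s) / len(page_load_times_s)
print(f"Mean load time: {mean_load:.4f} s")
```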

An Introduction to Microservices Monitoring: Strategies, Tools, and Key Concepts

Users have higher expectations than ever when it comes to performance and reliability in the apps they use every day. A critical part of meeting these expectations is having a robust monitoring system in place. This article focuses on monitoring applications using a microservice architecture—it will go over key concepts, common challenges, and useful tools every engineer should know.
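As one illustrative starting point (not necessarily the tooling the article covers), a microservice can expose metrics for scraping using the prometheus_client library; the metric names, port, and simulated handler below are placeholders.

```python
# Minimal sketch: instrumenting a microservice with Prometheus-style metrics
# via the prometheus_client library. Metric names, the port, and the simulated
# handler are placeholders, not necessarily the tools the article covers.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_requests_total", "Total requests handled by the orders service")
LATENCY = Histogram("orders_request_latency_seconds", "Request latency in seconds")

def handle_request():
    REQUESTS.inc()                                 # count every request
    with LATENCY.time():                           # record how long the handler took
        time.sleep(random.uniform(0.01, 0.05))     # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)                        # metrics scrapeable at :8000/metrics
    while True:
        handle_request()
```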

Apache Spark at Scale

Datadog is an observability and security platform that ingests and processes tens of trillions of data points per day, coming from more than 22,000 customers. Processing that amount of data in a reasonable time stretches the limits of well-known data engines like Apache Spark. In addition to scale, Datadog’s infrastructure is multi-cloud on Kubernetes, and the data engineering platform is used by different engineering teams, so having a good set of abstractions to make running Spark jobs easier is critical.
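To give a flavor of what such an abstraction might look like, here is a hypothetical thin wrapper over PySpark session setup so individual teams only declare their job logic; the function names, configuration values, and placeholder transform are illustrative, not Datadog’s internal tooling.

```python
# Hypothetical sketch of a thin convenience layer over PySpark job setup, the
# kind of abstraction the talk alludes to. Function names and configuration
# values are illustrative only, not Datadog's internal tooling.
from pyspark.sql import DataFrame, SparkSession

def build_session(app_name: str) -> SparkSession:
    """Create a SparkSession with shared defaults so each team's job
    doesn't repeat boilerplate configuration."""
    return (
        SparkSession.builder
        .appName(app_name)
        .config("spark.sql.shuffle.partitions", "400")   # example shared default
        .getOrCreate()
    )

def run_job(app_name: str, input_path: str, output_path: str) -> None:
    spark = build_session(app_name)
    df: DataFrame = spark.read.parquet(input_path)        # read raw events
    summary = df.groupBy("customer_id").count()           # trivial placeholder transform
    summary.write.mode("overwrite").parquet(output_path)  # write results back out
    spark.stop()
```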