
Querying Parquet with Millisecond Latency

We believe that querying data in Apache Parquet files directly can achieve storage efficiency and query performance similar to, or better than, most specialized file formats. While it requires significant engineering effort, the benefits of Parquet’s open format and broad ecosystem support make it the obvious choice for a wide class of data systems.
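
To make the idea concrete, here is a minimal sketch of the kind of pushdown that makes direct Parquet queries fast, written in Python with pyarrow rather than InfluxDB's actual Rust/DataFusion engine; the dataset path and column names are illustrative assumptions.

```python
import pyarrow.dataset as ds
import pyarrow.compute as pc

# Treat a directory of Parquet files as one queryable dataset
# (path and schema are hypothetical).
dataset = ds.dataset("metrics/", format="parquet")

# Projection and predicate pushdown: only the listed columns are decoded,
# and row groups whose statistics rule out the filter are skipped entirely.
table = dataset.to_table(
    columns=["time", "host", "usage_percent"],
    filter=(pc.field("host") == "server-01") & (pc.field("usage_percent") > 90.0),
)
print(table.num_rows)
```

Pruning work at the file and row-group level like this, rather than scanning everything, is what keeps latency low as the number of Parquet files grows.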

How Prescient Devices Uses Time Series Data for IoT Automation

Industrial processes are becoming increasingly automated as sensors on machines collect a growing amount of data. Much of this data is time-stamped and can help companies improve their processes, but the large volume of sensor data can become unwieldy if companies don’t manage it properly. Companies need to consider both how quickly they can deploy and update edge applications, and how quickly they can process incoming data.
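
As a rough illustration of what ingesting that kind of time-stamped sensor data looks like, here is a minimal sketch using the InfluxDB 2.x Python client; the URL, token, org, bucket, and measurement names are placeholder assumptions, not values from the article.

```python
from datetime import datetime, timezone

from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection details for a hypothetical InfluxDB instance.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One time-stamped reading from a machine-mounted sensor.
point = (
    Point("machine_vibration")
    .tag("site", "plant-1")
    .tag("machine_id", "press-42")
    .field("rms_velocity_mm_s", 2.7)
    .time(datetime.now(timezone.utc), WritePrecision.NS)
)
write_api.write(bucket="sensors", record=point)
```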

Catering to the Bespoke: How InfluxDB Meets Developers Where They Are

At InfluxData, we pride ourselves on building a platform – InfluxDB – for developers, by developers. It’s not enough to simply “talk the talk.” As an engineering leader, it’s really important to me that InfluxData “walks the walk,” too. This requires a holistic understanding of our users, their familiarity with time series, the environments in which they work, and the problems they’re trying to solve.

InfluxDB Cloud Features New Query Experience

If seeing is believing, then the new UI for the InfluxDB query experience is sure to convert you. We are working on a new query/script editor and want you to try it out. Feel free to share your feedback with us so we can make it even better! Here are just some of the highlights of the new editor.

How to Set Up InfluxDB, Telegraf and Grafana on Docker: Part 2

In Part 1 of this tutorial series, we covered the steps to install InfluxDB 1.7 on Docker for Linux instances. In Part 2, we describe how to install Telegraf as a data-collection interface for InfluxDB 1.7, again on Docker.
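
Once both containers are running, a quick way to confirm that Telegraf is actually delivering metrics is to query InfluxDB 1.7 from the 1.x Python client. This is only a sketch under assumed defaults: InfluxDB listening on localhost:8086 and Telegraf writing to a database named "telegraf" with the stock cpu input enabled.

```python
from influxdb import InfluxDBClient  # 1.x client: pip install influxdb

# Assumed defaults: InfluxDB 1.7 on localhost:8086, Telegraf writing to "telegraf".
client = InfluxDBClient(host="localhost", port=8086, database="telegraf")

print("InfluxDB version:", client.ping())        # returns the server version string
print("Databases:", client.get_list_database())  # "telegraf" appears once data arrives

# Sample a few CPU points collected by Telegraf's default cpu input plugin.
for point in client.query('SELECT * FROM "cpu" LIMIT 3').get_points():
    print(point)
```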

Resource Guide for InfluxDB and AWS

InfluxDB Cloud runs natively on AWS. This is great for users who already rely on AWS because it keeps everything (or at least most things, hopefully!) in one place. It can also reduce data latency if the region you use is geographically close to your data sources. Plus, it’s super easy to get started using InfluxDB on AWS. One of the great things about AWS is that it has a ton of different services and features that allow you to do more with your data.

AWS and InfluxDB - Reflections on re:Invent 2022 Keynote

Amazon re:Invent is a major technology event every year. At this year’s re:Invent, the keynote by AWS CEO Adam Selipsky made a concerted effort to draw connections between technology and some of the key challenges that people around the world, and in some cases beyond the terra firma of Earth, face. While the presentation touched on a wide range of topics, one overarching theme was the intersection of the physical and digital worlds, and the role technology plays in bridging that divide.

Tracing with InfluxDB IOx

Tracing has always been a key use case for time series data. But admittedly, it’s also one that past versions of InfluxDB could not handle as well as we wanted. One of the roadblocks was cardinality. Tracing data is, almost by definition, high-cardinality data, and prior to InfluxDB IOx, high-cardinality data could degrade query performance.
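
To see why trace data blows up cardinality, consider that each span typically carries a unique trace ID; if that ID is indexed as a tag, every trace creates a brand-new series. The short sketch below is a purely illustrative Python model of that effect (the tag and field names are made up), not InfluxDB code.

```python
import uuid

# Model 10,000 spans, each tagged with its (unique) trace_id.
spans = [
    {
        "measurement": "span",
        "tags": {"service": "checkout", "trace_id": str(uuid.uuid4())},
        "fields": {"duration_ms": 12.5},
    }
    for _ in range(10_000)
]

# A series is identified by its measurement plus its full tag set,
# so unique trace_id values translate directly into unique series.
series = {(s["measurement"], tuple(sorted(s["tags"].items()))) for s in spans}
print(len(series))  # ~10,000 series for 10,000 spans
```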

Visualizing Time Series Data with Chart.js and InfluxDB

Time series data is a sequence of data points generated through repeated measurements indexed over time. The data points originate from the same source and track changes at different points in time. Time series data includes data like stock exchange data, monthly inflation data, quarterly gross domestic product (GDP) data, and logs from IoT sensors.
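
As a small, hedged example of what that structure looks like in practice, the Python snippet below builds a tiny monthly series (the numbers are illustrative) and serializes it into the labels/datasets shape that a Chart.js line chart consumes.

```python
import json
from datetime import date

# A time series: values of the same quantity, indexed by time (illustrative numbers).
monthly_inflation = [
    (date(2023, 1, 1), 6.4),
    (date(2023, 2, 1), 6.0),
    (date(2023, 3, 1), 5.0),
]

# Chart.js line charts take parallel arrays: labels for the x-axis, data for the y-axis.
chart_payload = {
    "labels": [d.isoformat() for d, _ in monthly_inflation],
    "datasets": [{"label": "Monthly inflation (%)", "data": [v for _, v in monthly_inflation]}],
}
print(json.dumps(chart_payload, indent=2))
```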

An Introduction to Apache Parquet

A look at what Parquet is, how it works, and some of the companies using its optimization techniques as a critical component in their architecture. As the amount of data being generated and stored for analysis grows at an increasing rate, developers are looking to optimize performance and reduce costs from every angle possible. At the petabyte scale, even marginal gains and optimizations can save companies millions of dollars in hardware costs when it comes to storing and processing their data.
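
For a first hands-on look at what those optimization techniques mean, here is a minimal sketch that writes a small columnar table to Parquet with compression and then inspects the file's metadata; it uses Python and pyarrow, and the column names and values are invented for illustration.

```python
from datetime import datetime, timedelta

import pyarrow as pa
import pyarrow.parquet as pq

# Columnar layout: each column is stored, encoded, and compressed independently.
times = [datetime(2023, 1, 1) + timedelta(minutes=i) for i in range(4)]
table = pa.table({
    "time": times,
    "sensor_id": ["a", "a", "b", "b"],
    "temperature": [21.5, 21.7, 19.2, 19.4],
})

# Write with zstd compression; dictionary encoding is applied by default.
pq.write_table(table, "readings.parquet", compression="zstd")

# Parquet files carry rich metadata (row groups, per-column statistics, codecs)
# that query engines use to skip data they don't need.
meta = pq.ParquetFile("readings.parquet").metadata
print(meta.num_rows, meta.num_row_groups, meta.row_group(0).column(0).compression)
```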