
5 AI search trends impacting developers in 2024

After an incredibly fast-moving 2023, what does the future hold for AI and search? Conversational generative AI leapt into the public consciousness over the past year, and organizations scrambled to define their strategy for capitalizing on the trend. AI-boosted relevance is reshaping the way users experience search — and elevating their expectations for the quality of the interaction.

AWS Cost and Usage Dashboards Operations Solution (CUDOS): A Deep Dive

CUDOS is one of six specialized dashboards in the AWS Cloud Intelligence Dashboards framework, which is focused on providing comprehensive usage and cost insights for AWS resources. It is a crucial tool that delivers deep insights you can use to optimize your AWS infrastructure.

Enhancing Log Analytics in Loki with Cribl Stream

First, when I mention Loki, I’m not talking about one of my favorite TV shows to binge-watch, or the lead character played by Tom Hiddleston, who has arguably become one of my favorite characters in the Marvel universe. I’m talking about Loki, a highly available, cost-effective log aggregation system inspired by Prometheus. While Prometheus focuses on metrics, Loki focuses on collecting logs.
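The design idea Loki borrows from Prometheus is to index only a small set of metadata labels per log stream rather than the full log text. A minimal Python sketch of that model (the class and method names here are illustrative, not Loki's actual API):

```python
from collections import defaultdict

class ToyLoki:
    """Toy log store that, like Loki, indexes only label sets, not log text."""

    def __init__(self):
        # Each distinct label set identifies one log stream.
        self.streams = defaultdict(list)

    def push(self, labels: dict, line: str):
        # Freeze the label set so it can serve as a dictionary key.
        self.streams[frozenset(labels.items())].append(line)

    def query(self, selector: dict):
        # Return lines from every stream whose labels contain the selector,
        # similar in spirit to LogQL selectors like {app="api", env="prod"}.
        want = set(selector.items())
        return [line
                for labels, lines in self.streams.items()
                if want <= labels
                for line in lines]

store = ToyLoki()
store.push({"app": "api", "env": "prod"}, "GET /users 200")
store.push({"app": "api", "env": "dev"}, "GET /users 500")
store.push({"app": "web", "env": "prod"}, "render ok")
print(store.query({"app": "api", "env": "prod"}))  # ['GET /users 200']
```

Because only the labels are indexed, ingestion stays cheap regardless of how verbose the log lines themselves are.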

Anodot Cloud Cost Update: Enhancing Anomaly Detection and Budgeting

February 20, 2024

We’re excited to announce the latest enhancements to Anodot’s Cloud Cost platform, bringing cutting-edge improvements in anomaly detection and budgeting capabilities. Our commitment to innovation continues to shape the way businesses approach data analysis and financial planning.

Avoiding the Data Roach Motel with Open Source

It's your data. You should be able to do whatever you want with it. However, vendor lock-in can trap your data in a single solution, making it extremely difficult to switch to something that better meets your needs. When your data goes in, but doesn't come out—that's a data roach motel. Open source technologies, and solutions built with open source tools, enable organizations to take control of their data, giving them the freedom to put it into and take it out of whatever databases or solutions they see fit.

How Time Series Databases and Data Lakes Work Together

In the fast-paced world of software engineering, efficient data management is a cornerstone of success. Imagine you’re working with streams of data that require rapid analysis but must also be stored for long-term insights. This is where the powerful duo of time series databases (TSDBs) and data lakes comes in.
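One common way the two work together is tiering: recent points stay in the TSDB at full resolution, while older points are downsampled and flushed to cheap data-lake storage. A rough sketch of that idea, with JSON files standing in for Parquet/object storage (the function and file names are assumptions for illustration):

```python
import json
import os
import tempfile
from statistics import mean

def tier_data(points, cutoff, lake_dir, bucket=3):
    """Split a time series into a hot tier (recent, full resolution) and a
    cold tier: points older than `cutoff` are downsampled by averaging
    `bucket`-sized groups and written to the data lake as JSON."""
    hot = [(t, v) for t, v in points if t >= cutoff]
    cold = sorted((t, v) for t, v in points if t < cutoff)
    # Each rollup keeps the first timestamp of its bucket plus the mean value.
    rollup = [(chunk[0][0], round(mean(v for _, v in chunk), 2))
              for chunk in (cold[i:i + bucket] for i in range(0, len(cold), bucket))]
    path = os.path.join(lake_dir, f"rollup_before_{cutoff}.json")
    with open(path, "w") as f:
        json.dump(rollup, f)
    return hot, path

points = [(t, float(t % 5)) for t in range(10)]  # (timestamp, value) pairs
with tempfile.TemporaryDirectory() as d:
    hot, path = tier_data(points, cutoff=6, lake_dir=d)
    print(len(hot), json.load(open(path)))
```

Queries over recent data hit the hot tier; long-range analytics scan the compact rollups in the lake.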

How to speed up MySQL and PostgreSQL queries for FREE

How can I speed up a SQL query? This video showcases how to optimize a SQL query on MySQL or PostgreSQL for free using AI and a tool called EverSQL by Aiven. In the example shown, query performance on a MySQL database went from 20 seconds to 0.5 seconds simply by pasting the SQL and some additional metadata into the EverSQL by Aiven UI and applying the suggested indexes and SQL rewrites.
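The core of most such wins is an index that turns a full-table scan into an index search. A small self-contained demonstration of that effect, using SQLite as a stand-in for MySQL/PostgreSQL (the table and index names are made up for the example):

```python
import sqlite3

# SQLite stands in for MySQL/PostgreSQL here; the principle is the same:
# an index turns a full-table SCAN into an index SEARCH.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(q)  # full table scan: no usable index yet
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(q)   # now an index search on idx_orders_customer
print(before)
print(after)
```

Running the same pattern against MySQL (`EXPLAIN`) or PostgreSQL (`EXPLAIN ANALYZE`) shows the equivalent plan change on those engines.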

Using Time Series Data for Infrastructure Monitoring: Challenges and Advantages

Monitoring the performance and health of infrastructure is crucial for ensuring smooth operations. From data centers and cloud environments to networks and IoT devices, infrastructure monitoring plays a vital role in identifying issues, optimizing resource utilization, and maintaining high availability. However, traditional monitoring approaches often struggle to handle the volume and velocity of data generated by modern infrastructures. This is where time series databases, like InfluxDB, come into play.
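A taste of the kind of check a monitoring pipeline runs over time series data: flag points that deviate sharply from a rolling baseline. A minimal sketch (the function and thresholds are illustrative, not InfluxDB features):

```python
from collections import deque
from statistics import mean, stdev

def anomalies(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from the
    rolling mean of the previous `window` points."""
    recent = deque(maxlen=window)
    flagged = []
    for t, value in series:
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                flagged.append((t, value))
        recent.append(value)
    return flagged

# Steady CPU readings with one spike at t=7
cpu = list(enumerate([50, 51, 49, 50, 52, 51, 50, 95, 50, 51]))
print(anomalies(cpu))  # [(7, 95)]
```

Real systems run variants of this continuously over millions of series, which is exactly the volume-and-velocity problem a TSDB is built for.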

Aiven workshop: Learn Apache Kafka with Python

What's in the Workshop Recipe? Apache Kafka is the industry de facto standard for data streaming: an open-source, scalable, highly available, and reliable solution for moving data across a company's departments, technologies, and microservices. In this workshop you'll learn the basic components of Apache Kafka and how to get started with data streaming using Python. With the help of some prebuilt Jupyter notebooks, we'll dive deep into how to produce, consume, and have concurrent applications reading from the same source, empowering multiple use cases with the same streaming data.
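The "concurrent applications reading from the same source" part rests on Kafka's consumer-group model: the topic is an append-only log, and each group tracks its own read offset. A pure-Python toy model of that concept (this is a conceptual sketch, not the Kafka client API, which the workshop covers with a real broker):

```python
from collections import defaultdict

class ToyTopic:
    """In-memory model of a Kafka topic: an append-only log where each
    consumer *group* tracks its own offset, so independent applications
    can all read the same stream without interfering."""

    def __init__(self):
        self.log = []
        self.offsets = defaultdict(int)   # group id -> next offset to read

    def produce(self, message):
        self.log.append(message)

    def consume(self, group, max_messages=10):
        start = self.offsets[group]
        batch = self.log[start:start + max_messages]
        self.offsets[group] += len(batch)  # commit the new offset
        return batch

topic = ToyTopic()
for i in range(3):
    topic.produce({"order_id": i})

print(topic.consume("billing"))     # all 3 messages
print(topic.consume("analytics"))   # same 3 messages, independent offset
print(topic.consume("billing"))    # [] -- billing is caught up
```

Because offsets are per group, adding a new consuming application never disturbs the ones already reading the stream.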