
AI

Paving the way for modern search workflows and generative AI apps

Elastic’s innovative investments to support an open ecosystem and a simpler developer experience

In this blog, we want to share the investments that Elastic® is making to simplify your experience as you build AI applications. We know that developers have to stay nimble in today’s fast-evolving AI environment. Yet a handful of common challenges make building generative AI applications needlessly rigid and complicated.

Generative AI explained

When OpenAI released ChatGPT on November 30, 2022, no one could have anticipated that the following six months would usher in a dizzying transformation for human society with the arrival of a new generation of artificial intelligence. Since the emergence of deep learning in the early 2010s, artificial intelligence has been in its third wave of development, and the introduction of the Transformer architecture in 2017 propelled deep learning into the era of large models.

How Generative AI Makes Observability Accessible for Everyone

We are pleased to share a sneak peek of Query Assistant, our latest innovation that bridges the world of declarative querying with generative AI. Leveraging large language models (LLMs), Coralogix’s Query Assistant translates your natural-language requests for insights into data queries, delivering deep visibility into all your data for everyone in your organization.
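
The excerpt doesn’t show Query Assistant’s internals, but the general pattern, prompting an LLM with the target query syntax and the user’s question, can be sketched in a few lines. Everything below (the OpenAI client, the model name, the Lucene-style output format) is an illustrative assumption, not Coralogix’s implementation.

```python
# Illustrative sketch only, not Coralogix's implementation. Assumes the OpenAI
# Python client is installed and OPENAI_API_KEY is set; the model name, prompt,
# and Lucene-style output format are hypothetical choices.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate natural-language questions about log data into Lucene-style "
    "query strings. Return only the query, with no explanation."
)

def natural_language_to_query(question: str) -> str:
    """Ask the model to turn a plain-English question into a data query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        temperature=0,        # keep the translation as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

# Example: natural_language_to_query("5xx errors from the checkout service")
# might return something like: status:[500 TO 599] AND service:"checkout"
```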

Build Operational Resilience with Generative AI and Automation

For modern enterprises aiming to innovate faster, gain efficiency, and mitigate the risk of failure, operational resilience has become a key competitive differentiator. But growing complexity, noisy systems, and siloed infrastructure have created fragility in today’s IT operations, making the task of building resilient operations increasingly challenging.

Automate insights-rich incident summaries with generative AI

Does this sound familiar? The incident has just been resolved and management is putting on a lot of pressure. They want to understand what happened and why. Now. They want to make sure customers and internal stakeholders get updated about what happened and how it was resolved. ASAP. But putting together all the needed information about the why, how, when, and who can take weeks. Still, people are calling and writing. Nonstop.
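
The excerpt doesn’t describe the vendor’s pipeline, but the core idea, feeding the incident record to an LLM and asking for a stakeholder-ready summary, can be sketched roughly as follows. The incident data, prompt wording, and model below are hypothetical placeholders.

```python
# Minimal sketch of the idea, not any vendor's product. The incident record,
# prompt wording, and model are hypothetical; in practice the data would come
# from your incident-management and monitoring tooling.
import json
from openai import OpenAI

client = OpenAI()

incident = {
    "title": "Checkout latency spike",
    "started": "2024-03-14T09:12Z",
    "resolved": "2024-03-14T10:05Z",
    "impact": "Roughly 8% of checkout requests timed out.",
    "timeline": [
        "09:12 alert fired: p99 latency > 5s on checkout-api",
        "09:20 on-call paged; rollback of release 2024.03.14-1 started",
        "10:05 latency back to baseline; incident resolved",
    ],
}

prompt = (
    "Write a short incident summary for customers and internal stakeholders. "
    "Cover what happened, why, how it was resolved, and the impact.\n\n"
    "Incident record:\n" + json.dumps(incident, indent=2)
)

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(summary)
```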

Using Honeycomb for LLM Application Development

Ever since we launched Query Assistant last June, we’ve learned a lot about working with—and improving—Large Language Models (LLMs) in production with Honeycomb. Today, we’re sharing those techniques so that you can use them to achieve better outputs from your own LLM applications. The techniques in this blog represent a new Honeycomb use case, and you can start using them today, for free, with Honeycomb.
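
The specific techniques are left to the full post, but the underlying pattern, wrapping each LLM call in a trace span and recording the prompt, model, and token usage as attributes so a tool like Honeycomb can analyze them, might look roughly like this. The attribute names and model are assumptions, and configuring an OTLP exporter that points at Honeycomb is assumed to happen elsewhere in the application.

```python
# General sketch of instrumenting an LLM call with OpenTelemetry so the spans
# can be sent to an observability backend such as Honeycomb. Attribute names
# and the model are assumptions; OTLP exporter setup is assumed and omitted.
from openai import OpenAI
from opentelemetry import trace

client = OpenAI()
tracer = trace.get_tracer("llm-app")

def answer(question: str) -> str:
    """Call the model inside a span that records prompt, model, and usage."""
    with tracer.start_as_current_span("llm.chat_completion") as span:
        span.set_attribute("llm.model", "gpt-4o-mini")
        span.set_attribute("llm.prompt", question)
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": question}],
        )
        text = response.choices[0].message.content
        span.set_attribute("llm.response_length", len(text))
        if response.usage is not None:
            span.set_attribute("llm.total_tokens", response.usage.total_tokens)
        return text
```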

Sponsored Post

Building Exceptional Products: Almaden's Approach

In today’s dynamic world of technology and innovation, building products that resonate with customers and stand the test of time is no easy feat. At Almaden, we’ve cultivated a unique approach to product development, Customer-Centric Product Design, that prioritizes the customer’s perspective over mere technological prowess. In this blog post, we’ll delve into the core principles that drive our product development process, emphasizing the importance of understanding objectives, agile methodologies, and the modern tools we use to bring our ideas to life.

AI Explainer: The Dirty Little Secret About ChatGPT

ChatGPT, developed by OpenAI and launched in November 2022, isn’t the only large language model that has received lots of attention lately, but it’s by far the most widely known. A previous blog post that listed a glossary of AI terms included a brief definition of it. You may have read over the past year that GPT-4 (the model behind the paid version of ChatGPT) has been able to pass many difficult exams. Here are just a few.