
AI

The Future of Serverless is AI + WebAssembly by Matt Butcher - Navigate Europe 23

Join Matt Butcher as he explores the future of serverless computing, unveiling the power of WebAssembly and AI inferencing on Fermyon's innovative platform. Learn about the evolution from virtual machines and containers to serverless functions, understand serverless computing from a developer's perspective, and discover how Fermyon is making AI inferencing more accessible and efficient.

The Unplanned Show E20: LLM Observability w/Charity Majors & James Governor

Large language models (LLMs) are foundational to generative AI capabilities, but they present new challenges from an observability perspective. Hear from observability thought leader and Honeycomb CTO/co-founder Charity Majors and developer-focused analyst and Redmonk co-founder James Governor in this discussion of LLM observability as more organizations build business-critical features on LLMs.

LM Co-Pilot: Your AI Co-Pilot for the Magical Streamlining of IT and Cloud Operations

LogicMonitor’s Generative Intelligence Solution for IT Teams

Cutting-edge generative technologies have revolutionized our industry, paving the way for fresh and innovative approaches to delivering interactive and actionable experiences. At LogicMonitor, we firmly believe in leveraging these generative techniques across our platform, offering a uniquely dynamic support system for various aspects of our end-user experience.

AI Explainer: What Are Reinforcement Learning 'Rewards'?

In a previous blog post, a glossary of terms related to artificial intelligence, I included a brief definition of "reinforcement learning." I expect that definition would prompt many to ask, "What rewards can you give a machine learning agent?" A gold star? Praise? No, the short answer is: numerical values. In reinforcement learning, rewards are crucial for training agents to make decisions that maximize their performance in a given environment.
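To make "numerical rewards" concrete, here is a minimal sketch (not from the post itself): a toy multi-armed bandit agent whose only feedback is a number after each action, which it uses to learn which action pays off most.

```python
import random

def train_bandit(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Toy reinforcement learning loop: the only feedback the agent
    receives is a numerical reward (1.0 or 0.0) after each action."""
    rng = random.Random(seed)
    n = len(reward_probs)
    values = [0.0] * n   # running estimate of each action's value
    counts = [0] * n
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: values[a])
        # The environment answers with a number -- that is the "reward".
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

# The agent learns that the third action pays off most often.
estimates = train_bandit([0.2, 0.5, 0.8])
```

No gold stars involved: the agent simply shifts toward whichever action has accumulated the highest average numerical reward.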

Elasticsearch and LangChain collaborate on production-ready RAG templates

For the past few months, we’ve been working closely with the LangChain team as they made progress on launching LangServe and LangChain Templates. LangChain Templates is a set of reference architectures for building production-ready generative AI applications.

Build and evaluate LLM-powered apps with LangChain and CircleCI

Generative AI has already shown its huge potential, but there are many applications that out-of-the-box large language model (LLM) solutions aren’t suitable for. These include enterprise-level applications like summarizing your own internal notes and answering questions about internal data and documents, as well as applications like running queries on your own data to equip the AI with known facts (reducing “hallucinations” and improving outcomes).
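The idea of "equipping the AI with known facts" can be sketched in a few lines. This is a hypothetical, framework-free illustration (the document and function names are invented; production apps would use embedding-based retrieval via a library such as LangChain): retrieve the relevant internal document and prepend it to the prompt so the model answers from supplied context rather than guessing.

```python
# Toy internal "knowledge base" (hypothetical contents).
DOCUMENTS = {
    "vacation-policy": "Employees accrue 1.5 vacation days per month.",
    "oncall-rotation": "The on-call rotation changes every Monday at 09:00 UTC.",
}

def retrieve(question, documents, top_k=1):
    """Naive keyword-overlap retrieval; real systems use vector embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_prompt(question, documents):
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("How many vacation days do employees accrue?", DOCUMENTS)
```

Because the prompt carries the known fact, the model has less room to "hallucinate" an answer, which is the outcome the blurb describes.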

Quantifying the value of AI-powered observability

Organizations saw a 243% ROI and $1.2 million in savings over three years. In today’s complex and distributed IT environments, traditional monitoring falls short. Legacy tools often provide limited visibility across an organization’s tech stack, often at a high cost, resulting in selective monitoring. Many companies are therefore realizing the need for true, affordable end-to-end observability, which eliminates blind spots and improves visibility across their ecosystem.

AI Explainer: What Are Generative Adversarial Networks?

I previously posted a blog that was a glossary of terms related to artificial intelligence. It included a brief definition of "generative AI." I expect that, for someone learning about AI, it's frustrating to read definitions of terms that rely on other terms you may not understand. In this case, "generative adversarial networks" (GANs) is probably a new term for many. This post explains what GANs are for that reason, and also because they're super cool.