
AI

EMA explores Elastic AI Assistant for Security

Spoiler alert: it’s great! Elastic Security has been making waves among busy security analysts everywhere with the launch of Elastic AI Assistant. Whether it’s synthesizing alert details and suggesting next steps, or the ability, added in Elastic 8.11, to generate ES|QL queries from natural language, there’s a lot to love about Elastic AI Assistant for security efforts.

AI at Splunk: Trustworthy Principles for Digital Resilience

There’s no doubt AI will radically reshape the way we live, work, and interact. It will empower new ways to solve business challenges and deliver customer value, but such a widespread impact requires a holistic approach. Building AI responsibly is one thing, but embedding trust into every aspect of our AI strategy is another entirely – and that’s what Splunk sets out to do.

Lessons learned from building our first AI product

Since the advent of ChatGPT, companies have been racing to build AI features into their products. Previously, if you wanted AI features you needed to hire a team of specialists to build machine learning models in-house. But now that OpenAI’s models are an API call away, the investment required to build shiny AI has never been lower. We were one of those companies. Here’s our journey to building our first AI feature, and some practical advice if you’ll be doing the same.

Make AI Writing Undetectable with These Helpful Tips

Artificial Intelligence (AI) has revolutionized the writing field, proving itself a capable author of anything from news articles to short stories. However, a common challenge is making AI-generated text sound as human and natural as possible. In particular, AI-generated text can often be identified by its lack of personal touch, and writers need content that effectively engages and entices readers. So how can one make AI writing undetectable and incorporate it seamlessly into their work? Let's explore some helpful tips.

Unleashing the power of AI and automation for effective Cloud Cost Optimization in 2024

In today's dynamic business environment, cloud computing has emerged as a fundamental driver of innovation and scalability. But as companies increasingly rely on the cloud for their business initiatives, cloud cost optimization remains a significant hurdle.

Supercharged with AI

One of the most painful parts of incident management is keeping on top of the many things that happen when you’re right in the middle of an incident. From figuring out and communicating what’s happening, to ensuring you learn from previous incidents, and even capturing the right actions – incidents are hard, but they don’t need to be this hard.

Challenges & limitations of LLM fine-tuning

Large Language Models (LLMs) like GPT-3 have revolutionized the field of artificial intelligence, offering unprecedented capabilities in natural language processing. Fine-tuning these models to specific tasks or datasets can enhance their performance. However, this process presents unique challenges and limitations that must be addressed. This article explores the intricacies of LLM fine-tuning, shedding light on the obstacles and constraints faced in this advanced AI domain.

What's in store for AI in 2024 with Patrick Debois

In this episode, Rob is joined by Patrick Debois, a seasoned industry expert and DevOps pioneer. Patrick shares his personal odyssey within the realm of DevOps, reflecting on the current state of the industry compared to his initial expectations. The conversation delves into the convergence of business analytics and technical analytics, exploring innovative approaches developers are adopting to integrate generative AI into their products.

Prompt engineering: A guide to improving LLM performance

Prompt engineering is the practice of crafting input queries or instructions to elicit more accurate and desirable outputs from large language models (LLMs). It is a crucial skill for working with artificial intelligence (AI) applications, helping developers achieve better results from language models. Prompt engineering involves strategically shaping input prompts, exploring the nuances of language, and experimenting with diverse prompts to fine-tune model output and address potential biases.
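The practice described here, shaping the same task into different prompt styles and comparing the results, can be sketched as a handful of prompt templates. Everything below (the task text, example pairs, and function names) is an illustrative assumption, not tied to any specific library or to the guide itself:

```python
# A minimal sketch of prompt engineering: several prompt variants for one
# task, so their outputs from an LLM can be compared side by side.
# Task text and example pairs are hypothetical illustrations.

def zero_shot(task: str, text: str) -> str:
    """Plain instruction with no worked examples."""
    return f"{task}\n\nInput: {text}\nOutput:"

def few_shot(task: str, examples: list[tuple[str, str]], text: str) -> str:
    """Prepend worked examples to steer the model's format and tone."""
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{task}\n\n{shots}\n\nInput: {text}\nOutput:"

def chain_of_thought(task: str, text: str) -> str:
    """Ask the model to reason step by step before answering."""
    return (f"{task}\nThink through the problem step by step, "
            f"then give a final answer.\n\nInput: {text}\nOutput:")

task = "Classify the sentiment of the input as positive or negative."
examples = [("Great service!", "positive"), ("Never again.", "negative")]
query = "The dashboard is a joy to use."

prompts = {
    "zero-shot": zero_shot(task, query),
    "few-shot": few_shot(task, examples, query),
    "chain-of-thought": chain_of_thought(task, query),
}

for name, prompt in prompts.items():
    print(f"--- {name} ---\n{prompt}\n")
```

In practice each variant would be sent to the model and the outputs scored against a small evaluation set; the template that performs best (or exhibits the least bias) wins.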