
AI

Lessons learned from building our first AI product

Since the advent of ChatGPT, companies have been racing to build AI features into their products. Previously, if you wanted AI features, you needed to hire a team of specialists to build machine learning models in-house. But now that OpenAI's models are an API call away, the investment required to build shiny AI has never been lower. We were one of those companies. Here's our journey building our first AI feature, and some practical advice if you're planning to do the same.
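To illustrate how low that barrier has become, a request to a chat-completion-style API is little more than a small JSON payload. This sketch builds such a payload with the standard library only and does not send it; the model name and system message are placeholders, so substitute whatever your provider actually exposes.

```python
import json


def build_chat_request(user_message, model="gpt-4o-mini", temperature=0.2):
    """Assemble a chat-completion request payload (not sent anywhere).

    The model name is a placeholder; most OpenAI-compatible APIs accept
    this general shape, but check your provider's reference docs.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }


payload = build_chat_request("Summarize this incident report in two sentences.")
print(json.dumps(payload, indent=2))
```

From here, "building an AI feature" is largely a matter of POSTing that payload to the provider's endpoint and handling the response, which is exactly why so many companies jumped in.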

Make AI Writing Undetectable with These Helpful Tips

Artificial Intelligence (AI) has revolutionized the writing field, proving itself a capable author of anything from news articles to short stories. However, a common challenge is making AI-generated text sound as human and natural as possible: such text can often be identified by its lack of personal touch, so writers need to rework it into content that effectively engages and entices readers. So how can you make AI writing undetectable and incorporate it seamlessly into your work? Let's explore some helpful tips.

Unleashing the power of AI and automation for effective Cloud Cost Optimization in 2024

In the current dynamic business environment, cloud computing has emerged as a fundamental driver of innovation and scalability. But as companies increasingly rely on the cloud for their business initiatives, achieving cloud cost optimization remains a significant hurdle.

Supercharged with AI

One of the most painful parts of incident management is keeping on top of the many things that happen when you’re right in the middle of an incident. From figuring out and communicating what’s happening, to ensuring you learn from previous incidents, and even capturing the right actions – incidents are hard, but they don’t need to be this hard.

Challenges & limitations of LLM fine-tuning

Large Language Models (LLMs) like GPT-3 have revolutionized the field of artificial intelligence, offering unprecedented capabilities in natural language processing. Fine-tuning these models to specific tasks or datasets can enhance their performance. However, this process presents unique challenges and limitations that must be addressed. This article explores the intricacies of LLM fine-tuning, shedding light on the obstacles and constraints faced in this advanced AI domain.
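One practical constraint the article alludes to is data preparation: fine-tuning starts with training examples in a strict format. The sketch below assembles examples in the JSONL chat format popularized by OpenAI's fine-tuning API (one JSON object per line, each holding a short conversation); the schema is one common convention, not universal, so check your provider's documentation.

```python
import json

# Each fine-tuning example is a short conversation serialized as one
# JSON object per line (JSONL). The example content here is invented
# for illustration.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer ITSM questions concisely."},
            {"role": "user", "content": "What is an SLA?"},
            {
                "role": "assistant",
                "content": "A service-level agreement: a documented commitment "
                           "between a provider and a customer.",
            },
        ]
    },
]

# Serialize to JSONL, the upload format fine-tuning endpoints typically expect.
jsonl = "\n".join(json.dumps(example) for example in examples)
print(jsonl.splitlines()[0][:60])
```

Getting hundreds or thousands of such examples right (consistent tone, no contradictions, no leaked sensitive data) is itself one of the main challenges the article discusses.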

Emerging AI use cases in ITSM: Knowledge management, chatbots and self-service

AI experts Louis Columbus and Susan Fung explore AI use cases in IT service management, highlighting how a symbiotic relationship between AI and human intelligence amplifies knowledge management capabilities and enhances user experiences by providing direct answers that synthesize complex information.

Navigating AI in the SOC

With notable advancements in Artificial Intelligence (AI) within cybersecurity, the prospect of a fully automated Security Operations Center (SOC) driven by AI is no longer a distant notion. This paradigm shift not only promises accelerated incident response times and a limited blast radius but also transforms the perception of cybersecurity from a deterrent to that of an innovation enabler.

What's in store for AI in 2024 with Patrick Debois

In this episode, Rob is joined by Patrick Debois, a seasoned industry expert and DevOps pioneer. Patrick shares his personal odyssey within the realm of DevOps, reflecting on the current state of the industry compared to his initial expectations. The conversation delves into the convergence of business analytics and technical analytics, exploring innovative approaches developers are adopting to integrate generative AI into their products.

Prompt engineering: A guide to improving LLM performance

Prompt engineering is the practice of crafting input queries or instructions to elicit more accurate and desirable outputs from large language models (LLMs). It is a crucial skill for working with artificial intelligence (AI) applications, helping developers achieve better results from language models. Prompt engineering involves strategically shaping input prompts, exploring the nuances of language, and experimenting with diverse prompts to fine-tune model output and address potential biases.
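The "strategically shaping input prompts" idea can be made concrete with a small template: a role statement, context, optional few-shot examples, and explicit constraints, assembled into one prompt string. This is a generic illustration of the pattern, not a prescription; effective prompts are model- and task-specific, so treat it as a starting point to iterate on.

```python
def engineer_prompt(task, context, examples=None, constraints=None):
    """Compose a structured prompt from common building blocks:
    role, context, few-shot examples, and constraints."""
    parts = ["You are a careful technical writer."]  # role statement
    parts.append(f"Context:\n{context}")
    # Few-shot examples steer format and tone by demonstration.
    for example_input, example_output in (examples or []):
        parts.append(f"Example input: {example_input}\n"
                     f"Example output: {example_output}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)


prompt = engineer_prompt(
    "Summarize the outage report in one sentence.",
    context="Database failover at 02:14 UTC caused 9 minutes of downtime.",
    constraints=["plain language", "no blame"],
)
print(prompt)
```

Experimenting with which of these blocks to include, and in what order, is exactly the iterative practice the article describes.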