
Latest News

InvGate Service Management: 6 AI Superpowers to Augment Service Desk Agents' Capabilities

At InvGate, we are committed to harnessing the power of AI to redefine Service Management practices. Since we launched AI Hub, featuring the first wave of AI-powered capabilities within InvGate Service Management and Asset Management, over 50% of our clients have adopted them and saved tens of thousands of hours. Our most impactful solutions have focused on enhancing decision-making processes, enabling teams to work smarter and achieve better outcomes.

AI Strategies for Software Engineering Career Growth

Space.com sums up the Big Bang as our universe starting “with an infinitely hot and dense single point that inflated and stretched—first at unimaginable speeds, and then at a more measurable rate to the still-expanding cosmos that we know today,” and that’s kind of how I like to think about November 2022 for junior developers.

How MSPs Can Leverage AI to Increase Efficiencies and Increase Margins

The Managed Service Provider (MSP) industry is highly competitive. The growing demand for IT management and support has led to a proliferation of MSPs, ranging from small shops to large, established providers. This saturation intensifies pressure on profit margins and heightens expectations for delivering faster, more efficient services. With so many MSPs competing for business, companies must find ways to differentiate themselves to attract and retain clients. At ScienceLogic, we know that AI holds the key to success.

What Are SLMs? Small Language Models, Explained

Large language models (LLMs) are AI models with billions of parameters, trained on vast amounts of data. These models are typically flexible and generalized. The volume and distribution of training data determine what kind of knowledge a large language model can demonstrate. By training these large models on a variety of information from all knowledge domains, they can perform reasonably well across a wide range of tasks.
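To make the scale difference concrete, here is a minimal back-of-the-envelope sketch (not from the article) estimating the memory needed just to hold a model's weights. The parameter counts and the fp16 assumption are illustrative; real deployments also need memory for activations and the KV cache.

```python
def model_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight-only memory footprint in GB.

    bytes_per_param=2 assumes fp16/bf16 weights; use 4 for fp32,
    or 1 for 8-bit quantized weights.
    """
    return num_params * bytes_per_param / 1e9

# A 7B-parameter model in fp16 needs roughly 14 GB for weights alone,
# while a 70B-parameter model needs roughly 140 GB -- one reason small
# language models (SLMs) are attractive for constrained hardware.
print(model_memory_gb(7e9))   # 14.0
print(model_memory_gb(70e9))  # 140.0
```

The same function shows why quantization matters: dropping to 8-bit weights (`bytes_per_param=1`) halves the fp16 footprint.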

The Next Generation of AI-Powered Observability

AI is changing our world, and its impact on observability is no different. This article discusses some of the components of a good observability platform, how AI is well-positioned to revolutionize observability, and how Lumigo Copilot Beta will provide substantial value to customers and partners.

From Gartner IOCS 2024 Conference: AI, Observability Data, and Telemetry Pipelines

Last week, I attended one of the last conferences of the year with team Mezmo: the Gartner IT Infrastructure, Operations & Cloud Strategies Conference in Las Vegas. Not surprisingly, there were over 20 sessions covering observability and how it is becoming increasingly critical in today's complex, distributed computing environments. Many sessions, including all of the keynotes, also addressed the advent and impact of AI on IT operations and observability.

How good is GitHub Copilot at generating Playwright code?

People keep asking us here at Checkly if and how AI can help create solid and maintainable Playwright tests. To answer these questions, we started by evaluating ChatGPT and Claude, and concluded that AI tools have the potential to help with test generation, but that "normal AI consumer tools" aren't code-focused enough: getting high-quality results requires prompts too complex to be a maintainable solution.

What is RAG?

In a 2020 paper, Patrick Lewis and his research team introduced the term RAG, or retrieval-augmented generation. This technique enhances generative AI models by drawing on external knowledge sources such as documents and extensive databases. RAG addresses a gap in traditional Large Language Models (LLMs): whereas traditional models rely on the static knowledge captured at training time, RAG incorporates current, external information that serves as a reliable source of truth for LLMs.
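The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration, not the method from the Lewis et al. paper: it uses a naive bag-of-words cosine similarity in place of a learned dense retriever, a tiny hard-coded corpus as the "external knowledge source," and simply assembles the retrieved context into a prompt rather than calling a real LLM.

```python
import math
import re
from collections import Counter

# Hypothetical stand-in for an external knowledge source.
DOCS = [
    "RAG retrieves relevant documents and adds them to the model prompt.",
    "Playwright is a framework for browser automation and testing.",
    "Observability platforms collect logs, metrics, and traces.",
]

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a real system would use dense embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user's question with retrieved context before generation."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG work with documents?", DOCS))
```

In a production pipeline, `retrieve` would query a vector database of embedded document chunks, and the assembled prompt would be sent to the generative model, grounding its answer in the retrieved text.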