
How to Use Generative AI for Knowledge Management

In the blog “How Generative AI Can Benefit Knowledge Management,” we looked at how AI can benefit knowledge management by enhancing content quality, automating content creation, and enabling more engaging content. But making generative AI part of the knowledge management framework introduces concerns about accuracy, data bias, privacy, and security. Now it’s time to look at how to make the two work well together...

The Unplanned Show, Episode 3: LLMs and Incident Response

A software engineer, a data scientist, and a product manager walk into a generative AI project… Using technology that didn’t exist a year ago, they identify a customer pain point they might be able to solve, draw on teammates’ experience building AI features, and test how to feed inputs and constrain outputs into something useful. Hear the full conversation here.

Relativity uses Elasticsearch and Azure OpenAI to build futuristic search experiences, today

Elasticsearch® has been used by developers to build search experiences for over a decade. At Microsoft Build this year, we announced the launch of Elasticsearch Relevance Engine™ — a set of tools that enables developers to build AI-powered search applications. With generative AI, large language models (LLMs), and vector search capabilities gaining mindshare, we are delighted to expand our range of tools and enable our customers to build the next generation of search apps.

Cyberattack Prevention with AI

Cyberattack prevention involves proactive steps organizations take to protect their digital assets, networks, and systems from potential cyber threats. Preventive measures, such as a combination of best practices, policies, and technologies, are employed to identify and mitigate security breaches before they can cause significant damage.

Revolutionizing Homework with Technological Innovations: The Future of Learning

Hey there! Are you tired of the same old homework routine? Well, get ready to be blown away because the future of learning is here, and it's all about revolutionizing homework with technological innovations! Imagine a world where assignments are interactive, personalized, and captivating.

Open Source MLOps on AWS

With the rise of generative AI, enterprises are growing their AI budgets and looking for options to quickly set up infrastructure and run the entire machine learning cycle. Cloud providers like AWS are often preferred for kick-starting AI/ML projects because they offer the computing power to experiment without long-term commitments. Starting in the cloud removes the burden of provisioning compute, reducing start-up time and cost and allowing teams to iterate more quickly.

The generative AI societal shift

Once upon a time, not so long ago, the world was a different place. The idea of a "smartphone" was still a novelty, and the mobile phone was primarily a tool for making calls and, perhaps, sending the occasional text message. Yes, we had "smart" phones, but they were simpler, mostly geared toward business users and mostly used for, well, phone stuff. Web browsing? It was there, but light, not something you'd do for hours.

Top Trends in DevOps - ChatGPT

The world of DevOps is constantly evolving and adapting to the needs of the software development industry. With the increasing demand for faster and more efficient software delivery, organizations are turning to modern technologies and practices to help them meet these challenges. In a series of articles on the Kublr blog, we will take a look at some of today’s top DevOps trends.

How to observe your TensorFlow Serving instances with Grafana Cloud

The world of AI and machine learning has evolved at an accelerated pace these past few years, and the advent of ChatGPT, DALL-E, and Stable Diffusion has brought a lot of additional attention to the topic. With this in mind, Grafana Labs has prepared an integration for monitoring one of the most widely used machine learning model servers: TensorFlow Serving. TensorFlow Serving is an open source, flexible serving system built to support the use of machine learning models at scale.
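As a rough sketch of what this kind of setup involves (the exact steps, ports, and model names below are assumptions, not details from the post): TensorFlow Serving can export Prometheus-format metrics when started with a monitoring config, and a Prometheus-compatible scraper can then forward them to Grafana Cloud.

```shell
# 1. Write the monitoring config TensorFlow Serving expects
#    (enables the Prometheus metrics endpoint).
cat > monitoring.config <<'EOF'
prometheus_config {
  enable: true
  path: "/monitoring/prometheus/metrics"
}
EOF

# 2. Launch the model server with monitoring enabled
#    (shown as a comment; model name and paths are hypothetical):
#    tensorflow_model_server --rest_api_port=8501 \
#      --model_name=my_model --model_base_path=/models/my_model \
#      --monitoring_config_file=monitoring.config

# 3. Point a Prometheus-compatible scraper at the metrics path:
cat > scrape.yml <<'EOF'
scrape_configs:
  - job_name: tensorflow-serving
    metrics_path: /monitoring/prometheus/metrics
    static_configs:
      - targets: ['localhost:8501']
EOF
```

From there, the scraped metrics (request counts, latencies, and so on) can be visualized in Grafana dashboards like the ones the integration provides.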