Operations | Monitoring | ITSM | DevOps | Cloud

August 2024

Trends in AI: RAFT and the World's First Network Language Model

In this video, Product Marketing Evangelist John Capobianco explores retrieval augmented fine-tuning (RAFT) and the world's first network language model (NLM), fine-tuned by Selector AI! We will cover the evolution of operations as well as various inflection points in technology, including artificial intelligence. Then we will dive deep into how, and why, techniques like RAG and fine-tuning augment human operations teams dealing with data at a scale and complexity never seen before.
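
As a rough illustration of the RAG half of that picture, the sketch below retrieves the operational records most relevant to a question and assembles them into a prompt for a language model. The sample records, the keyword-overlap retriever, and helper names like retrieve and build_prompt are assumptions for illustration, not Selector's implementation.

import re

# Minimal retrieval-augmented generation (RAG) sketch, illustrative only.
# A production system would use vector embeddings and a real LLM client.
OPERATIONAL_RECORDS = [
    "2024-08-01 core-rtr-01: BGP session to transit provider flapped 3 times",
    "2024-08-01 edge-sw-12: interface et-0/0/4 reporting CRC errors",
    "2024-08-02 core-rtr-01: CPU utilization above 90% for 15 minutes",
]

def tokenize(text: str) -> set[str]:
    """Lowercase and split into terms, keeping hyphenated device names intact."""
    return set(re.findall(r"[a-z0-9\-]+", text.lower()))

def retrieve(question: str, records: list[str], top_k: int = 2) -> list[str]:
    """Rank records by naive keyword overlap with the question."""
    q_terms = tokenize(question)
    return sorted(records, key=lambda r: -len(q_terms & tokenize(r)))[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Ground the model's answer in the retrieved operational context."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}\nAnswer:"

question = "Why is core-rtr-01 unstable?"
print(build_prompt(question, retrieve(question, OPERATIONAL_RECORDS)))
# The assembled prompt would then be sent to the (fine-tuned) language model.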

Modern Network Observability: Device Discovery, CMDB, and AIOps

Understanding the state of your network and infrastructure is a critical responsibility for operations teams. Without their ever-watchful eye, network issues can cause anything from annoying performance degradation to outright downtime. To detect, prevent, and address these issues, operations teams have relied on a combination of monitoring and manual correlation, leveraging whatever tools were available.

Networking Field Day 35: Selector AI Alerting Discussion with Nitin Kumar

Selector delivers consolidated, actionable alerts through your preferred collaboration platform, such as Slack or Teams. Alerts depend on Selector's powerful event correlation fueled by advanced AI/ML techniques. Automations can be leveraged to generate service tickets that include detailed summaries, root cause analysis, and even suggested remediations.
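
As a loose sketch of that automation flow, the example below maps a hypothetical correlated alert onto a service-ticket payload and shows how it might be posted to an ITSM endpoint. The field names, severity mapping, and the itsm.example.com URL are illustrative assumptions, not Selector's actual schema or API.

import json
import urllib.request

# Hypothetical correlated alert, as it might arrive from an event-correlation stage.
correlated_alert = {
    "summary": "Packet loss on 3 circuits terminating at site DFW-2",
    "root_cause": "Degraded line card on core-rtr-07, slot 3",
    "remediation": "Drain traffic from core-rtr-07, reseat slot 3, verify optic levels",
    "severity": "major",
}

def to_ticket(alert: dict) -> dict:
    """Map a correlated alert onto a generic ITSM ticket payload (illustrative schema)."""
    return {
        "short_description": alert["summary"],
        "description": (
            f"Root cause: {alert['root_cause']}\n"
            f"Suggested remediation: {alert['remediation']}"
        ),
        "priority": {"critical": 1, "major": 2, "minor": 4}.get(alert["severity"], 3),
    }

def post_ticket(ticket: dict, url: str = "https://itsm.example.com/api/tickets") -> None:
    """POST the ticket to a placeholder ITSM endpoint (URL is an assumption)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(ticket).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Error handling omitted for brevity.

print(json.dumps(to_ticket(correlated_alert), indent=2))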

Networking Field Day 35: Selector AI Demo Part 2

In this demo, a user leverages Selector's Conversational AI, Selector Copilot, to investigate performance within their network infrastructure. The user first probes the health of tenants located in a specific geographic region. Selector Copilot provides a visualization of the current state and a summarization of the overall condition and affected tenants, along with a probable root cause. The user then interacts with Selector Copilot to explore resource allocation, historical usage, and projected bandwidth. Each visualization provided by Selector Copilot can be copied and pasted onto a dedicated dashboard.

Networking Field Day 35: Democratization of Data Access Using Network LLMs with Selector AI

In this brief demo of the Selector platform, a user interacts with Selector Copilot to explore behavior within their network infrastructure. They first look into the latency of their transit routers, revealing a regional issue. The user drills down into network topology information to further investigate the latency, where they access details about devices, interfaces, sites, and circuits. Selector Copilot is then leveraged to surface circuit errors. Notably, each visualization provided by Selector Copilot can be copied and pasted onto a dedicated dashboard.

Networking Field Day 35: Selector AI Introduction with Debashis Mohanty

Selector's customer base includes 50 deployments across service providers as well as large enterprises in retail, media distribution, colocation services, and multi-cloud networking services. These customers aim to correlate events across their network, applications, and infrastructure; eliminate the need for human intervention in root cause analysis and remediation; and democratize access to insights using conversational natural language interfaces. Selector delivers on these outcomes while accelerating incident remediation through smart, actionable alerting and a GenAI-based conversational interface.

Networking Field Day 35: Solving the Query Problem with Selector AI

Selector translates English phrases to SQL queries through the use of an LLM. Each SQL query includes the table, or data set, to be searched, along with filters, or conditions, that prune the search results. We walk through a number of SQL queries and sample search results before considering the LLM-based translation of a sample English phrase processed by Selector.
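
To make the shape of that translation concrete, here is a hypothetical example of an English phrase and the kind of SQL such a pipeline might emit; the table and column names (transit_router_metrics, latency_ms, and so on) are assumptions for illustration, not Selector's schema.

# Illustrative English-to-SQL translation; schema names are assumptions.
english_phrase = "show latency for transit routers in the us-east region over the last hour"

# The translated query has the shape described above: a table (the data set
# to be searched) plus filters (conditions that prune the results).
sql_query = """\
SELECT device, interface, latency_ms
FROM transit_router_metrics
WHERE region = 'us-east'
  AND collected_at >= NOW() - INTERVAL '1 hour';
"""

print(english_phrase)
print("-->")
print(sql_query)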

Networking Field Day 35: Selector AI and the Workings of an LLM

An LLM differs from a function in that it takes an output and imputes, or infers, the function and arguments that would produce it. We first consider how this process works within Selector for an English phrase converted to a query. We then step through the design of Selector's LLM, which relies on a base LLM trained on English phrases and their SQL translations, then fine-tuned, on-premises, with customer-specific entities. In this way, each of Selector's deployments relies on an LLM tailored to the customer at hand.
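
As a rough sketch of that idea, the snippet below treats a (simulated) model response as an inferred function plus arguments and dispatches it to a placeholder query layer. The JSON shape, the query_metrics function, and the dispatch table are illustrative assumptions, not Selector's design.

import json

# The user's phrase goes in; the model's (simulated) output names a function and arguments.
phrase = "show circuit errors for site DFW-2 over the last day"

# Stand-in for the fine-tuned model's response; a real deployment would call the LLM here.
model_output = json.dumps({
    "function": "query_metrics",
    "arguments": {"table": "circuit_errors", "site": "DFW-2", "window": "1d"},
})

def query_metrics(table: str, site: str, window: str) -> str:
    """Placeholder query layer that the inferred call dispatches to."""
    return f"SELECT * FROM {table} WHERE site = '{site}' AND time > now() - {window}"

DISPATCH = {"query_metrics": query_metrics}

call = json.loads(model_output)
print(DISPATCH[call["function"]](**call["arguments"]))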