
How to Build a Service Catalog in 5 Easy Steps

Building a service catalog is an absolute must for improving your company's IT self-service, and you can do it in just five simple steps. An ITIL service catalog benefits the business, consumers, and IT organizations alike. In this video, InvGate Product Specialist Matt Beran walks you through creating a service catalog in InvGate Service Desk and highlights how it differs from a service portfolio.

Cloud Visibility: Kentik Cloud Enhancements for AWS

Watch an in-depth walkthrough of using Kentik to streamline incident investigations and improve productivity when working with AWS. We demonstrate how to analyze data related to IP traffic denials across multiple VPCs, identifying security group issues and using the Kentik Data Explorer for enriched flow data visualization. The video also explains how Kentik’s cloud data reporting capabilities enable quick and efficient problem-solving, reducing the time to resolution. It’s perfect for IT teams looking to boost efficiency and tackle common issues related to security groups and access control lists.

Cloud Visibility: Announcing Kentik Map for Google Cloud

Learn how Kentik Cloud can improve efficiency in managing Google Cloud infrastructure. This video showcases Kentik Map, which provides a constantly updated, detailed visualization of your hybrid cloud environment, illuminating how resources interact and are nested within each other. Watch as we demonstrate how to analyze real-time traffic flow, troubleshoot connection problems, and use the routing table to understand network communication issues. Whether you’re resolving network problems, onboarding new team members, or migrating applications to the cloud, Kentik Map offers a powerful tool to enhance your productivity in Google Cloud.

What Is an Application Programming Interface (API)? - VMware Tanzu Fundamentals

What are APIs and why do they matter? The application programming interface is a key enabler of modern applications, and API use is increasing rapidly in virtually every industry, as software development accelerates to meet digital transformation goals. Businesses are embracing an API-first approach to application development and using APIs and microservices to create modern applications and to integrate new applications with legacy systems.
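
To make the idea concrete, here is a minimal sketch of a client consuming a REST-style API from Python with the requests library. The endpoint, resource, and fields are hypothetical placeholders for illustration only; they are not part of any Tanzu product.

import requests

# Hypothetical REST endpoint -- illustrative only, not a real service.
BASE_URL = "https://api.example.com/v1"

def get_order(order_id: str) -> dict:
    """Fetch a single order through the API and return its JSON payload."""
    response = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    response.raise_for_status()  # surface HTTP errors instead of ignoring them
    return response.json()

if __name__ == "__main__":
    order = get_order("12345")
    print(order.get("status"))

The point is the contract: the client only needs to know the URL, the verb, and the shape of the response, not how the service behind it is implemented, which is what lets APIs glue new applications, microservices, and legacy systems together.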

Setting Up a Data Loop using Cribl Search and Stream Part 1: Setting up the Data Lake Destination

In the very first video of the series, we delve into the concept of a data loop and why it is beneficial to use Cribl Search and Cribl Stream to optimize the use of a data lake. The video gives a concise overview of Cribl Search and Cribl Stream, and how they work in tandem to create a data loop. We then provide step-by-step instructions on how to configure the Cribl Stream "Amazon S3 Data Lake" Destination to transfer data from Stream to an S3 bucket that has been optimized specifically for Cribl Search's access. Finally, we demonstrate sending sample data to the S3 bucket and present a before-and-after view of the bucket to showcase the impact of the test data.
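
If you want to reproduce the before-and-after view of the bucket outside the Cribl UI, a short script like the one below can list what the Destination has written. This is a minimal sketch using boto3; the bucket name and key prefix are placeholders for whatever you configured in the Amazon S3 Data Lake Destination, not values taken from the video.

import boto3

# Placeholder values -- substitute the bucket and key prefix you configured
# in the Cribl Stream "Amazon S3 Data Lake" Destination.
BUCKET = "my-cribl-data-lake"
PREFIX = "default/"

s3 = boto3.client("s3")

def list_lake_objects(bucket: str, prefix: str) -> None:
    """Print every object key and size under the given prefix."""
    total = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            print(f"{obj['Key']}  ({obj['Size']} bytes)")
            total += 1
    print(f"{total} object(s) under s3://{bucket}/{prefix}")

# Run once before sending the sample data from Stream and once after,
# to see the new objects the Destination wrote.
list_lake_objects(BUCKET, PREFIX)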

Setting Up a Data Loop using Cribl Search and Stream Part 2: Configuring Cribl Search

In the second video of our series, we delve into the nuts and bolts of configuring Cribl Search to access the data that we've stored in the S3 bucket. The video guides you step-by-step through the process of configuring the Search S3 dataset provider by using the Stream Data Lake destination as a model for the authentication information. From there, we proceed to walk through the process of creating a Dataset to access the Provider that we've just established. To wrap things up, we demonstrate how to search through the test data that we've previously stored in the S3 bucket.
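
Before pointing the Search S3 dataset provider at the bucket, it can save a round of troubleshooting to confirm that the credentials you are copying over from the Stream Data Lake Destination can actually reach it. The sketch below is an illustrative check using boto3 with placeholder credentials, bucket, and region; it is not part of the Cribl configuration itself.

import boto3
from botocore.exceptions import ClientError

# Placeholders -- use the same credentials, region, and bucket you are
# entering into the Search S3 dataset provider.
BUCKET = "my-cribl-data-lake"
session = boto3.Session(
    aws_access_key_id="AKIA...",       # placeholder access key
    aws_secret_access_key="...",       # placeholder secret key
    region_name="us-east-1",           # placeholder region
)
s3 = session.client("s3")

try:
    s3.head_bucket(Bucket=BUCKET)                        # can we reach the bucket?
    resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)  # can we list objects?
    print("Credentials look good;", resp.get("KeyCount", 0), "object(s) visible")
except ClientError as err:
    print("Access problem:", err.response["Error"]["Code"])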

Setting Up a Data Loop using Cribl Search and Stream Part 3: Send Data from Cribl Search to Stream

The third video of our series focuses on using Cribl Stream to manage data. The presenter takes us through the process of configuring the Cribl Stream in_cribl_http Source in tandem with the Cribl Search send operator to collect data, and we see live results being sent from Search to Stream. Afterward, we demonstrate creating a Route in Stream to direct the incoming data from Search (via the in_cribl_http Source) to the Data Lake using the Amazon S3 Data Lake Destination. This step employs a passthru Pipeline to ensure that the data is not altered in transit.

Setting Up a Data Loop using Cribl Search and Stream Part 4: Putting it All Together

The final video of our series shows how to put the data loop to use with a real-world dataset. We use the public-domain “Boss of the SOC v3” (BOTSv3) dataset, which is readily available on GitHub. First, we employ Cribl Search to sift through and explore the BOTSv3 data stored in an S3 bucket and locate some specific data.