
What is HPC? An introduction to High-Performance Computing

High-Performance Computing, or HPC, is the practice of combining computational resources so they operate as a single resource, often referred to as a supercomputer or a compute cluster. Pooling resources in this way makes it possible to process complex computational workloads and applications at high speed and in parallel, with computing power and performance that are often beyond the capabilities of a typical desktop computer or workstation.
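To make the parallel-processing idea concrete, here is a minimal, single-machine sketch in Python. It is an assumed illustration rather than anything from the article: the workload is split into chunks and handed to a pool of worker processes, the same divide-and-conquer pattern an HPC cluster applies across many nodes (typically with tools such as MPI).

```python
# Minimal sketch of parallel processing: split a heavy job into chunks
# and run them concurrently. On a real cluster, the chunks would be
# distributed across many nodes rather than across local processes.
from multiprocessing import Pool

def heavy_task(chunk: range) -> int:
    # Placeholder for a compute-intensive kernel (simulation, rendering, etc.)
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    # Eight chunks of one million integers each (arbitrary example sizes).
    chunks = [range(n, n + 1_000_000) for n in range(0, 8_000_000, 1_000_000)]
    with Pool(processes=8) as pool:  # one worker per chunk
        partial_results = pool.map(heavy_task, chunks)
    print(sum(partial_results))
```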

Linux Command Cheat Sheet

Since many of our users are system administrators, network and software engineers, and cloud infrastructure leaders who work primarily with Linux, we've created a cheat sheet as a reference guide to the most common Linux commands. Feel free to save the sheet below and share it with any team members who would appreciate learning some of the most essential Linux commands.

MLOps Pipeline with MLFlow, Seldon Core and Kubeflow

An MLOps pipeline is a set of steps that automates the process of creating and maintaining AI/ML models. Data scientists create multiple notebooks while building their experiments, and the natural next step is the transition from experiments to production-ready code. The best way to make that transition is to build an effective MLOps pipeline. What's the alternative, I hear you ask? Well, each time you want to create a model, you run your notebooks manually.
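As a rough illustration of that transition, here is a minimal sketch of a notebook experiment rewritten as a repeatable, scripted pipeline step with MLflow tracking. The dataset, model, and parameter values are placeholders chosen for the example, not taken from the post; in a full pipeline this script would be one stage, with serving handled by a tool such as Seldon Core and orchestration by Kubeflow.

```python
# Sketch of a single MLOps pipeline step: train a model and record
# parameters, metrics, and the model artifact with MLflow tracking.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

def train(alpha: float = 0.5) -> None:
    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5

        # What a notebook would report by hand is now logged automatically.
        mlflow.log_param("alpha", alpha)
        mlflow.log_metric("rmse", rmse)
        mlflow.sklearn.log_model(model, "model")

if __name__ == "__main__":
    train()
```

Because every run is scripted and logged, the step can be re-executed on demand or on a schedule instead of relying on someone re-running notebooks manually.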

Canonical Experiences Record Channel Business Growth and Momentum

7 April 2022 – Canonical, the publisher of Ubuntu, announced today that its channel partner program has seen upwards of 240% growth within the past year. At the forefront of this momentum is the continued growth of the company’s partner-led business, with new and existing partners actively driving Canonical’s offerings into the market.