
Latest Posts

Managed autoscaling for all types of container workloads

A flexible architecture is critical for dynamic containerized applications, but managing different infrastructure configurations to support different applications is a heavy lift, requiring significant time and effort. The major cloud providers do offer customers core capabilities to deploy, manage, and scale cloud infrastructure through AWS Auto Scaling Groups (ASGs), GCP managed instance groups, and Azure virtual machine scale sets.

Introducing Predictive Rebalancing: An application-driven approach for reliably utilizing spot instances

Here at Spot by NetApp, we're continuously improving the machine learning models we use to identify and predict spot capacity usage and interruptions across all major public clouds (AWS, Azure, and GCP). These proprietary algorithms expand the ability to use spot capacity for production and mission-critical workloads, allowing our customers to enjoy up to 90% cloud compute cost reduction with SLAs and SLOs that guarantee availability.
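To make the idea concrete, here is a deliberately simplified sketch of interruption-aware rebalancing. This is not Spot's actual model or API; the function name, threshold, and market names are hypothetical. The point is only the shape of the decision: given a predicted interruption probability per spot market, drain workloads off high-risk markets before the cloud provider reclaims the capacity.

```python
# Illustrative sketch only -- not Spot by NetApp's actual algorithm.
# Given predicted interruption probabilities per spot market, pick the
# markets to drain proactively, before the provider interrupts them.

def markets_to_rebalance(predictions, risk_threshold=0.3):
    """Return market names whose predicted interruption risk is too high.

    predictions: dict mapping a market name (instance type + AZ) to a
    probability in [0, 1] that capacity will be reclaimed soon.
    """
    return sorted(m for m, p in predictions.items() if p >= risk_threshold)

predicted = {
    "m5.large/us-east-1a": 0.05,
    "m5.large/us-east-1b": 0.62,   # high predicted interruption risk
    "c5.xlarge/us-east-1a": 0.41,  # also above the risk threshold
}
print(markets_to_rebalance(predicted))
# -> ['c5.xlarge/us-east-1a', 'm5.large/us-east-1b']
```

The real system replaces the static threshold with model output and couples the decision to workload-aware draining, but the proactive "move before the interruption" step is the core of the approach.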

The future of cloud infrastructure: serverless meets storageless

Over the past 15 years, application architectures have evolved dramatically: from bare metal servers to virtualization, to elastic compute, auto scaling, and containers. Recently, we've seen another shift to the serverless paradigm, where even the operating system is abstracted away from developers and ops teams. Customers want fully managed compute, but they still want to use their existing application development tools and pipelines.

New Spot by NetApp documentation and API library

Today we're happy to announce that the Spot by NetApp documentation has been upgraded to improve both the user experience and the authoring environment. The upgraded site speeds up access to information and makes it easier to find what you need. In addition, the site is open-sourced on GitHub so that users can easily suggest changes or updates on any page.

How to run your production workloads on Amazon's EC2 spot instances

As one of the most effective ways to dramatically reduce cloud compute infrastructure costs, EC2 spot instances have always played a role in managing cloud spend, offering eye-popping potential savings of up to 90%. However, the fact that AWS can interrupt them at any time has not done much for their popularity. Yet in today's volatile global economy, companies need to explore how to use this powerful means of cost reduction while still ensuring high availability.
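One common pattern for keeping spot-backed workloads highly available is to diversify across several spot capacity pools and fall back to on-demand when none of them has capacity. The sketch below illustrates that fallback logic only; the pool names and the availability map are made-up inputs, not real AWS API calls.

```python
# Hedged sketch of spot diversification with on-demand fallback.
# Inputs are hypothetical; a real setup would query EC2 capacity signals.

def choose_capacity(spot_pools, spot_available):
    """Pick the first spot pool with capacity, else fall back to on-demand.

    spot_pools: ordered list of (instance_type, availability_zone) pairs,
    preferred pools first.
    spot_available: dict mapping a pool to a True/False capacity signal.
    """
    for pool in spot_pools:
        if spot_available.get(pool):
            return ("spot", pool)
    # No spot pool has capacity: pay on-demand rates rather than go down.
    return ("on-demand", spot_pools[0])

pools = [("m5.large", "us-east-1a"), ("m5a.large", "us-east-1b")]
print(choose_capacity(pools, {("m5.large", "us-east-1a"): False,
                              ("m5a.large", "us-east-1b"): True}))
# -> ('spot', ('m5a.large', 'us-east-1b'))
```

The more pools you spread across, the less likely a single interruption wave is to exhaust them all, which is why diversification is usually the first recommendation for production spot usage.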

Azure Kubernetes Service (AKS): Deployment, Tools and Best Practices

Kubernetes (K8s) is an open source platform for automating the deployment, scaling, and management of containerized workloads, and it ranks among the most widely used container orchestration tools. Many Kubernetes users run it in a public cloud, such as Microsoft Azure, and you can use Azure resources for Kubernetes without worrying about lock-in.

Flexible control over instance storage

Ocean by Spot provides continuous optimization of the underlying infrastructure for containerized workloads. Launch specifications are a key feature that enables users to manage different types of workloads on the same Ocean cluster. With launch specs, cluster administrators can granularly set specific configurations per application, as needed.
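The per-application idea can be sketched as label matching: each launch spec carries a set of labels, and a workload lands on the first spec whose labels it satisfies. This is an illustrative toy, not Ocean's actual API; the field names and spec names are invented for the example.

```python
# Toy sketch of per-application launch-spec selection by label matching.
# Field names ("name", "labels") and spec names are hypothetical.

def pick_launch_spec(pod_labels, launch_specs):
    """Return the first launch spec whose labels are all present on the pod."""
    for spec in launch_specs:
        if spec["labels"].items() <= pod_labels.items():  # subset check
            return spec["name"]
    return "default"

specs = [
    {"name": "gpu-nodes", "labels": {"workload": "ml"}},
    {"name": "general", "labels": {"workload": "web"}},
]
print(pick_launch_spec({"workload": "ml", "team": "data"}, specs))
# -> gpu-nodes
```

The value of the pattern is that one cluster can host many applications while each still gets nodes shaped to its needs (instance types, images, taints, and so on).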

Azure Spot VMs - How to enjoy their massive cost savings without suffering any interruptions

If you are running compute workloads in Azure and wondering how you can dramatically reduce costs and minimize infrastructure management, all without affecting availability and performance, keep reading. Back in May, Azure introduced a new pricing model called Azure Spot VMs, providing up to 90% cost savings compared to pay-as-you-go pricing.
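For a sense of what "up to 90%" means in practice, the arithmetic is just the discount of the spot rate against the pay-as-you-go rate. The hourly rates below are made-up examples, not actual Azure prices.

```python
# Simple arithmetic behind the "up to 90% savings" claim.
# The example rates are illustrative, not real Azure pricing.

def spot_savings(pay_as_you_go_rate, spot_rate):
    """Return the fractional savings of a spot rate vs. pay-as-you-go."""
    return 1 - spot_rate / pay_as_you_go_rate

# e.g. a VM billed at $0.10/hr pay-as-you-go vs. $0.01/hr as a Spot VM:
print(f"{spot_savings(0.10, 0.01):.0%}")
# -> 90%
```

Actual Spot VM prices vary by region and VM size and can change over time, so the realized discount depends on the market you deploy into.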