Operations | Monitoring | ITSM | DevOps | Cloud

Latest News

Introducing the Stackdriver Cloud Monitoring dashboards API

Dashboards in Stackdriver Cloud Monitoring make it easy to track critical metrics over time. They can, for example, provide visualizations to help debug high latency in your application or track key metrics for your applications. Creating dashboards by hand in the Monitoring UI can be a time-consuming process that may take many iterations. Once a dashboard is created, you can save time by reusing it across multiple Workspaces within your organization.
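As a rough illustration of what working with the dashboards API might look like, here is a minimal sketch that assumes the google-cloud-monitoring-dashboards Python client; the project ID, display name, and widget contents are placeholders, and the exact client surface may differ between library versions.

```python
# Minimal sketch: create a dashboard through the Cloud Monitoring dashboards
# API using the google-cloud-monitoring-dashboards client.
# PROJECT_ID and the dashboard contents below are placeholders.
from google.cloud import monitoring_dashboard_v1

PROJECT_ID = "my-project-id"  # placeholder project

client = monitoring_dashboard_v1.DashboardsServiceClient()

# Define a simple dashboard with a single text widget.
dashboard = monitoring_dashboard_v1.Dashboard(
    display_name="Latency overview",
    grid_layout=monitoring_dashboard_v1.GridLayout(
        widgets=[
            monitoring_dashboard_v1.Widget(
                title="Notes",
                text=monitoring_dashboard_v1.Text(
                    content="Track request latency for the checkout service here."
                ),
            )
        ]
    ),
)

created = client.create_dashboard(
    request={"parent": f"projects/{PROJECT_ID}", "dashboard": dashboard}
)
print(f"Created dashboard: {created.name}")
```

Because the dashboard is just a definition object, the same definition can be submitted to projects backing different Workspaces, which is where the reuse savings come from.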

Kublr 1.16 supports rolling upgrades with zero downtime across clouds and on-prem

When evaluating Kubernetes providers, you’ll quickly see that they ALL support upgrades. But here’s a dirty little secret: no other independent Kubernetes multi-cloud, multi-cluster platform supports rolling upgrades. Instead, you have to deploy a separate cluster and replicate your app to it to keep the service running while the original cluster is upgraded. That process is cumbersome and consumes far too many resources.

Essential Open Source Serverless Code Libraries

Serverless applications, due to their distributed nature, often end up reinventing the wheel. While small utility scripts and functions are usually easy to instrument and monitor, anything transactional needs special code to give developers common tools such as stack traces, atomicity, and other patterns that rely on a single flow of control.

How Can I Check My ElastAlert Rule is Configured Correctly?

Making sure your ElastAlert YAML file is formatted and configured correctly matters, because several common misconfigurations will prevent alerts from firing without producing any error message. Proofread the rule you have written to confirm it says what you expect, since most issues with ElastAlert not behaving correctly come down to configuration mistakes like these. If the problem persists, you may need to contact support to investigate.
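Before digging into rule semantics, it is worth ruling out plain YAML syntax problems; ElastAlert also ships a test utility (elastalert-test-rule) for dry-running a rule. As a lighter first check, a sketch like the following, which assumes PyYAML and a hypothetical rule path and field list, at least confirms the file parses and that a few commonly required options are present.

```python
# Rough sanity check for an ElastAlert rule file: confirm the YAML parses and
# that a few commonly required fields are present. The rule path and field
# list are illustrative; elastalert-test-rule remains the fuller check.
import sys
import yaml

RULE_PATH = "rules/high_error_rate.yaml"  # hypothetical rule file
REQUIRED_FIELDS = ("name", "type", "index", "alert")

try:
    with open(RULE_PATH) as f:
        rule = yaml.safe_load(f)
except yaml.YAMLError as exc:
    sys.exit(f"YAML syntax error in {RULE_PATH}: {exc}")

if not isinstance(rule, dict):
    sys.exit(f"{RULE_PATH} did not parse to a mapping of rule options")

missing = [field for field in REQUIRED_FIELDS if field not in rule]
if missing:
    sys.exit(f"{RULE_PATH} is missing required fields: {', '.join(missing)}")

print(f"{RULE_PATH} parsed cleanly; '{rule['name']}' uses rule type '{rule['type']}'")
```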

Think You're a Proactive MSP? Think Again!

MSPs generate more MRR (monthly recurring revenue) when they’re able to reduce reactive noise and dedicate their resources to more proactive MSP tasks. This is because proactive tasks are predictable. They’re scheduled into our days (e.g., regular technology alignment visits, business impact and strategy meetings, centralized services, and projects). We know how much they cost and how long they take to complete. We know the margins on proactive tasks (if we price them properly).

Scale-Up vs. Scale-Out Storage: Tips to Consider

According to the Data Age 2025 report, worldwide data is expected to grow 61% to 175 zettabytes by 2025, with enterprise data in particular growing by more than 30% each year. To be ready for a digital future, plan your data infrastructure's scaling strategy ahead of time. Scale-up and scale-out are the two main ways to add capacity to that infrastructure.

"4:00 in the Morning" - Managing a VDI Migration

For all its benefits, Virtual Desktop Infrastructure (VDI) continues to be a massive headache for many enterprise technology teams. From improper personas and sizing to long logon times, hangs, lost sessions, freezes, and crashes, one small mistake in a virtualized environment can set your entire enterprise ablaze.

Chaos engineering + monitoring, part 2: for starters

Oh man, did I get ahead of myself in my last post! I started chatting about tools, and I realize now that I really should have been talking more about why I’m using Sensu and Gremlin. But it didn’t occur to me until last year at Monitorama, where John Allspaw gave the keynote talk (Taking Human Performance Seriously). While you can watch the talk here, I’ll highlight a few points...

Announcing Stackery-Native Provisioned Concurrency Support

The seamless scaling and headache-free reliability of a serverless application architecture has become compelling to a broad community of cloud specialists. But for those who have yet to become converts, one specific issue related to service startup latency, cold starts, has been among the most commonly cited objections. Fortunately, the serverless marketplace is maturing.
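Stackery's own tooling isn't shown here, but for reference, the underlying AWS Lambda feature it builds on is provisioned concurrency, which must target a published version or an alias. A minimal boto3 sketch, with a hypothetical function name and alias, might look like this.

```python
# Minimal sketch: enable provisioned concurrency on a Lambda alias with boto3
# so warm instances stay ready and cold starts are avoided.
# "checkout-handler" and the "live" alias are hypothetical names.
import boto3

lambda_client = boto3.client("lambda")

# Provisioned concurrency applies to a published version or an alias,
# never to $LATEST.
response = lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-handler",
    Qualifier="live",  # alias pointing at a published version
    ProvisionedConcurrentExecutions=5,
)

print(response["Status"])  # e.g. "IN_PROGRESS" while instances warm up
```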