Latest Posts

Strategies for Kubernetes Cluster Administrators: Understanding Pod Scheduling

Kubernetes has revolutionized container orchestration, allowing developers to deploy and manage applications at scale. However, as the complexity of a Kubernetes cluster grows, managing resources such as CPU and memory becomes more challenging. Efficient pod scheduling is critical to ensuring optimal resource utilization and a stable, responsive environment for applications to run in.
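
The post goes deeper into scheduling strategies; as a minimal, illustrative sketch (not taken from the article), the snippet below uses the official Kubernetes Python client to declare the resource requests and limits the scheduler evaluates when choosing a node, plus an optional node selector. The pod name, image, and node label are hypothetical.

```python
# Illustrative sketch: resource requests/limits and a node selector are the
# main inputs the Kubernetes scheduler uses when placing this pod.
# Assumes the `kubernetes` Python client and a kubeconfig for a reachable cluster.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="resource-aware-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.25",
                resources=client.V1ResourceRequirements(
                    # The scheduler only considers nodes with this much free capacity.
                    requests={"cpu": "250m", "memory": "256Mi"},
                    # The kubelet enforces these ceilings at runtime.
                    limits={"cpu": "500m", "memory": "512Mi"},
                ),
            )
        ],
        # Optional hint: only schedule onto nodes carrying this (hypothetical) label.
        node_selector={"disktype": "ssd"},
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```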

Announcing our improved Schedules & On-Call Rotations

Hey folks! We are super excited to announce that our schedules feature has gone through a bit of an update. Well, more than a bit 🙂. We’ve gone through the feature with a fine-toothed comb and introduced a bunch of UI and functional improvements that we hope will help you achieve one thing: set up, edit, and manage your on-call schedules at scale in a matter of minutes. (Yes, that was three things, but it was tough to condense it to ONE thing.)

Sponsored Post

What are Network Operation Centers (NOC) and how do NOC teams work?

Modern-day markets are highly competitive, and to foster stronger customer relations, businesses strive to be always available and operational. Hence, they invest heavily in ensuring high uptime and in dedicated teams that constantly monitor the performance of the organization's IT resources. In this blog, we will explore what NOC teams are and why they are important.

Sponsored Post

SLA Vs SLO: Tutorial & Examples

Service level agreements (SLAs) and service level objectives (SLOs) are increasing in popularity because modern applications rely on a complex web of sub-services, such as public cloud services and third-party APIs, which makes service quality measurement an operational necessity for serving a demanding market. This article focuses on the similarities and differences between SLAs and SLOs, explains the intricacies involved in implementing them, presents a case study, and finally recommends industry best practices for implementing them.
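
To make the relationship concrete, here is a quick, illustrative calculation (not taken from the article): an SLO target translates into an error budget, the downtime allowance a team tracks to avoid breaching the looser, contractual SLA. The function name and the targets used are purely for illustration.

```python
# Illustrative sketch: turning an availability target into a monthly error budget.
def error_budget_minutes(target: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given availability target."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - target)

# A 99.9% internal SLO leaves ~43.2 minutes of downtime per 30-day window,
# comfortably inside a contractual 99.5% SLA (~216 minutes).
print(error_budget_minutes(0.999))   # 43.2
print(error_budget_minutes(0.995))   # 216.0
```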

Looking back at our journey through 2022

We are on the cusp of breaking into 2023 🗓️ with a bag full of interesting memories. Before we wrap up this year-end's celebrations, we'd like to look back and highlight some notable events that took place at Squadcast. Squadcast has grown by leaps and bounds over the past 12 months in our journey towards becoming an integrated Reliability Workflow platform. 😎

Squadcast + Hund Integration: A Simplified Approach for Effective Alert Routing

Hund is a versatile Service Monitoring & Communication tool. It helps monitor services and keeps your audience informed about any status changes automatically through a status page. If you use Hund for monitoring and management requirements, you can integrate it with Squadcast, an end-to-end incident response tool, to route detailed alerts from Hund to the right users in Squadcast.

Getting Amazon GuardDuty alerts via SNS Endpoint

Monitoring your infrastructure and safeguarding it against threats is not easy. Setting up the infrastructure and then monitoring, collecting, and analyzing information for threat detection is a cumbersome process. This is where a security monitoring service like Amazon GuardDuty can help. In this blog, we will explore the Amazon GuardDuty service and discuss how integrating it with Squadcast can help you route alerts to the right users for quick and efficient incident response.
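
As a rough sketch of the wiring involved (the post walks through the full setup), the snippet below uses boto3 to subscribe an HTTPS endpoint to an SNS topic that GuardDuty findings are published to. The topic ARN and webhook URL are placeholders, not real values; the actual webhook URL comes from the service you configure in Squadcast.

```python
# Minimal sketch with placeholder values: subscribe an HTTPS endpoint (e.g. a
# Squadcast webhook) to the SNS topic receiving GuardDuty findings.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:guardduty-findings"  # placeholder
WEBHOOK_URL = "https://example.invalid/squadcast-webhook"            # placeholder

response = sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="https",           # SNS will POST each finding to this endpoint
    Endpoint=WEBHOOK_URL,
    ReturnSubscriptionArn=True,
)

# SNS first sends a confirmation message; the endpoint must confirm the
# subscription before notifications start flowing.
print("Subscription ARN:", response["SubscriptionArn"])
```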

Maximize efficiency with Terraformer: Manage Squadcast resources via IaC

Ever since Terraform was first launched by HashiCorp, infrastructure teams have been quick to leverage its functionality, because deploying infrastructure via code became much easier and far less error-prone. This surely became a great way to deploy new infrastructure with custom configurations, but what about managing cloud infrastructure that is already defined? Can Terraform be used to make changes to it? Or can it be used to deploy the same configurations to new environments?

Sponsored Post

SRE Best Practices

Site Reliability Engineering (SRE) is a practice that emerged at Google because of its need for highly reliable and scalable systems. SRE unifies operations and development teams and implements DevOps principles to ensure system reliability, scalability, and performance. There's plenty of documentation on tactics for adopting automation and implementing infrastructure as code, but practical ops-focused SRE best practices based on real-world experience are harder to find. This article will explore 6 SRE best practices based on feedback from SREs and technical subject matter experts.