
Latest News

Now in beta: alerting for modern DevOps teams

Although FireHydrant has spent five years focused on what happens after your team (erg, I mean service 🙄) gets paged, the topic of alerting often comes up in discussions with our community. People are tired of paying big bucks for software that's bloated and hasn't seen much innovation. Clearly, there's a problem here – and we're tackling it head on.

Introducing Products: A Tool to Model Argo CD Application Relationships and Promotions

At Codefresh, we are always happy to see companies and organizations adopt Argo CD and get all the benefits of GitOps. But as they grow, we see a common pattern: at a certain point, organizations come to Codefresh and ask how we can help them scale out their Argo CD (and sometimes Argo Rollouts) initiative across the organization. After talking with them about the blockers, we almost always find the same root cause.

Autocorrelate Alerts With Squadcast's Key-Based Deduplication

With the increasing complexity of technology stacks and monitoring tools, managing incidents can become overwhelming, leading to alert noise, alert fatigue, and delayed responses. This is where Key-Based Deduplication comes to the rescue, streamlining incident handling and enhancing the effectiveness of your Incident Management platform.
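The core idea behind key-based deduplication can be sketched in a few lines: derive a stable key from the alert fields that identify the same underlying issue, and fold any alert with a matching key into the already-open incident instead of opening a new one. This is a minimal illustration, not Squadcast's actual implementation; the field names (`service`, `check`, `host`) and the `Deduplicator` class are hypothetical.

```python
import hashlib

def dedup_key(alert: dict) -> str:
    # Build a deduplication key from the fields that identify
    # the same underlying issue (hypothetical field names).
    raw = "|".join(str(alert.get(f, "")) for f in ("service", "check", "host"))
    return hashlib.sha256(raw.encode()).hexdigest()

class Deduplicator:
    def __init__(self):
        self.open_incidents = {}  # key -> incident record

    def ingest(self, alert: dict) -> dict:
        key = dedup_key(alert)
        if key in self.open_incidents:
            # Same key: correlate into the existing incident
            # instead of paging again.
            self.open_incidents[key]["count"] += 1
            return self.open_incidents[key]
        incident = {"key": key, "first_alert": alert, "count": 1}
        self.open_incidents[key] = incident
        return incident
```

With this scheme, a flapping check that fires fifty times produces one incident with a count of fifty, while an alert from a different host (a different key) still opens a fresh incident.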

How Automation Can Support Threat Vulnerability Management + Reduce the Attack Surface

Threat vulnerability management and reducing your attack surface are critical in the battle against cyberattacks. At some point before any successful attack, the internal processes to manage threats and prevent access to sensitive data failed. How could the affected teams have done things differently? Were they just managing too much, too often, without the resources they needed?

AI Explainer: ChatGPT Doesn't Actually Understand Any Words

Computers understand numbers. So, how do large language models (LLMs) mimic human speech? Do LLMs like ChatGPT actually understand words? The short answer is no. LLMs process and represent words using numerical embeddings. These numerical representations enable the model to perform computations, make predictions and generate text. However, it's essential to clarify that the model doesn't possess a true understanding of words in the way humans do. Here's a breakdown of the process.
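The embedding step described above can be sketched in a few lines: each word is mapped to a row in a numeric table, and those vectors (not the words themselves) are what the model computes with. This is a toy illustration with a made-up vocabulary and random vectors, not any real LLM's embedding table.

```python
import numpy as np

# Toy vocabulary: each word gets an integer ID (hypothetical example).
vocab = {"the": 0, "cat": 1, "dog": 2, "sat": 3}

# A small embedding table: one vector of numbers per word.
# Real LLMs learn these during training; here they are random.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # 4 dimensions for illustration

def embed(sentence: str) -> np.ndarray:
    # Look up each word's vector. The model never "sees" the words,
    # only these stacks of numbers.
    return np.stack([embeddings[vocab[w]] for w in sentence.split()])

vectors = embed("the cat sat")
# vectors has shape (3, 4): three tokens, each represented by 4 numbers.
```

Everything downstream (prediction, text generation) operates on arrays like `vectors`, which is why the model manipulates numerical patterns rather than "understanding" words the way humans do.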