
Making the Case for Vendor-Backed Puppet Core

Thousands of organizations rely on open source community builds for infrastructure automation. But if you're tasked with certifying, maintaining, and patching those builds yourself, you know the burden firsthand. The reality is that managing open source internally consumes time, introduces risk, and diverts resources from higher-value initiatives. When critical vulnerabilities emerge, your team scrambles to assess, test, and deploy fixes, all while keeping production environments stable.

Introducing Knowledge Discovery in InvGate Service Management

AI in Service Management promises faster resolutions and fewer repetitive tickets. In reality, its performance depends on something far less glamorous: a reliable knowledge layer. Many service desks already have the answers; they're just buried inside resolved tickets instead of organized in a way AI can use. Turning those resolutions into structured documentation takes time that rarely fits into daily operations.
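To make the idea concrete, here is a minimal sketch of turning resolved tickets into draft knowledge-base entries. This is an illustration of the pattern, not InvGate's actual implementation; the field names (`subject`, `resolution`, `category`) are hypothetical.

```python
# Sketch: group resolved tickets by category and emit draft KB articles.
# Field names are hypothetical, not InvGate's actual schema.
from collections import defaultdict

def draft_kb_entries(resolved_tickets):
    """Group resolved tickets by category and emit draft articles."""
    by_category = defaultdict(list)
    for ticket in resolved_tickets:
        if ticket.get("resolution"):  # skip tickets closed without a write-up
            by_category[ticket["category"]].append(ticket)
    return [
        {
            "title": f"Known resolutions: {category}",
            "sections": [
                {"question": t["subject"], "answer": t["resolution"]}
                for t in tickets
            ],
        }
        for category, tickets in by_category.items()
    ]

tickets = [
    {"subject": "VPN drops hourly", "resolution": "Update client to 5.2", "category": "Network"},
    {"subject": "VPN auth fails", "resolution": "Re-enroll MFA token", "category": "Network"},
    {"subject": "Printer offline", "resolution": "", "category": "Hardware"},
]
print(draft_kb_entries(tickets))
```

Note that the ticket without a written resolution is filtered out: a knowledge layer is only as good as the write-ups feeding it.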

Agentless Discovery For Windows Devices: How to Set up WMI in InvGate Asset Management

Agentless discovery for Windows devices plays an important role in maintaining visibility across modern IT environments. Why? Because according to data from StatCounter, Windows accounts for about 67% of desktop operating systems worldwide, and its presence is typically even higher in corporate environments. This is where agentless discovery becomes especially valuable.

Database Governance with OPA in Harness DB DevOps

Harness Database DevOps integrates Open Policy Agent (OPA) to enforce database governance through policy as code. By embedding compliance rules directly into CI/CD pipelines, teams can automatically prevent risky database changes, maintain audit trails, and meet regulatory requirements without slowing down development. Database systems store some of an organization's most sensitive data, such as PII, financial records, and intellectual property, making strong database governance non-negotiable.
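OPA policies are written in Rego; as a language-neutral illustration of the policy-as-code gate idea (not Harness's actual API), here is a Python sketch of the kind of rule such a pipeline check might enforce. The operation list and change-set fields are hypothetical.

```python
# Illustrative policy-as-code gate for database changes. Real OPA rules
# are written in Rego; this only mirrors the shape of such a check.

RISKY_OPERATIONS = {"DROP TABLE", "DROP COLUMN", "TRUNCATE"}

def evaluate_change(change):
    """Return (allowed, violations) for a proposed database change set."""
    violations = []
    for stmt in change["statements"]:
        upper = stmt.upper()
        if any(op in upper for op in RISKY_OPERATIONS):
            violations.append(f"destructive statement blocked: {stmt}")
    if change["environment"] == "prod" and not change.get("approved_by"):
        violations.append("production changes require an approver")
    return (not violations, violations)

change = {
    "environment": "prod",
    "approved_by": None,
    "statements": ["ALTER TABLE users ADD COLUMN email TEXT",
                   "DROP TABLE audit_log"],
}
allowed, violations = evaluate_change(change)
print(allowed, violations)
```

Because the gate runs in the pipeline rather than in reviewers' heads, every blocked change leaves a violation record that doubles as an audit trail.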

The architecture advantage: Why the data layer decides the AI race

Dozens of startups are sprinting to build the next "agentic SIEM" that can autonomously detect, investigate, and respond to threats. They're well-funded and well-marketed, but structurally hollow. Here's what it usually looks like: an LLM layer on top of a thin orchestration engine on top of fragmented or customer-hosted data lakes. While it looks impressive in a demo, it quickly falls apart in production. Why? It's not built on a strong foundation.

Smarter Postgres Monitoring: Compare Queries, Spot Unused Indexes, and Diagnose Waits

This is a guest post from Adrian Tan. Over recent months, we’ve been steadily improving PostgreSQL monitoring in Redgate Monitor, with a singular focus: to help Postgres users diagnose performance problems faster, with less manual investigation. The latest updates and new features tackle this problem in a few different ways.

How AI Agents Communicate: Understanding the A2A Protocol for Kubernetes

Since the rise of Large Language Models (LLMs) like GPT-3 and GPT-4, organizations have been rapidly adopting Agentic AI to automate and enhance their workflows. Agentic AI refers to AI systems that act autonomously, perceiving their environment, making decisions, and taking actions based on that information rather than just reacting to direct human input.
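The perceive-decide-act loop described above can be sketched in a few lines. The environment and scaling policy here are toy stand-ins, not part of the A2A protocol itself; the point is that the agent runs continuously on its own observations rather than per human request.

```python
# Minimal perceive-decide-act loop illustrating the agentic pattern.
# Environment and policy are illustrative stand-ins.

def perceive(env):
    return {"queue_depth": env["queue_depth"]}

def decide(observation):
    # Toy policy: scale workers when the queue backs up.
    return "scale_up" if observation["queue_depth"] > 10 else "hold"

def act(env, action):
    if action == "scale_up":
        env["workers"] += 1
        env["queue_depth"] -= 5
    return env

env = {"queue_depth": 17, "workers": 2}
for _ in range(3):  # the agent iterates autonomously, not per prompt
    env = act(env, decide(perceive(env)))
print(env)  # → {'queue_depth': 7, 'workers': 4}
```

Protocols like A2A exist so that loops like this one, running as separate agents, can hand tasks to each other instead of each reimplementing the whole pipeline.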

Say Goodbye to ZooKeeper

Automated, Zero-Downtime KRaft Migrations Now Available on Aiven

The Apache Kafka ecosystem has been steadily moving toward a simpler, more scalable architecture with KRaft (Kafka Raft), leaving ZooKeeper behind. In March 2025, Kafka 4.0 dropped support for ZooKeeper entirely. Since June 2025, all new Aiven for Apache Kafka services have been deployed with KRaft by default, allowing our users to benefit from faster partition scaling and simplified cluster management.

Context is the New Currency: Building a Context-aware Enterprise with Agentforce

Corporate investment in Generative AI is outpacing value realization. While Large Language Models (LLMs) possess vast general reasoning capabilities, they suffer from a critical blind spot: they are pre-trained on the public internet, yet completely blind to your enterprise reality. This context gap renders even the most advanced models ineffective, forcing them to guess (hallucinate) rather than reason based on your specific business rules.
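A common remedy for this context gap is to retrieve relevant enterprise rules and ground the model's prompt in them. The sketch below illustrates that idea with naive keyword-overlap retrieval; it is not Agentforce's actual mechanism, and the rule store and scoring are illustrative stand-ins.

```python
# Sketch: ground an LLM prompt in enterprise context via keyword
# retrieval. The rule store and scoring are illustrative stand-ins.

BUSINESS_RULES = [
    "Refunds over $500 require manager approval.",
    "Enterprise-tier customers get a 4-hour support SLA.",
    "Discounts above 20% must be logged in the deal desk.",
]

def tokens(text):
    return {word.strip(".,?$").lower() for word in text.split()}

def retrieve(question, rules, k=2):
    """Rank rules by keyword overlap with the question."""
    q = tokens(question)
    return sorted(rules, key=lambda r: len(q & tokens(r)), reverse=True)[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question, BUSINESS_RULES))
    return (f"Answer using only these company rules:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("When do refunds require manager approval?"))
```

Production systems replace the keyword overlap with semantic search over a governed data layer, but the structure is the same: retrieve the business rules first, then let the model reason over them instead of guessing.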

Bring Clarity and Confidence Back to Ops: How Trustworthy Guidance Sets a New Standard

For years, enterprises have chased the promise of artificial intelligence as a remedy for growing operational complexity. It seemed logical that if environments were expanding faster than teams could keep up, smarter models could fill the gap. But early deployments of generic AI exposed a difficult truth: intelligence alone does not create operational clarity, and it does not guarantee safety.