
Achieving Comprehensive Network Observability for VMware Cloud Foundation

Private cloud infrastructure adoption is accelerating rapidly. This move is driven by the ongoing “cloud reset,” as leaders rethink their hybrid and multi-cloud strategies in search of greater control, security, and flexibility for their IT workloads. In fact, leaders at 69% of organizations are considering repatriating workloads, and one-third already have.

Data points per minute in Grafana Cloud: What you need to know about DPM

If you’re working with metrics in Grafana Cloud, chances are you’ve come across DPM (data points per minute). It shows up in usage dashboards and invoice breakdowns, and occasionally pops up in Slack when your ingestion numbers start looking suspicious. You can also track it in the Grafana Cloud billing and usage dashboard, which is available by default in every Grafana Cloud account. DPM helps you understand how much data you’re sending, and whether it’s more than you need.
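The arithmetic behind DPM is simple: a series scraped every 15 seconds produces 4 data points per minute, so total DPM grows with both scrape frequency and active series count. A minimal sketch of that calculation (the function names are our own, not part of any Grafana API):

```python
def dpm_per_series(scrape_interval_seconds: float) -> float:
    """Expected data points per minute for one actively scraped series."""
    return 60.0 / scrape_interval_seconds


def total_dpm(active_series: int, scrape_interval_seconds: float) -> float:
    """Approximate total DPM across all active series at one scrape interval."""
    return active_series * dpm_per_series(scrape_interval_seconds)


# A 15s scrape interval yields 4 DPM per series;
# 10,000 active series at that interval ingest roughly 40,000 DPM.
print(dpm_per_series(15))     # 4.0
print(total_dpm(10_000, 15))  # 40000.0
```

Doubling the scrape interval halves DPM, which is why relaxing scrape intervals on low-value metrics is a common first lever for trimming ingestion.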

Infrastructure Management: When to Pick Bare Metal or Virtualized Servers

Infrastructure management isn't about taking sides. Too often, teams get pulled into “X is better than Y” debates that miss the bigger picture: your compute stack should serve your needs, not industry hype. A common decision point in the past has been the choice between bare metal and cloud hyperscaler virtualization. Nowadays, the answer isn't 1 or 0.

Sustaining the demand for AI in Asia with investment in subsea cable infrastructure

Across the Asia Pacific region, significant investment is going into new subsea cable infrastructure that will help sustain the long-term demand for AI. We’ve written a lot on this blog about the impact of AI on networks and how AI workloads require low-latency, high-capacity data transfer. This in turn puts more pressure on existing network infrastructure, in particular subsea cable systems, which provide the global backbone for cloud platforms and data centres.

Introducing Environment Policy: Gain Unified Control Over Compliance Requirements Across Your Runtime Environments

In modern software development, different environments often have different compliance requirements. Your development environment might allow more flexibility, while production demands strict controls around security scans, testing, and code review. Environment Policy helps you codify these requirements and enforce them consistently.

5 Ways to Optimize Your OpenSearch Cluster

OpenSearch is a powerful, scalable search and analytics engine that can do amazing things for logging, observability, and full-text search. But like any distributed system, it only performs well if you keep it properly tuned and healthy. Ignore it, and you risk slower queries, higher costs, and even data loss. Here are five practical tips to keep your OpenSearch cluster running smoothly and efficiently.
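One of the most common tuning levers is shard sizing: OpenSearch guidance generally recommends keeping shards in roughly the 10–50 GiB range. A hedged sketch of that heuristic (the function name and the 30 GiB target are illustrative choices, not an official API):

```python
import math


def recommended_primary_shards(index_size_gib: float,
                               target_shard_gib: float = 30.0) -> int:
    """Suggest a primary shard count that keeps shards near a target size.

    The 10-50 GiB-per-shard range is a widely cited OpenSearch guideline;
    30 GiB is a midpoint chosen here purely for illustration.
    """
    return max(1, math.ceil(index_size_gib / target_shard_gib))


print(recommended_primary_shards(300))  # 10 shards of ~30 GiB each
print(recommended_primary_shards(5))    # 1 shard; don't over-shard tiny indexes
```

Oversharding (thousands of tiny shards) wastes heap and cluster-state overhead, while undersharding makes recovery and rebalancing slow, so a size-based heuristic like this is a reasonable starting point before measuring real query load.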

Guided by Trust: ScienceLogic Earns TrustRadius Top Rated for the Sixth Year Running

In a world where IT complexity is accelerating, trust has never been more essential. At ScienceLogic, trust isn’t just a value—it’s our compass. It guides how we innovate, how we serve, and how we grow alongside our customers. That’s why we’re proud to share that ScienceLogic SL1 has once again been named a Top Rated product on TrustRadius—for the sixth consecutive year. This recognition is more than a milestone.

Achieving Sovereign AI with the JFrog Platform and NVIDIA Enterprise AI Factory

Sovereign AI ensures control over AI/ML data, models, and infrastructure, which is now essential for enterprises, regulated industries, and national interests. JFrog and NVIDIA have collaborated to deliver a secure, scalable solution for sovereign AI. NVIDIA provides the accelerated computing and AI software, while JFrog ensures trusted DevSecOps and MLOps practices across the entire AI lifecycle, from model development and security scanning to deployment at the edge and in air-gapped environments.

A Simple Guide to GKE Cost Allocation and Cluster Spend

Running workloads on Google Kubernetes Engine (GKE) delivers impressive scalability and flexibility. Yet, it can also introduce a tricky challenge: tracking GKE costs accurately. Remember, GKE costs rarely scale linearly. Overprovisioned nodes, idle autoscalers, and orphaned workloads can quietly balloon your bill in the background. And while GKE’s native tools offer some visibility, they often miss the full picture.
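GKE's cost allocation attributes node cost to workloads in proportion to their resource requests, which is also why unrequested (idle) capacity slips through the cracks. A simplified sketch of that proportional split (namespace names, the price, and the CPU-only weighting are illustrative assumptions; the real mechanism also weighs memory requests over time):

```python
def allocate_node_cost(node_cost: float,
                       cpu_requests_by_ns: dict[str, float]) -> dict[str, float]:
    """Split a node's hourly cost across namespaces by requested CPU.

    Toy model of requests-proportional cost allocation: each namespace's
    share of the node cost equals its share of total CPU requests.
    """
    total = sum(cpu_requests_by_ns.values())
    if total == 0:
        return {ns: 0.0 for ns in cpu_requests_by_ns}
    return {ns: node_cost * req / total
            for ns, req in cpu_requests_by_ns.items()}


# A $1.00/hour node where 'web' requests 3 vCPU and 'batch' requests 1 vCPU:
shares = allocate_node_cost(1.00, {"web": 3.0, "batch": 1.0})
print(shares)  # {'web': 0.75, 'batch': 0.25}
```

Note what this model hides: if those 4 requested vCPUs sit on an 8-vCPU node, half the node's cost belongs to idle capacity that no namespace "owns," which is exactly the kind of quiet overspend the article is warning about.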