
Tech Talk - Leveraging Automated Threat Analysis Across the Splunk Ecosystem

Find out how Splunk Attack Analyzer can help you quickly and efficiently investigate potential malware and phishing incidents by automatically tracking each stage of complex attack chains and expediting your response efforts. Hear directly from Product Manager Aditya Raj as he demonstrates how to combine Splunk Attack Analyzer with Splunk Enterprise Security and Splunk SOAR for even greater threat detection and response power.

Energy-Efficient Computing: How To Cut Costs and Scale Sustainably in 2026

With AI at the center of technology and innovation today, energy-efficient computing is quietly becoming one of the most urgent challenges. In this article, we discuss what makes energy-efficient computing relevant for your organization, especially when resource-intensive AI workloads play an important role in driving your business operations and services.

Introducing the Splunk Technology Add-on for Ollama: Illuminating Shadow AI Deployments

Without strong visibility and governance, local LLMs risk replicating the fragmented, unsupervised sprawl once seen in shadow IT. As these powerful AI tools become embedded in daily workflows, that sprawl complicates security postures and makes it difficult for organizations to ensure proper oversight and compliance. To address this challenge, the Splunk Threat Research Team has released the Splunk Technology Add-on for Ollama, which provides comprehensive monitoring and observability capabilities designed specifically for local LLM deployments.

Tech Talk - Observability Unlocked: Kubernetes Monitoring with Splunk Observability Cloud

In this Tech Talk, discover how teams leverage Splunk Infrastructure Monitoring (IM) to supercharge their Kubernetes operations, detect issues within minutes, and resolve them 90% faster, all while optimizing and scaling like pros.

Artificial Intelligence as a Service (AIaaS): What Is Cloud AI & How Does It Work?

Today, organizations looking to build AI products and services using large language models (LLMs), agentic AI, and generative AI often start by investing in artificial intelligence as a service (AIaaS), also known as cloud AI. AIaaS provides a scalable, flexible, and cost-effective way for businesses of all sizes to access advanced AI technologies without the need for extensive in-house expertise or infrastructure.

RED Metrics & Monitoring: Using Rate, Errors, and Duration

The RED method is a streamlined approach for monitoring microservices and other request-driven applications, focusing on three critical metrics: Rate, Errors, and Duration. Inspired by the principles behind Google's "Four Golden Signals," the RED monitoring framework offers a pragmatic, user-centric perspective on service assurance and performance.
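The three RED metrics can be derived directly from raw request records. Below is a minimal sketch in Python over invented sample data; the log format, time window, and the choice of p50 as the duration statistic are illustrative assumptions, not part of any specific tool:

```python
from statistics import quantiles

# Hypothetical request log: (timestamp_seconds, status_code, duration_ms)
requests = [
    (0.2, 200, 45.0),
    (0.9, 200, 52.0),
    (1.4, 500, 310.0),
    (2.1, 200, 48.0),
    (2.8, 404, 12.0),
    (3.5, 200, 61.0),
]

window_seconds = 4.0

# Rate: requests per second over the observation window
rate = len(requests) / window_seconds

# Errors: fraction of requests that failed (5xx responses)
errors = sum(1 for _, status, _ in requests if status >= 500) / len(requests)

# Duration: latency distribution, typically reported as percentiles
durations = sorted(d for _, _, d in requests)
p50 = quantiles(durations, n=100)[49]  # median latency in ms

print(f"rate={rate:.2f} req/s, error_rate={errors:.1%}, p50={p50:.1f} ms")
```

In production these values are computed continuously over sliding windows and higher percentiles (p95, p99) are usually tracked alongside the median, since tail latency is what users notice.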

ISP Monitoring Explained: How to Measure, Manage, and Improve Internet Performance

Reliable internet connectivity isn't a convenience; it's mission-critical infrastructure for modern organizations. Every organization today depends on high-speed, reliable internet access for daily operations: cloud collaboration, data transfer, streaming, remote work, and customer engagement. As digital transformation accelerates, the rise of AI, large language models (LLMs), IoT, and device sprawl has massively increased bandwidth demand and network complexity.
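The core measurements behind ISP monitoring (latency, jitter, packet loss) can be computed from simple probe results. A minimal sketch over invented ping samples; note that jitter is computed here as RTT standard deviation, a common simplification of the RFC 3550 inter-arrival method:

```python
from statistics import mean, stdev

# Hypothetical ping probe results (RTT in ms); None marks a lost packet
samples = [23.1, 24.8, None, 22.7, 30.4, 23.9, None, 25.2]

received = [rtt for rtt in samples if rtt is not None]

packet_loss = 1 - len(received) / len(samples)  # fraction of probes lost
avg_latency = mean(received)                    # average round-trip time
jitter = stdev(received)                        # RTT variability (simplified)

print(f"loss={packet_loss:.1%}, avg={avg_latency:.1f} ms, jitter={jitter:.1f} ms")
```

Real monitoring tools run such probes continuously against multiple targets and alert when loss, latency, or jitter crosses a threshold agreed in the ISP's SLA.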

From Idea to Deployment: How To Build a Practical AI Roadmap

AI is being adopted at a faster rate than ever across the business world. According to Stanford, 78% of organizations had implemented AI in some form by 2024. And if that’s not convincing enough, 92% of companies plan to expand their AI investment over the next three years. Practically everyone, including your competitors, is already using AI to gain a competitive edge. If you don’t act soon, there's a real risk of falling behind.

LLM Observability Explained: Prevent Hallucinations, Manage Drift, Control Costs

Large Language Models (LLMs) are transforming how businesses interact with users, automate workflows, and deliver insights in real time. But as powerful as these models are, running them at scale comes with unique challenges, from hallucinations and latency spikes to cost overruns and user trust issues.
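Latency spikes and cost overruns are only visible if each model call is instrumented. Below is a minimal sketch of per-call telemetry; `call_llm`, the token fields, and the flat per-token price are all hypothetical stand-ins, not a real provider API:

```python
import time

PRICE_PER_1K_TOKENS = 0.002  # assumed flat rate, for illustration only

def call_llm(prompt: str) -> dict:
    """Hypothetical stand-in for a real model call."""
    time.sleep(0.01)  # simulate network + inference latency
    return {
        "text": "...",
        "prompt_tokens": len(prompt.split()),  # crude token proxy
        "completion_tokens": 5,
    }

def observed_call(prompt: str, log: list) -> str:
    """Wrap a model call, recording latency, token usage, and cost."""
    start = time.perf_counter()
    result = call_llm(prompt)
    latency = time.perf_counter() - start
    tokens = result["prompt_tokens"] + result["completion_tokens"]
    log.append({
        "latency_s": latency,
        "tokens": tokens,
        "cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
    })
    return result["text"]

log = []
observed_call("summarize this report", log)
print(log[0])
```

Aggregating these records over time is what lets a team spot latency regressions or runaway spend before users or finance do; hallucination and drift tracking additionally require logging model outputs for offline evaluation.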