Operations | Monitoring | ITSM | DevOps | Cloud

October 2023

IT Operations Management (ITOM): The Basics

What is ITOM? Information technology operations management (ITOM) is the administration and management of an organization’s hardware, network, applications and technology needs. Generally regarded as the true meaning of “tech support,” it is a service-centric approach to IT infrastructure, IT support operations, IT networking and end user support.

What Is ITSM? IT Service Management Explained

ITSM, which stands for IT service management, is a strategy for delivering IT services and support to an organization, its employees, customers and business partners. ITSM focuses on understanding end users’ expectations and improving the quality of both IT services and their delivery. In the early days of computers, employees relied on the company IT department for help whenever a computer issue arose.

DevOps & DORA Metrics: The Complete Guide

To achieve DevOps success, you must measure how well your DevOps initiatives work. Tracking the right DevOps metrics will help you evaluate the effectiveness of your DevOps practices. In this article, I’ll explain many DevOps metrics, including their significance, the key metrics for various goals, and — best of all — tips for improving the score of each DevOps metric discussed here.

OKRs, KPIs, and Metrics: Understanding the Differences

In the world of business management and performance tracking, OKRs, KPIs, and Metrics are common terms thrown around. Each plays a distinct role in helping organizations define their vision, measure their progress, and improve their performance. Let's dive deep into understanding the nuanced differences between these three concepts.

What is Observability? An Introduction

Simply put: Observability is the ability to measure the internal states of a system by examining its outputs. A system is considered “observable” if the current state can be estimated by only using information from outputs, namely sensor data. More than just a buzzword, the term “observability” originated decades ago with control theory (which is about describing and understanding self-regulating systems).

Availability: A Beginner's Guide

Availability is the amount of time a device, service or other piece of IT infrastructure is usable — and whether it’s usable at all. Because availability, or system availability, indicates whether a system is operating normally and how effectively it can recover from a crash, attack or some other type of failure, it is considered one of the most essential metrics in information technology management. It is a constant concern.
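Availability is commonly expressed as the fraction of total time a system is up. A minimal sketch (the hours used here are illustrative):

```python
def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Availability as a percentage of total elapsed time."""
    total = uptime_hours + downtime_hours
    return 100.0 * uptime_hours / total

# A service down for 8.76 hours across a 365-day year is "three nines" available.
pct = availability(uptime_hours=8760 - 8.76, downtime_hours=8.76)
print(f"{pct:.1f}%")  # 99.9%
```

The same arithmetic scales to any window: a monthly SLA of 99.9% allows roughly 43 minutes of downtime.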

API Monitoring: A Complete Introduction

At the most basic level, application programming interface (API) monitoring checks to see if API-connected resources are available, working properly and responding to calls. API monitoring has become even more important (and complicated) as more elements are added to the network and the environment evolves, including multiple types of devices, microservices as a key part of application delivery, and, of course, the widespread move to the cloud.
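A basic API check boils down to classifying each response by status code and latency. The thresholds and categories below are illustrative, not a standard:

```python
from dataclasses import dataclass

@dataclass
class ApiCheck:
    status_code: int
    latency_ms: float

def classify(check: ApiCheck, max_latency_ms: float = 500.0) -> str:
    """Classify a single API health-check result."""
    if check.status_code >= 500:
        return "down"           # server-side failure
    if check.status_code >= 400:
        return "client_error"   # reachable, but the call itself is wrong
    if check.latency_ms > max_latency_ms:
        return "degraded"       # responding, but too slowly
    return "healthy"

print(classify(ApiCheck(200, 120.0)))  # healthy
print(classify(ApiCheck(200, 900.0)))  # degraded
print(classify(ApiCheck(503, 40.0)))   # down
```

Real monitoring tools run checks like this on a schedule from multiple regions and alert on trends, not single failures.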

What is Infrastructure as Code? An Introduction to IaC

Infrastructure as Code, or IaC, is the practice of automatically provisioning and configuring infrastructure using code and scripts. IaC allows developers to automate the creation of environments to generate infrastructure components rather than setting up the necessary systems and devices manually.
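Most IaC tools are declarative: you describe the desired state, and the tool computes the changes needed to get there. A simplified sketch of that diffing step (resource names and attributes here are hypothetical):

```python
def plan(desired: dict, actual: dict) -> dict:
    """Diff desired infrastructure state against actual state,
    in the style of a 'plan' step before applying changes."""
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items() if k in actual and actual[k] != v}
    destroy = [k for k in actual if k not in desired]
    return {"create": create, "update": update, "destroy": destroy}

desired = {"web-1": {"size": "small"}, "web-2": {"size": "small"}}
actual = {"web-1": {"size": "large"}, "db-1": {"size": "medium"}}
print(plan(desired, actual))
```

Because the plan is computed from code, the same definition can be applied repeatedly and produces identical environments.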

Coffee Talk with SURGe: The Interview Series featuring Michael Rodriguez

Join Mick Baccio and special guest Michael Rodriguez, Principal Strategic Consultant for Google Public Sector, for a conversation about Michael’s career path into cybersecurity, the origin of his nickname “Duckie,” and his work as a cybersecurity subject matter expert for Google Space.

Real-Time Analytics: Definition, Examples & Challenges

Businesses need to stay agile and make data-driven decisions in real time to outperform their competitors. Real-time analytics is emerging as a game-changer: 80% of companies report an increase in revenue due to real-time data analytics, because it lets them gain valuable insights on the fly. This blog post will explore the concept of real-time analytics, its examples, and some challenges faced when implementing it. Read on for a detailed explanation of this exciting area in data analytics.
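A core building block of real-time analytics is computing aggregates incrementally as events arrive, rather than re-scanning a stored dataset. A minimal sliding-window sketch:

```python
from collections import deque

class SlidingWindowAverage:
    """Rolling average over the last `size` events, updated per event."""

    def __init__(self, size: int):
        self.window = deque(maxlen=size)

    def add(self, value: float) -> float:
        self.window.append(value)  # oldest value is evicted automatically
        return sum(self.window) / len(self.window)

avg = SlidingWindowAverage(size=3)
for v in [10, 20, 30, 100]:
    print(avg.add(v))  # 10.0, 15.0, 20.0, 50.0
```

Stream processors apply the same idea at scale, maintaining windows per key across millions of events per second.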

Industry Cloud Platforms, Explained

Cloud computing changed the way enterprise IT works. Investments in public cloud technologies are forecast to grow by 21.7% to reach the $600 billion mark by the end of this year. The trend is driven largely by business organizations viewing these capabilities as an imperative for digital transformation — especially the domain-specific IT services that solve problems unique to their industry verticals.

Maturity Models for IT & Technology

Setting meaningful goals for your technology investment decisions requires an understanding of your requirements. Primarily, that’s… Measuring your IT maturity is one way to advance your IT performance — in a way that aligns with your organizational goals and minimizes the risk of failure. You can compare your current situation to a group of peers or competitors and also to industry benchmarks. Let’s take a look.

Predictive Maintenance: A Brief Introduction

Predictive maintenance is a maintenance strategy that uses machine learning algorithms trained with Industrial Internet of Things (IIoT) data to make predictions about future outcomes, such as determining the likelihood of equipment and machinery breaking down. Using a combination of data, statistics, machine learning and modeling, predictive maintenance is able to optimize when and how to execute maintenance on industrial machine assets.

CapEx vs OpEx for Cloud, IT Spending, & More

Capital expenditures (CapEx) and operational expenditures (OpEx) are two ways organizations categorize their business expenses. Every organization has a variety of expenses, from office rent to IT infrastructure costs to wages for their employees. To simplify accounting, they organize these costs into different categories, two of the most common being CapEx and OpEx.
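The practical comparison is usually total cost over a planning horizon: a large upfront purchase (CapEx) versus a recurring subscription (OpEx). The dollar figures below are hypothetical:

```python
def total_cost(capex: float, monthly_opex: float, months: int) -> float:
    """Total cost of ownership over a horizon: upfront plus recurring spend."""
    return capex + monthly_opex * months

# Hypothetical: buy a $12,000 server (plus $100/month power and support)
# vs. rent a $450/month cloud instance, over 3 years.
buy = total_cost(capex=12_000, monthly_opex=100, months=36)
rent = total_cost(capex=0, monthly_opex=450, months=36)
print(buy, rent)  # 15600 16200
```

Real analyses also factor in depreciation schedules, tax treatment and the option value of being able to cancel OpEx spend, which is often why cloud wins despite a higher raw total.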

Container Orchestration: A Beginner's Guide

Container orchestration is the process of managing containers using automation. It allows organizations to automatically deploy, manage, scale and network containers and hosts, freeing engineers from having to complete these processes manually. As software development has evolved from monolithic applications, containers have become the choice for developing new applications and migrating old ones.
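At the heart of an orchestrator is a reconciliation loop: compare the desired number of containers to what is actually running, then start or stop containers to close the gap. A simplified sketch (container names are illustrative):

```python
def reconcile(desired_replicas: int, running: list[str]) -> list[str]:
    """One pass of an orchestrator's reconciliation loop: return the
    start/stop actions needed to match the desired replica count."""
    actions = []
    if len(running) < desired_replicas:
        for i in range(len(running), desired_replicas):
            actions.append(f"start web-{i}")
    elif len(running) > desired_replicas:
        for name in running[desired_replicas:]:
            actions.append(f"stop {name}")
    return actions

print(reconcile(3, ["web-0"]))                    # ['start web-1', 'start web-2']
print(reconcile(1, ["web-0", "web-1", "web-2"]))  # ['stop web-1', 'stop web-2']
```

Production orchestrators run this loop continuously, which is also how they self-heal: a crashed container simply disappears from the running list and is replaced on the next pass.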

Centralized Logging & Centralized Log Management (CLM)

Centralized logging provides visibility into the system by consolidating all the log data in a single source. It supports two particular enterprise needs. Once all the data is ingested in a central location, you can seamlessly identify problems in your systems and troubleshoot them. But with ease come challenges, too. For example, your team members may struggle to locate the details they need in this sea of data.
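Mechanically, centralization means merging per-source log streams into one time-ordered stream that can be searched in a single place. A minimal sketch with made-up log lines:

```python
import heapq

def centralize(*sources):
    """Merge per-source logs of (timestamp, source, message) tuples into
    one time-ordered stream; each source is already sorted by time."""
    return list(heapq.merge(*sources))

web = [(1.0, "web", "GET /"), (4.0, "web", "500 error")]
db = [(2.0, "db", "slow query"), (3.0, "db", "reconnect")]

merged = centralize(web, db)
errors = [entry for entry in merged if "error" in entry[2]]
print(merged[0], errors)  # (1.0, 'web', 'GET /') [(4.0, 'web', '500 error')]
```

A real log management platform adds parsing, indexing and retention on top, but the core value is the same: one queryable, time-ordered view across every source.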

Predictive Network Technology in 2024

IT networks generate large volumes of information in the form of security, network, system and application logs. The volume and variety of log data makes traditional network monitoring capabilities ineffective — especially for monitoring use cases that require proactive decision making. All of this makes large-scale and complex enterprise IT networks a suitable use case for advanced AI and machine learning capabilities.

Telemetry 101: An Introduction To Telemetry

Understanding system performance is critical for gaining a competitive advantage. Telemetry provides deeper insights into the system, helping business owners make better decisions. This article takes a comprehensive look at the topic of telemetry. We’ll look at its functionality and telemetry types. We’ll also look at all the things telemetry data can help you with — plus the challenges companies with telemetry systems might face.

Listen, Learn and Adapt: The Keys to a Nimble Customer Experience Strategy

In celebration of Customer Experience Day 2023, this post is part of a series on customer experience and the ways that Splunk strives to deliver superior customer experience at every level. Any resilient customer experience (CX) team knows that in order to create superior customer experiences, listening is the first step. This is made apparent when you consider that 73% of customers expect companies to have a firm grasp on their unique needs and expectations.

Harmonizing Digital Channels and Business Operations to Deliver a Good Customer Experience

In celebration of Customer Experience Day 2023, this post is part of a series on customer experience and the ways that Splunk strives to deliver superior customer experience at every level. Today, customers interact with brands through a variety of channels and platforms. In fact, 57% of customers prefer to engage with brands through digital channels first.

Cloud Migration Basics: A Beginner's Guide

What is a cloud migration? A cloud migration is the practice of moving IT workloads (data, applications, security, infrastructure, and other objects) to a cloud environment. Cloud migration can take many forms. There is also another type of cloud migration called a reverse cloud migration (also known as cloud repatriation or cloud exit), where existing applications are moved from a public cloud back to an on-premises data center.

Anomaly Detection in 2024: Opportunities & Challenges

Anomaly detection is the practice of identifying data points and patterns that deviate significantly from established norms. As a concept, anomaly detection has been around forever. Today, detecting anomalies is a critical practice. That’s because anomalies can indicate important information. Let’s take a look at the wide world of anomaly detection.
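One classic statistical approach is the z-score rule: flag any value more than k standard deviations from the mean. A minimal sketch with illustrative readings:

```python
from statistics import mean, stdev

def anomalies(values: list[float], k: float = 3.0) -> list[float]:
    """Flag values more than k standard deviations from the mean (z-score rule)."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > k * sigma]

readings = [10, 11, 9, 10, 12, 10, 11, 95]
print(anomalies(readings, k=2.0))  # [95]
```

Simple thresholds like this assume a stable baseline; modern systems layer on seasonality handling and ML models so that a nightly traffic spike isn't flagged as an attack.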

Enterprises Realize Benefits from Migrating to Cloud with Splunk

Today, for a lot of organizations, moving to the cloud provides the best strategy to drive higher business efficiency and scale. But moving to the cloud can be challenging. IT leaders are continuously looking for ways to focus more on driving business value while moving to the cloud.

Why Does Observability Need OTel?

To successfully observe modern digital platforms, a new data collection approach was needed. OpenTelemetry (OTel) was the answer: an industry-agreed open standard, rather than a single vendor's approach, for how observability (O11y) data should be collected from a platform. This separates data collection from each vendor's platform for data processing and visualization, making the data collection approach vendor-agnostic.

Predictive vs. Prescriptive Analytics: What's The Difference?

Imagine being able to foresee future trends, anticipate customer behavior, optimize your operations, and take actions that are not just reactive — they shape the future of the market. In the world of data-driven decision-making, we're able to do all that by paying attention to the information we analyze from predictive and prescriptive analytics. A large and growing field, data analytics is often broken into four categories — of which predictive and prescriptive are two!

Announcing Splunk Federated Search for Amazon S3 Now Generally Available in Splunk Cloud Platform

Splunk is pleased to announce the general availability of Federated Search for Amazon S3, a new capability that allows customers to search data from their Amazon S3 buckets directly from Splunk Cloud Platform without the need to ingest it. Enterprises rely heavily on cloud object storage services as the de facto destination for their new data to leverage the cost, compliance, security, scalability and manageability benefits that cloud platforms can offer.