Apache web server monitoring: Key metrics and how to monitor them
According to a survey by Web Technology Surveys, around 29.5% of the world's active websites are powered by Apache HTTP Server (often referred to as Apache web server or just Apache), making it one of the most popular web servers. Apache's flexible and scalable nature allows it to handle workloads that range from small-scale blogs to commercial web services.
Let's dive deeper into how the Apache web server works and explore the crucial performance indicators you need to pay attention to while monitoring Apache web servers.
How do Apache web servers work?
The basic duty of a web server is to accept incoming requests and deliver the corresponding webpage components to the client. When a request is received, the Apache web server uses its configuration files—including the main configuration file, virtual host configuration files, and various module configurations—to determine how to handle it. It maps the requested URL to a file or handler on the server and returns the response to the client. Apache can perform various functions—including URL rewriting, caching, and encryption—through its modular architecture, which allows modules to be loaded and unloaded dynamically to tailor its functionality. Communication is secured with SSL/TLS protocols and certificates, ensuring secure data exchange between clients and servers.
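As a simple illustration of this request/response cycle, here is a minimal Python sketch that fetches a page from an Apache-served site over HTTPS and prints the status code and server banner. The URL is a placeholder, not a real endpoint from this article.

```python
# Minimal sketch of the request/response cycle: the client asks for a URL over
# HTTPS, and the web server maps it to a file or handler and returns a response.
# "https://www.example.com/" is a placeholder URL.
from urllib.request import urlopen

with urlopen("https://www.example.com/") as response:
    print(response.status)                   # e.g., 200 when the request succeeds
    print(response.headers.get("Server"))    # server banner, e.g., "Apache/2.4.x"
    body = response.read()
    print(len(body), "bytes of content received")
```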
Key Apache monitoring metrics: A mini user manual
Let's group the key Apache metrics into three categories:
- Performance metrics
- Resource metrics
- System metrics
Performance metrics
These are the most fundamental metrics of all. Anomalies here often signal a declining performance trend or an issue that is already affecting the end-user experience. Here are the three performance metrics that are crucial to monitor:
- Request processing time
Request processing time is the time taken by the server to handle an incoming request, from the moment the request is received to the moment the response is delivered to the web client.
High processing time is bad for the server. When high processing time is observed, the first rule of monitoring is to check your databases: query wait times, deadlocks, and any metric that might indicate a resource inefficiency. If everything appears to be running optimally, check the thread activity and the status of the Apache web server. If the threads are configured with a high timeout, the worker threads keep connections alive longer than required, allowing the web server to accommodate inactive connections and reducing its responsiveness.
Low processing time is usually good, but not always. It could mean that the webpages lack dynamic content or that basic functionality is failing. If the processing time is lower than expected, check whether the content on the site matches the intent of its components. It is also good to check whether the sites are optimized—you don't want to leave room for excessive resource consumption. A simple external probe for request processing time is sketched below.
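The hedged sketch below measures processing time from the client side; the URL and threshold are illustrative only. For the server-side view, Apache's mod_log_config "%D" LogFormat field records the time taken to serve each request in microseconds.

```python
# Client-side sketch of measuring request processing time.
# URL and threshold are assumptions; tune them to your own SLA.
import time
from urllib.request import urlopen

URL = "https://www.example.com/"   # hypothetical endpoint to probe
SLOW_THRESHOLD_S = 1.0             # illustrative alert threshold in seconds

start = time.monotonic()
with urlopen(URL) as response:
    response.read()                # read the full body so transfer time is included
elapsed = time.monotonic() - start

print(f"{URL} served in {elapsed:.3f}s")
if elapsed > SLOW_THRESHOLD_S:
    print("Warning: processing time above threshold; check databases and worker threads")
```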
- Requests per minute
Requests per minute (RPM) indicates the number of HTTP requests handled by the server in one minute. RPM denotes the server's capacity to handle traffic, helping admins understand server efficiency and scale for varying loads at different times. RPM is often affected by other factors like request size, server-side code complexity, and available system resources. Failure to keep this metric within optimal limits can result in high page load and response times.
- An increase in RPM indicates a growing workload on the server. Optimize resource allocation or consider scaling (upgrades or adding servers) to ensure your server handles the increased workload efficiently and maintains optimal performance.
On the other hand, a declining RPM indicates the server's inability to serve incoming requests. It is usually difficult to identify the root cause of a low RPM, but potential reasons include resource constraints, a sudden surge in server traffic, database downtime, disk swapping, slow database queries, and network latency. A quick way to derive RPM from the access log is sketched below.
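One rough way to observe RPM without extra tooling is to count access-log entries per minute. The sketch below assumes a common/combined log format and a typical log path; adjust it to your CustomLog location.

```python
# Sketch: count requests per minute from an Apache access log.
# LOG_PATH is an assumption; it is often /var/log/httpd/access_log on RHEL-based systems.
from collections import Counter

LOG_PATH = "/var/log/apache2/access.log"

per_minute = Counter()
with open(LOG_PATH) as log:
    for line in log:
        try:
            # Timestamps look like [10/Oct/2023:13:55:36 +0000]; keep them up to the minute.
            timestamp = line.split("[", 1)[1].split("]", 1)[0]
            per_minute[timestamp[:17]] += 1    # e.g., "10/Oct/2023:13:55"
        except IndexError:
            continue                           # skip malformed lines

# The log is chronological, so the last entries are the most recent minutes.
for minute, count in list(per_minute.items())[-5:]:
    print(f"{minute} -> {count} requests")
```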
- Availability
Availability is the server's ability to remain operational and accessible to users without interruption. It is measured as a percentage that indicates the server's uptime, showing whether the web services are consistently available to handle client requests. Availability should be kept as close to 100% as possible to ensure peak uptime and a seamless end-user experience. Adopting the following measures can help improve the availability of your Apache web servers:
- Load balancing: Balancing or distributing workloads over multiple Apache web servers prevents a single server from overloading. Even if a server is scheduled for maintenance, one of the others can take up the requests to ensure high availability at any given time.
Clustering: Configuring multiple servers as a cluster makes each server a failover for the others. This helps admins handle larger workloads without worrying about downtime.
Regular optimization: Scheduling maintenance and optimizing resources and workloads regularly helps admins keep server health at its best. Observing traffic and scheduling maintenance and updates during off-peak hours improves server efficiency and supports uninterrupted availability. A simple availability probe is sketched below.
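As a minimal sketch of how availability can be sampled, the script below polls a hypothetical URL at a fixed interval and reports the share of successful checks. The URL, interval, and check count are illustrative values, not recommendations.

```python
# Sketch: poll the server periodically and report availability as the
# percentage of successful checks. All constants are illustrative assumptions.
import time
from urllib.request import urlopen
from urllib.error import URLError

URL = "https://www.example.com/"   # hypothetical endpoint
CHECKS = 10
INTERVAL_S = 30

successes = 0
for _ in range(CHECKS):
    try:
        with urlopen(URL, timeout=5) as response:
            if response.status < 500:
                successes += 1
    except (URLError, OSError):
        pass                        # connection failure counts as downtime
    time.sleep(INTERVAL_S)

print(f"Availability over the window: {100 * successes / CHECKS:.1f}%")
```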
Resource metrics
These metrics explain the resource consumption and resource needs of the web servers. If you observe any performance indicator straying from its usual behavior, these are the metrics you should cross-check for further analysis. Resource metrics are among the most important data for any IT component: a resource outage leads to a server outage, which in turn leads to application crashes and failing websites.
Here are the resource metrics—apart from memory and storage—that you need to keep a close watch on:
Worker resource metrics: Worker resource metrics describe the processes or threads responsible for handling incoming requests. Monitoring them provides clear insights into the resource utilization, activity, and availability of the threads. Optimizing worker usage by analyzing resource consumption, refining code, and considering server scaling helps improve overall server performance. Here's how worker threads impact server performance:
Idle workers: A limited number of idle workers is acceptable, as they act as spare capacity to absorb unexpected traffic surges. But if the number is high, the server might be creating more worker processes than needed, tying up resources unnecessarily. The idle worker count can be tuned with directives such as "MinSpareThreads" and "MaxSpareThreads" (or "MinSpareServers" and "MaxSpareServers" with the prefork MPM), alongside "MaxRequestWorkers", to match expected traffic and available server resources.
Busy workers: If all workers are busy handling existing requests, incoming requests are queued, increasing wait and response times. This can severely impact the end-user experience and slow down the servers. Sizing the "MaxRequestWorkers" directive for the expected workload helps the server handle requests without queuing them for long, as sketched below.
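If mod_status is enabled and a /server-status location is reachable, its machine-readable output exposes busy and idle worker counts. The sketch below is an assumed setup: the status URL and the MaxRequestWorkers value must be matched to your own configuration.

```python
# Sketch: read worker metrics from mod_status's machine-readable output.
# STATUS_URL and MAX_REQUEST_WORKERS are assumptions; mirror your own config.
from urllib.request import urlopen

STATUS_URL = "http://localhost/server-status?auto"
MAX_REQUEST_WORKERS = 150          # mirror your MaxRequestWorkers directive

metrics = {}
with urlopen(STATUS_URL) as response:
    for raw in response.read().decode().splitlines():
        if ":" in raw:
            key, value = raw.split(":", 1)
            metrics[key.strip()] = value.strip()

busy = int(metrics.get("BusyWorkers", 0))
idle = int(metrics.get("IdleWorkers", 0))
print(f"Busy workers: {busy}, idle workers: {idle}")

# If busy workers approach MaxRequestWorkers, new requests start to queue.
if busy >= 0.9 * MAX_REQUEST_WORKERS:
    print("Warning: worker pool near saturation; consider raising MaxRequestWorkers or scaling out")
```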
- Connections: Understanding the performance of connections is important to maintain server performance at optimum, identify potential issues, and ensure overall web server health. Here is how monitoring connections in Apache servers helps:
- Understand workloads: Monitoring connections will help you understand the server load and identify excessive resource usage in asynchronous operations.
Optimize resource usage: Tracking connection volume gives you an idea of bandwidth usage and helps you identify potential bottlenecks. For example, writing connections are used to respond to clients with the requested data. An increase in writing connections indicates insufficient bandwidth or inefficient server-side code that is slowing data transfer. A clear understanding of how writing connections behave helps admins identify resource over-utilization and optimize connection workloads to improve the overall performance of Apache servers.
Improve user experience: Tracking "KeepAlive" connections helps ensure connections are reused smoothly, keeping the number of active connections in check and improving response times for a seamless end-user experience.
Troubleshoot and quick-fix: Identify performance errors that affect connection health and activity by monitoring connection closure rates. A high number of connections that terminate without a trigger or command indicates application glitches or network issues. A sketch of tracking connection states is shown below.
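With the event MPM, mod_status's ?auto output also reports connection-state counters (ConnsTotal, ConnsAsyncWriting, ConnsAsyncKeepAlive, ConnsAsyncClosing). The sketch below assumes such a setup; the exact fields depend on your Apache version and MPM.

```python
# Sketch: track connection states via mod_status (event MPM counters).
# STATUS_URL is an assumption; field availability depends on version and MPM.
from urllib.request import urlopen

STATUS_URL = "http://localhost/server-status?auto"

metrics = {}
with urlopen(STATUS_URL) as response:
    for raw in response.read().decode().splitlines():
        if ":" in raw:
            key, value = raw.split(":", 1)
            metrics[key.strip()] = value.strip()

for field in ("ConnsTotal", "ConnsAsyncWriting", "ConnsAsyncKeepAlive", "ConnsAsyncClosing"):
    print(f"{field}: {metrics.get(field, 'n/a')}")

# A rising share of writing connections can point to slow transfers or bandwidth limits.
total = int(metrics.get("ConnsTotal", 0) or 0)
writing = int(metrics.get("ConnsAsyncWriting", 0) or 0)
if total and writing / total > 0.5:
    print("Warning: more than half of the connections are busy writing responses")
```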
System metrics
System load: Monitoring the system load of an Apache web server is crucial for maintaining optimal performance and efficient request handling. System load indicates the workload on the server's CPU, expressed as the average number of processes that are running or waiting in an uninterruptible state. Load averages are calculated over several time intervals (one, five, and fifteen minutes) to provide an accurate picture of system behavior. This helps you identify and fix bottlenecks, plan upgrades, and ensure the smooth operation of the servers.
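On the host running Apache, the three load averages can be read directly. The sketch below uses Python's os.getloadavg() (Unix-like systems only); comparing the load to the CPU count is a simple, assumed heuristic rather than a universal rule.

```python
# Sketch: read the 1-, 5-, and 15-minute load averages on the Apache host.
# Comparing load to CPU count is an assumed rule of thumb for spotting overload.
import os

load_1m, load_5m, load_15m = os.getloadavg()
cpu_count = os.cpu_count() or 1

print(f"Load averages: 1m={load_1m:.2f}, 5m={load_5m:.2f}, 15m={load_15m:.2f}")
if load_5m > cpu_count:
    print("Warning: 5-minute load exceeds the number of CPU cores; the server may be overloaded")
```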
Monitoring Apache web servers with Applications Manager
Manually tracing all these metrics and observing Apache web servers in real time is impractical. Native performance monitoring options and command line interfaces limit your visibility into your Apache web servers. That is why organizations turn to third-party performance monitoring tools.
ManageEngine Applications Manager is one such Apache monitoring tool that offers extensive monitoring capabilities for Apache servers. Applications Manager also provides in-depth application performance monitoring (APM) for applications running on Apache. Plus, it maps and monitors all dependencies between your application and infrastructure elements, giving you a holistic view of your entire IT infrastructure. It supports over 150 technologies, including cloud apps, web apps, servers, VMs, databases, ERPs, web services, app servers, URLs, and more.
To get started, you may download a free 30-day trial version of Applications Manager for a hands-on experience.