Over the years, a myriad of web servers have been written.
Apache, one of the oldest, is still the most widely used today. However, its scaling model can be relatively inefficient under heavy load. Several newer web servers have appeared in recent years that attempt to address some of Apache's shortcomings. One of the most prominent of these is Nginx.
This article examines the relative performance of each web server to see how they compare head-to-head. Keep in mind that this is not meant to be an exhaustive be-all end-all performance comparison. The hope is to simply give a relative performance comparison under common conditions.
- These web servers were tested using ApacheBench (ab), an HTTP server benchmarking tool.
- In each test, 25,000 requests are made for a 5k PNG file locally from a VPS to remove potentially variable network conditions from the equation.
- Before each test, the web server in question was restarted to clear out any caching or other state that might interfere with the results.
- Each test was run with different numbers of concurrent requests to gauge performance at different levels of usage. (Keep in mind that browsers commonly open up to 6 concurrent connections per user, so 10 users browsing your site at the same time amounts to roughly 60 concurrent connections.)
The commands that are used in this test follow this format:
[server]$ ab -n 25000 -c 50 http://www.example.com/dreamhost_logo.png
The -c flag was increased to match the concurrency level being tested.
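The full run can be sketched as a simple loop over concurrency levels. The levels below are illustrative, not necessarily the exact ones used in the tests, and the loop prints each command rather than running it so you can review the sequence first; drop the `echo` to actually run the benchmarks.

```shell
# Illustrative concurrency levels; adjust to match the load you want to test.
# Prints each ab command instead of running it. Remove "echo" to execute.
for c in 1 10 25 50 100; do
  echo ab -n 25000 -c "$c" http://www.example.com/dreamhost_logo.png
done
```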
Memory usage

Memory usage is important to measure, especially on a VPS, where memory has a hard cap and raising it costs you additional money. (Note that memory is measured in megabytes.)
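One simple way to capture these memory numbers is to sum the resident set size (RSS) of every server process. This is a sketch assuming a Linux VPS with procps; the process name `nginx` is just an example, so substitute `apache2` or `httpd` as appropriate.

```shell
# Sum the RSS of every process with the given name and report megabytes.
# If no such process is running, this prints 0.0 MB.
server=nginx   # example name; use apache2 or httpd for Apache
ps -o rss= -C "$server" | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'
```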
As you can see, Nginx comes out as the clear leader in this test. In fact, the difference is so dramatic it's almost unbelievable. How can such a huge disparity exist?
It has to do with how Apache scales as requests come in. To handle additional requests, it spawns new processes (or threads, depending on the MPM in use). As more and more connections arrive, more and more Apache processes are spawned to handle them, which causes memory usage to grow fairly quickly.
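You can see this model reflected in Apache's prefork MPM configuration. The values below are illustrative defaults, not the settings used in this test:

```apache
# Illustrative prefork MPM settings (httpd.conf / apache2.conf).
# Each request is served by its own process, up to MaxRequestWorkers,
# so process count (and memory) grows with concurrency.
<IfModule mpm_prefork_module>
    StartServers             5
    MinSpareServers          5
    MaxSpareServers         10
    MaxRequestWorkers      150
    MaxConnectionsPerChild   0
</IfModule>
```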
In comparison, Nginx's memory usage stays fairly static across the board from start to finish. It uses an event-driven architecture with a small, fixed pool of worker processes, each of which handles many connections at once.
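That fixed-pool model is visible in Nginx's configuration. Again, the values here are illustrative, not the ones used in this test:

```nginx
# Illustrative nginx.conf excerpt: a fixed pool of workers, each using
# an event loop to multiplex many connections, so memory stays flat
# as concurrency rises.
worker_processes  auto;        # typically one worker per CPU core

events {
    worker_connections  1024;  # connections handled per worker
}
```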
Requests per second
This is essentially a measure of how fast the server can receive and serve requests at different levels of concurrency. The more requests a server can handle per second, the more traffic it is able to absorb. Here's how the servers compare in this arena:
Nginx clearly dominates in the raw number of requests per second it can serve. At higher levels of concurrency, it can handle fewer requests per second, but still more than Apache.
Remember, the results shown are good only for measuring relative (and not absolute) performance, as the tests were conducted locally on the server.
Remember, Apache offers a larger toolbox of features out of the box and is probably the most compatible with the web software out there today. Furthermore, most websites don't receive enough concurrent hits to see large performance or memory benefits from Nginx – but it's worth checking out to see if it works best for your needs.