Know your Node.js

As part of a follow-up to last month’s column about PHP vs Node.js, I hit some problems with Node under load. As with all technologies, Node.js does have some limitations that may or may not be a problem for your specific use case. If the last column comparing PHP and Node.js had a deeper message, that message would be that if you want to scale, you have to know your stack. To be completely clear, when I say stack I mean the layers of technology used to serve HTTP requests. One of the most common stacks out there is simply called LAMP: (L)inux (A)pache2 (M)ySQL (P)HP (or Perl). You now see a lot of references to LNMP, where Apache2 is replaced with Nginx. When building Node.js applications, things can vary a lot since Node.js comes with its own HTTP server. In my previous text, I used Node.js together with MySQL on a Linux box, so I guess we can dub that the LNM stack if we absolutely need a name for it.

And when I say know your stack, I mean that if you want to produce better than average performance numbers, you have to be better than average at understanding how the different parts of your stack work together. There are hundreds of little things that most of us never knew mattered that suddenly become important when things come under load. As it happens, watching your application work under load is a great way to force yourself to know your stack a little better.

Background

When testing Apache/PHP against Node.js, I found that the raw performance of Node.js, as well as its ability to handle many concurrent clients, was excellent: faster and more scalable than Apache2/PHP. One reader pointed out that the test wasn’t very realistic, since there was just one single resource being queried and there was no static content involved. Apache2/PHP could very well do relatively better if some of the content was static. So I set up a test to check this, and while running it, Node.js crashed. As in stopped working. As in would not serve any more HTTP requests without manual intervention. So to keep it short, Apache2/PHP won that round. But in the spirit of ‘know your stack’, we need to understand why Node.js crashed. The error message I got was this:

Unhandled 'error' event "events.js:71"

First of all, it took a fair amount of googling to figure out what the error message was really about. Or, rather, the error message was saying that something happened and there’s no error handler for it. So good luck.
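
For reference, here is a hedged sketch (not the exact code from my test) of the generic way to keep an unhandled ‘error’ event from killing the process: attach an error listener to the emitter in question, in this case the HTTP server, and optionally to the process itself as a last resort.

var http = require('http');

var server = http.createServer(function (request, response) {
   response.writeHead(200, { 'Content-Type': 'text/plain' });
   response.end('ok');
});

// Without this listener, an 'error' event emitted by the server
// (for example 'too many open files') is rethrown and kills the process.
server.on('error', function (err) {
   console.error('http server error:', err);
});

// Last-resort safety net for errors that bubble all the way up.
process.on('uncaughtException', function (err) {
   console.error('uncaught exception:', err);
});

server.listen(8080);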

Fixing it.

The first indication I got via Google and Stack Overflow was that this may be an issue with Node.js before 0.8.22, and sure enough, I was running 0.8.19. So the first thing I did was upgrade to version 0.8.22. But that did not fix the problem at all (although a later and greater version is of course a nice side effect). With almost all other software involved being up to date, this actually required some structured problem solving.

Back to the drawing board

I eventually managed to trace the error message down to a ‘too many open files’ problem, which is interesting as it answers the crucial question: what went wrong? This happened at roughly 250 concurrent users with a test that was accessing 6 different static files. This is what it looks like in Load Impact:

[Load Impact graph: the failed Node.js test]

Depending a little on timing, and exactly when each request comes in, that would roughly indicate that some 1500 files (6 files times 250 users) could be open at the same time, give or take. Most Linux systems are, by default, configured to allow a relatively small number of open files per process, e.g. 1024. The Linux command to check this is ulimit:

$ ulimit -n
1024

1024 is the default on a lot of distros, including the Ubuntu 12.10 box I was running the tests on. So my machine had 1024 as the limit, but it appears that I had some 1500 files open at the same time. Does that make any sense? Well, sort of. There are at least three factors involved here that affect the results:

  1. Load Impact simulates real browsers (Virtual Users). A VU only opens 4 concurrent connections to the same server, even if the script tells it to download 6 resources. The other 2 resources are simply queued.
  2. Each open TCP socket counts as an open file, so each concurrent TCP connection is an open file. Knowing that our limit is 1024, that would indicate that Node.js could handle up to 256 concurrent users if each user uses the maximum of 4 open connections.
  3. In our sample, a request for a static resource also opens a file and thereby occupies another file handle. That file is open for less time than the actual connection, but still, for a short while, a single request can consume 2 open file handles (see the sketch below this list).
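
To illustrate point 3, here is a minimal sketch of a static file handler along these lines (not the exact code used in the test, and the file path is just a placeholder). The accepted TCP socket is one open file descriptor, and the file being read briefly consumes a second one:

var http = require('http'),
    fs = require('fs');

http.createServer(function (request, response) {
   // The TCP socket behind 'request' is already one open file descriptor.
   // fs.readFile opens the requested file, briefly consuming a second one.
   fs.readFile('./static/index.html', function (err, data) {
      if (err) {
         response.writeHead(500);
         response.end('error');
         return;
      }
      response.writeHead(200, { 'Content-Type': 'text/html' });
      response.end(data);
   });
}).listen(8080);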

So in theory, the limit for concurrent simulated browser users should be 256 or less. But in reality, I saw the number of concurrent users go all the way up to 270 before the Node.js process died on me. The explanation for that is most likely just timing: not all VUs will hit the server at exactly the same time. In the end, hitting problems at about 250 concurrent users squares well with the open files limit being the problem. Luckily, the limit on the number of open files per process is easy to change:

$ ulimit -n 2048

The next test shows real progress. Here’s the graph:

[Load Impact graph: the improved Node.js test]

Problem solved (at least within the limits of this test).

Summary

Understanding what you build upon is important. If you choose to rely on Node.js, you probably want to be aware of how that increases your dependency on various per-process limitations in the operating system in general, and the maximum number of open files in particular. You are more affected by these limitations since everything you do takes place inside a single process. And yes. I know. There are numerous more or less fantastic ways to work around this particular limitation, just as there are plenty of ways to work around limitations in any other web development stack. The key thing to remember is that when you select your stack, framework, language or server, you also select all the limitations that come with it. There’s (still) no silver bullet, even if some bullets are better out of the box than others. Having spent countless hours with other web development languages, I think I’m in a good position to compare, and yes indeed, Node.js delivers some amazing performance. But at present, it comes with a bigger responsibility to ‘know your stack’ than a lot of the others.

Node.js vs PHP – using Load Impact to visualize Node.js efficiency

It could be said that Node.js is the new darling of web server technology. LinkedIn has had very good results with it, and there are places on the Internet that will tell you it can cure cancer.

In the meantime, the old workhorse language of the Internet, PHP, gets a steady stream of criticism, and among the 14k Google hits for “PHP sucks” (exact term), people will say the funniest, most terrible things about the language, while some of the critique is actually quite well balanced. Node.js introduces at least two new things (for a broader audience). First, the ability to write server-side JavaScript code. In theory this could be an advantage, since JavaScript is more important than ever on the client side and using the same language on server and browser would have many benefits. That’s at least quite cool.

The other thing that makes Node.js different is that it’s completely asynchronous and event driven. Node is based on the realization that a lot of computer code actually just sits idle and waits for I/O most of the time, like waiting for a file to be written to disk or for a MySQL query to return data. To take advantage of that, more or less every single function in Node.js is non-blocking.
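
To make that concrete before we look at the structural consequences, here is a minimal sketch of a non-blocking file read (the path is just a placeholder):

var fs = require('fs');

// Ask for the file and hand over a callback; execution continues immediately.
fs.readFile('/etc/hostname', 'utf8', function (err, contents) {
   if (err) {
      console.error('could not read file:', err);
      return;
   }
   console.log('file contents:', contents);
});

console.log('this line runs before the file has been read');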

When you ask Node to open a file, you don’t wait for it to return. Instead, you tell Node what function to pass the results to and get on with executing other statements. This leads to a dramatically different way to structure your code, with deeply nested callbacks, anonymous functions and closures. You end up with something like this:

doSomething(val, function(err,result){
  doSomethingElse(result,function(err,res){
    doAbra();
    doKadabra(err, res, function() {
      ...
      ...
    });
  });
});

It’s quite easy to end up with very deep nesting that, in my opinion, sometimes affects code readability in a negative way. But compared to what gets said about PHP, that’s very mild critique. And... oh! The third thing that is quite different is that in Node.js, you don’t have to use a separate HTTP(S) server. It’s quite common to put Node.js behind Nginx, but that’s not strictly needed. So the heart of a typical Node.js web application is the implementation of the actual web server.

A fair way to compare

So no, it’s not fair to say that we compare Node.js and PHP. What we really compare is Node.js and PHP+Apache2 (or any other HTTP server). For this article, I’ve used Apache2 and mod_php since it’s by far the most common configuration. Some might say that I’d get much better results if I had used Nginx or Lighttpd as the HTTP server for PHP. That’s most likely very true, but at the end of the day, server-side PHP depends on running in multiple separate processes, regardless of whether we create those processes with mod_php, FastCGI or any other mechanism. So I’m sticking with the standard server setup for PHP, and I think that makes good sense.

The testing environment

So we’re pitting PHP+Apache2 against a Node.js based application. To keep things reasonable, I’ve created a very (really, very) simple application in both PHP5 and Node.js. The application will get 50 rows of data from a WordPress installation and output it as a JSON string. That’s it, nothing more. The benefit of keeping it this simple was (a) that I didn’t have to bother about too many implementation details between the two languages and (b), more importantly, that we’re not testing my ability to code, we’re really testing the difference in architecture between the two. The server we’re using for this test is a virtual server with:

  • 1 x core of an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
  • 2 GB RAM
  • OS: 64-bit Ubuntu 12.10, installed fresh before running these tests
  • The Load Impact server metrics agent installed

For the tests, we’re using:

  • Apache/2.2.22
  • PHP 5.4.6.
  • Node.js version 0.8.18 (built using this script)
  • MySQL is version 5.5.29.
  • The data table in the tests is the options table from a random WordPress blog.

The scripts we’re using:

Node.js (javascript):

// Include the http and mysql modules.
var http = require('http'),
mysql = require("mysql");

// Create the connection.
// Connection details default to a fresh MySQL installation and should be changed to match your configuration.
var connection = mysql.createConnection({
   user: "wp",
   password: "****",
   database: "random"
});

// Create the http server.
http.createServer(function (request, response) {
   // Attach listener on end event.
   request.on('end', function () {

      // Query the database.
      connection.query('SELECT * FROM wp_options limit 50;', function (error, rows, fields) {
         response.writeHead(200, {
            'Content-Type': 'text/html'
         });
         // Send data as JSON string.
         // Rows variable holds the result of the query.
         response.end(JSON.stringify(rows));
      });
   });
// Listen on port 8080.
}).listen(8080);

PHP code:

<?php
$db = new PDO('mysql:host=localhost;dbname=*****', 'wp', '*****');
$all = $db->query('SELECT * FROM wp_options limit 50;')->fetchAll();
echo json_encode($all);

The PHP script is obviously much shorter, but on the other hand it doesn’t have to implement a full HTTP server either.

Running the tests

The Load Impact test configurations are also very simple; these two scripts are, after all, typical one-trick ponies, so there’s not much in the way of bells and whistles to use here. To be honest, I was surprised by how many concurrent users I had to use in order to bring the difference out into the light. The test scripts had the following parameters:

  • The ramp up went from 0-500 users in 5 minutes
  • 100% of the traffic comes from one source (Ashburn US)
  • Server metrics agent enabled

The graphics:

In the images below, the lines have the following meanings:
  • Green line: Concurrent users
  • Blue line: Response time
  • Red line: Server CPU usage

Node.js up to 500 users.

The first graph here shows what happens when we load test the Node.js server. The response time (blue) is pretty much constant all through the test. My back-of-the-napkin analysis of the initial outliers is that they have to do with a cold MySQL cache. Now, have a look at the results from the PHP test:

Quite different results. It’s not easy to see in this screenshot, but the blue line is initially stable at 320 ms response time up to about 340 active concurrent users. After that, we first see a small increase in response time, but as additional concurrent users are added, the response time eventually goes through the roof completely.

So what’s wrong with PHP/Apache?

Ok, so what we’re looking at is not very surprising; it’s the difference in architecture between the two solutions. Let’s think about what goes on in each case.

When Apache2 serves up the PHP page, it leaves the PHP execution to a specific child process. That child process can only handle one PHP request at a time, so if there are more requests than that, the others have to wait. On this server, there’s a maximum of 256 clients (MaxClients) configured, versus the 150 that comes standard. Even if it’s possible to increase MaxClients well beyond 256, that will in turn give you a problem with internal memory (RAM). In the end, you need to find the correct balance between the maximum number of concurrent requests and the available server resources.

But for Node, it’s easier. First of all, in calm territory, each request is about 30% faster than for PHP, so in pure performance in this extremely basic setup, Node is quicker. Also going for Node is the fact that everything is in one single process on the server: one process with one active request handling thread. So there’s no inter-process communication between different instances and a ‘mother’ process. Also, per request, Node is much more memory efficient. PHP/Apache needs a lot of PHP and process overhead per concurrent worker/client, while Node will share most of its memory between requests.

Also note that in both these tests, CPU load was never a problem. Even if CPU load varies with the number of concurrent users in both tests, it stays below 5% (and yes, I did not just rely on the graph, I checked it on the server as well). (I’ll write a follow-up to this article at some point when I can include server memory usage as well.) So we haven’t loaded this server into oblivion in any way, we’ve just loaded it hard enough for the PHP/Apache architecture to start showing some of its problems.

So if Node.js is so good…

Well, of course, there are challenges with Node, both technical and cultural. On the technical side, the core design idea in Node of having one process with one thread makes it a bit of a challenge to scale up on a multi-core server. You may have already noted that the test machine uses only one core, which is an unfair advantage to Node. If it had 2 cores, PHP/Apache would have been able to use that, but for Node to do the same, you have to do some tricks (see the sketch below).
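
For completeness, one common approach (a hedged sketch, not a recommendation) is the built-in cluster module, which forks one worker per CPU core and lets the workers share the same listening socket:

var cluster = require('cluster'),
    http = require('http'),
    numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
   // Fork one worker per core.
   for (var i = 0; i < numCPUs; i++) {
      cluster.fork();
   }
} else {
   // Each worker runs its own copy of the server; they share port 8080.
   http.createServer(function (request, response) {
      response.writeHead(200, { 'Content-Type': 'text/plain' });
      response.end('handled by worker ' + process.pid + '\n');
   }).listen(8080);
}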

On the cultural side, PHP is still “everywhere” and Node is not. So if you decide to go with Node, you need to be prepared to do a lot more work yourself; there are simply nowhere near as many coders, web hosts, computer book authors, world-leading CMSes and what have you. With PHP, you never walk alone.

Conclusion

Hopefully, this shows the inherent differences between two different server technologies, one old and trusted and one young and trending. Hopefully it’s also apparent that your core technical choices will affect your server performance and, in the end, how much load you can take. Designing for high load and high scalability begins early in the process, before the first line of code is ever written.

And sure, in real life there are numerous tricks available to reduce the effects seen here. In real life, lots of Facebook still runs on PHP.

WordPress load testing part 3 – Multi language woes

Understanding the effects of memory starvation.

This is the third part in a series of posts about WordPress and performance. In part 1, we took a look at WordPress in general. In part 2 and part 2.5 we reviewed a couple of popular caching plugins that can boost performance. In this part, we’ll start looking at how various plugins can have a negative effect on performance and whether anything can be done about it.

In the comments for one of the previous posts in this series, Yaakov Albietz asked us if we used our own service, www.loadimpact.com, for the tests. I realize that I haven’t been that obvious about it, but yes, absolutely, we’re exclusively using our own service. The cool thing is that so can you! If you’re curious about how your own web site handles load, take it for a spin using our service. It’s free.

We started out by looking for plugins that could have a negative effect on WordPress performance, thinking: what are the typical properties of a badly performing plugin? Not as obvious as one might think. We installed, tested and tinkered with plenty of suspects without finding anything really interesting to report on. But as it happens, a friend of a friend had just installed the WordPress Multi Language plugin and noted some performance issues. Worth taking a look at.

The plugin in question is WordPress Multi Language (WPML). It’s got a high rating in the WordPress community, which makes it even more interesting to have a look at. Said and done, we installed WPML and took it for a spin.

The installation is really straightforward. As long as your file permissions are set up correctly and the WordPress database user has permission to create tables, it’s a 5-6 click process. Install, activate, select a default language and at least one additional language, and you’re done. We were eager to test, so as soon as we had the software in place, we did our first test run on our 10 post WordPress test blog. Here’s the graph:

Average load times 10 to 50 users

Oops! The baseline tests we did for this WordPress installation gave a 1220 ms response time when using 50 concurrent users. We’re looking at something completely different here. At 40 concurrent users we’re getting 2120 ms, and at 50 users we’re all the way up to 5.6 seconds, or 5600 ms. That needs to be examined a bit more.

Our first suspicion was that WPML would put additional load on the MySQL server. Our analysis was actually quite simple. For each page that needs to be rendered, WordPress now has to check whether any of the posts or pages that appear on that page have a translated version for the selected language. WPML handles that magic by hooking into the main WordPress loop. The hook rewrites the MySQL query about to be sent to the database so that instead of a simple “select foo from bar” statement (oversimplified), it’s a more complex JOIN that would typically require more work from the database engine. A prime performance degradation suspect, unless it’s carefully written and matched with sensible indexes.

So we reran the test. While that test was running, we sat down and had a look at the server to see if we could easily spot the problem. In this case, looking at the server means logging in via ssh and running the top command (if it had been a Microsoft Windows box, we’d probably have used the Sysinternals Process Explorer utility) to see what’s there. Typically, we’d want to know if the server is out of CPU power, RAM, or some combination. We were expecting to see the mysqld process consume lots of CPU and verify our thesis above. By just keeping an unscientific eye on top and writing down the rough numbers while the test was running, we saw a very clear trend, but it was not related to heavy mysqld CPU usage:

20 users: 65-75% idle CPU 640 MB free RAM
30 users: 50-55% idle CPU 430 MB free RAM
40 users: 45-50% idle CPU 210 MB free RAM
50 users: 0%   idle CPU  32 MB free RAM

As more and more users were added, we saw CPU usage go up and free memory go down, as one would expect. The interesting thing is that at 50 users we noted that memory was extremely scarce and that the CPU had no idle time at all. Memory consumption increases in a linear fashion, but CPU usage suddenly peaks. That sudden peak in CPU usage was due to swapping. When the server comes to the point where RAM is running low, it’s going to do a lot more swapping to disk, and that takes time and eats CPU. With this background information in place, we just had to see what happened when going beyond 50 users:

That’s very consistent with what we could have expected. Around 50 concurrent users, the server is out of memory and there’s a lot of swapping going on. Increasing the load above 50 users will make the situation even worse. Looking at top during the later stages of this test confirms the picture. The kswapd process is using 66% of the server’s CPU resources, and there’s a steady queue of apache2 processes waiting to get their share. And let’s also notice that mysqld is nowhere to be seen (yes, this image is only showing the first 8 processes, you just have to take my word for it).

 

The results from this series of tests are not WPML specific but universal. As we put more and more stress on the web server, both memory and CPU consumption will rise. At some point we will reach the limit of what the server can handle, and something has to give. When it does, any linear behavior we may have observed will most likely change into something completely different.

There isn’t anything wrong with WPML, quite the opposite. It’s a great tool for anyone who wants a multi-language website managed by one of the easiest content management systems out there. But it adds functionality to WordPress, and in order to do so, it uses more server resources. It seems WPML is heavier on memory than on CPU, so we ran out of memory first. It’s also interesting to see that WPML is actually quite friendly to the database; at no point during our tests did we see MySQL consume noticeable amounts of CPU.

 

Conclusion 1: If you’re interested in using WPML on your site, make sure you have enough server RAM. Experience of memory requirements from “plain” WordPress will not apply. From the top screenshot above, we conclude that one apache2 instance running WordPress + WPML will consume roughly 17 MB of RAM. We haven’t examined how that differs with the number of posts, number of comments etc., so let’s use 20 MB as an estimate. If your server is set up to handle 50 such processes at the same time, you’re looking at 1000 MB just for Apache. So bring out your calculators and work out how much memory you will need for your server by multiplying the peak number of concurrent users you expect by 20 MB.

Conclusion 2: This blog post turned out a little different than we first expected; instead of blaming poor database design, we ended up realizing that we were watching a classic case of memory starvation. As it turned out, we also showed how we could use our load testing service as a reliable source of traffic volume to create an environment where we could watch the problem as it happens. Good stuff, which will appear as a separate blog post shortly.

 

Feedback

We want to know what you think. Are there any other specific plugins that you want to see tested? Should we focus on tests with more users, more posts in the blog, more comments? Please comment on this post and tell us what you think.

 

WordPress load test part 2 – amendment

This is the second and a half part in a series of posts about WordPress and performance. In part 1, we took a look at WordPress in general. In part 2 we reviewed a couple of popular caching plugins that can boost performance. In this follow-up post, we’ll tell you a bit about what we learned after part 2 was published.

 

Revisiting W3 Total Cache

After publishing our results in part 2, we received a concerned Twitter message from Frederick Townes, the W3 Total Cache (W3TC) author. He thought we had done something wrong, since the disk enhanced cache mechanism used in W3TC should be at least as effective as the WordPress Super Cache plugin (WPSC). After a brief Twitter discussion, we understood that he was absolutely right. The mod_rewrite magic that WPSC uses to achieve its amazing results was indeed present in W3TC as well (and I might as well add that the mod_rewrite rules added by Frederick’s plugin are neater than the ones added by the test winner).

The mistake we made in our first test is that we didn’t realize the significant difference between the two disk based page caching methods available. There’s “Basic” caching, which is the one we tested, and there’s “Enhanced” mode. In Basic mode, W3TC works pretty much the same way as the standard wp-cache plugin, which involves invoking a PHP script. In our server benchmark, we’ve already seen that our server consumes in the region of 80 ms doing that, so we’re glad if we can avoid it in the elegant manner that WordPress Super Cache does.

In Enhanced mode, surprise surprise, avoiding invoking PHP is exactly what W3 Total Cache will do. The basic concept is the same in W3TC and WPSC: both plugins add a bunch of lines to the .htaccess file that tell Apache2 to go look for a static (a.k.a. cached) copy of the requested file/resource. And as noted above, W3TC does this with a slightly more elegant addition to .htaccess. In our defense, even though the documentation provided with W3TC is good, we didn’t find anything in particular that explained this significant difference between Basic and Enhanced.

How Load Impact can affect the results

Naturally, we took W3TC back to the lab to see how fast it is in Enhanced mode. But before telling you about the results, we want to explain a few details about how Load Impact works. When we ask Load Impact to simulate the load of 50 concurrent web users, that is exactly what Load Impact will do. The second the test starts, exactly 50 virtual users will load the page at the same time, and Load Impact will note how long it takes before the web server responds and the content is completely transferred. Then, each virtual user will make a random pause and try again. Depending on the accuracy settings for the test, this will be repeated over and over again. In a “Fast” test, there will be very few repetitions, and in an “Accurate” test there will be lots and lots of repetitions. The more repetitions, the more data points to use when calculating the average load time. This configuration setting allows users to prioritize test completion time over accuracy if they want to.

This behavior actually has some impact when testing cache plugins. When we test, the first time 50 virtual users come storming to the test web server at once, Apache will fire up as many child processes as it is configured for, 30 in our case. All of these processes will go look in the cache and quite likely discover that there is no cached version of the requested file. So PHP is invoked, WordPress generates the page, and the cache plugin kicks in and stores the rendered version of the page in the cache. Not only does creating the cached version take more time than a normal page request does; in our scenario, there’s a risk that this is done up to 30 times. And to make things even worse, 30 child processes writing to a file based cache at exactly the same time will cause a lot of file locking problems that end up taking even more time.

The conclusion is that depending on the state of the cache when the test begins, the response time of the first 30 data points may vary. And this is exactly what we saw when we took W3 Total Cache back to the lab.

Testing W3 Total Cache again

We retested W3TC and arrived at these numbers:

WordPress baseline: 1220 ms

W3 Total Cache (basic disk): 256 ms (-79.0%)

W3 Total Cache (enhanced disk): 188 ms (-84.6%)

That’s a solid improvement, so we contacted Frederick again with the good news, only to be turned down again: “something is still wrong”, he told us. So we redid the test for Enhanced mode over and over again with minor tweaks to the W3TC settings. After every tweak, we cleared the cache so that any cached page specifics wouldn’t interfere with the current settings. We saw slightly higher average load times as well as slightly lower, but we were never close to the 112 ms record set when using the WordPress Super Cache plugin. Until the “warm vs cold” cache issue hit us and we did a test with a warm cache. And boom! The average load time decreased all the way down to 109 ms, better than what WPSC would achieve. So let’s add how W3TC performs using enhanced disk caching:

Using Enhanced disk cache:

Average load time 50 users: 109 ms

Baseline difference: -1111 ms

Baseline difference %: -91.1%

 

 

Summary

Results

Before updating the results table, we retested the other results as well, but the numbers we ended up with in the retests were all within 5 ms of the original test results, so we’re sticking with the results from our first round of tests. But we’re now rounding the percentages to just two significant figures:

Plugin                           Avg. load time (ms)   Difference (ms)   Difference %
Standard WordPress               1220                  0                 0 %
wp-cache                         210                   -1010             -83 %
batcache                         537                   -683              -56 %
WP Super Cache                   112                   -1108             -91 %
W3 Total Cache (disk basic)      256                   -964              -79 %
W3 Total Cache (disk enhanced)   109                   -1111             -91 %
W3 Total Cache (memcache)        367                   -853              -70 %

 
That’s it.

WordPress load test part 2

NOTE: This post was updated after it was first published. Please click here for explanation.

This is the second part in a series of posts about WordPress and performance. In part 1, we took a look at WordPress in general. In this part, we’ll continue the investigation and look at a few specific plugins that can help improve performance.

First things first: in part 1, we used an 8 GB quad-core server for the tests. From now on, we’ve moved to a KVM virtual server. The main benefit of that is that we can change the machine configuration when something interesting is discovered. For instance, if we discover a performance problem and suspect RAM to be the bottleneck, we can add memory to the machine and rerun the test. The obvious downside is that the baseline established in part 1 isn’t valid anymore. So the first task is to examine how this virtual machine handles load as described in part 1.

The base configuration for the virtual server is 2 CPU cores running at 2.1 GHz with 1024 MB of RAM. The OS is Ubuntu JEOS upgraded to 9.04. Apache2 is at version ___, PHP5 is at version ___. The MySQL server is located on the same machine and is running 5.xxx. WordPress is upgraded to version 2.9.1.

The baselines: a simple PHP script that sends 10 bytes of data back to the user has an average load time of 85 ms when running 80 concurrent users. That’s actually pretty much the same number as we saw on the 8 GB quad-core machine, where we had 80.9 ms.

The next thing we looked at in the first part was the average load time for a basic, empty WordPress install. On the quad-core box, we saw an average load time of 357 ms for 80 users. On the virtual machine, not so good. A ramp-up test going from 50 to 80 concurrent users shows load times at 691 ms for 50 users and more or less infinite at 60 users. At that load level, the kswapd process was eating a good 66% of all available CPU, meaning that the server spent most of its time swapping pages back and forth between RAM and disk. Even though nothing actually crashed, we aborted the test and concluded that the current config can’t handle more than 50 concurrent users.

For the final baseline test we added 10 posts to the WordPress install and made a new measurement. On our virtual machine, 50 users gave us a load time of 1220 ms; the same load on the quad-core machine gave us 470 ms response times. Clearly, taking away 2 processor cores and slashing the RAM to an eighth affects average load times badly, which is not surprising at all. Anyway, we now know that our current test environment is unlikely to handle more than 50 concurrent users, and we also know what happens if we add RAM and/or CPU cores.

 

Tweaking WordPress performance

There are numerous ways to increase WordPress performance, and we’ll have a look at how the numbers get affected in this particular installation. Now, WordPress wouldn’t be WordPress if the most interesting performance tweaks weren’t already packaged as easy-to-use plugins, so instead of digging deep into the WordPress core, we ended up evaluating a set of interesting plugins. Here they are:

wp-cache plugin

The wp-cache plugin has become a very popular way to add a cache to WordPress. WordPress used to have a built-in object cache, but that got cancelled in WordPress 2.5. So today, the wp-cache plugin is one of the most obvious plugins that come to mind when wanting to tweak WordPress performance (and yes, we’ll look at wp-super-cache as well). The test result with wp-cache is very good. As we’ve seen above, this server needs 85 ms to serve the simplest possible PHP script, and the wp-cache plugin gets us fairly close to that ideal number.

Average load time 50 users: 210 ms

Baseline difference: -1010 ms

Baseline difference %: -82.9%

 

batcache plugin

Batcache was written to help WordPress.com cope with the massive and prolonged traffic spike on Gizmodo’s live blog during Apple events. Live blogs were famous for failing under the load of traffic; Gizmodo’s live blog stays up because of Batcache. The developers of Batcache actually refer to WP Super Cache themselves as a better alternative, but in some cases, with multiple servers and where memcached is available, Batcache may be a better solution. The performance gains with Batcache are actually not up to par with what wp-cache or WP Super Cache deliver, but it’s still a lot better than a standard WordPress install.

Average load time 50 users: 537 ms

Baseline difference: -683 ms

Baseline difference %: -56.0%

 

WP Super Cache plugin

The WP Super Cache plugin takes things a few steps further compared to the standard wp-cache. Most notably, by using a set of Apache2 mod_rewrite rules, WP Super Cache is able to serve most of your WordPress content without ever invoking the PHP engine; instead, the content is served at the same speed as static content such as graphics or JavaScript files. Installing this plugin is a little bit more complicated, and it requires both the mod_headers and mod_expires Apache2 modules to be enabled. But once installed, it really works, just look at the numbers! If the WP Super Cache plugin works on your server, it’s probably the easiest and most powerful way to boost your WordPress performance numbers. And if it doesn’t work as intended on your server, the good thing is that it reverts back to the functionality provided by the standard wp-cache plugin.

Average load time 50 users: 112 ms

Baseline difference: -1108 ms

Baseline difference %: -90.8%

 

 

W3 Total Cache plugin

The W3 Total Cache plugin is a powerful plugin that takes the best from wp-cache and batcache and adds a few additional features to improve performance. W3 Total Cache allows the user to choose between disk and memory based caching (using memcached). It also supports minifying HTML, JS and CSS files, as well as various types of HTTP compression (deflate, gzip etc.). Finally, W3 Total Cache supports placing content on a content delivery network (CDN), which can speed up loading of static content even further. W3 Total Cache has a lot of configuration options, and we did not take the time to fully investigate them all. We did test the performance difference between disk based caching and memory based caching, and the difference is actually notable. We enabled minifying and compression, but we’ve pretty much used everything else ‘out of the box’.

Using disk cache:

Average load time 50 users: 256 ms

Baseline difference: -964 ms

Baseline difference %: -79.0%

 

Using memory cache:

Average load time 50 users: 367 ms

Baseline difference: -853 ms

Baseline difference %: -70.0%

Summary

Results

NOTE: This table was updated after it was first published. Please click here for explanation.

Plugin                           Avg. load time (ms)   Difference (ms)   Difference %
Standard WordPress               1220                  0                 0 %
wp-cache                         210                   -1010             -83 %
batcache                         537                   -683              -56 %
WP Super Cache                   112                   -1108             -91 %
W3 Total Cache (disk basic)      256                   -964              -79 %
W3 Total Cache (disk enhanced)   109                   -1111             -91 %
W3 Total Cache (memcache)        367                   -853              -70 %

 

Conclusions

The various performance related plugins for WordPress all revolve around caching. The most impressive results were achieved using WP Super Cache and W3 Total Cache. Among the other plugins, the choice is between disk based caching and memcached based caching. Our tests actually show that disk is faster, but that’s something that needs to be explored further. The tests have been done on a blog with very little data in it, and Linux uses a fair amount of disk caching that is probably more effective with these particular amounts of data. Whenever WP Super Cache is not possible to use (or simply feels too exotic for you), we suspect that a perfectly tuned W3 Total Cache is the best choice. W3 Total Cache shows the most potential for tuning, and we like the overall ‘look-and-feel’ of it. UPDATE: Actually, after retesting W3 Total Cache, we think it may be an even better alternative than WP Super Cache. The one negative thing we’ve picked up so far is a potential compatibility issue with WordPress Multi User (WPMU), but we have not been able to confirm that.

 

Feedback

We want to know what you think. Are there any other specific plugins that you want to see tested? Should we focus on tests with more users, more posts in the blog, more comments? Please comment on this post and tell us what you think.

 

 

 

Load testing WordPress

This is the first part in a series of posts about WordPress and performance. In part 1, we’ll look at WordPress in general; in later instalments, we’ll look at how performance is affected by popular plugins and tweaks. (click here for part 2)

WordPress is probably the most popular blogging platform out there today, powering countless blogs and other web sites. WordPress was first released back in 200x and quickly became a popular tool for bloggers. Part of its success is due to the fact that it’s remarkably easy to install and configure. Another big hit with WordPress is the community of users that contribute to the development by creating plugins. There are plugins for just about anything: displaying Google ads, search engine optimization, statistics, integration with social media, just to name a few.

There are also downsides to WordPress, but the one that interests us the most is performance. WordPress was once known to have lacklustre performance properties. It especially had big problems handling a lot of concurrent users. Imagine the disappointment of a young and aspiring blogger who writes endless amounts of excellent blog posts without being able to reach a bigger crowd. When he finally catches a break and gets that link from news.com, the WordPress powered blog dies under the pressure, and before the blog is back up again, that link from news.com is yesterday’s news.

But WordPress has gotten better. The default installation is faster out of the box, and there are numerous tips, tricks and guides on how to optimize WordPress performance beyond what should be possible. And of course, there are also plugins that help WordPress perform better. Our mission is to find out what the state of WordPress performance is today. Let’s start.

The tools

The tools we brought to the lab to do this series of tests are fairly simple. We have an out of the box WordPress 2.8.6 blog installed on a single server. The server runs Ubuntu Linux 9.04 on an Intel quad-core 2.1 GHz machine with 8 GB of RAM. The web server is the standard Apache2 that comes with Ubuntu, and the database server is MySQL 5.1, located on the same machine. PHP is at version 5.2.10. And the most important piece we brought was naturally a LoadImpact.com account to run the tests.

 

Establish a baseline for the server

There are a lot of moving parts in a setup like this. We first want to establish a baseline that tells us the maximum possible throughput of a PHP page in this specific setup. To do that, we created a simple PHP script that echoes exactly 10 bytes of data back to the browser. By load testing this simple script, we get an understanding of how much of the page load time is spent just on sending requests back and forth over the Internet, how well Apache2 can fire up the PHP processes, and how much time PHP needs to initialize itself.

The baseline test, and all the other tests we will be running, is a ramp-up from 50 to 80 concurrent users. This is what the graph from the test looks like:

Base line test. The performance of the server itself

As you can see, the response times actually get better with more concurrent users (that’s caching); overall they stay at or under 100 ms. So before putting WordPress into the picture, we see response times at just under 100 ms. That’s the best possible response time we could ever achieve with PHP at this load level on this particular server located at this particular IP.

 

Establish a baseline for WordPress

Ok, the next step is to see what we get when we install WordPress. A standard WordPress install will first and foremost run a whole lot more code than the previous script. It also connects to the database and looks for blog posts, comments and a lot of metadata, such as what categories are in use, etc. So naturally we expect to see longer response times. We placed the same amount of load on the front page of the WordPress installation as we did on empty.php. Here’s what an empty WordPress blog looks like:

Performance when WordPress is installed

 

The response times now begin at just over 300 ms at 50 concurrent users, and at 80 the response times are just over 350 ms. But that’s not very interesting; we need to add a few posts so that the scripts and database get some actual work to do. Here’s what the graph looks like when 10 posts are added to the blog:

WordPress performance with 10 posts added

That’s a bit more interesting. The response times now start at 425 ms and dip down to 364 ms at 60 concurrent users (MySQL caching is our best guess here). At 70 and 80 concurrent users, the response times start rising quite sharply, to 439 ms and 601 ms respectively. That actually starts to look like a “knee”, the point where performance starts to really degrade and the server risks grinding to a halt. Let’s see what happens if we add even more load:

WordPress load test with more clients

Yes indeed. With more clients, the response times increase even more, as expected.

In absolute numbers, we’re still talking about very good response times here, even if this test is using a server with more CPU and RAM than the typical WordPress installation has exclusive access to. And we are also looking at fairly high load levels. Getting 150 concurrent users on your blog is not going to happen to a lot of people, and maintaining a response time of well under 2 s is not bad at all.

The second thing to notice is that what we first suspected was a “knee” in the response time chart between 60 and 70 users does not look like a knee at all anymore. The response times increase more or less proportionally to the load, which is quite good. A really, really high performing web site would display a more or less flat line at these load levels, but our setup is nowhere near that.

 

Conclusion

We’ve established a baseline for WordPress performance. We’re going to keep testing this setup with various types of tweaks and write about it. The next part of this series will look at different plugins and how they affect performance; we’ve already tested a few of the most popular ones, and some of them do affect performance quite a bit.

Feedback

We want to know what you think. Are there any other specific plugins that you want to see tested? Should we focus on tests with more users, more posts in the blog, more comments? Please comment on this post and tell us.

 

(click here for part 2)

 

About Load Impact

Load Impact is the leading cloud-based load testing software trusted by over 123,000 website, mobile app and API developers worldwide.

Companies like JWT, NASDAQ, The European Space Agency and ServiceNow have used Load Impact to detect, predict, and analyze performance problems.
 
Load Impact requires no download or installation, is completely free to try, and users can start a test with just one click.
 
Test your website, app or API at loadimpact.com
