
Benchmarking web benchmark tools

At work we are fairly close to releasing a complete rewrite of a fairly popular community site. One remaining task is performance testing in the production environment. Previously I've used JMeter and NeoLoad for that, but neither is a valid option this time.

We wanted something simpler and less resource-hungry on the client side, so we sat down and looked for other options. I've played with Apache Bench (ab) before, and it felt fit for our purpose. Another team member has used siege before, so that went on our list as well. httperf we just threw into the soup to broaden our options.

All of these are simple command-line tools that issue HTTP requests against a given server, so from the user's point of view they are much the same.

To get some idea of what kind of testing rig we would need, and whether there are any performance differences between these tools, I decided to spend some time testing them.

The setup

Web server:



Both client and server were hooked to the switch through consumer-grade gigabit NICs using Cat 5e cables; the switch was also a consumer-grade gigabit model. I did the test at home using my own equipment, which is why no professional-level devices were used.

Once the networking was done, I verified that the switch wasn't doing anything funny, such as routing all the traffic through the Internet.


Pinging with 32 bytes of data:
Reply from bytes=32 time<1ms TTL=64
Reply from bytes=32 time<1ms TTL=64
Reply from bytes=32 time<1ms TTL=64
Reply from bytes=32 time<1ms TTL=64

Ping statistics for
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms


Tracing route to ubuntu.lan []
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  ubuntu.lan []

Trace complete.



ab, ver 2.3

ab -n 100000 -c 100

This translates to 100 concurrent users performing 100000 GETs in total against the given URL, so each user makes 1000 requests.
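For reference, ab needs an explicit URL including a path component; a bare host name without even a trailing slash is rejected as an invalid URL. The host below is a placeholder (the article omits the real target), so a full invocation would look something like:

```shell
# Placeholder URL -- not the actual test server from the article.
# ab requires a path, so even the root page needs the trailing slash.
ab -n 100000 -c 100 http://ubuntu.lan/
```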

httperf, ver 0.9.0

httperf --server --wsess 100,1000,0 --burst-length 100 --hog 

This means 100 concurrent sessions, each performing 1000 GETs, so 100000 requests in total.
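The httperf flags are dense enough to be worth decoding. The server name below is a placeholder, since the article omits it; my reading of the options, based on the httperf man page, is:

```shell
# --wsess N,M,T   : N sessions, M calls per session, T seconds of user
#                   think time between bursts
# --burst-length B: issue calls back-to-back in bursts of B
# --hog           : use as many client TCP ports as needed, instead of
#                   the default ephemeral range
httperf --server ubuntu.lan --wsess 100,1000,0 --burst-length 100 --hog
```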

siege, ver 2.70

siege -r1000 -c100 -d0 > /dev/null

Again 100 users and 1000 requests each, 100000 in total. Since siege prints output while running the tests, and printing to the console can be slow, I redirected that output to /dev/null.
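As with the other tools, the target URL is omitted above. siege takes it on the command line, or can pull a list of targets from a file via -f; a hypothetical full invocation:

```shell
# Placeholder URL; alternatively: siege -f urls.txt -r1000 -c100 -d0
siege -r1000 -c100 -d0 http://ubuntu.lan/ > /dev/null
```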


Before the first test and between each test, I cleared all Apache logs, rebooted all the devices, and performed one GET from the client to the server, just to make sure everything was OK.

I ran the tests with an empty file and with a 75 kB HTML file.


The first number is the time in seconds it took to run the test, and the second is the number of requests per second achieved. Both numbers are as reported by each tool.

Results table

             empty file                 75 kB file
    ab       64.447 s, 1551.67 req/s    83.489 s, 1197.77 req/s
    httperf  25.928 s, 4612.1 req/s     120.948 s, 1004.7 req/s
    siege    65.72 s, 1521.61 req/s     83.32 s, 1200.19 req/s
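The reported rates can be sanity-checked: for ab and siege, dividing total requests by the elapsed time reproduces the req/s column to within rounding. The httperf figures don't divide out the same way, presumably because it measures its rate over different intervals than the total wall-clock time.

```shell
# total requests / elapsed seconds, using the empty-file figures above
awk 'BEGIN {
    printf "ab    empty: %.2f req/s\n", 100000 / 64.447
    printf "siege empty: %.2f req/s\n", 100000 / 65.72
}'
```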


It looks like ab and siege perform pretty much the same, while httperf's performance depends heavily on the file size. Personally I find ab's output easier to read than siege's, but I can't find any big differences between these tools.

EDIT [2012-11-16]: Wrong file size

Initially I wrote that the larger file was 0.5 MB, but that was not true. There is no way a 0.5 MB file could be transferred over 1000 times per second on a gigabit network. The file was actually 75 kB.
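The arithmetic behind the correction (using decimal megabytes, i.e. 0.5 MB = 500000 bytes): at roughly 1200 requests per second, a 0.5 MB body would need about 4.8 Gbit/s, while 75 kB fits comfortably inside gigabit Ethernet.

```shell
# throughput in Mbit/s = req/s * body size in bytes * 8 bits / 1e6
awk 'BEGIN {
    printf "0.5 MB: %.0f Mbit/s\n", 1200 * 500000 * 8 / 1e6
    printf "75 kB:  %.0f Mbit/s\n", 1200 * 75000 * 8 / 1e6
}'
```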