At work we are fairly close to releasing a complete rewrite of a popular community site. One thing still to be done is performance testing in the production environment. Previously I've used JMeter and NeoLoad for that, but this time those are not valid options.
We wanted something simpler, something less resource-hungry on the client side, so we sat down and looked for other options. I've played with Apache Bench (ab) before, and it felt fit for our purpose. Another team member has used siege before, so that went on our list as well. httperf we just threw into the soup to broaden our options.
All of these are simple command-line tools that perform HTTP requests against a given server, so from the user's point of view they are much the same.
To get some idea of what kind of testing rig we would need, and whether there are any performance differences between the tools, I decided to spend some time testing them.
Server:

- Windows 7 64 bit
- AMD Athlon II X4 630 @ 2.8 GHz
- 4 GB of RAM
- Apache 2.4.2

Client:

- Ubuntu 12.04 64 bit
- Intel Core i3-2100 @ 3.10 GHz x 4
- 4 GB of RAM

Network:

- D-Link DIR 665 switch
Both client and server were hooked to the switch through consumer-grade gigabit NICs using Cat 5e cables; the switch itself was also a consumer-grade gigabit model. I did the test at home using my own equipment, which is why no professional-level devices were used.
Once the networking was done, I verified that the switch did not do anything funny like route all the traffic through the Internet:
C:\Users\emma>ping 192.168.1.91

Pinging 192.168.1.91 with 32 bytes of data:
Reply from 192.168.1.91: bytes=32 time<1ms TTL=64
Reply from 192.168.1.91: bytes=32 time<1ms TTL=64
Reply from 192.168.1.91: bytes=32 time<1ms TTL=64
Reply from 192.168.1.91: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.1.91:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\Users\emma>tracert 192.168.1.91

Tracing route to ubuntu.lan [192.168.1.91]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  ubuntu.lan [192.168.1.91]

Trace complete.

C:\Users\emma>
ab, ver 2.3
ab -n 100000 -c 100 http://192.168.1.92/
This translates to 100 concurrent users performing 100000 GETs in total against the given URL, so each user makes 1000 requests.
httperf, ver 0.9.0
httperf --server 192.168.1.92 --wsess 100,1000,0 --burst-length 100 --hog
This means 100 sessions of 1000 GETs each, so 100000 requests in total. --burst-length 100 sends the calls in bursts of 100, and --hog lets httperf use as many client ports as it needs.
siege, ver 2.70
siege 192.168.1.92 -r1000 -c100 -d0 > /dev/null
Again 100 users and 1000 requests each, 100000 in total. Since siege wants to print stuff while performing the tests, and printing to the console can be slow, I redirected those prints to /dev/null.
Before the first test and between each test, I cleared all Apache logs, rebooted all the devices, and performed one GET from the client to the server, just to make sure everything was OK.
I ran the tests with an empty file, and with a 75 kB HTML file.
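The test documents can be generated with something like the following; the filenames are my assumption, only the sizes matter (and since the server in my setup was Windows, the actual files were created there rather than with these GNU coreutils commands):

```shell
# Hypothetical commands to create the two test documents.
: > empty.html              # zero-byte file
truncate -s 75K page.html   # 76800 bytes of zeros, roughly the 75 kB document
ls -l empty.html page.html  # confirm the sizes
```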
The first number is the time in seconds it took to run the test, and the second is the number of requests per second achieved. Both numbers are as reported by each tool.
|         | empty file              | 75 kB file               |
|---------|-------------------------|--------------------------|
| ab      | 64.447 s, 1551.67 req/s | 83.489 s, 1197.77 req/s  |
| httperf | 25.928 s, 4612.1 req/s  | 120.948 s, 1004.7 req/s  |
| siege   | 65.72 s, 1521.61 req/s  | 83.32 s, 1200.19 req/s   |
It looks like ab and siege perform pretty much the same, while httperf's performance depends heavily on the file size. Personally I think ab gives easier-to-read results than siege, but I can't find any big differences between these tools.
EDIT [2012-11-16]: Wrong file size
Initially I wrote that the larger file was 0.5 MB, but that was not true. There is no way a 0.5 MB file could be transferred over 1000 times per second on a gigabit network. The file was actually 75 kB.
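The correction is easy to sanity-check with shell arithmetic: at roughly 1200 requests per second, a 75 kB body fits comfortably inside gigabit Ethernet, while a 0.5 MB body would need several times the link's capacity (payload only, headers ignored):

```shell
# Implied payload throughput in bits per second at ~1200 req/s:
echo $(( 75 * 1000 * 1200 * 8 ))    # 75 kB body  -> 720000000, ~720 Mbit/s: fits
echo $(( 500 * 1000 * 1200 * 8 ))   # 0.5 MB body -> 4800000000, ~4.8 Gbit/s: impossible
```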