pramode edited this page Apr 26, 2011 · 3 revisions

Python "fork" test

Fork 1000 python processes:

import os, time

# Fork 1000 children; each child sleeps so it stays resident in memory.
for i in range(1000):
        if os.fork() == 0:          # child process
                time.sleep(10000)
                os._exit(0)         # prevent the child from re-entering the loop

According to "top"/"htop", the 1000 processes consume about 200 MB of RAM in total (tested on Debian 6).
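The per-process figures can be gathered with a short script rather than reading them off "top". This is a sketch of one way to do it, Linux-only since it reads /proc; note that summing per-process RSS double-counts copy-on-write pages shared with the parent, which is one reason totals reported this way look large for processes that do nothing:

```python
import os, signal, time

def fork_sleepers(n, seconds=5):
    """Fork n children that just sleep; return their pids (parent side)."""
    pids = []
    for _ in range(n):
        pid = os.fork()
        if pid == 0:              # child: sleep, then exit without re-entering the loop
            time.sleep(seconds)
            os._exit(0)
        pids.append(pid)
    return pids

def rss_kb(pid):
    """Read VmRSS (resident set size, in kB) from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

pids = fork_sleepers(10)
time.sleep(0.2)                   # let the children settle
total = sum(rss_kb(p) for p in pids)
print(f"10 sleeping children: {total} kB total RSS")
for p in pids:                    # clean up
    os.kill(p, signal.SIGTERM)
    os.waitpid(p, 0)
```

Scaling the 10-child total up by 100 gives a rough idea of what 1000 children would report.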

uwsgi pre-fork test

10 uwsgi processes were started:

~/src/uwsgi-0.9.7.2/uwsgi -H ~/pyland/ -s /tmp/uwsgi.socket -p 10 -C --module hello2 --callable app
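The hello2 module itself is not shown on this page; assuming only that it exposes a WSGI callable named app (per the --module/--callable flags; the actual test used a Pyramid app), a minimal stand-in could look like:

```python
# hello2.py -- hypothetical stand-in for the module loaded by uwsgi.
# uwsgi imports the module named by --module and serves the WSGI
# callable named by --callable.
def app(environ, start_response):
    body = b"hello world!"        # 12 bytes, matching ab's "Document Length" below
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```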

"htop" reports an increase in memory usage of around 15Mb.

Stress testing with "Apache benchmark" tool (ab) on local network

The tool "ab" can be used for stress testing HTTP servers.

We use "ab" to test nginx+uwsgi running a simple "hello world" pyramid app.

Note that in "uwsgi", concurrency is achieved by pre-forking: "uwsgi" does not fork a new worker when a request arrives, so workers must be pre-forked up front if requests are to be handled in parallel.
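The pre-fork model can be sketched in a few lines: the parent opens the listening socket once, forks, and every child blocks in accept() on the inherited socket. This is an illustration of the idea, not uwsgi's actual code:

```python
import os, socket

def prefork_server(host="127.0.0.1", port=0, n_workers=4):
    """Open one listening socket, then fork n_workers children that all
    block in accept() on it -- a simplified sketch of the pre-fork model."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(128)
    pids = []
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:                       # child: serve requests forever
            while True:
                conn, _ = srv.accept()     # kernel hands each connection to one worker
                conn.recv(4096)            # read (and ignore) the request
                conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello world!")
                conn.close()
        pids.append(pid)
    return srv, pids                       # parent keeps the socket and child pids
```

With one worker, requests are strictly serialized; with N workers, up to N requests are in flight at once, which is exactly the behaviour measured below.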

A no-load test, single process

We first test with the simplest of "pyramid" loads - a "GET /" returns a simple "hello world" response. Only one "uwsgi" process is present.

Here is the commandline:

ab -c 500 -n 10000 http://192.168.1.3/

And here is the output:

Server Software:        nginx/1.0.0
Server Hostname:        192.168.1.3
Server Port:            80

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      500
Time taken for tests:   6.868 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1680000 bytes
HTML transferred:       120000 bytes
Requests per second:    1456.04 [#/sec] (mean)
Time per request:       343.398 [ms] (mean)
Time per request:       0.687 [ms] (mean, across all concurrent requests)
Transfer rate:          238.88 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       12  178 492.8    104    3126
Processing:    33  106  20.2    108     236
Waiting:       33  105  20.2    108     235
Total:         54  284 493.7    219    3250

Percentage of the requests served within a certain time (ms)
  50%    219
  66%    220
  75%    220
  80%    221
  90%    223
  95%    240
  98%   3199
  99%   3224
 100%   3250 (longest request)

The key point: about 1456 requests/second were sustained for roughly 7 seconds without a single failed request.
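The headline numbers are self-consistent, which is a quick way to sanity-check any ab run:

```python
# Reproduce ab's derived figures from the raw numbers above.
requests, seconds, concurrency = 10000, 6.868, 500

rps = requests / seconds                    # ab's "Requests per second"
per_request_ms = concurrency / rps * 1000   # ab's first "Time per request" (mean)

print(round(rps, 2), "req/s,", round(per_request_ms, 1), "ms per request")
```

The 343 ms mean latency is simply the 500-deep concurrency queue divided by the throughput; the second "Time per request" line is the same figure divided by the concurrency level again.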

Adding a 0.1 second delay to the pyramid request handler (uwsgi single process)

A time.sleep(.1) was added to the pyramid "GET /" handler to introduce a 0.1 second delay. Now the response rate tops out at around 10 requests/second.

Similar to the above case, but with 10 pre-forked processes

Now we get a response rate of around 100 requests/second.
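Both measurements match a simple back-of-the-envelope model: a worker that blocks for 0.1 s per request can serve at most 10 requests/second, and with blocking workers throughput scales linearly with the number of pre-forked processes:

```python
# Ceiling on throughput for blocking pre-forked workers.
def max_throughput(n_workers, handler_seconds):
    """Each worker handles one request at a time, so the whole pool can
    serve at most n_workers / handler_seconds requests per second."""
    return n_workers / handler_seconds

print(max_throughput(1, 0.1))    # single-process ceiling, ~10 req/s as observed
print(max_throughput(10, 0.1))   # 10-process ceiling, ~100 req/s as observed
```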