From: Roger Luethi <rl@hellgate.ch>
To: William Lee Irwin III <wli@holomorphy.com>,
linux-kernel@vger.kernel.org,
Albert Cahalan <albert@users.sf.net>, Paul Jackson <pj@sgi.com>
Subject: [BENCHMARK] nproc: netlink access to /proc information
Date: Sat, 28 Aug 2004 21:45:46 +0200
Message-ID: <20040828194546.GA25523@k3.hellgate.ch>
In-Reply-To: <20040827162308.GP2793@holomorphy.com>
On Fri, 27 Aug 2004 09:23:08 -0700, William Lee Irwin III wrote:
> than speed benefits. How many processes did you microbenchmark with?

Executive summary: I wrote a benchmark to compare /proc and nproc
performance. The results are as expected: parsing even the simplest
strings is expensive, and /proc performance does not scale when we have
to open and close many files, which is the common case.
For p processes and f fields/files per process, the delivery overhead
is roughly O(p) for nproc and O(p*f) for /proc.
The difference becomes even more pronounced if a /proc read triggers an
expensive in-kernel computation for fields that are part of the file but
not of interest, or if human-readable files need to be parsed.
Benchmark: I chose the most favorable scenario for /proc I could think
of: reading a single, easy-to-parse file per process and finding every
data item in it useful. I picked /proc/pid/statm. For nproc, I chose
seven fields that are calculated at the same resource cost as the
fields in statm: NPROC_VMSIZE, NPROC_VMLOCK, NPROC_VMRSS, NPROC_VMDATA,
NPROC_VMSTACK, NPROC_VMEXE, and NPROC_VMLIB.

Numbers:

* The first run is basically lseek+read:

  /proc/pid/statm for 1000 processes, 1000 times, lseek
  CPU time : 7.080000s
  Wall time: 7.636732s

* The second run adds a simple sscanf call to dump seven values into
  seven variables:

  /proc/pid/statm for 1000 processes, 1000 times, lseek (scanf)
  CPU time : 10.230000s
  Wall time: 10.958432s

* If we watch p processes with f files each, we typically hit the
  file descriptor limit before p * f reaches 1024. From then on, lseek
  is useless; we have to resort to opening and closing files:

  /proc/pid/statm for 1000 processes, 1000 times, open
  CPU time : 14.920000s
  Wall time: 16.087339s

* Again, parsing the string comes at a cost:

  /proc/pid/statm for 1000 processes, 1000 times, open (scanf)
  CPU time : 18.110000s
  Wall time: 19.457451s

* What happens if we need to read two simple /proc files (14 fields)
  per process?

  /proc/pid/statm (2x) for 1000 processes, 1000 times, open (scanf)
  CPU time : 30.250000s
  Wall time: 32.650314s

* 10000 processes at 3 files each (27 fields):

  /proc/pid/statm (3x) for 10000 processes, 1000 times, open (scanf)
  CPU time : 450.630000s
  Wall time: 500.265503s

* nproc delivering said 7 fields:

  nproc for 1000 processes, 1000 times, one process per request
  CPU time : 7.910000s
  Wall time: 8.473371s

* 200 processes per request, but still 1000 reply messages. If we
  stuffed a bunch of them into every message, performance would
  improve further:

  nproc for 1000 processes, 1000 times, 200 processes per request
  CPU time : 6.350000s
  Wall time: 6.817391s

* There's no large penalty if we need additional fields:

  14 nproc fields for 1000 processes, 1000 times, one process per request
  CPU time : 8.680000s
  Wall time: 9.328828s

  27 nproc fields for 10000 processes, 1000 times, one process per request
  CPU time : 88.270000s
  Wall time: 98.664330s

Roger