public inbox for linux-kernel@vger.kernel.org
From: Con Kolivas <conman@kolivas.net>
To: Andrew Morton <akpm@digeo.com>
Cc: linux-kernel@vger.kernel.org, rcastro@ime.usp.br,
	ciarrocchi@linuxmail.org
Subject: load additions to contest
Date: Sun, 6 Oct 2002 15:38:08 +1000	[thread overview]
Message-ID: <200210061538.43778.conman@kolivas.net> (raw)
In-Reply-To: <3D9F3A52.4FB46701@digeo.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I've added some load conditions to an experimental version of contest 
(http://contest.kolivas.net) and here are some of the results I've obtained 
so far:

First, an explanation of the columns:
Time  - how long the kernel compile took in the presence of the load
CPU%  - the percentage of the CPU the kernel compile used during compilation
Loads - how many times the load completed while the kernel compile was running
LCPU% - the percentage of the CPU the load used while running
Ratio - the ratio of kernel compilation time to the reference (2.4.19)

Use a fixed width font to see the tables correctly.
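The Ratio column is the compile time under load divided by a no-load reference compile time. A minimal sketch of that arithmetic follows; the 67.2s reference value is back-calculated from the tarc table (88.0 / 1.31) and is an assumption, not a figure stated in this post:

```shell
# Ratio = compile time under load / reference compile time.
ref=67.2                # assumed no-load 2.4.19 compile time (back-calculated)
time_under_load=88.0    # 2.4.19 under tarc_load, from the table below
ratio=$(awk -v t="$time_under_load" -v r="$ref" 'BEGIN { printf "%.2f", t / r }')
echo "ratio: $ratio"    # matches the 1.31 in the tarc table
```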

tarc_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.4.19 [2]              88.0    74      50      25      1.31
2.4.19-cc [1]           86.1    78      51      26      1.28
2.5.38 [1]              91.8    74      46      22      1.37
2.5.39 [1]              94.4    71      58      27      1.41
2.5.40 [1]              95.0    71      59      27      1.41
2.5.40-mm1 [1]          93.8    72      56      26      1.40

This load repeatedly creates a tar of the include directory of the Linux 
kernel. A decrease in performance is visible at 2.5.38 without a concomitant 
increase in loads completed, but this had improved by 2.5.39.
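A minimal shell sketch of what a tarc-style load does (not contest's actual implementation): repeatedly archive a directory tree and count completed passes. A small temporary tree stands in for the kernel's include/ directory, and the loop is capped rather than running until the compile finishes:

```shell
# Build a small stand-in tree to archive.
src=$(mktemp -d)
for i in 1 2 3; do echo "data $i" > "$src/file$i"; done

loads=0
while [ "$loads" -lt 5 ]; do            # contest loops until the compile ends
    tar cf /dev/null -C "$src" .        # archive pass; output discarded
    loads=$((loads + 1))
done
echo "loads completed: $loads"
rm -rf "$src"
```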


tarx_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.4.19 [2]              87.6    74      13      24      1.30
2.4.19-cc [1]           81.5    80      12      24      1.21
2.5.38 [1]              296.5   23      54      28      4.41
2.5.39 [1]              108.2   64      9       12      1.61
2.5.40 [1]              107.0   64      8       11      1.59
2.5.40-mm1 [1]          120.5   58      12      16      1.79

This load repeatedly extracts a tar of the include directory of the Linux 
kernel. The compressed-cache kernel shows a performance boost, consistent 
with this data being cached better (less I/O). 2.5.38 shows very heavy 
writing and pays a performance penalty for it. All the 2.5 kernels perform 
worse than the 2.4 kernels: the kernel compile takes longer even though the 
amount of work done by the load has decreased.
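A tarx-style load can be sketched in shell as follows (again a hypothetical stand-in, not contest's code): build one tarball, then extract it over and over, exercising the write path each pass:

```shell
# Build a small tree and tarball once, then extract repeatedly.
src=$(mktemp -d); dst=$(mktemp -d); ball=$(mktemp)
echo "header contents" > "$src/test.h"
tar cf "$ball" -C "$src" .

loads=0
while [ "$loads" -lt 3 ]; do        # capped here; contest runs until the compile ends
    tar xf "$ball" -C "$dst"        # each pass rewrites the extracted files
    loads=$((loads + 1))
done
echo "extractions: $loads"
rm -rf "$src" "$dst" "$ball"
```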


read_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.4.19 [2]              134.1   54      14      5       2.00
2.4.19-cc [2]           92.5    72      22      20      1.38
2.5.38 [2]              100.5   76      9       5       1.50
2.5.39 [2]              101.3   74      14      6       1.51
2.5.40 [1]              101.5   73      13      5       1.51
2.5.40-mm1 [1]          104.5   74      9       5       1.56

This load repeatedly copies a file the size of physical memory to 
/dev/null. Compressed caching shows the benefit of holding more of this 
data in physical RAM; the caveat is that this data would be simple to 
compress, so the advantage is overstated. The 2.5 kernels show equivalent 
performance at 2.5.38 (time is down, but so is the work done by the load) 
and better performance at 2.5.39-40 (time is down with an equivalent amount 
of load work performed). 2.5.40-mm1 seems to exhibit the same performance 
as 2.5.38.
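A read_load-style workload can be sketched like this (a hypothetical illustration; contest sizes the file to physical memory, whereas a 1 MiB file stands in here so the sketch runs anywhere):

```shell
# Create a file and repeatedly stream it to /dev/null.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=1024 2>/dev/null   # 1 MiB stand-in

reads=0
while [ "$reads" -lt 4 ]; do
    cat "$f" > /dev/null             # sequential read, served from cache if it fits
    reads=$((reads + 1))
done
echo "reads: $reads"
rm -f "$f"
```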


lslr_load:
Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
2.4.19 [2]              83.1    77      34      24      1.24
2.4.19-cc [1]           82.8    79      34      24      1.23
2.5.38 [1]              74.8    89      16      13      1.11
2.5.39 [1]              76.7    88      18      14      1.14
2.5.40 [1]              74.9    89      15      12      1.12
2.5.40-mm1 [1]          76.0    89      15      12      1.13

This load repeatedly runs `ls -lR >/dev/null`. Overall performance seems 
similar across kernels, with the bias towards the kernel compilation 
completing sooner.
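The lslr load is the simplest to sketch: repeat `ls -lR >/dev/null`, which stresses directory and inode lookups rather than data reads. A small temporary tree is used here; the real load would walk a larger directory:

```shell
# Build a tiny tree, then walk it repeatedly with ls -lR.
top=$(mktemp -d)
mkdir -p "$top/a/b" && touch "$top/a/b/f"

passes=0
while [ "$passes" -lt 3 ]; do
    ls -lR "$top" > /dev/null        # metadata-heavy, little data I/O
    passes=$((passes + 1))
done
echo "passes: $passes"
rm -rf "$top"
```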

These loads, suggested by AKPM, were very interesting to conduct, and 
depending on the feedback I receive I will probably incorporate them into 
contest.

Comments?
Con 
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.0.7 (GNU/Linux)

iD8DBQE9n8xCF6dfvkL3i1gRAqQMAJwJ1lgYI0ebW1yw7frZt7lncYBFVQCeIsYN
NNgrrWyrqTWGLO11IlxtyPs=
=Ldnh
-----END PGP SIGNATURE-----


Thread overview: 8+ messages
2002-10-05 18:28 [BENCHMARK] contest 0.50 results to date Paolo Ciarrocchi
2002-10-05 19:15 ` Andrew Morton
2002-10-05 20:56   ` Rodrigo Souza de Castro
2002-10-06  1:03   ` Con Kolivas
2002-10-06  5:38   ` Con Kolivas [this message]
2002-10-06  6:11     ` load additions to contest Andrew Morton
2002-10-06  6:56       ` Con Kolivas
2002-10-06 12:07       ` Con Kolivas
