public inbox for linux-kernel@vger.kernel.org
From: Con Kolivas <conman@kolivas.net>
To: Andrew Morton <akpm@digeo.com>
Cc: linux kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: [BENCHMARK] 2.5.44-mm6 contest results
Date: Tue, 29 Oct 2002 20:11:48 +1100	[thread overview]
Message-ID: <1035882708.3dbe50d48e92f@kolivas.net> (raw)
In-Reply-To: <3DBE2EBE.DC860105@digeo.com>

Quoting Andrew Morton <akpm@digeo.com>:

> Con Kolivas wrote:
> > 
> > io_load:
> > Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> > 2.5.44 [3]              873.8   9       69      12      12.24
> > 2.5.44-mm1 [3]          347.3   22      35      15      4.86
> > 2.5.44-mm2 [3]          294.2   28      19      10      4.12
> > 2.5.44-mm4 [3]          358.7   23      25      10      5.02
> > 2.5.44-mm5 [4]          270.7   29      18      11      3.79
> > 2.5.44-mm6 [3]          284.1   28      20      10      3.98
> 
> Jens, I think I prefer fifo_batch=16.  We do need to expose
> these in /somewhere so people can fiddle with them.
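[Archive note: the "/somewhere" Andrew asks for did materialize in later kernels; the deadline I/O scheduler's knobs, including fifo_batch, are exposed per-device under sysfs. A sketch of inspecting and changing them, assuming a disk named sda with the (mq-)deadline scheduler active and root privileges:]

```shell
# fifo_batch: number of requests the deadline scheduler moves to the
# dispatch queue per batch; lower values trade throughput for latency.
cat /sys/block/sda/queue/iosched/fifo_batch

# Try the smaller batch size discussed in this thread (needs root):
echo 16 > /sys/block/sda/queue/iosched/fifo_batch
```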
> 
> >...
> > mem_load:
> > Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> > 2.5.44 [3]              114.3   67      30      2       1.60
> > 2.5.44-mm1 [3]          159.7   47      38      2       2.24
> > 2.5.44-mm2 [3]          116.6   64      29      2       1.63
> > 2.5.44-mm4 [3]          114.9   65      28      2       1.61
> > 2.5.44-mm5 [4]          114.1   65      30      2       1.60
> > 2.5.44-mm6 [3]          226.9   33      50      2       3.18
> > 
> > Mem load has dropped off again
> 
> Well that's one interpretation.  The other is "goody, that pesky
> kernel compile isn't slowing down my important memory-intensive
> whateveritis so much".  It's a tradeoff.
> 
> It appears that this change was caused by increasing the default
> value of /proc/sys/vm/page-cluster from 3 to 4.  I am surprised.
> 
> It was only of small benefit in other tests so I'll ditch that one.
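[Archive note: page-cluster is the log2 of the number of pages read from swap in a single attempt, so the mm6 change doubled swap readahead from 8 to 16 pages. A sketch of checking and reverting it on a running system (needs root; values shown are the ones from this thread):]

```shell
# Current swap readahead exponent: 2^value pages per swap-in.
cat /proc/sys/vm/page-cluster   # 3 means 8 pages, 4 means 16 pages

# Revert to the pre-mm6 default of 3 (8 pages per swap read):
sysctl -w vm.page-cluster=3
```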

I understand the trade-off issue. Since make -j4 bzImage runs four CPU-hungry
processes, my guess is that ideally mem_load should only extend the duration and
drop the compile's CPU share by about 25%.

> (You're still testing with all IO against the same disk, yes?  Please
> remember that things change quite significantly when the swap IO
> or the io_load is against a different device)

Yes I am. Sorry, I just don't have the hardware to do anything else.

Con


Thread overview: 6+ messages
2002-10-29  1:43 [BENCHMARK] 2.5.44-mm6 contest results Con Kolivas
2002-10-29  6:46 ` Andrew Morton
2002-10-29  7:40   ` Jens Axboe
2002-10-29  7:51     ` Andrew Morton
2002-10-29  8:22   ` Giuliano Pochini
2002-10-29  9:11   ` Con Kolivas [this message]
