public inbox for linux-kernel@vger.kernel.org
From: Jens Axboe <axboe@suse.de>
To: Con Kolivas <conman@kolivas.net>
Cc: Andrea Arcangeli <andrea@suse.de>,
	linux kernel mailing list <linux-kernel@vger.kernel.org>,
	marcelo@conectiva.com.br
Subject: Re: [BENCHMARK] 2.4.{18,19{-ck9},20rc1{-aa1}} with contest
Date: Sun, 10 Nov 2002 11:06:56 +0100	[thread overview]
Message-ID: <20021110100656.GF31134@suse.de> (raw)
In-Reply-To: <200211102058.46883.conman@kolivas.net>

On Sun, Nov 10 2002, Con Kolivas wrote:
> >> Well this is interesting. 2.4.20-rc1 seems to have improved its ability
> >> to do IO work. Unfortunately it is now busy starving the scheduler in the
> >> mean time, much like the 2.5 kernels did before the deadline scheduler was
> >> put in.
> >>
> >> read_load:
> >> Kernel [runs]           Time    CPU%    Loads   LCPU%   Ratio
> >> 2.4.18 [3]              102.3   70      6       3       1.43
> >> 2.4.19 [2]              134.1   54      14      5       1.88
> >> 2.4.19-ck9 [2]          77.4    85      11      9       1.08
> >> 2.4.20-rc1 [3]          173.2   43      20      5       2.43
> >> 2.4.20-rc1aa1 [3]       150.6   51      16      5       2.11
> >
> >What is busy starving the scheduler? This sounds like it's again just an
> >elevator benchmark. I don't buy your scheduler claims; give more
> >explanation or I'll take it as vapourware wording. I very much doubt
> >you can find any single problem in the scheduler in rc1aa2, or that the
> >scheduler in rc1aa1 has a chance to run slower than the one in 2.4.19 in
> >an I/O benchmark. Ok, it still misses the numa algorithm, but that's not a
> >bug, just a missing feature; it'll soon be fixed too, and it doesn't
> >matter for normal smp non-numa machines out there.
> 
> Ok, I fully retract the statement. I should not pass judgement on what part
> of the kernel has changed the benchmark results; I'll just describe what the
> results say. Note, however, that this comment was centred on the results of
> io_load above. Put simply: if I am writing a large file and then try to
> compile the kernel (make -j4 bzImage), it is 16 times slower.

In Con's defence, I think he meant io scheduler starvation and not
process scheduler starvation. Otherwise the following wouldn't make a
lot of sense:

"Unfortunately it is now busy starving the scheduler in the mean time,
much like the 2.5 kernels did before the deadline scheduler was put in."

Indeed, the 2.5 kernels had the exact same io scheduler algorithm as
2.4.20-rc has, so this makes perfect sense from the io scheduler
starvation POV.

There are inherent problems in the 2.4 io scheduler for these types of
workloads; the ugly and nausea-inducing read-latency hack that akpm did
attempts to work around that.
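For readers not following 2.5 development: the core idea of the deadline
scheduler referenced above is that requests are still served in sector order
for throughput (the classic elevator), but each request also carries an
expiry time, and an expired request jumps the queue. The following is only an
illustrative sketch of that idea, not the kernel code; the class, the tick
units, and the expiry constants are all hypothetical:

```python
# Illustrative sketch (NOT the kernel implementation) of deadline-style
# io scheduling: serve requests in sector order for throughput, but give
# each request an expiry time so no request can be starved indefinitely.
import heapq
import itertools


class DeadlineScheduler:
    READ_EXPIRE = 500     # hypothetical expiry, in arbitrary "ticks"
    WRITE_EXPIRE = 5000   # writes tolerate more latency than reads

    def __init__(self):
        self.sector_q = []               # (sector, seq, req): elevator order
        self.fifo_q = []                 # (deadline, seq, req): age order
        self.seq = itertools.count()     # tie-breaker for the heaps

    def add_request(self, sector, is_read, now):
        expire = self.READ_EXPIRE if is_read else self.WRITE_EXPIRE
        req = {"sector": sector, "read": is_read}
        n = next(self.seq)
        heapq.heappush(self.sector_q, (sector, n, req))
        heapq.heappush(self.fifo_q, (now + expire, n, req))
        return req

    def dispatch(self, now):
        if not self.sector_q:
            return None
        # If the oldest request has blown its deadline, serve it first,
        # regardless of where the elevator head is.
        deadline, _, req = self.fifo_q[0]
        if deadline <= now:
            heapq.heappop(self.fifo_q)
            self.sector_q = [e for e in self.sector_q if e[2] is not req]
            heapq.heapify(self.sector_q)
            return req
        # Otherwise, pick the lowest sector for seek efficiency.
        _, _, req = heapq.heappop(self.sector_q)
        self.fifo_q = [e for e in self.fifo_q if e[2] is not req]
        heapq.heapify(self.fifo_q)
        return req
```

A pure elevator would keep serving the low-sector writes and let a read at a
distant sector wait forever; here, once the read's deadline passes, it is
dispatched next, which is exactly the starvation bound being discussed.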

Andrea is obviously talking about the process scheduler; note the numa
reference among other things.

-- 
Jens Axboe


Thread overview: 47+ messages
2002-11-09  2:00 [BENCHMARK] 2.4.{18,19{-ck9},20rc1{-aa1}} with contest Con Kolivas
2002-11-09  2:36 ` Andrew Morton
2002-11-09  3:26   ` Con Kolivas
2002-11-09  4:15     ` Andrew Morton
2002-11-09  5:12       ` Con Kolivas
2002-11-09 11:21         ` Jens Axboe
2002-11-09 13:09           ` Con Kolivas
2002-11-09 13:35             ` Stephen Lord
2002-11-09 13:54             ` Jens Axboe
2002-11-09 21:12               ` Arador
2002-11-10  2:26                 ` Andrea Arcangeli
2002-11-09 21:53               ` Con Kolivas
2002-11-10 10:09                 ` Jens Axboe
2002-11-10 16:23                   ` Andrea Arcangeli
2002-11-11  4:26                   ` Con Kolivas
2002-11-10 10:12               ` Kjartan Maraas
2002-11-10 10:17                 ` Jens Axboe
2002-11-10 16:27                 ` Andrea Arcangeli
2002-11-09 11:20       ` Jens Axboe
2002-11-10  2:44 ` Andrea Arcangeli
2002-11-10  3:56   ` Matt Reppert
2002-11-10  9:58   ` Con Kolivas
2002-11-10 10:06     ` Jens Axboe [this message]
2002-11-10 16:21       ` Andrea Arcangeli
2002-11-10 16:20     ` Andrea Arcangeli
2002-11-10 19:32   ` Rik van Riel
2002-11-10 20:10     ` Andrea Arcangeli
2002-11-10 20:52       ` Andrew Morton
2002-11-10 21:05         ` Rik van Riel
2002-11-11  1:54           ` Andrea Arcangeli
2002-11-11  4:03             ` Andrew Morton
2002-11-11  4:06               ` Andrea Arcangeli
2002-11-11  4:22                 ` Andrew Morton
2002-11-11  4:39                   ` Andrea Arcangeli
2002-11-11  5:10                     ` Andrew Morton
2002-11-11  5:23                       ` Andrea Arcangeli
2002-11-11  7:58                       ` William Lee Irwin III
2002-11-11 13:56                       ` Rik van Riel
2002-11-11 13:45             ` Rik van Riel
2002-11-11 14:09               ` Jens Axboe
2002-11-11 15:48                 ` Andrea Arcangeli
2002-11-11 15:43               ` Andrea Arcangeli
2002-11-10 20:56       ` Andrew Morton
2002-11-11  1:08         ` Andrea Arcangeli
  -- strict thread matches above, loose matches on Subject: below --
2002-11-09  3:44 Dieter Nützel
2002-11-09  3:54 ` Con Kolivas
2002-11-09  4:02   ` Dieter Nützel
