public inbox for linux-kernel@vger.kernel.org
From: Andrea Arcangeli <andrea@suse.de>
To: Jens Axboe <axboe@suse.de>
Cc: Andrew Morton <akpm@osdl.org>,
	linux-kernel@vger.kernel.org, nickpiggin@yahoo.com.au
Subject: Re: Time sliced CFQ io scheduler
Date: Wed, 8 Dec 2004 01:37:36 +0100	[thread overview]
Message-ID: <20041208003736.GD16322@dualathlon.random> (raw)
In-Reply-To: <20041202195232.GA26695@suse.de>

On Thu, Dec 02, 2004 at 08:52:36PM +0100, Jens Axboe wrote:
> with its default io scheduler has basically zero write performance in

IMHO the default io scheduler should be changed to cfq. AS (the anticipatory
scheduler) is anything but general purpose, so it's a mistake to leave it
as the default (plus, as Jens found, write bandwidth is practically
nonexistent during reads, so no surprise it falls apart under any database
load). We already had to make cfq the default for the enterprise release.
The first thing I do on a new install is add elevator=cfq. I really like
how well cfq has been designed, implemented and tuned; Jens's results with
his last patch are quite impressive.
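[For reference, two hedged ways of selecting cfq on 2.6-era kernels; "sda"
below is just a placeholder device name, and exact paths may differ by
kernel version:]

```shell
# At boot, via the kernel command line (e.g. in your bootloader config):
#   elevator=cfq
#
# Or at runtime, per block device, through sysfs ("sda" is an example):
cat /sys/block/sda/queue/scheduler    # lists schedulers, active one in brackets
echo cfq > /sys/block/sda/queue/scheduler
```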

BTW, a bit of history that may be amusing to read (and I believe nobody
on l-k knows about it): the first sfq I/O elevator idea (sfq is the
ancestor of cfq; cfq still falls back to sfq mode in the unlikely case
that no atomic memory is available during I/O) started at an openmosix
conference in Bologna, while I was listening to a guy describing how he
fixed the latency of a videogame app migrating from server to server with
openmosix. They used a few clustered boxes to host a few hundred videogame
servers, which migrated depending on the load (I recall they said users
tend to move from one game to another all at the same time). I had never
heard of sfq before, but once I understood how it worked in the packet
scheduler, and how they were using it to fix a latency issue in the
responsiveness of their game while the server was migrating, I
immediately got the idea that the very same sfq algorithm could be used
for the disk elevator too (at that time it was being used only in the
networking qdisc packet scheduler). I wasn't sure at first that it would
work equally well for disk (the network pays nothing for seeks), but it
seemed worth mentioning the idea to Jens so he could evaluate it (I think
he was already working on something similar, but I hope I provided him
with some useful hints). You know the rest: he quickly turned it into cfq
and made numerous improvements. The funny thing I meant to say is that if
I hadn't incidentally listened to that videogame talk (a talk I'd
normally avoid), we might not have cfq in the I/O scheduler in its
current great state today (of course we might still have gotten it, since
Jens was already working on something similar, but perhaps it would lag
at least a bit behind his current development).
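[For readers who, like me back then, have never seen sfq: a toy sketch of
the stochastic fair queueing idea, written by the archive editor and not
taken from any kernel code. It assumes requests are hashed by submitter
into a fixed set of buckets that are served round-robin:]

```python
import collections

class ToySFQ:
    """Toy stochastic fair queueing: requests are hashed by origin
    (e.g. a process id) into a fixed number of buckets, and buckets
    are served round-robin so one heavy submitter cannot starve the
    others. Hash collisions are simply tolerated, hence the
    "stochastic" in the name."""

    def __init__(self, nr_buckets=8):
        self.buckets = [collections.deque() for _ in range(nr_buckets)]
        self.next_bucket = 0

    def enqueue(self, origin, request):
        # Hashing keeps per-flow state bounded by nr_buckets.
        self.buckets[hash(origin) % len(self.buckets)].append(request)

    def dequeue(self):
        # Scan at most one full round; each non-empty bucket yields
        # one request per turn.
        for _ in range(len(self.buckets)):
            b = self.buckets[self.next_bucket]
            self.next_bucket = (self.next_bucket + 1) % len(self.buckets)
            if b:
                return b.popleft()
        return None

# A greedy "reader" (pid 1) queues three requests, a "writer" (pid 2)
# queues one: the writer is served after a single read, not after all three.
q = ToySFQ()
for i in range(3):
    q.enqueue(1, f"read-{i}")
q.enqueue(2, "write-0")
order = [q.dequeue() for _ in range(4)]
print(order)  # ['read-0', 'write-0', 'read-1', 'read-2']
```

[The real packet-scheduler sfq additionally perturbs the hash
periodically and dequeues a byte quantum per turn; the sketch keeps only
the bucketing and round-robin service that carry over to a disk elevator.]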

Even a videogame server may turn out to be very useful ;). I'm quite
sure the developer who gave the videogame openmosix talk doesn't know his
talk had an impact on the kernel I/O scheduler ;). I hope he reads this
email.


Thread overview: 66+ messages
2004-12-02 13:04 Time sliced CFQ io scheduler Jens Axboe
2004-12-02 13:48 ` Jens Axboe
2004-12-02 19:48   ` Andrew Morton
2004-12-02 19:52     ` Jens Axboe
2004-12-02 20:19       ` Andrew Morton
2004-12-02 20:19         ` Jens Axboe
2004-12-02 20:34           ` Andrew Morton
2004-12-02 20:37             ` Jens Axboe
2004-12-07 23:11               ` Nick Piggin
2004-12-02 22:18         ` Prakash K. Cheemplavam
2004-12-03  7:01           ` Jens Axboe
2004-12-03  9:12             ` Prakash K. Cheemplavam
2004-12-03  9:18               ` Jens Axboe
2004-12-03  9:35                 ` Prakash K. Cheemplavam
2004-12-03  9:43                   ` Jens Axboe
2004-12-03  9:26               ` Andrew Morton
2004-12-03  9:34                 ` Prakash K. Cheemplavam
2004-12-03  9:39                 ` Jens Axboe
2004-12-03  9:54                   ` Prakash K. Cheemplavam
     [not found]                   ` <41B03722.5090001@gmx.de>
2004-12-03 10:31                     ` Jens Axboe
2004-12-03 10:38                       ` Jens Axboe
2004-12-03 10:45                         ` Prakash K. Cheemplavam
2004-12-03 10:48                           ` Jens Axboe
2004-12-03 11:27                             ` Prakash K. Cheemplavam
2004-12-03 11:29                               ` Jens Axboe
2004-12-03 11:52                                 ` Prakash K. Cheemplavam
2004-12-08  0:37       ` Andrea Arcangeli [this message]
2004-12-08  0:54         ` Nick Piggin
2004-12-08  1:37           ` Andrea Arcangeli
2004-12-08  1:47             ` Nick Piggin
2004-12-08  2:09               ` Andrea Arcangeli
2004-12-08  2:11                 ` Andrew Morton
2004-12-08  2:22                   ` Andrea Arcangeli
2004-12-08  6:52               ` Jens Axboe
2004-12-08  2:00             ` Andrew Morton
2004-12-08  2:08               ` Andrew Morton
2004-12-08  6:55                 ` Jens Axboe
2004-12-08  2:20               ` Andrea Arcangeli
2004-12-08  2:25                 ` Andrew Morton
2004-12-08  2:33                   ` Andrea Arcangeli
2004-12-08  2:33                   ` Nick Piggin
2004-12-08  2:51                     ` Andrea Arcangeli
2004-12-08  3:02                       ` Nick Piggin
2004-12-08  6:58                     ` Jens Axboe
2004-12-08  7:14                       ` Nick Piggin
2004-12-08  7:20                         ` Jens Axboe
2004-12-08  7:29                           ` Nick Piggin
2004-12-08  7:32                             ` Jens Axboe
2004-12-08  7:30                           ` Andrew Morton
2004-12-08  7:36                             ` Jens Axboe
2004-12-08 13:48                         ` Jens Axboe
2004-12-08  6:55               ` Jens Axboe
2004-12-08  7:08                 ` Nick Piggin
2004-12-08  7:11                   ` Jens Axboe
2004-12-08  7:19                     ` Nick Piggin
2004-12-08  7:26                       ` Jens Axboe
2004-12-08  9:35                         ` Jens Axboe
2004-12-08 10:08                           ` Jens Axboe
2004-12-08 12:47                           ` Jens Axboe
2004-12-08 10:52                 ` Helge Hafting
2004-12-08 10:49                   ` Jens Axboe
2004-12-08  6:49           ` Jens Axboe
2004-12-02 14:28 ` Giuliano Pochini
2004-12-02 14:41   ` Jens Axboe
2004-12-04 13:05     ` Giuliano Pochini
