From: Helge Hafting <helge.hafting@hist.no>
To: Jens Axboe <axboe@suse.de>
Cc: Andrew Morton <akpm@osdl.org>, Andrea Arcangeli <andrea@suse.de>,
nickpiggin@yahoo.com.au, linux-kernel@vger.kernel.org
Subject: Re: Time sliced CFQ io scheduler
Date: Wed, 08 Dec 2004 11:52:24 +0100
Message-ID: <41B6DCE8.5030304@hist.no>
In-Reply-To: <20041208065534.GF3035@suse.de>
Jens Axboe wrote:
> On Tue, Dec 07 2004, Andrew Morton wrote:
>> Andrea Arcangeli <andrea@suse.de> wrote:
>>> The desktop is ok with "as" simply because it's
>>> normally optimal to stop writes completely
>>
>> AS doesn't "stop writes completely". With the current settings it
>> apportions about 1/3 of the disk's bandwidth to writes.
>>
>> This thing Jens has found is for direct-io writes only. It's a bug.
>
> Indeed. It's a special case one, but nasty for that case.
>
>> The other problem with AS is that it basically doesn't work at all with a
>> TCQ depth greater than four or so, and lots of people blindly look at
>> untuned SCSI benchmark results without realising that. If a distro is
>
> That's pretty easy to fix. I added something like that to cfq, and it's
> not a lot of lines of code (grep for rq_in_driver and cfq_max_depth).
>
>> always selecting CFQ then they've probably gone and deoptimised all their
>> IDE users.
>
> Andrew, AS has other issues, it's not a case of AS always being faster
> at everything.
>
>> AS needs another iteration of development to fix these things. Right now
>> it's probably the case that we need CFQ or deadline for servers and AS for
>> desktops. That's awkward.
>
> Currently I think the time sliced cfq is the best all around. There's
> still a few kinks to be shaken out, but generally I think the concept is
> sounder than AS.
I wonder, would it make sense to add some limited anticipation
to the cfq scheduler? It seems to me that there is room to
get some of the AS benefit without becoming too unfair:
AS waits for a time that is short compared to a seek, gaining
extra locality almost for free. Consider if CFQ did this, with
the added limitation that it only lets a few extra read requests
in this way before doing the next seek anyway. For example,
allowing up to 3 extra anticipated read requests before
seeking could quadruple read bandwidth in some cases. This is
clearly not as fair, but the extra reads are almost free,
because those few reads take little time compared to the seek
that follows anyway. Therefore, the latency for other requests
shouldn't change much, and we get the best of both AS and CFQ.
Or have I made a broken assumption?

The maximum number of requests to anticipate could even be
configurable; just set it to 0 to get pure CFQ.
Helge Hafting