From: Willy Tarreau <willy@w.ods.org>
To: Rik van Riel <riel@conectiva.com.br>
Cc: Willy Tarreau <willy@w.ods.org>, Jens Axboe <axboe@suse.de>,
Andrew Morton <akpm@digeo.com>,
Marc-Christian Petersen <m.c.p@wolk-project.de>,
linux-kernel@vger.kernel.org, Con Kolivas <conman@kolivas.net>
Subject: Re: [PATCH] 2.4.20-rmap15a
Date: Tue, 3 Dec 2002 07:21:49 +0100 [thread overview]
Message-ID: <20021203062149.GA10479@alpha.home.local> (raw)
In-Reply-To: <Pine.LNX.4.44L.0212022107421.15981-100000@imladris.surriel.com>
On Mon, Dec 02, 2002 at 09:10:03PM -0200, Rik van Riel wrote:
> On Mon, 2 Dec 2002, Willy Tarreau wrote:
>
> > - not one, but two elevators, one for read requests, one for write requests.
> > - we would process one of the request queues (either reads or writes), and
> > after a user-settable amount of requests processed, we would switch to the
>
> OK, lets for the sake of the argument imagine such an
> elevator, with Read and Write queues for the following
> block numbers:
>
> R: 1 3 4 5 6 20 21 22 100 110 111
>
> W: 2 15 16 17 18 50 52 53
>
> Now imagine what switching randomly between these queues
> would do for disk seeks. Especially considering that some
> of the writes can be "sandwiched" in-between the reads...
Well, I'm not talking about "random switching". My goal is precisely to reduce
seek time *and* to bound latency. Look at your example: if we put no limit on
the number of consecutive requests, just processing them in order would give:
R(1), W(2), R(3,4,5,6), W(15,16,17,18), R(20,21,22), W(50,52,53), R(100,110,111)
This is roughly what is currently done with a single elevator. Now, if we try
to detect long runs of consecutive accesses based on seek length, we could
optimize it this way:
W(2), R(1-22), W(15-53), R(100-111) => we only do one backwards seek
And now, if you want to lower latency for a particular usage, with a 3:1
read/write ratio, this would give:
R(1,3,4), W(2), R(5,6,20), W(15), R(21,22,100), W(16), R(110,111), W(17-53)
Of course, this won't be globally optimal, but it could perhaps help *some*
processes wait less time for their data, which is the goal of inserting read
requests near the head of the queue, isn't it?
BTW, just for my understanding, what would your example look like with the
current elevator (choose the ordering you like)?
Cheers,
Willy
Thread overview: 15+ messages
2002-12-01 20:56 [PATCH] 2.4.20-rmap15a Marc-Christian Petersen
2002-12-01 21:25 ` Rik van Riel
2002-12-01 21:41 ` Marc-Christian Petersen
2002-12-01 21:56 ` Con Kolivas
2002-12-02 0:18 ` Con Kolivas
2002-12-02 8:15 ` Jens Axboe
2002-12-02 8:51 ` Andrew Morton
2002-12-02 8:56 ` Jens Axboe
2002-12-02 12:38 ` Rik van Riel
2002-12-02 20:45 ` Willy Tarreau
2002-12-02 23:10 ` Rik van Riel
2002-12-03 6:21 ` Willy Tarreau [this message]
2002-12-02 21:46 ` Bill Davidsen
-- strict thread matches above, loose matches on Subject: below --
2002-12-01 20:35 Rik van Riel
2002-12-03 13:55 ` Miquel van Smoorenburg