From: Jens Axboe <axboe@suse.de>
To: Marcelo Tosatti <marcelo@conectiva.com.br>
Cc: lkml <linux-kernel@vger.kernel.org>
Subject: Re: ll_rw_block/submit_bh and request limits
Date: Thu, 22 Feb 2001 14:56:42 +0100 [thread overview]
Message-ID: <20010222145642.D17276@suse.de> (raw)
In-Reply-To: <Pine.LNX.4.21.0102220707380.1694-100000@freak.distro.conectiva>; from marcelo@conectiva.com.br on Thu, Feb 22, 2001 at 07:41:52AM -0200
On Thu, Feb 22 2001, Marcelo Tosatti wrote:
> The following piece of code in ll_rw_block() aims to limit the number of
> locked buffers by making processes throttle on IO if the number of
> in-flight requests is bigger than a high watermark. IO will only start
> again once we're under a low watermark.
>
> 	if (atomic_read(&queued_sectors) >= high_queued_sectors) {
> 		run_task_queue(&tq_disk);
> 		wait_event(blk_buffers_wait,
> 			   atomic_read(&queued_sectors) < low_queued_sectors);
> 	}
>
>
> However, if submit_bh() is used to queue IO (which is used by ->readpage()
> for ext2, for example), no throttling happens.
>
> It looks like ll_rw_block() users (writes, metadata reads) can be starved
> by submit_bh() (data reads).
>
> If I'm not missing something, the watermark check should be moved to
> submit_bh().
We might as well put it there. The idea was to not lock this one
buffer either, but I doubt that makes any difference in reality :-)
Linus, could you apply?
--- /opt/kernel/linux-2.4.2/drivers/block/ll_rw_blk.c Thu Feb 22 14:55:22 2001
+++ drivers/block/ll_rw_blk.c Thu Feb 22 14:53:07 2001
@@ -957,6 +959,20 @@
 	if (!test_bit(BH_Lock, &bh->b_state))
 		BUG();
 
+	/*
+	 * don't lock any more buffers if we are above the high
+	 * water mark. instead start I/O on the queued stuff.
+	 */
+	if (atomic_read(&queued_sectors) >= high_queued_sectors) {
+		run_task_queue(&tq_disk);
+		if (rw == READA) {
+			bh->b_end_io(bh, test_bit(BH_Uptodate, &bh->b_state));
+			return;
+		}
+		wait_event(blk_buffers_wait,
+			   atomic_read(&queued_sectors) < low_queued_sectors);
+	}
+
 	set_bit(BH_Req, &bh->b_state);
 
 	/*
@@ -1057,16 +1073,6 @@
 	for (i = 0; i < nr; i++) {
 		struct buffer_head *bh = bhs[i];
-
-		/*
-		 * don't lock any more buffers if we are above the high
-		 * water mark. instead start I/O on the queued stuff.
-		 */
-		if (atomic_read(&queued_sectors) >= high_queued_sectors) {
-			run_task_queue(&tq_disk);
-			wait_event(blk_buffers_wait,
-				   atomic_read(&queued_sectors) < low_queued_sectors);
-		}
 
 		/* Only one thread can actually submit the I/O. */
 		if (test_and_set_bit(BH_Lock, &bh->b_state))
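The throttling the patch moves into submit_bh() is a two-watermark
hysteresis: submitters block once queued_sectors reaches the high mark
and resume only after completions bring it back under the low mark,
while READA requests fail fast instead of sleeping. A minimal
user-space sketch of that behaviour (the watermark values, the
8-sector drain model, and the function names here are made up for
illustration; the kernel sizes the real watermarks from total memory):

```c
#include <assert.h>

/* Illustrative watermarks, not the kernel's real values. */
#define HIGH_QUEUED_SECTORS 128
#define LOW_QUEUED_SECTORS   64

static int queued_sectors;

/* Stand-in for run_task_queue(&tq_disk) + wait_event(): pretend the
 * driver completes 8-sector requests until we drop below the low mark. */
static void drain_below_low(void)
{
	while (queued_sectors >= LOW_QUEUED_SECTORS)
		queued_sectors -= 8;
}

/* Returns 1 if the request was queued, 0 if a READA-style request was
 * dropped because the queue is above the high watermark. */
static int submit(int sectors, int readahead)
{
	if (queued_sectors >= HIGH_QUEUED_SECTORS) {
		if (readahead)
			return 0;	/* READA must not block: fail fast */
		drain_below_low();	/* everyone else throttles here */
	}
	queued_sectors += sectors;
	return 1;
}
```

Failing READA instead of blocking fits its semantics: readahead is
opportunistic, so sleeping on a congested queue would only add latency
without gaining anything.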
--
Jens Axboe
Thread overview: 12+ messages
2001-02-22 9:41 ll_rw_block/submit_bh and request limits Marcelo Tosatti
2001-02-22 13:56 ` Jens Axboe [this message]
2001-02-22 18:59 ` Linus Torvalds
2001-02-22 20:32 ` Marcelo Tosatti
2001-02-22 21:38 ` Andrea Arcangeli
2001-02-22 20:40 ` Marcelo Tosatti
2001-02-22 22:57 ` Andrea Arcangeli
2001-02-22 21:44 ` Marcelo Tosatti
2001-02-22 23:54 ` Andrea Arcangeli
2001-02-22 23:12 ` Andrea Arcangeli
2001-02-25 17:34 ` Jens Axboe
2001-02-25 19:26 ` Andrea Arcangeli