From: Andrew Morton <akpm@zip.com.au>
To: Linus Torvalds <torvalds@transmeta.com>,
lkml <linux-kernel@vger.kernel.org>
Subject: Re: [patch 4/19] stack space reduction (remove MAX_BUF_PER_PAGE)
Date: Mon, 17 Jun 2002 02:35:37 -0700
Message-ID: <3D0DAD69.5C667D63@zip.com.au>
In-Reply-To: <3D0D86F0.5016809@zip.com.au>
Andrew Morton wrote:
>
> ..
> +	do {
> +		if (buffer_async_read(bh))
> +			submit_bh(READ, bh);
> +	} while ((bh = bh->b_this_page) != head);
That's a bug. We cannot touch bh->b_this_page after submitting
the buffer: the I/O can complete synchronously, at which point the
page can come unlocked and the VM can strip the buffers.
The buffer at `bh' could be dead memory.
This can happen on SMP with PIO-mode IDE disks.
We cannot fix this with the usual

	do {
		next = bh->b_this_page;
		if (something)
			submit_bh(READ, bh);
	} while ((bh = next) != head);

approach, because the final buffer on the page could be unlocked,
clean and uptodate.
Pinning one of the buffers while we walk the ring will keep
try_to_free_buffers() away. Here is an incremental patch.
--- 2.5.22/fs/ntfs/aops.c~ntfs-race-fix Mon Jun 17 01:50:32 2002
+++ 2.5.22-akpm/fs/ntfs/aops.c Mon Jun 17 02:24:00 2002
@@ -220,10 +220,12 @@ handle_zblock:
 		} while ((bh = bh->b_this_page) != head);
 		/* Finally, start i/o on the buffers. */
+		get_bh(head);	/* Pin the buffers while we walk the ring */
 		do {
 			if (buffer_async_read(bh))
 				submit_bh(READ, bh);
 		} while ((bh = bh->b_this_page) != head);
+		put_bh(head);
 		return 0;
 	}
 	/* No i/o was scheduled on any of the buffers. */
@@ -510,10 +512,12 @@ handle_zblock:
 		} while ((bh = bh->b_this_page) != head);
 		/* Finally, start i/o on the buffers. */
+		get_bh(head);	/* Pin the buffers while we walk the ring */
 		do {
 			if (buffer_async_read(bh))
 				submit_bh(READ, bh);
 		} while ((bh = bh->b_this_page) != head);
+		put_bh(head);
 		return 0;
 	}
 	/* No i/o was scheduled on any of the buffers. */
@@ -774,10 +778,12 @@ handle_zblock:
 		} while ((bh = bh->b_this_page) != head);
 		/* Finally, start i/o on the buffers. */
+		get_bh(head);	/* Pin the buffers while we walk the ring */
 		do {
 			if (buffer_async_read(bh))
 				submit_bh(READ, bh);
 		} while ((bh = bh->b_this_page) != head);
+		put_bh(head);
 		return 0;
 	}
 	/* No i/o was scheduled on any of the buffers. */
--- 2.5.22/fs/buffer.c~ntfs-race-fix Mon Jun 17 01:55:55 2002
+++ 2.5.22-akpm/fs/buffer.c Mon Jun 17 02:24:33 2002
@@ -2024,7 +2024,12 @@ int block_read_full_page(struct page *pa
 	 * Stage 3: start the IO. Check for uptodateness
 	 * inside the buffer lock in case another process reading
 	 * the underlying blockdev brought it uptodate (the sct fix).
+	 *
+	 * Bump the refcount of one buffer while walking the ring so
+	 * that the VM cannot release the buffers while we're looking at
+	 * ->b_this_page.
 	 */
+	get_bh(head);
 	do {
 		if (buffer_async_read(bh)) {
 			if (buffer_uptodate(bh))
@@ -2033,6 +2038,7 @@ int block_read_full_page(struct page *pa
 				submit_bh(READ, bh);
 		}
 	} while ((bh = bh->b_this_page) != head);
+	put_bh(head);
 	return 0;
 }
-