From: Andrew Morton <akpm@zip.com.au>
To: lkml <linux-kernel@vger.kernel.org>
Subject: [patch] multipage pagecache writeout
Date: Fri, 01 Mar 2002 00:35:51 -0800
Message-ID: <3C7F3D67.2A8E6055@zip.com.au>
These patches:
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.6-pre1/mpio-10-biobits.patch
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.6-pre1/mpio-20-core.patch
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.6-pre1/mpio-30-ext2.patch
implement multipage writeout from the pagecache. These patches require
the allocate-on-flush patches. The dalloc-30-ratcache patch is not a
requirement for the mpio series, but it is recommended for
balls-to-the-wall how-fast-can-it-go testing.
Pages from the pagecache are given a disk mapping and assembled into
large BIOs (up to half a megabyte), and these BIOs are injected
directly into the request layer.
These pages never have attached buffer_heads. The buffer layer is
completely bypassed for all write(2) data. As is, to some extent, the
request merging layer.
This patch should bypass the lru_list_lock contention problem, and the
ZONE_NORMAL-full-of-buffer_heads bug. (Well, this may require
multipage reads, too).
Future work includes:
- Implement buffer_head-less block_truncate_page().
- Multipage reads. A bit of a no-brainer, but first the current
  readahead code needs a big shakeout.
Two additional patches are available:
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.6-pre1/tuning-10-request.patch
- The get_request starvation fix for 2.5. This patch also
  increases the request queue size by a lot, which implies
  that we can have as much as 512 megabytes of I/O underway
  per device. This may sound excessive, but the locked- and
  dirty-page accounting in the delalloc patch only permits
  this to happen if the machine is large enough to cope
  with it.
http://www.zip.com.au/~akpm/linux/patches/2.5/2.5.6-pre1/tuning-20-ext2-preread-inode.patch
- Pull the backing block for a new ext2 inode into
the buffercache when the inode is created. This fixes a
significant throughput problem with many-file writeout, where
the writer is continually interrupted by having to perform
reads.