From: Andrew Morton <akpm@zip.com.au>
To: Steve Lord <lord@sgi.com>
Cc: Ben Israel <ben@genesis-one.com>, linux-kernel@vger.kernel.org
Subject: Re: File System Performance
Date: Mon, 12 Nov 2001 12:41:38 -0800
Message-ID: <3BF03402.87D44589@zip.com.au>
In-Reply-To: <00b201c16b81$9d7aaba0$5101a8c0@pbc.adelphia.net> <3BEFF9D1.3CC01AB3@zip.com.au> <00da01c16ba2$96aeda00$5101a8c0@pbc.adelphia.net> <3BF02702.34C21E75@zip.com.au> <1005595583.13307.5.camel@jen.americas.sgi.com>
Steve Lord wrote:
>
> ...
> This all works OK when the filesystem is not bursting at the seams,
> and when there is some correspondence between logical block numbers
> in the filesystem and physical drives underneath. I believe that once
> you get into complex filesystems using raid devices and smart caching
> drives it gets very hard for the filesystem to predict anything
> about latency of access for two blocks which it thinks are next
> to each other in the filesystem.
>
Well, filesystems have always made an effort to write things
out contiguously. If an IO system were to come along and de-optimise
that case it'd be pretty broken?
> ...
>
> There is a lot of difference between doing a raw read of data from
> the device, and copying a large directory tree. The raw read is
> purely sequential, the tree copy is basically random access since
> the kernel is being asked to read and then write a lot of small
> chunks of disk space.
Hmm. Yes. Copying a tree from and to the same disk won't
achieve disk bandwidth. If they're different disks then
ext2+Orlov will actually get there.
We in fact do extremely well on IDE with Orlov for this workload,
even though we're not doing inter-inode readahead. This will be because
the disk is doing the readahead for us. It's going to be highly dependent
upon the disk cache size, readahead cache partitioning algorithm, etc.
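To get a feel for the effect from userspace, here's a toy sketch (not any real tool - file size and chunk size are invented, and on a warm page cache the difference mostly disappears): read a scratch file once sequentially and once as the same 4KB chunks in shuffled order. On a cold cache the shuffled pattern defeats the drive's readahead.

```python
# Toy comparison of sequential vs shuffled reads of the same data.
# All parameters here are invented for illustration; on a warm page
# cache the two patterns time about the same - run against a cold,
# much larger file to see the drive readahead effect.
import os
import random
import tempfile
import time

CHUNK = 4096
NCHUNKS = 2048          # 8MB scratch file

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(CHUNK * NCHUNKS))
os.close(fd)

def read_pattern(path, offsets):
    """Read CHUNK bytes at each offset, return total bytes read."""
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(CHUNK))
    return total

seq = [i * CHUNK for i in range(NCHUNKS)]
rnd = seq[:]
random.shuffle(rnd)

t0 = time.time()
n_seq = read_pattern(path, seq)
t_seq = time.time() - t0

t0 = time.time()
n_rnd = read_pattern(path, rnd)
t_rnd = time.time() - t0

print("sequential: %d bytes in %.3fs" % (n_seq, t_seq))
print("shuffled:   %d bytes in %.3fs" % (n_rnd, t_rnd))
os.unlink(path)
```

Both passes touch exactly the same bytes; only the access order differs, which is the whole point.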
BTW, I've been trying to hunt down a suitable filesystem aging tool.
We're not very happy with Keith Smith's workload because the directory
information was lost (he was purely studying FFS algorithmic differences
- the load isn't 100% suitable for testing other filesystems / algorithms).
Constantin Loizides' tools are proving to be rather complex to compile,
drive and understand. Does the XFS team have anything like this in the
kitbag?
-