From: Chris Mason <chris.mason@oracle.com>
To: linux-ext4@vger.kernel.org
Subject: compilebench numbers for ext4
Date: Mon, 22 Oct 2007 19:31:04 -0400
Message-ID: <20071022193104.0beafeca@think.oraclecorp.com>

Hello everyone,

I recently posted some performance numbers for Btrfs with different
blocksizes, and to help establish a baseline I did comparisons with
Ext3.

The graphs, numbers and a basic description of compilebench are here:
http://oss.oracle.com/~mason/blocksizes/
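
(For reference, the runs come from the compilebench script; an
invocation along these lines should reproduce the workload, though the
exact flags can vary between versions, so check its --help output:

compilebench -D /mnt/xxxx -i 10 -r 30

-D is the working directory, -i the number of initial kernel trees to
create, and -r the number of random operations to run against them.)
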
Ext3 easily wins the read phase, but scores poorly while creating files
and deleting them. Since ext3 is winning the read phase, we can assume
the file layout is fairly good. I think most of the problems during the
write phase are caused by pdflush doing metadata writeback. The file
data and metadata are written separately, and so we end up seeking
between things that are actually close together.
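
(The seeking should be easy to see with blktrace while the write phases
run, along the lines of

blktrace -d /dev/xxxx -o - | blkparse -i -

which shows where the data and metadata writeback actually land on disk
relative to each other.)
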
Andreas asked me to give ext4 a try, so I grabbed the patch queue from
Friday along with the latest Linus kernel. The FS was created with:

mkfs.ext3 -I 256 /dev/xxxx
mount -o delalloc,mballoc,data=ordered -t ext4dev /dev/xxxx /mnt/xxxx
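
(To double check the options actually took effect, something like

grep ext4dev /proc/mounts

should list the mount flags; exactly which options show up there
depends on the patch queue version.)
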
I did expect delayed allocation to help the write phases of
compilebench, especially the parts where it writes out .o files in
random order (basically writing medium sized files all over the
directory tree). But, every phase except reads showed huge
improvements.

http://oss.oracle.com/~mason/compilebench/ext4/ext-create-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-compile-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-rm-compare.png

To match the ext4 numbers with Btrfs, I'd probably have to turn off data
checksumming...

But oddly enough I saw very bad ext4 read throughput even when reading
a single kernel tree (outside of compilebench). The time to read the
tree was almost 2x ext3. Have others seen similar problems?
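
(One way to reproduce the single-tree read test is to time a cold-cache
walk of an unpacked kernel tree, roughly:

echo 3 > /proc/sys/vm/drop_caches
time sh -c 'find linux-2.6.23 -type f -print0 | xargs -0 cat > /dev/null'

The tree name is just an example; any recent kernel source will do.)
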
I think the ext4 delete times are so much better than ext3 because this
is a single-threaded test. Delayed allocation is able to get
everything into a few extents, and these all end up in the inode. So,
the delete phase only needs to seek around in small directories and
seek to well-grouped inodes. ext3 probably had to seek all over for
the direct/indirect blocks.
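
(One way to see this difference in metadata is debugfs. Something like

debugfs -R 'stat kernel-0/fs/ext4/inode.o' /dev/xxxx

on the ext4 filesystem should show a file mapped by a small extent tree
that fits inside the inode, while the same-sized file on ext3 shows a
list of direct blocks plus an indirect block once it grows past 48KB.
The path is only an illustration, and an older debugfs may not decode
the extent format.)
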
So, tomorrow I'll run a few tests with delalloc and mballoc
independently, but if there are other numbers people are interested in,
please let me know.
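
(Concretely that means remounting with each option on its own, assuming
the patch queue still takes them this way:

mount -o delalloc -t ext4dev /dev/xxxx /mnt/xxxx
mount -o mballoc -t ext4dev /dev/xxxx /mnt/xxxx

and re-running the same compilebench command against each mount.)
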
(The test box was a desktop machine with a single SATA drive; barriers
were not used.)

-chris