linux-ext4.vger.kernel.org archive mirror
* compilebench numbers for ext4
@ 2007-10-22 23:31 Chris Mason
  2007-10-22 23:48 ` Chris Mason
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Chris Mason @ 2007-10-22 23:31 UTC
  To: linux-ext4

Hello everyone,

I recently posted some performance numbers for Btrfs with different
blocksizes, and to help establish a baseline I did comparisons with
Ext3.

The graphs, numbers and a basic description of compilebench are here:

http://oss.oracle.com/~mason/blocksizes/

Ext3 easily wins the read phase, but scores poorly while creating files
and deleting them.  Since ext3 is winning the read phase, we can assume
the file layout is fairly good.  I think most of the problems during the
write phase are caused by pdflush doing metadata writeback.  The file
data and metadata are written separately, and so we end up seeking
between things that are actually close together.

Andreas asked me to give ext4 a try, so I grabbed the patch queue from
Friday along with the latest Linus kernel.  The FS was created with:

mkfs.ext3 -I 256 /dev/xxxx
mount -o delalloc,mballoc,data=ordered -t ext4dev /dev/xxxx /mnt/xxxx

I did expect delayed allocation to help the write phases of
compilebench, especially the parts where it writes out .o files in
random order (basically writing medium sized files all over the
directory tree).  But, every phase except reads showed huge
improvements.
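The write pattern described here (medium-sized files landing in shuffled order across a directory tree) can be mimicked with a toy script. The directory counts and the 32KB file size below are invented for illustration; they are not compilebench's actual parameters:

```shell
#!/bin/sh
# Toy imitation of compilebench's compile phase: write medium-sized
# ".o" files into directories in shuffled order.  All paths, counts,
# and sizes here are made up for illustration.
DEST=${1:-/tmp/cb-toy}
mkdir -p "$DEST"
for d in $(seq 1 5 | shuf); do
    mkdir -p "$DEST/dir$d"
    for f in $(seq 1 4 | shuf); do
        # ~32KB per file, roughly the size of a typical object file
        dd if=/dev/zero of="$DEST/dir$d/file$f.o" bs=1024 count=32 2>/dev/null
    done
done
```

With delayed allocation, each of these files should get laid out in one contiguous chunk even though they arrive out of order.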

http://oss.oracle.com/~mason/compilebench/ext4/ext-create-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-compile-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-read-compare.png
http://oss.oracle.com/~mason/compilebench/ext4/ext-rm-compare.png

To match the ext4 numbers with Btrfs, I'd probably have to turn off data
checksumming...

But oddly enough, I saw very bad ext4 read throughput even when reading
a single kernel tree (outside of compilebench).  The time to read the
tree was almost 2x that of ext3.  Have others seen similar problems?
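For anyone wanting to reproduce the single-tree read timing, here is a rough sketch. The path is a placeholder, and getting a true cold-cache number requires root to drop the page cache:

```shell
#!/bin/sh
# Sketch of the single-tree read test mentioned above.  TREE is a
# placeholder; the actual workload was a full kernel source tree.
TREE=${1:-.}
COUNT=$(find "$TREE" -type f | wc -l)
sync
# A cold-cache run needs the page cache dropped, which requires root;
# skip that step quietly when the knob isn't writable.
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi
# Read every file back-to-back and let "time" report the wall clock.
time sh -c "find '$TREE' -type f -print0 | xargs -0 -r cat > /dev/null"
echo "read $COUNT files"
```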

I think the ext4 delete times are so much better than ext3's because
this is a single-threaded test.  Delayed allocation is able to get
everything into a few extents, and those extent records all fit in the
inode.  So the delete phase only needs to seek around in small
directories and seek to well-grouped inodes.  Ext3 probably had to seek
all over for the direct/indirect blocks.
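As a back-of-the-envelope check on the indirect-block cost: an ext3 inode holds 12 direct block pointers, so with 4KB blocks only the first 48KB of a file can be mapped from the inode itself; anything larger pulls in indirect blocks, which are extra metadata to seek to on delete:

```shell
# ext3 keeps 12 direct block pointers in the inode; with 4KB blocks
# that covers only the first 48KB of a file.  Bigger files need
# indirect blocks, i.e. separate on-disk metadata reads at unlink time.
BLOCK_SIZE=4096
DIRECT_PTRS=12
echo "$((BLOCK_SIZE * DIRECT_PTRS)) bytes mapped without indirect blocks"
```

Most .o files produced by a kernel compile are bigger than that, which fits the seek-heavy ext3 delete phase seen in the graphs.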

So, tomorrow I'll run a few tests with delalloc and mballoc
independently, but if there are other numbers people are interested in,
please let me know.

(The test box was a desktop machine with a single SATA drive; barriers
were not used.)

-chris


Thread overview: 12+ messages
2007-10-22 23:31 compilebench numbers for ext4 Chris Mason
2007-10-22 23:48 ` Chris Mason
2007-10-23  0:12 ` Mingming Cao
2007-10-23  0:54   ` Chris Mason
2007-10-23 12:43 ` Aneesh Kumar K.V
2007-10-23 13:08   ` Chris Mason
2007-10-23 13:42     ` Aneesh Kumar K.V
2007-10-25 15:34 ` Jose R. Santos
2007-10-25 18:43   ` Chris Mason
2007-10-25 22:40     ` Jose R. Santos
2007-10-25 23:45       ` Chris Mason
2007-10-25 15:54 ` Jose R. Santos
