linux-fsdevel.vger.kernel.org archive mirror
From: Ric Wheeler <rwheeler@redhat.com>
To: nicholas.dokos@hp.com
Cc: linux-fsdevel@vger.kernel.org,
	Christoph Hellwig <hch@infradead.org>,
	Douglas Shakshober <dshaks@redhat.com>,
	Joshua Giles <jgiles@redhat.com>,
	Valerie Aurora <vaurora@redhat.com>,
	Eric Sandeen <esandeen@redhat.com>,
	Steven Whitehouse <swhiteho@redhat.com>,
	Edward Shishkin <edward@redhat.com>,
	Josef Bacik <jbacik@redhat.com>, Jeff Moyer <jmoyer@redhat.com>,
	Chris Mason <chris.mason@oracle.com>,
	"Whitney, Eric" <eric.whitney@hp.com>,
	Theodore Tso <tytso@mit.edu>
Subject: Re: large fs testing
Date: Tue, 26 May 2009 13:47:44 -0400
Message-ID: <4A1C2B40.30102@redhat.com>
In-Reply-To: <5971.1243359565@gamaville.dokosmarshall.org>

On 05/26/2009 01:39 PM, Nick Dokos wrote:
>> (3) FS creation time - can you create a file system in reasonable
>> time? (mkfs.xfs took seconds, mkfs.ext4 took 90 minutes). I think that
>> 90 minutes is definitely on the painful side, but usable for most.
>>
>
> I get better numbers for some reason: on a 32 TiB filesystem (16 LUNs,
> 2TiB each, 128KiB stripes at both the RAID controller and in LVM), using
> the following options, I get:
>
> # time mke2fs -q -t ext4 -O ^resize_inode -E stride=32,stripe-width=512,lazy_itable_init=1 /dev/mapper/bigvg-bigvol
>
> real	1m2.137s
> user	0m58.934s
> sys	0m1.981s
>
>
> Without lazy_itable_init, I get
>
> # time mke2fs -q -t ext4 -O ^resize_inode -E stride=32,stripe-width=512 /dev/mapper/bigvg-bigvol
>
> real	12m54.510s
> user	1m4.786s
> sys	11m44.762s
>
> Thanks,
> Nick

Hi Nick,

My runs were without lazy init, so assuming mkfs time scales linearly with capacity I would 
have expected them to be a little more than twice as slow as your second run, not the three 
times slower that I actually saw. My box also had limited DRAM (6GB) and only a single HBA, 
but I am afraid I never got any good insight into what the bottleneck was during my runs. 
Also, I am pretty certain that most arrays do better with more, smaller LUNs (like you had) 
than with fewer, larger ones.

Do you have access to even larger storage, say the mythical 100TB :-) ? Any insight into 
interesting workloads?

Thanks!

Ric
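
P.S. If I read your geometry right, the stride and stripe-width values above fall straight 
out of the 128 KiB stripe size, assuming the default 4 KiB filesystem block size for a 
filesystem this large:

   # 128 KiB RAID stripe / 4 KiB filesystem block = 32 blocks   ->  stride=32
   # 32 blocks per LUN * 16 LUNs                   = 512 blocks  ->  stripe-width=512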


Thread overview: 10+ messages
2009-05-23 13:53 large fs testing Ric Wheeler
2009-05-26 12:21 ` Joshua Giles
2009-05-26 12:28   ` Ric Wheeler
2009-05-26 17:39 ` Nick Dokos
2009-05-26 17:47   ` Ric Wheeler [this message]
2009-05-26 21:21     ` Andreas Dilger
2009-05-26 21:39       ` Theodore Tso
2009-05-26 22:17       ` Ric Wheeler
2009-05-28  6:30         ` Andreas Dilger
2009-05-28 10:52           ` Ric Wheeler
