From: Ric Wheeler <rwheeler@redhat.com>
To: Andreas Dilger <adilger@sun.com>
Cc: linux-ext4@vger.kernel.org, "Ted Ts'o" <tytso@thunk.org>
Subject: Re: large file system & high object count testing
Date: Mon, 31 Aug 2009 17:02:51 -0400	[thread overview]
Message-ID: <4A9C3A7B.3050302@redhat.com> (raw)
In-Reply-To: <20090831205608.GE4197@webber.adilger.int>

On 08/31/2009 04:56 PM, Andreas Dilger wrote:
> On Aug 31, 2009  13:02 -0400, Ric Wheeler wrote:
>> One more note - this file system was filled using fs_mark, but without
>> doing any fsync() calls.
>>
>> umount:
>>
>> Aug 31 10:19:27 megadeth kernel: EXT4-fs: mballoc: 2580708130 blocks
>> 516141626 reqs (511081408 success)
>> Aug 31 10:19:27 megadeth kernel: EXT4-fs: mballoc: 5060218 extents
>> scanned, 0 goal hits, 5060218 2^N hits, 0 breaks, 0 lost
>> Aug 31 10:19:27 megadeth kernel: EXT4-fs: mballoc: 85164 generated and
>> it took 471527376
>> Aug 31 10:19:27 megadeth kernel: EXT4-fs: mballoc: 2590831616
>> preallocated, 10120312 discarded
>>
>> Mount after fsck:
>> Aug 31 12:27:12 megadeth kernel: EXT4-fs (dm-75):
>> ext4_check_descriptors: Checksum for group 487 failed (59799!=46827)
>> Aug 31 12:27:12 megadeth kernel: EXT4-fs (dm-75): group descriptors
>> corrupted!
>>
>> The MBALLOC messages are a bit worrying - what exactly gets discarded
>> during an unmount?
>
> The in-memory preallocation areas are discarded.  This is reporting
> that of the 2590M preallocation areas it reserved, only 10M of them
> were discarded during the lifetime of the filesystem.
>
> Of the other stats:
> - 471 seconds were spent in total generating the 85k buddy bitmaps
>    (this is done incrementally at runtime)
> - 516M calls to mballoc to find a chunk of blocks, 511M calls were able
>    to find the requested chunk (not surprising given it is a new filesystem,
>    probably the 5M calls that failed were when the fs was nearly full)
>
> Cheers, Andreas
> --
> Andreas Dilger
> Sr. Staff Engineer, Lustre Group
> Sun Microsystems of Canada, Inc.
>
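
For reference, the fill mentioned at the top of the quoted message was an
fs_mark run with no fsync() calls; something along these lines would be
roughly equivalent (the directory, file count and thread count below are
illustrative placeholders, not the exact parameters of that run):

   # illustrative only: create many ~20KB files, -S 0 disables fsync/sync
   fs_mark -d /mnt/test/dir -D 256 -n 100000 -t 8 -s 20480 -S 0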

This file system was never more than 7% full - I would guess the 511M
successful calls correspond, more or less, to one allocation request per
20KB file.
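
As a quick sanity check on that guess, rearranging the numbers from the
unmount log above (just arithmetic on the logged counters, nothing new
measured here):

   # ~5 blocks per allocation request, i.e. one request per 20KB file on 4KB blocks
   echo $(( 2580708130 / 516141626 ))          # -> 5
   # ~99% of requests got the chunk they asked for
   echo $(( 511081408 * 100 / 516141626 ))     # -> 99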

ric


Thread overview: 12+ messages
2009-08-31 16:34 large file system & high object count testing Ric Wheeler
2009-08-31 17:02 ` Ric Wheeler
2009-08-31 20:56   ` Andreas Dilger
2009-08-31 21:02     ` Ric Wheeler [this message]
2009-08-31 21:25       ` Justin Maggard
2009-08-31 22:20         ` Ric Wheeler
2009-08-31 23:13         ` Andreas Dilger
2009-08-31 23:37           ` Justin Maggard
2009-09-02  9:15             ` Andreas Dilger
2009-08-31 20:19 ` Andreas Dilger
2009-08-31 21:01   ` Ric Wheeler
2009-08-31 23:16     ` Andreas Dilger
