linux-fsdevel.vger.kernel.org archive mirror
From: Matthew Wilcox <matthew@wil.cx>
To: Ric Wheeler <ric@emc.com>
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: topics for the file system mini-summit
Date: Sun, 28 May 2006 18:11:03 -0600	[thread overview]
Message-ID: <20060529001103.GA23405@parisc-linux.org> (raw)
In-Reply-To: <44762552.8000906@emc.com>

On Thu, May 25, 2006 at 02:44:50PM -0700, Ric Wheeler wrote:
> The obvious alternative to this is to break up these big disks into
> multiple small file systems, but there again we hit several issues.
> 
> As an example, in one of the boxes that I work with we have 4 drives,
> each 500GBs, with limited memory and CPU resources. To address the
> issues above, we break each drive into 100GB chunks which gives us 20
> (reiserfs) file systems per box.  The set of new problems that arise
> from this include:
> 
>    (1) no forced unmount - one file system goes down, you have to
> reboot the box to recover.
>    (2) worst case memory consumption for the journal scales linearly
> with the number of file systems (32MB/per file system).
>    (3) we take away the ability of the file system to do intelligent
> head movement on the drives (i.e., I end up begging the application team
> to please only use one file system per drive at a time for ingest ;-)).
> The same goes for allocation - we basically have to push this up to the
> application to use the capacity in an even way.
>    (4) pain of administration of multiple file systems.
> 
> I know that other file systems deal with scale better, but the question
> is really how to move the mass of linux users onto these large and
> increasingly common storage devices in a way that handles these challenges.
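[Editorial note: a back-of-envelope sketch of the numbers quoted above — the drive count, chunk size, and 32MB-per-filesystem journal figure come from the mail; the arithmetic is just spelled out for concreteness.]

```python
# Numbers taken from the quoted setup: 4 drives of 500 GB each,
# split into 100 GB chunks, with a worst-case 32 MB journal per
# filesystem (the reiserfs figure quoted in the mail).
drives = 4
drive_gb = 500
chunk_gb = 100
journal_mb = 32

fs_count = drives * (drive_gb // chunk_gb)
worst_case_journal_mb = fs_count * journal_mb

print(fs_count)                # 20 filesystems per box
print(worst_case_journal_mb)   # 640 MB pinned in the worst case
```

On a box described as having limited memory, that worst-case 640 MB is the scaling problem point (2) is complaining about.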

How do you handle the inode number space?  Do you partition it across
the sub-filesystems, or do you prohibit hardlinks between the sub-fses?
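[Editorial note: one hypothetical way to partition the inode number space, sketched to illustrate the trade-off behind the question. The bit split and function names are invented for illustration; no real filesystem is being described.]

```python
# Hypothetical scheme: reserve the top bits of a 64-bit inode number
# for a sub-filesystem index, leaving the rest for each sub-fs's own
# inode space.
SUBFS_BITS = 8                   # up to 256 sub-filesystems
INO_BITS = 64 - SUBFS_BITS       # per-sub-fs inode space

def global_ino(subfs, local_ino):
    """Combine a sub-fs index and a local inode number."""
    assert 0 <= subfs < (1 << SUBFS_BITS)
    assert 0 < local_ino < (1 << INO_BITS)
    return (subfs << INO_BITS) | local_ino

def split_ino(ino):
    """Recover (sub-fs index, local inode number) from a global number."""
    return ino >> INO_BITS, ino & ((1 << INO_BITS) - 1)

print(split_ino(global_ino(3, 12345)))   # (3, 12345)
```

Under this scheme a hard link across sub-filesystems is impossible by construction, since the sub-fs index is baked into every inode number — which is exactly the trade-off the question raises: partition the space, or prohibit cross-sub-fs hardlinks.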


Thread overview: 24+ messages
2006-05-25 21:44 topics for the file system mini-summit Ric Wheeler
2006-05-26 16:48 ` Andreas Dilger
2006-05-27  0:49   ` Ric Wheeler
2006-05-27 14:18     ` Andreas Dilger
2006-05-28  1:44       ` Ric Wheeler
2006-05-29  0:11 ` Matthew Wilcox [this message]
2006-05-29  2:07   ` Ric Wheeler
2006-05-29 16:09     ` Andreas Dilger
2006-05-29 19:29       ` Ric Wheeler
2006-05-30  6:14         ` Andreas Dilger
2006-06-07 10:10       ` Stephen C. Tweedie
2006-06-07 14:03         ` Andi Kleen
2006-06-07 18:55         ` Andreas Dilger
2006-06-01  2:19 ` Valerie Henson
2006-06-01  2:42   ` Matthew Wilcox
2006-06-01  3:24     ` Valerie Henson
2006-06-01 12:45       ` Matthew Wilcox
2006-06-01 12:53         ` Arjan van de Ven
2006-06-01 20:06         ` Russell Cattelan
2006-06-02 11:27         ` Nathan Scott
2006-06-01  5:36   ` Andreas Dilger
2006-06-03 13:50   ` Ric Wheeler
2006-06-03 14:13     ` Arjan van de Ven
2006-06-03 15:07       ` Ric Wheeler
