From: Andreas Dilger <adilger@clusterfs.com>
To: Ric Wheeler <ric@emc.com>
Cc: linux-fsdevel@vger.kernel.org
Subject: Re: topics for the file system mini-summit
Date: Fri, 26 May 2006 10:48:56 -0600
Message-ID: <20060526164856.GQ5964@schatzie.adilger.int>
In-Reply-To: <44762552.8000906@emc.com>

On May 25, 2006  14:44 -0700, Ric Wheeler wrote:
> With both ext3 and with reiserfs, running a single large file system
> translates into several practical limitations before we even hit the
> existing size limitations:
> 
>    (1) repair/fsck time can take hours or even days depending on the
> health of the file system and its underlying disk as well as the number
> of files.  This does not work well for large servers and is a disaster
> for "appliances" that need to run these commands buried deep in some
> data center without a person watching...
>    (2) most file system performance testing is done on "pristine" file
> systems with very few files.  Over time, especially at very high file
> counts, very large file systems degrade quite noticeably.
>    (3) very poor fault containment for these very large devices - it
> would be great to be able to ride through a failure of a segment of the
> underlying storage without taking down the whole file system (sketch of
> today's whole-filesystem granularity just below).
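
For a sense of today's containment granularity, here is a minimal
userspace sketch of mounting ext3 with errors=remount-ro, the closest
existing knob: on the first detected error the whole filesystem flips
read-only rather than oopsing, so containment is all-or-nothing per
filesystem.  The device and mount point below are made-up examples.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mount.h>

int main(void)
{
	/* hypothetical device and mount point */
	if (mount("/dev/sdb1", "/srv/data", "ext3", MS_NOATIME,
		  "errors=remount-ro") != 0) {
		fprintf(stderr, "mount ext3: %s\n", strerror(errno));
		return 1;
	}
	return 0;
}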
> 
> The obvious alternative to this is to break up these big disks into
> multiple small file systems, but there again we hit several issues.
> 
> As an example, in one of the boxes that I work with we have 4 drives,
> each 500GB, with limited memory and CPU resources. To address the
> issues above, we break each drive into 100GB chunks which gives us 20
> (reiserfs) file systems per box.  The set of new problems that arise
> from this include:
> 
>    (1) no forced unmount - if one file system goes down, you have to
> reboot the box to recover (see the umount2() sketch after this list).
>    (2) worst case memory consumption for the journal scales linearly
> with the number of file systems (32MB per file system).
>    (3) we take away the ability of the file system to do intelligent
> head movement on the drives (i.e., I end up begging the application team
> to please only use one file system per drive at a time for ingest ;-)).
> The same goes for allocation - we basically have to push balancing up
> to the application so that capacity is used evenly.
>    (4) the pain of administering multiple file systems.
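
To put a number on (2): 20 file systems x 32MB is ~640MB of worst-case
journal memory on a box that is already short on memory.  And on (1),
the closest existing userspace knobs are the MNT_FORCE and MNT_DETACH
flags to umount2(2) - a minimal sketch, with the caveat that MNT_FORCE
mostly helps network filesystems and MNT_DETACH only detaches the name,
it does not fail in-flight I/O the way a true forced unmount would:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mount.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}
	/* try a forced unmount first, then fall back to a lazy detach */
	if (umount2(argv[1], MNT_FORCE) == 0)
		return 0;
	fprintf(stderr, "MNT_FORCE: %s, trying MNT_DETACH\n", strerror(errno));
	if (umount2(argv[1], MNT_DETACH) == 0)
		return 0;	/* detached, but in-flight I/O can still block */
	fprintf(stderr, "MNT_DETACH: %s\n", strerror(errno));
	return 1;
}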
> 
> I know that other file systems deal with scale better, but the question
> is really how to move the mass of Linux users onto these large and
> increasingly common storage devices in a way that handles these challenges.

In a way, what you describe is Lustre - it aggregates multiple "smaller"
filesystems into a single large filesystem from the application's POV
(though in many cases the "smaller" filesystems are 2TB).  It runs e2fsck
in parallel if needed, does smart object allocation (clients do delayed
allocation and can load-balance across storage targets, etc.), and can
keep running with storage targets down.
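
As a toy illustration of that allocation policy (this is not Lustre's
actual code; every name below is made up), one can pick the next
object's storage target by free space while simply skipping targets
that are down:

#include <stdio.h>

/* one object storage target; all fields are illustrative */
struct ost {
	const char *name;
	unsigned long long free_kb;	/* free space in KB */
	int up;				/* 0 = target is down */
};

/* pick the live target with the most free space */
static struct ost *pick_target(struct ost *osts, int n)
{
	struct ost *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (!osts[i].up)
			continue;	/* keep running with a target down */
		if (best == NULL || osts[i].free_kb > best->free_kb)
			best = &osts[i];
	}
	return best;
}

int main(void)
{
	struct ost osts[] = {
		{ "ost0", 500000, 1 },
		{ "ost1", 900000, 1 },
		{ "ost2", 700000, 0 },	/* down: skipped, not fatal */
	};
	struct ost *t = pick_target(osts, 3);

	printf("next object goes to %s\n", t ? t->name : "(no target up)");
	return 0;
}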

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

