From: Ric Wheeler <ric@emc.com>
To: Valerie Henson <val_henson@linux.intel.com>
Cc: linux-fsdevel@vger.kernel.org, Arjan van de Ven <arjan@linux.intel.com>
Subject: Re: topics for the file system mini-summit
Date: Sat, 03 Jun 2006 09:50:16 -0400
Message-ID: <44819398.4030603@emc.com>
In-Reply-To: <20060601021908.GL10420@goober>
Valerie Henson wrote:
>On Thu, May 25, 2006 at 02:44:50PM -0700, Ric Wheeler wrote:
>
>> (1) repair/fsck time can take hours or even days depending on the
>>health of the file system and its underlying disk as well as the number
>>of files. This does not work well for large servers and is a disaster
>>for "appliances" that need to run these commands buried deep in some
>>data center without a person watching...
>> (2) most file system performance testing is done on "pristine" file
>>systems with very few files. Performance degrades very noticeably over
>>time on very large file systems, especially at very high file counts.
>> (3) very poor fault containment for these very large devices - it
>>would be great to be able to ride through a failure of a segment of the
>>underlying storage without taking down the whole file system.
>>
>>The obvious alternative to this is to break up these big disks into
>>multiple small file systems, but there again we hit several issues.
>>
>
>1 and 3 are some of my main concerns, and what I want to focus a lot
>of the workshop discussion on. I view the question as: How do we keep
>file system management simple while splitting the underlying storage
>into isolated failure domains that can be repaired individually
>online? (Say that three times fast.) Just splitting up into multiple
>file systems only solves the second problem, and only if you have
>forced umount, as you noted.
>
Any thoughts about what the right semantics are for properly doing a
forced unmount, and whether it is doable in the near term (as opposed to
the more strategic, long-term issues laid out in this thread)?
ric