From: Eric Sandeen <sandeen@sandeen.net>
To: Mike Ashton <mike@fysh.org>
Cc: Andi Kleen <andi@firstfloor.org>, xfs@oss.sgi.com
Subject: Re: fsck.xfs proposed improvements
Date: Thu, 23 Apr 2009 07:45:25 -0500
Message-ID: <49F062E5.70800@sandeen.net>
In-Reply-To: <20090423084900.GB16600@fysh.org>
Mike Ashton wrote:
> On Wed, Apr 22, 2009 at 11:45:11PM +0200, Andi Kleen wrote:
>> Mike Ashton <mike@fysh.org> writes:
>>
>>> With badly behaved hardware,
>>> which seem prevalent, or any bugs which do get into xfs we could
>>> actually end up with xfs being less fault tolerant and less reliable
>>> in general use than other filesystems, which would be a bit of a
>>> shame.
>> Most Linux file systems are not very fault tolerant in this sense;
>> e.g. on ext3 you have to press return and accept lots of scary
>> messages to get through fsck.
>
> Perhaps, but anecdotally/subjectively I've never had a ext3 based
> system fail to boot because I turned it off and on again.
<hand_wave> xfs log replay may be more sensitive... </hand_wave>
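(For anyone following along who hits a failed log replay at boot: the usual recovery sequence, sketched below with a placeholder device path, goes from least to most destructive.)

```shell
# 1. Try a normal mount first; this replays the log if it is dirty.
mount /dev/sdXN /mnt

# 2. If the mount fails, check the filesystem without modifying it.
#    xfs_repair will refuse to run for real while the log is dirty.
xfs_repair -n /dev/sdXN

# 3. Only as a last resort, zero the log and repair; this discards
#    any metadata changes still sitting in the unreplayed log.
xfs_repair -L /dev/sdXN
```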
> I've had
> this happen with xfs root filesystems about 15 times over the past few
> years. I'm getting to the point where I'm starting to question the
> wisdom of choosing xfs for my systems - whether it's actually mature
> enough for use in server environments - which given that it's the one
> which ought to be a total no-brainer in this respect, is a worry.
Server environments probably *normally* are in better shape for power
consistency, but still...
> I think even if I can't persuade you guys to make official
> improvements, I've got enough information to make ad-hoc improvements
> to my own systems, but I'm going to have a hard time on the advocacy
> front. xfs rocks, but a system is only as good as its last power cut
> (or something).
>
> I'm hopeful that my readonly/norecovery tuning idea might catch
> someone's imagination, but we'll have to see.
It certainly does sound like an interesting idea, but others' concerns
are relevant too. The issues around how the root filesystem gets
mounted would need to be pretty clearly addressed. Maybe you can spell
out your original proposal again, with updates to handle that issue?
(as an aside, there have been arguments in the past that readonly mounts
should not do recovery at all - i.e. "mount -o ro" doesn't just mean
that you can only read the filesystem, but that the mount will only ever
read the block device...)
-Eric
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
[not found] <mailman.0.1240318659.128675.xfs@oss.sgi.com>
2009-04-21 14:23 ` fsck.xfs proposed improvements Mike Ashton
2009-04-21 22:09 ` Russell Cattelan
2009-04-22 9:45 ` Mike Ashton
2009-04-22 21:45 ` Andi Kleen
2009-04-23 8:49 ` Mike Ashton
2009-04-23 12:45 ` Eric Sandeen [this message]
[not found] ` <20090423141432.GC16600@fysh.org>
2009-04-23 14:35 ` Mike Ashton
2009-04-23 16:19 ` Russell Cattelan
2009-04-24 9:21 ` Mike Ashton