From: Michael Tokarev <mjt@tls.msk.ru>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: Justin Piszcz <jpiszcz@lucidpixels.com>,
Moshe Yudkowsky <moshe@pobox.com>,
linux-raid@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash)
Date: Mon, 04 Feb 2008 19:38:40 +0300
Message-ID: <47A73F90.3020307@msgid.tls.msk.ru>
In-Reply-To: <47A72061.3010800@sandeen.net>
Eric Sandeen wrote:
[]
> http://oss.sgi.com/projects/xfs/faq.html#nulls
>
> and note that recent fixes have been made in this area (also noted in
> the faq)
>
> Also - the above all assumes that when a drive says it's written/flushed
> data, that it truly has. Modern write-caching drives can wreak havoc
> with any journaling filesystem, so that's one good reason for a UPS. If
Unfortunately, a UPS does not *really* help here. Unless it runs a
control program that properly shuts the system down on loss of input
power, and the battery actually has the capacity to power the system
through that shutdown (has anyone tested this? With a new UPS? And
after a year of use, when the battery is no longer new?), the UPS
will simply cut power at an unexpected time, while the disk(s) still
hold dirty caches...
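The shutdown path Michael describes can be automated. As a sketch only (apcupsd is one common choice; the directive names are from its configuration scheme, and the thresholds here are illustrative assumptions, not values from this thread), an apcupsd.conf that shuts the system down before the battery runs dry might look like:

```
# /etc/apcupsd/apcupsd.conf -- illustrative sketch, not a tested config
UPSCABLE usb
UPSTYPE usb
DEVICE
# Initiate shutdown when any one of these limits is reached:
BATTERYLEVEL 20    # battery charge drops below 20%
MINUTES 5          # estimated runtime falls below 5 minutes
TIMEOUT 120        # or we have run on battery for 120 seconds
```

Note that the runtime estimates come from the UPS itself, which may be optimistic for an aged battery -- which is exactly the case Michael asks about, so a periodic pull-the-plug test is the only real verification.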
> the drive claims to have metadata safe on disk but actually does not,
> and you lose power, the data claimed safe will evaporate, there's not
> much the fs can do. IO write barriers address this by forcing the drive
> to flush order-critical data before continuing; xfs has them on by
> default, although they are tested at mount time and if you have
> something in between xfs and the disks which does not support barriers
> (i.e. lvm...) then they are disabled again, with a notice in the logs.
Note also that barriers are NOT supported with Linux software RAID
(md), so on md devices XFS falls back in exactly this way.
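One way to check which case applies to a given mount is to look for the fallback notice in the kernel log. A small sketch (the sample message below is my recollection of the XFS wording of this era, so treat its exact text as an assumption; on a real system you would grep dmesg or /var/log/kern.log instead of a temp file):

```shell
# Scan saved kernel-log output for XFS's barrier-fallback notice.
# The sample line is illustrative of the message format, not a capture
# from a real system.
logfile=$(mktemp)
echo 'Filesystem "md0": Disabling barriers, not supported by the underlying device' > "$logfile"

if grep -q 'Disabling barriers' "$logfile"; then
    status="barriers-disabled"
else
    status="barriers-ok"
fi
echo "$status"
rm -f "$logfile"
```

If the notice appears, writes are relying entirely on the drive honoring its cache, and a power cut lands you in the situation described above.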
/mjt
Thread overview: 25+ messages
2008-02-03 19:15 RAID needs more to survive a power hit, different /boot layout for example (was Re: draft howto on making raids for surviving a disk crash) Moshe Yudkowsky
2008-02-03 20:01 ` Robin Hill
2008-02-03 20:46 ` Moshe Yudkowsky
2008-02-03 22:01 ` Robin Hill
2008-02-04 11:06 ` Moshe Yudkowsky
2008-02-04 11:40 ` Robin Hill
2008-02-03 20:28 ` Michael Tokarev
2008-02-03 20:54 ` Moshe Yudkowsky
2008-02-03 21:04 ` Michael Tokarev
2008-02-04 9:27 ` Michael Tokarev
2008-02-04 10:58 ` Moshe Yudkowsky
2008-02-04 13:52 ` Michael Tokarev
2008-02-04 14:09 ` Justin Piszcz
2008-02-04 14:25 ` Eric Sandeen
2008-02-04 14:42 ` Eric Sandeen
2008-02-04 15:31 ` Moshe Yudkowsky
2008-02-04 16:45 ` Eric Sandeen
2008-02-04 17:22 ` Michael Tokarev
2008-02-05 12:31 ` Linda Walsh
2008-02-04 16:38 ` Michael Tokarev [this message]
2008-02-04 19:02 ` Richard Scobie
2008-02-04 22:27 ` Justin Piszcz
2008-02-06 1:12 ` Linda Walsh
2008-02-06 2:12 ` Michael Tokarev
2008-02-06 9:14 ` Luca Berra