From: Benjamin ESTRABAUD <ben.estrabaud@mpstor.com>
To: Nicolas Noble <nicolas@nobis-crew.org>, John Stoffel <john@stoffel.org>
Cc: linux-raid@vger.kernel.org
Subject: Re: Failure propagation of concatenated raids ?
Date: Wed, 15 Jun 2016 10:29:54 +0100
Message-ID: <57612012.9080902@mpstor.com>
In-Reply-To: <CAAkR8+vfXJOtYUBvkZXxQhXwh-e2t6yGuarLW4MGS8B+YKtUZg@mail.gmail.com>
On 15/06/16 10:18, Nicolas Noble wrote:
>> it
>> *might* make sense to look at ceph or some other distributed
>> filesystem.
>
> I was trying to avoid that, mainly because it doesn't seem to be as
> well supported as a more straightforward raid+lvm2 setup. But I might
> be willing to reconsider my position in light of such data losses.
>
>> no filesystem I know handles that without either going
>> readonly, or totally locking up.
>
> Which, to be fair, is exactly what I'm looking for. I'd rather see the
> filesystem lock itself up until a human brings the failed raid back
> online. But my recent experience and experiments show that the
> filesystems don't actually lock themselves up, don't go read-only for
> quite some time, and heavy data corruption then follows. I'd be much
> happier if the filesystem locked itself up instead of slowly
> destroying itself over time.
Hi Nicolas,
I have limited experience in this domain, but I've usually observed
that if the filesystem (say, XFS) is unable to read or write its
superblock, it immediately goes read-only. MD will remain online and
provide best-effort service whenever possible, but as you pointed out
this is risky if you still believe your RAID offers parity protection
while it is in fact degraded. I think in your case you're better off
stopping an array that has lost more drives than its parity can
absorb, either with a udev rule or with mdadm --monitor.
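For the mdadm --monitor route, something along these lines could work
(a rough, untested sketch; the handler path and the choice of events
are only examples, and note that --stop and --readonly will both
refuse with "device busy" while LVM or a mounted filesystem still
holds the array open for writing, so upper layers may need tearing
down first):

  #!/bin/sh
  # Sketch of an alert handler for mdadm --monitor. mdadm runs the
  # program with two or three arguments: the event name, the md
  # device, and (for some events) the component device involved.
  # Start the monitor with something like:
  #   mdadm --monitor --scan --daemonise \
  #         --program=/usr/local/bin/md-fail-handler

  EVENT="$1"
  MD_DEV="$2"

  case "$EVENT" in
      Fail|DegradedArray)
          logger -t md-fail-handler "$EVENT on $MD_DEV, taking it offline"
          # Try to stop the array outright; fall back to marking it
          # read-only if something still holds it open for writing.
          mdadm --stop "$MD_DEV" 2>/dev/null || mdadm --readonly "$MD_DEV"
          ;;
  esac

Acting on every Fail event is blunter than waiting until redundancy is
actually exhausted; you could refine the handler by comparing
/sys/block/mdX/md/degraded against the number of failures the RAID
level tolerates. Also keep in mind that DegradedArray only fires when
the monitor first notices an already-degraded array; Fail is what
you'd see when a member drops out at runtime.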
Regards,
Ben.
Thread overview: 12+ messages
2016-06-14 21:43 Failure propagation of concatenated raids ? Nicolas Noble
2016-06-14 22:41 ` Andreas Klauer
2016-06-14 23:35 ` Nicolas Noble
2016-06-15 0:48 ` Andreas Klauer
2016-06-15 9:11 ` Nicolas Noble
2016-06-15 1:37 ` John Stoffel
2016-06-15 9:18 ` Nicolas Noble
2016-06-15 9:29 ` Benjamin ESTRABAUD [this message]
2016-06-15 9:49 ` Nicolas Noble
2016-06-15 14:45 ` Benjamin ESTRABAUD
2016-06-15 14:59 ` John Stoffel
2016-06-15 14:56 ` John Stoffel