From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: raid6 + hot spare question
Date: Thu, 10 Sep 2015 00:28:20 +0000 (UTC)	[thread overview]
Message-ID: <pan$3fecf$25489cad$6440daf2$1ed2047@cox.net> (raw)
In-Reply-To: 55F054BB.8090109@swiftspirit.co.za

Brendan Hide posted on Wed, 09 Sep 2015 17:48:11 +0200 as excerpted:

> Things can be a little more nuanced.
> 
> First off, I'm not even sure btrfs supports a hot spare currently. I
> haven't seen anything along those lines recently in the list - and don't
> recall anything along those lines before either. The current mention of
> it in the Project Ideas page on the wiki implies it hasn't been looked
> at yet.

Btrfs doesn't support hot spares... yet.  As mentioned, it's on the 
project-ideas list and, given how practical the feature is, it's likely 
to be implemented at some point, but given the reality of btrfs 
development speed, that's likely to be some years away.

The best that can be done is a "warm spare": connected up but 
(presumably) spun down and not part of the raid, so it can be brought 
online (remotely if necessary) and added to the raid as needed.  That's 
certainly possible, but not as a btrfs-specific feature; rather, as a 
general part of the Linux infrastructure.
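
For the record, the manual steps amount to something like the below.  
This is only a sketch; the device names and mountpoint are placeholders, 
and the spare will spin up on first access:

  mount -o degraded /dev/sda /mnt/pool   # mount minus the failed device
  btrfs device add /dev/sde /mnt/pool    # bring the warm spare into the fs
  btrfs device delete missing /mnt/pool  # rebuild the missing device's
                                         # data onto the remaining devices,
                                         # now including the spare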

> Also, depending on your experience with btrfs, some of the tasks
> involved in fixing up a missing/dead disk might be daunting.

Yes...

> On 2015-09-08 14:12, Hugo Mills wrote:
>> On Tue, Sep 08, 2015 at 01:59:19PM +0200, Peter Keše wrote:
>>> <snip>
>>> My assumption is that if one hard drive fails before the volume is
>>> more than 8TB full, I can just rebalance and resize the volume from
>>> 12 TB back to 8 TB (essentially going from 5-drive raid6 to 4-drive
>>> raid6).
>>>
>>> Can anyone confirm my assumption? Can I indeed rebalance from 5-drive
>>> raid6 to 4-drive raid6 if the volume is not too big?
>> 
>> Yes, you can, provided, as you say, the data is small enough to fit
>> into the reduced filesystem.
>>
> This is true - however, I'd be hesitant to build this up due to the
> current process not being very "smooth" depending on how unlucky you
> are.  [W]ill the filesystem still be read/write or read-only post-
> reboot? Will it "just work" with the only requirement being free space
> on the four working disks?

As long as there are four working devices and chunk-unallocated[1] space 
on them, yes, reducing to a 4-device raid6 should be fine.  What happens 
is that raid6 normally requires writing across at least four devices[2]: 
a two-way data stripe plus two parities.  If devices drop out and 
existing chunks with free space no longer span four devices, btrfs will 
leave those chunks be and try to allocate additional chunks across the 
remaining devices, down to four[2].  If it can do so, writing continues 
in the now reduced-stripe-width raid6.  If not, there's a chance of 
going read-only, as it can no longer satisfy the raid6 requirements.[3]
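
To see whether that unallocated space is actually there, compare each 
device's size against its used (i.e. chunk-allocated) space.  The 
mountpoint is a placeholder, and the second command needs a reasonably 
current btrfs-progs:

  btrfs filesystem show /mnt/pool    # per-device size vs. used (allocated)
  btrfs filesystem usage /mnt/pool   # summary, including unallocated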
 
> RAID6 is intended to be tolerant of two disk failures. In the case of
> there being a double failure and only 5 disks, the ease with which the
> user can balance/convert to a 3-disk raid5 is also important.

Again, see footnote [2] and [3] below.
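
For reference, the conversion itself is a balance with convert filters, 
run after mounting degraded.  This is only a sketch with placeholder 
device name and mountpoint, and I've not tested it in the double-failure 
case myself:

  mount -o degraded /dev/sda /mnt/pool
  btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt/pool
  btrfs device delete missing /mnt/pool  # drop a dead device from the fs;
                                         # repeat if more than one is missing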

---
[1] Btrfs allocates space in two stages: first in largish chunks 
dedicated to either data or metadata (nominal chunk size 1 GiB for data, 
256 MiB for metadata), then actually using space from a chunk until it's 
gone and a new one needs to be allocated.  It's quite possible for 
normal df, etc, to report space left, yet have it all locked up in 
pre-allocated chunks (typically data), with no unallocated space left 
from which to allocate new chunks (typically metadata) when needed.  
That used to be a big issue, since btrfs could automatically allocate 
chunks but it took a balance to deallocate them; now btrfs deallocates 
entirely empty chunks on its own.  The problem can still occur, 
especially over time as existing chunks get fragmented and more chunks 
end up only partially used, but it's not the /huge/ problem it once was, 
because at least entirely empty chunks are automatically deallocated and 
their space returned to the unallocated pool, to be chunk-allocated 
again as necessary.
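
When space does get locked up in mostly-empty chunks, the usual remedy 
is a filtered balance, which rewrites (and thereby compacts) only chunks 
below a given usage threshold.  Mountpoint and threshold here are 
placeholders:

  btrfs balance start -dusage=20 /mnt/pool   # data chunks <=20% used
  btrfs balance start -musage=20 /mnt/pool   # metadata chunks <=20% used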

[2] While traditional raid6 requires a minimum of four devices (a 
two-way data stripe plus two parities), and raid5 a minimum of three (a 
two-way data stripe plus a single parity), btrfs raid5, at least, 
degrades to single data plus single parity, which is in effect raid1, 
thus allowing a two-device "raid5".  I'm not actually sure whether btrfs 
raid6 similarly allows degrading to single data plus double parity, thus 
three devices, or not.  Of course, doing the full filesystem this way 
would require the data and metadata to fit on a single device, since the 
others hold parity.  But as a temporary fallback, where existing chunks 
are simply left as-is with data/metadata reconstructed from parity where 
necessary and only new writes use the single-data mode, it can keep the 
filesystem writable.

[3] In actuality, given a device-dropout situation, as long as the 
filesystem isn't unmounted btrfs will continue trying to write to the 
failed/dropped device: it keeps writing to the other devices and buffers 
the writes destined for the failed one in case it reappears, until 
memory is exhausted, at which point it presumably crashes.

I'm unsure about raid56 behavior on reboot/remount, but at least with 
raid1, dropping below the minimum devices required to maintain raid1 
(two) still allows mounting rw,degraded... for one mount.  In that case 
the formerly-raidN writes force allocation of new single chunks (or, for 
metadata on a single non-ssd device, dup chunks), and writing continues 
to those, allowing device delete/add/replace and rebalance as the admin 
considers appropriate.
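
With raid1 at least, the replace route on that one degraded mount looks 
something like the below.  Device name, devid and mountpoint are 
placeholders, and I've not verified this against raid56:

  mount -o degraded /dev/sdb /mnt/pool
  btrfs filesystem show /mnt/pool             # note which devid is missing
  btrfs replace start 2 /dev/sdnew /mnt/pool  # 2 = the missing devid;
                                              # rebuilds onto the new device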

The problem appears on the /second/ attempt to mount degraded writable, 
once there are existing single-mode chunks on the filesystem: btrfs sees 
the single chunks and assumes there may be single chunks on the missing 
device as well, so it blocks writes in order to prevent further damage.  
It's not smart enough to know that the only single chunks written are on 
the still-available devices.
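
Whether such single chunks were created during the degraded mount is 
easy to check (mountpoint again a placeholder); any "Data, single:" or 
"Metadata, single:" lines alongside the RAID lines are the culprits:

  btrfs filesystem df /mnt/pool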

Awareness of (the cause of) this problem is fairly recent, and there are 
patches, which I think made it into 4.2, to allow a writable degraded 
mount even with single chunks present, but I'm not sure of the 4.2 
status and, the patches being new, they may not catch all corner cases.  
Additionally, while I /think/ the same situation and thus the same 
patches apply to raid56, I'm not entirely sure of that, so some testing 
(or verification from others who have tested in raid56 mode) would be 
needed if you want to be sure.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

