public inbox for linux-btrfs@vger.kernel.org
From: waxhead <waxhead@dirtcellar.net>
To: Sean Greenslade <sean@seangreenslade.com>,
	Marc Oggier <marc.oggier@megavolts.ch>,
	linux-btrfs@vger.kernel.org
Subject: Re: Spare Volume Features
Date: Fri, 30 Aug 2019 00:41:28 +0200	[thread overview]
Message-ID: <a15fadad-9d78-d1a2-a7a6-e6fc33c417b8@dirtcellar.net> (raw)
In-Reply-To: <CD4A10E4-5342-4F72-862A-3A2C3877EC36@seangreenslade.com>



Sean Greenslade wrote:
> On August 28, 2019 5:51:02 PM PDT, Marc Oggier <marc.oggier@megavolts.ch> wrote:
>> Hi All,
>>
>> I am currently building a small data server for an experiment.
>>
>> I was wondering if the spare volume feature introduced a couple of
>> years ago (https://patchwork.kernel.org/patch/8687721/) would be
>> released soon. I think this would be awesome: to have a drive
>> installed that can be used as a spare if one drive of an array dies,
>> to avoid downtime.
>>
>> Does anyone have news about it, and when it will officially land in
>> the kernel/btrfs-progs?
>>
>> Marc
>>
>> P.S. It took me a long time to switch to btrfs. I did it less than a
>> year ago, and I love it.  Keep the great job going, y'all
> 
> I've been thinking about this issue myself, and I have an (untested) idea for how to accomplish something similar. My file server has three disks in a btrfs raid1. I added a fourth disk to the array as a normal, participating disk. I keep an eye on the usage to make sure that I never exceed three disks' worth of data. That way, if one disk dies, there are still enough disks to mount RW (though I may still need to do an explicit degraded mount, not sure). In that scenario, I can just trigger an online full balance to rebuild the missing raid copies on the remaining disks. In theory, minimal to no downtime.
> 
> I'm curious if anyone can see any problems with this idea. I've never tested it, and my offsite backups are thorough enough to survive downtime anyway.
> 
> --Sean
> 
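
As an aside, Sean's "never exceed three disks' worth" headroom rule can be
sanity-checked with a little arithmetic. The sketch below is a toy model
(the numbers are made up, not taken from any real btrfs tooling): raid1
keeps two copies of every extent, so after losing any one device the
surviving raw capacity must still hold twice the stored data.

```python
def survives_single_failure(device_sizes, data_bytes):
    """Toy check: can a btrfs raid1 array still hold all its data after
    losing any single device?  raid1 stores two copies of every extent,
    so the surviving raw capacity must be at least 2 * data_bytes.
    This ignores metadata, chunk granularity and allocation constraints,
    so it is optimistic -- in practice you would read `btrfs fi usage`."""
    if len(device_sizes) < 3:
        return False  # losing one disk leaves < 2 devices: raid1 impossible
    worst_loss = max(device_sizes)  # assume the biggest disk dies
    return sum(device_sizes) - worst_loss >= 2 * data_bytes

TiB = 2**40
# A setup like Sean's: four equal disks, stay under three disks' worth.
disks = [4 * TiB] * 4
print(survives_single_failure(disks, data_bytes=5 * TiB))  # True: headroom left
print(survives_single_failure(disks, data_bytes=7 * TiB))  # False: too full to rebuild
```

With four 4 TiB disks, 12 TiB of raw capacity survives a failure, so up to
6 TiB of raid1 data (three disks' worth, halved for the two copies) can be
rebuilt, which matches Sean's rule of thumb.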
I'm just a regular btrfs user, but I see tons of problems with this.

Once BTRFS introduces per-subvolume (or even per-file) "RAID" or 
redundancy levels, a spare device can quickly become a headache. While 
you can argue that a spare device of equal or larger size than the 
largest device in the pool would suffice in most cases, I don't think 
it is very practical.

What BTRFS needs to do (IMHO) is to reserve spare space instead. This 
means that many smaller devices can be used in case a large device keels 
over.

The spare space of course also needs to be as large as, or larger than, 
the largest device in the pool, but you would have more flexibility.
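
To illustrate that point with made-up numbers (a toy calculation, not 
anything btrfs actually implements): reserving spare *space* lets many 
small disks jointly cover for a big one, as long as the space reserved 
on the survivors adds up to at least the largest device's size.

```python
def reserved_spare_ok(device_sizes, reserve_fraction):
    """Toy model of 'reserve spare space instead of a spare device':
    every device sets aside reserve_fraction of its capacity.  The pool
    can absorb losing its largest member only if the reserved space on
    the *surviving* devices covers that member's full size.  Purely
    illustrative -- btrfs has no such knob today."""
    sizes = sorted(device_sizes)
    largest, survivors = sizes[-1], sizes[:-1]
    return sum(s * reserve_fraction for s in survivors) >= largest

TiB = 2**40
# Six equal 2 TiB disks with 25% reserved each: the five survivors hold
# 5 * 0.5 TiB = 2.5 TiB of spare space, enough to stand in for the dead disk.
print(reserved_spare_ok([2 * TiB] * 6, 0.25))              # True
# One huge disk among small ones: no modest reserve can cover its loss.
print(reserved_spare_ok([8 * TiB] + [1 * TiB] * 4, 0.25))  # False
```

The second case shows the flexibility cuts both ways: distributed spare 
space works well for pools of similarly sized devices, but one outsized 
device still needs a correspondingly large reservation elsewhere.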

For example, the spare space COULD be pre-populated with the most 
important data (hot data tracking) and serve as a speed-up for read 
operations. What is the point of having idle space just waiting to be 
used, when it could instead do useful things: increase read speed, or 
add extra redundancy for single, dup or even raid0 chunks? Using the 
spare space for SOME potential for recovery is better than not using 
it for anything.

When the spare space is actually needed, recovery can go one of two 
ways: if the spare space already holds the broken device's data, that 
data can simply be promoted (which makes for super-fast recovery); 
otherwise, any caches it is used for are dropped and it is repopulated 
by restoring the non-redundant data, for instance as soon as a certain 
error count is reached on another device, etc...
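
That two-way decision could be modelled roughly like this (purely 
hypothetical pseudologic; none of these names exist anywhere in btrfs, 
and the error-count trigger is just one possible policy):

```python
from enum import Enum, auto

class Recovery(Enum):
    PROMOTE_SPARE = auto()  # spare space already holds the data: instant recovery
    REPOPULATE = auto()     # drop caches, restore non-redundant data to the spare
    NONE = auto()           # nothing to do yet

def plan_recovery(spare_has_copy, error_count, error_threshold):
    """Hypothetical decision logic for the proposal above.  If the spare
    space was pre-populated with the failed device's data, just promote
    it; otherwise drop whatever caches live there and start repopulating
    once a device crosses an error threshold."""
    if spare_has_copy:
        return Recovery.PROMOTE_SPARE
    if error_count >= error_threshold:
        return Recovery.REPOPULATE
    return Recovery.NONE

print(plan_recovery(True, 0, 10).name)    # PROMOTE_SPARE
print(plan_recovery(False, 12, 10).name)  # REPOPULATE
```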

Just like Linux uses otherwise idle memory, I think BTRFS is better off 
using the spare space for something rather than nothing. This should of 
course be configurable, just for the record.

Anyway - that is how I, a humble user without the detailed know-how, 
think it should be implemented... :)


Thread overview: 9+ messages
2019-08-29  0:51 Spare Volume Features Marc Oggier
2019-08-29  2:21 ` Sean Greenslade
2019-08-29 22:41   ` waxhead [this message]
2019-09-01  3:28   ` Sean Greenslade
2019-09-01  8:03     ` Andrei Borzenkov
2019-09-02  0:52       ` Sean Greenslade
2019-09-02  1:09         ` Chris Murphy
2019-09-03 11:35           ` Austin S. Hemmelgarn
2019-08-30  8:07 ` Anand Jain
