public inbox for linux-btrfs@vger.kernel.org
From: Jim Salter <jim@jrs-s.net>
To: Duncan <1i5t5.duncan@cox.net>, linux-btrfs@vger.kernel.org
Subject: Re: Scrubbing with BTRFS Raid 5
Date: Tue, 21 Jan 2014 12:18:01 -0500	[thread overview]
Message-ID: <52DEABC9.2040205@jrs-s.net> (raw)
In-Reply-To: <pan$226f4$e1329373$53c2dcca$438e95e@cox.net>

Would it be reasonably accurate to say "btrfs' RAID5 implementation is 
likely working well enough, and safe enough, provided you back up 
regularly and are willing and able to restore from backup if a device 
failure goes horribly wrong", then?

This is a reasonably serious question. My typical scenario runs along 
the lines of two identical machines with regular filesystem replication 
between them; in the event of something going horribly horribly wrong 
with the production machine, I just spin up services on the replicated 
machine - making it "production" - and then deal with the broken one at 
relative leisure.

If the worst thing wrong with RAID5/6 in current btrfs is "might not 
deal as well as you'd like with a really nasty example of single-drive 
failure", that would likely be livable for me.

On 01/21/2014 12:08 PM, Duncan wrote:
 > What you're missing is that device death and replacement rarely happens
 > as neatly as your test (clean unmounts and all, no middle-of-process
 > power-loss, etc).  You tested best-case, not real-life or worst-case.
 >
 > Try that again, setting up the raid5, setting up a big write to it,
 > disconnect one device in the middle of that write (I'm not sure if just
 > dropping the loop works or if the kernel gracefully shuts down the loop
 > device), then unplugging the system without unmounting... and /then/ see
 > what sense btrfs can make of the resulting mess.  In theory, with an
 > atomic write btree filesystem such as btrfs, even that should work fine,
 > minus perhaps the last few seconds of file-write activity, but the
 > filesystem should remain consistent on degraded remount and device add,
 > device remove, and rebalance, even if another power-pull happens in the
 > middle of /that/.
 >
 > But given btrfs' raid5 incompleteness, I don't expect that will work.
 >

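For concreteness, Duncan's proposed worst-case test might be sketched roughly as below. Everything here is an assumption, not a procedure from this thread: device names, image sizes, and mount points are invented for illustration, the script needs root, and it will destroy data on the backing files and filesystem involved.

```shell
# Create three sparse backing files and attach them as loop devices.
# (Paths and sizes are illustrative.)
for i in 0 1 2; do
    truncate -s 2G /tmp/btrfs-disk$i.img
    losetup /dev/loop$i /tmp/btrfs-disk$i.img
done

# Build a three-device raid5 filesystem and mount it.
mkfs.btrfs -d raid5 -m raid5 /dev/loop0 /dev/loop1 /dev/loop2
mkdir -p /mnt/test
mount /dev/loop0 /mnt/test

# Start a large write, then yank one device mid-write. As Duncan
# notes, it's unclear whether dropping the loop device is abrupt
# enough; losetup -d on a busy device may fail or defer the detach.
dd if=/dev/urandom of=/mnt/test/bigfile bs=1M count=1024 &
sleep 5
losetup -d /dev/loop2

# Simulate power loss here without unmounting (e.g. a hard reset),
# then after reboot attempt the degraded mount and rebuild:
mount -o degraded /dev/loop0 /mnt/test
btrfs device add /dev/loop3 /mnt/test
btrfs device delete missing /mnt/test
btrfs balance start /mnt/test
```

The interesting question, per the quoted message, is whether the filesystem stays consistent through the degraded mount, device replacement, and rebalance, even if power is pulled again mid-rebalance.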

Thread overview: 14+ messages
2014-01-21  9:06 Scrubbing with BTRFS Raid 5 Graham Fleming
2014-01-21 17:08 ` Duncan
2014-01-21 17:18   ` Jim Salter [this message]
2014-01-21 17:38     ` Chris Murphy
2014-01-21 18:25       ` Jim Salter
2014-01-22 16:02     ` Duncan
2014-01-22 20:45   ` Chris Mason
2014-01-22 21:06     ` ronnie sahlberg
2014-01-22 21:16       ` Chris Mason
2014-01-22 22:36         ` ronnie sahlberg
  -- strict thread matches above, loose matches on Subject: below --
2014-01-21 18:03 Graham Fleming
2014-01-22 15:39 ` Duncan
2014-01-20  0:53 Graham Fleming
2014-01-20 13:21 ` Duncan
