linux-btrfs.vger.kernel.org archive mirror
From: Bob Marley <bobmarley@shiftmail.org>
To: linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: device balance times
Date: Wed, 22 Oct 2014 18:59:31 +0200	[thread overview]
Message-ID: <5447E273.3030901@shiftmail.org> (raw)
In-Reply-To: <5447A5CF.9060405@siedziba.pl>

On 22/10/2014 14:40, Piotr Pawłow wrote:
> On 22.10.2014 03:43, Chris Murphy wrote:
>> On Oct 21, 2014, at 4:14 PM, Piotr Pawłow <pp@siedziba.pl> wrote:
>>> Looks normal to me. Last time I started a balance after adding 6th 
>>> device to my FS, it took 4 days to move 25GBs of data.
>> It's long term untenable. At some point it must be fixed. It's way, 
>> way slower than md raid.
>> At a certain point it needs to fallback to block level copying, with 
>> a ~ 32KB block. It can't be treating things as if they're 1K files, 
>> doing file level copying that takes forever. It's just too risky that 
>> another device fails in the meantime.
>
> There's "device replace" for restoring redundancy, which is fast, but 
> not implemented yet for RAID5/6.

"Device replace" on raid 0, 1 and 10 is fast only if the device to be 
replaced is still alive; otherwise the operation takes as long as a 
rebalance and works similarly (AFAIR).
That is way too long given the likelihood of another disk failing in 
the meantime.
Additionally, it seeks like crazy during the operation, which further 
increases the likelihood of another disk failing.

Until this is fixed, I am not confident using btrfs on a production 
system that requires RAID redundancy.

The operation needs to be streamlined: it should be as sequential as 
possible (sort files by their LBA before reading/writing), with the 
fewest seeks on every disk, and with large buffers, so that reads from 
the source disk(s) and writes to the replacement disk go at or near 
platter speed.




Thread overview: 26+ messages
2014-10-21 18:59 device balance times Tomasz Chmielewski
2014-10-21 20:14 ` Piotr Pawłow
2014-10-21 20:44   ` Arnaud Kapp
2014-10-22  1:10     ` 5 _thousand_ snapshots? even 160? (was: device balance times) Robert White
2014-10-22  4:02       ` Zygo Blaxell
2014-10-22  4:05       ` Duncan
2014-10-23 20:38         ` 5 _thousand_ snapshots? even 160? Arnaud Kapp
2014-10-22 11:30       ` Austin S Hemmelgarn
2014-10-22 17:32       ` Goffredo Baroncelli
2014-10-22 11:22     ` device balance times Austin S Hemmelgarn
2014-10-22  1:43   ` Chris Murphy
2014-10-22 12:40     ` Piotr Pawłow
2014-10-22 16:59       ` Bob Marley [this message]
2014-10-23  7:39         ` Russell Coker
2014-10-23  8:49           ` Duncan
2014-10-23  9:19       ` Miao Xie
2014-10-23 11:39         ` Austin S Hemmelgarn
2014-10-24  1:05           ` Duncan
2014-10-24  2:35             ` Zygo Blaxell
2014-10-24  5:13               ` Duncan
2014-10-24 15:18                 ` Zygo Blaxell
2014-10-24 10:58               ` Rich Freeman
2014-10-24 16:07                 ` Zygo Blaxell
2014-10-24 19:58                   ` Rich Freeman
2014-10-22 16:15     ` Chris Murphy
2014-10-23  2:44       ` Duncan
