From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Can't remove device -> I/O error
Date: Sat, 30 Sep 2017 23:54:34 +0000 (UTC)
Message-ID: <pan$e97b2$dbbbef5e$7d2adc0d$27a7ec21@cox.net>
In-Reply-To: <CACJO8mcmabFc1CdhVr+Xj=NH1h2zy5eMOeD-8+UGkXODH34rVQ@mail.gmail.com>
Dirk Diggler posted on Fri, 29 Sep 2017 22:00:28 +0200 as excerpted:
> is there any chance to get my device removed?
> Scrub literally takes months to complete (SATA 2/3 mix, about 1 minute
> per gigabyte) and I'm not sure if that helps.
> I guess the same goes for balance. Maybe there is a quicker way. I can
> do without some data if it's corrupted. I have a backup, but I want to
> avoid copying all the data from scratch!
btrfs device remove uses an implicit balance to move data to other
devices, so even if btrfs device remove were to work for you, it'd
proceed at the same speed as balance.
[tl;dr stop there]
Even in the generic (non-btrfs) case, parity-raid is known to be slow for
writes and therefore isn't recommended when speed is any priority above
the bare minimum; it's suited only to storage where both raw capacity and
some level of device-failure recovery matter, and minimal speed is
acceptable.
Between that and the btrfs-specific issues btrfs parity-raid had until
kernel 4.13, with the known bugs now fixed (tho not the write hole, which
isn't btrfs-specific) but with the possibility of unknown issues still
lurking, I'd still not consider btrfs parity-raid particularly viable,
tho it's no longer entirely blacklisted as it was before those 4.13
fixes.
So I'd suggest surrendering the fight and chalking it up to a learning
experience. Either take the loss now and switch to something else, say
btrfs raid1 on top of dm/mdraid-0 for higher speed, or btrfs raid10 if
you prefer to stick with a single layer at some sacrifice of speed; or,
as you write further down in a different subthread, just stick with what
you have (since you do have backups) until a device dies and you really
have no alternative but to eat that "weeks to fix" penalty.
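To make the two alternative layouts concrete, here's a rough sketch of
the shape of each, assuming a four-device setup with hypothetical device
names (a sketch, not a recipe):

  # option 1: btrfs raid1 on top of two dm/mdraid-0 pairs
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
  mkfs.btrfs -m raid1 -d raid1 /dev/md0 /dev/md1

  # option 2: single-layer btrfs raid10 across the same four devices
  # (raid10 needs at least four)
  mkfs.btrfs -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd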
Of course if you have the resources, you can do both at once: continue to
operate on the existing setup while you create an entirely new one, and
either initialize it from the backups or start copying data to it off the
still-live raid5, presumably at idle priority so as to affect other
operations as little as possible. But the resource requirements of
keeping both the old and the new setups in operation at once, until you
can switch over to the new one entirely, are high enough that it may not
be feasible.
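If you do run the copy off the live array, something along these lines
keeps it out of the way (paths hypothetical, and assuming rsync 3.1+ for
the progress output):

  # idle CPU and I/O priority so normal use of the old array is
  # impacted as little as possible
  nice -n 19 ionice -c 3 rsync -aHAX --info=progress2 \
      /mnt/old-raid5/ /mnt/new-array/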
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman