linux-btrfs.vger.kernel.org archive mirror
From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: Please help with exact actions for raid1 hot-swap
Date: Sat, 9 Sep 2017 12:19:49 +0000 (UTC)	[thread overview]
Message-ID: <pan$2f301$e10816b5$b2720187$72f9315c@cox.net> (raw)
In-Reply-To: CAA7pwKMuv69dsXZGXEASN=qarSv3tCYU=-k1sHquL_tm4FU0kw@mail.gmail.com

Patrik Lundquist posted on Sat, 09 Sep 2017 12:29:08 +0200 as excerpted:

> On 9 September 2017 at 12:05, Marat Khalili <mkh@rqc.ru> wrote:
>> Forgot to add, I've got a spare empty bay if it can be useful here.
> 
> That makes it much easier since you don't have to mount it degraded,
> with the risks involved.
> 
> Add and partition the disk.
> 
> # btrfs replace start /dev/sdb7 /dev/sdc(?)7 /mnt/data
> 
> Remove the old disk when it is done.
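A rough sketch of the spare-bay workflow described above. The device names (/dev/sdb7 old, /dev/sdc7 new) and the /mnt/data mount point come from the quoted command and will likely differ on your system; the sgdisk step assumes GPT disks.

```shell
# Copy the partition table from the old disk to the new one.
# NOTE sgdisk's argument order: -R takes the TARGET, the
# positional device is the SOURCE -- double-check before running.
sgdisk -R /dev/sdc /dev/sdb
sgdisk -G /dev/sdc          # randomize GUIDs on the copied table

# Start the replace; it runs in the background.
btrfs replace start /dev/sdb7 /dev/sdc7 /mnt/data

# Poll until status reports "finished".
btrfs replace status /mnt/data

# If the new partition is larger, grow the filesystem onto it.
# Get the device id from `btrfs filesystem show /mnt/data`.
btrfs filesystem resize <devid>:max /mnt/data
```

The resize step is only needed when the replacement partition is bigger than the original; replace alone leaves the filesystem at its old size.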

I did this with my dozen-plus (but small) btrfs raid1s on ssd partitions 
several kernel cycles ago.  It went very smoothly. =:^)

(TL;DR can stop there.)

I had actually been taking advantage of btrfs raid1's checksumming and 
scrub ability to keep running a failing ssd, with more and more sectors 
going bad and being replaced from spares, for quite some time after I'd 
otherwise have replaced it.  Everything of value was backed up, and I was 
simply doing it for the experience with both btrfs raid1 scrubbing and 
continuing ssd sector failure.  But eventually the scrubs were finding 
and fixing errors every boot, especially after the machine had been off 
for several hours, and further experience was of diminishing value while 
the hassle factor was building fast.  So I attached the spare ssd, 
partitioned it up, did a final scrub on all the btrfs, and then, one 
btrfs at a time, ran btrfs replace from the old ssd's partitions to the 
new one's.  Given that I was already used to running scrubs at every 
boot, the entirely uneventful replacements were actually somewhat 
anticlimactic, but that was a good thing! =:^)
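For anyone following along, the per-filesystem routine above can be sketched like this. The mount points and partition names are placeholders, not my actual layout:

```shell
# Final scrub pass over each small btrfs before replacing.
for mnt in /mnt/root /mnt/home /mnt/log; do
    # -B: stay in the foreground until done; -d: per-device stats.
    btrfs scrub start -Bd "$mnt"
done

# Then, one filesystem at a time, replace the failing ssd's
# partition with the matching partition on the new ssd.
btrfs replace start /dev/old_ssd_part /dev/new_ssd_part /mnt/root
btrfs replace status /mnt/root
```

Doing the replaces one filesystem at a time keeps each operation small and easy to verify before moving to the next.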

Then more recently I bought a larger/newer pair of ssds (1 TB each; the 
old ones were a quarter TB each) and converted my media partitions and 
secondary backups, which had still been on reiserfs on spinning rust, to 
btrfs raid1 on ssd as well.  That makes me all-btrfs on all-ssd now, 
with everything on the other ssds being btrfs raid1 except /boot and its 
backups, which are btrfs dup. =:^)
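For reference, creating the two kinds of filesystems mentioned above looks roughly like this; the partition names are placeholders:

```shell
# Two-device raid1 for both data and metadata:
mkfs.btrfs -m raid1 -d raid1 /dev/ssd1_part /dev/ssd2_part

# Single-device dup (two copies of everything on one device),
# as used here for /boot and its backups.  Dup *data* on a
# single device needs a reasonably recent btrfs-progs.
mkfs.btrfs -m dup -d dup /dev/boot_part
```

Dup trades capacity for the ability to self-heal from checksum errors on a single device, which is exactly what you want for something small and critical like /boot.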

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



Thread overview: 12+ messages
2017-09-09  7:46 Please help with exact actions for raid1 hot-swap Marat Khalili
2017-09-09  9:05 ` Patrik Lundquist
2017-09-09 10:05 ` Marat Khalili
2017-09-09 10:29   ` Patrik Lundquist
2017-09-09 12:19     ` Duncan [this message]
2017-09-10  6:33     ` Marat Khalili
2017-09-10  9:17       ` Patrik Lundquist
2017-09-11 12:49       ` Austin S. Hemmelgarn
2017-09-11 13:16     ` Marat Khalili
2017-09-11 15:11       ` Austin S. Hemmelgarn
2017-09-11 21:33         ` Duncan
2017-09-12 12:33           ` Austin S. Hemmelgarn
