From: sam tygier <samtygier@yahoo.co.uk>
To: linux-btrfs@vger.kernel.org
Subject: problem replacing failing drive
Date: Mon, 22 Oct 2012 10:07:22 +0100 [thread overview]
Message-ID: <k632c9$3df$1@ger.gmane.org> (raw)
hi,
I have a two-drive btrfs RAID1 setup. It was created with a single drive first; I then added a second drive and ran
btrfs fi balance start -dconvert=raid1 /data
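For reference, the steps just described can be sketched as below. This is a hedged dry-run sketch, not my exact history: the device path and mount point are placeholders, and the commands are echoed rather than executed. One thing worth noting is that -dconvert=raid1 converts only data chunks; metadata needs -mconvert=raid1 as well to be mirrored.

```shell
#!/bin/sh
# Hedged dry-run sketch of the single-drive -> raid1 conversion described
# above. Device and mount point are placeholders; commands are echoed
# rather than executed, since they need root and real devices.
DEV=/dev/sdb2   # hypothetical second drive
MNT=/data

echo btrfs device add "$DEV" "$MNT"
# -dconvert=raid1 converts only the data chunks; without -mconvert=raid1
# the metadata (and system) chunks keep their original single-device
# profile, so losing a drive can still leave the filesystem unmountable.
echo btrfs fi balance start -dconvert=raid1 -mconvert=raid1 "$MNT"
```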
The original drive is showing SMART errors, so I want to replace it. I don't easily have space in my desktop for an extra disk, so I decided to proceed by shutting down, taking out the old failing drive, and putting in the new one. This is similar to the description at
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_Failed_Devices
(The other reason to try this is to simulate what would happen if a drive did completely fail.)
So after swapping the drives and rebooting, I tried to mount degraded. I instantly got a kernel panic: http://www.hep.man.ac.uk/u/sam/pub/IMG_5397_crop.png
So far all this had been with a 3.5 kernel, so I upgraded to 3.6.2 and tried to mount degraded again:
first with just "sudo mount /dev/sdd2 /mnt", then with "sudo mount -o degraded /dev/sdd2 /mnt".
[ 582.535689] device label bdata devid 1 transid 25342 /dev/sdd2
[ 582.536196] btrfs: disk space caching is enabled
[ 582.536602] btrfs: failed to read the system array on sdd2
[ 582.536860] btrfs: open_ctree failed
[ 606.784176] device label bdata devid 1 transid 25342 /dev/sdd2
[ 606.784647] btrfs: allowing degraded mounts
[ 606.784650] btrfs: disk space caching is enabled
[ 606.785131] btrfs: failed to read chunk root on sdd2
[ 606.785331] btrfs warning page private not zero on page 3222292922368
[ 606.785408] btrfs: open_ctree failed
[ 782.422959] device label bdata devid 1 transid 25342 /dev/sdd2
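For anyone hitting similar open_ctree failures, a couple of commands are commonly used to inspect the filesystem before retrying a degraded mount. This is a hedged sketch (the device path is a placeholder, and the commands are echoed as a dry run since they require root and the actual disks):

```shell
#!/bin/sh
# Hedged dry-run sketch: inspecting a btrfs filesystem that refuses to
# mount. The device path is a placeholder; commands are echoed, not run.
DEV=/dev/sdd2

# List known btrfs filesystems and their member devices; a missing devid
# shows up in this output.
echo btrfs filesystem show

# Read-only consistency check of the unmounted filesystem.
echo btrfsck "$DEV"
```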
No panic is good progress, but something is still not right.
My options would seem to be:
1) Reconnect the old drive (probably in a USB caddy) and see if it mounts as if nothing ever happened, or possibly try to recover it back to a working RAID1. Then try again, this time adding the new drive first and removing the old one afterwards.
2) Give up experimenting, create a new btrfs RAID1, and restore from backup.
Both options leave me worried about what would happen if a disk in a RAID1 really did die. (Unless it was the panic that did some damage and borked the filesystem.)
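If the old drive still responds, option 1 above can be sketched as below. This is hedged: the device paths and mount point are hypothetical, and the commands are echoed as a dry run. Adding the new device before deleting the old one means the filesystem never has to run degraded.

```shell
#!/bin/sh
# Hedged dry-run sketch of option 1: reattach the failing drive, add the
# replacement, then remove the failing drive while everything stays
# online. Paths are placeholders; commands are echoed, not executed.
OLD=/dev/sdc2   # failing drive, e.g. reattached via a USB caddy
NEW=/dev/sdd2   # replacement drive
MNT=/mnt

echo mount "$OLD" "$MNT"              # should mount normally, not degraded
echo btrfs device add "$NEW" "$MNT"
# device delete migrates all chunks off the old drive before removing it
echo btrfs device delete "$OLD" "$MNT"
```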
Thanks,
Sam
Thread overview: 4+ messages
2012-10-22 9:07 sam tygier [this message]
2012-10-25 21:02 ` problem replacing failing drive sam tygier
2012-10-25 21:37 ` Kyle Gates
2012-10-26 9:02 ` sam tygier