* btrfs raid5 recovery with >1 half failed drive, or multiple drives kicked out at the same time.
@ 2013-08-17 15:46 Marc MERLIN
From: Marc MERLIN @ 2013-08-17 15:46 UTC (permalink / raw)
To: linux-btrfs
I know the raid5 code is still new and being worked on, but I was
curious.
With md raid5, I can do this:
mdadm /dev/md7 --replace /dev/sde1
This is useful because it lets you replace a drive with bad sectors even
when at least one other drive in the array also has bad sectors: the md
layer reads all drives for each stripe and writes the reconstructed data
to the new drive.
The nice part is that it takes the working parts of each drive, so as
long as no two drives have an unreadable sector in the same stripe, it
can recover everything.
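A minimal sketch of that md hot-replace workflow (device names are
illustrative):

```shell
# Add a fresh disk as a spare, then ask md to rebuild onto it while
# the old, failing disk stays in the array as a read source.
mdadm /dev/md7 --add /dev/sdf1
mdadm /dev/md7 --replace /dev/sde1

# md reads every stripe from all members, so a sector that is
# unreadable on /dev/sde1 can still be reconstructed from the
# data and parity on the remaining drives.
watch cat /proc/mdstat
```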
I was curious how btrfs handles this situation, and more generally drive
failures: spurious multiple-drive failures due to a bus blip where you
force the drives back online, and so forth?
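For comparison, btrfs-progs does ship an online device-replace command
(device names and mount point illustrative; whether it copes with raid5
stripes that have errors on more than one drive is part of what I'm
asking):

```shell
# Replace a member device online, copying or reconstructing its
# contents onto the new disk, then check progress.
btrfs replace start /dev/sde1 /dev/sdf1 /mnt/btrfs
btrfs replace status /mnt/btrfs
```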
Thanks,
Marc
--
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
.... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/