* Possible failure & recovery
@ 2009-08-10 18:08 Daniel L. Miller
From: Daniel L. Miller @ 2009-08-10 18:08 UTC (permalink / raw)
To: linux-raid
I'm not 100% certain about this...but maybe.
I had set up a small box as a remote backup for our company. I THOUGHT I
had set it up as a RAID-10 - but I can't swear to it now. I just needed
to recover a file from that backup - only to find an error instead.
Checking mdadm.conf, I find -
ARRAY /dev/.static/dev/md0 level=raid10 num-devices=4
devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd
UUID=7ec24ccc:973f5065:a79315d0:449291b3 auto=part
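To see what the running array actually is, as opposed to what mdadm.conf
claims, something like this should do it (assuming the array assembles
as /dev/md0):

   mdadm --detail /dev/md0
   cat /proc/mdstat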
I do know that one of the drives (sdd) had failed previously, and the
array had been operating degraded for some time. Now it appears that a
second drive failed: I received XFS errors, and only two drives showed
up under /proc/mdstat (sdb had been removed as well as sdd). xfs_check
reported errors.

Whether or not it was a good idea, I tried adding sdb back to the array.
It worked and started rebuilding. Then I noticed that the array was
reporting as "raid6". I don't know when it BECAME raid6 - whether it was
always set up that way, or whether the raid-10 somehow degraded and
became raid-6. If it actually did, that might make for some kind of
migration/expansion path for a raid-10 array that needs to grow.
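For reference, adding a member back and checking what level the on-disk
superblocks actually record goes something like this (whole-disk device
names as in the mdadm.conf above; --re-add vs. --add depends on whether
the old superblock is still intact):

   mdadm /dev/md0 --re-add /dev/sdb
   mdadm --examine /dev/sdb | grep -i 'raid level'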
My xfs_repair -L /dev/md0 process is currently running...I'm holding my
breath to see how much I get back...
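(xfs_repair -L forces zeroing of the XFS log, so any metadata updates
that were still only in the log are lost - hence the breath-holding.)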
--
Daniel