From: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
To: linux-raid@vger.kernel.org
Subject: mdadm dropped disk, won't re-add
Date: Wed, 15 Feb 2012 14:58:42 +0100 [thread overview]
Message-ID: <1329314322.3574.16.camel@z6> (raw)
[-- Attachment #1: Type: text/plain, Size: 1238 bytes --]
Hello,
I have a rather big problem with my Linux software RAID5.
It consists of 4 SATA disks, each 1 TB in size, resulting in a 3 TB RAID5
volume (/dev/md0, assembled from /dev/sd{b,c,d,e}1).
Today, mdadm kicked disk sde1 out of the RAID since the cable seemed to
be causing problems. I shut down the machine, replaced the cable and
tried re-adding the disk; however, mdadm refused to add the drive.
So I re-partitioned sde1 and added it as a new device, and mdadm
instantly started rebuilding the RAID. Unfortunately, during the
rebuild, mdadm decided to kick sdc1 as well, so I have now ended up with
two failed drives.
I have tried re-adding sdc1 with the --re-add option, but mdadm again
refuses to re-add the drive.
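For reference, the sequence of commands was roughly the following (a
reconstruction from memory, not a transcript; the DRY_RUN guard is only
there so this sketch prints the commands instead of touching the array):

```shell
# Reconstructed sketch of what I ran; with DRY_RUN=1 the commands are
# only printed, so nothing here modifies the array.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mdadm /dev/md0 --re-add /dev/sde1   # refused after the cable swap
run mdadm /dev/md0 --add /dev/sde1      # accepted after re-partitioning; rebuild started
run mdadm /dev/md0 --re-add /dev/sdc1   # refused again after sdc1 was kicked
```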
I haven't changed anything since, as I don't know how to proceed. I
don't want to do any further damage to the RAID and hope that someone
knows how to restore it.
My primary question is whether mdadm actually overwrites any important
data on the remaining disks (sd{b,c,d}1) during a rebuild, or whether it
only writes data to the newly added disk sde1.
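As far as I understand (please correct me if I'm wrong), the Events
counter in each superblock shows how up to date a disk is, so I compared
the values from the attached mdadm -E output with a quick snippet:

```shell
# Events counters copied from the attached `mdadm -E` output.
events_sdb=311622
events_sdc=311617
events_sdd=311622
events_sde=311622

# Work out the spread between the most and least up-to-date superblocks.
max=$events_sdb
min=$events_sdb
for e in $events_sdc $events_sdd $events_sde; do
    if [ "$e" -gt "$max" ]; then max=$e; fi
    if [ "$e" -lt "$min" ]; then min=$e; fi
done
echo "event spread: $((max - min))"   # prints "event spread: 5"
```

If I read this right, sdc1 is only 5 events behind the other disks.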
mdadm is version 3.2.3, kernel is Linux 3.2.0 on Debian Wheezy.
Can anyone give further advice?
I'm attaching the output of mdadm -E /dev/sd{b,c,d,e}1.
Kind Regards,
Adrian
[-- Attachment #2: mddata.txt --]
[-- Type: text/plain, Size: 4235 bytes --]
/dev/sdb1:
Magic : a92b4efc
Version : 0.90.00
UUID : 6db22c7b:7d9287e2:d01e5766:86e12a40 (local to host z6)
Creation Time : Fri Apr 23 13:53:33 2010
Raid Level : raid5
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Wed Feb 15 13:27:31 2012
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : c0bf6492 - correct
Events : 311622
Layout : left-symmetric
Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8       65        4      spare   /dev/sde1
/dev/sdc1:
Magic : a92b4efc
Version : 0.90.00
UUID : 6db22c7b:7d9287e2:d01e5766:86e12a40 (local to host z6)
Creation Time : Fri Apr 23 13:53:33 2010
Raid Level : raid5
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Wed Feb 15 13:25:25 2012
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 1
Spare Devices : 1
Checksum : c0bf6411 - correct
Events : 311617
Layout : left-symmetric
Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     3       8       33        3      active sync   /dev/sdc1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       8       33        3      active sync   /dev/sdc1
   4     4       8       65        4      spare   /dev/sde1
/dev/sdd1:
Magic : a92b4efc
Version : 0.90.00
UUID : 6db22c7b:7d9287e2:d01e5766:86e12a40 (local to host z6)
Creation Time : Fri Apr 23 13:53:33 2010
Raid Level : raid5
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Wed Feb 15 13:27:31 2012
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : c0bf64b0 - correct
Events : 311622
Layout : left-symmetric
Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     0       8       49        0      active sync   /dev/sdd1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8       65        4      spare   /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 0.90.00
UUID : 6db22c7b:7d9287e2:d01e5766:86e12a40 (local to host z6)
Creation Time : Fri Apr 23 13:53:33 2010
Raid Level : raid5
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Update Time : Wed Feb 15 13:27:31 2012
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 2
Spare Devices : 1
Checksum : c0bf64c2 - correct
Events : 311622
Layout : left-symmetric
Chunk Size : 64K
      Number   Major   Minor   RaidDevice State
this     4       8       65        4      spare   /dev/sde1

   0     0       8       49        0      active sync   /dev/sdd1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8       65        4      spare   /dev/sde1
Thread overview: 3+ messages
2012-02-15 13:58 John Paul Adrian Glaubitz [this message]
2012-02-15 14:45 ` mdadm dropped disk, won't re-add Robin Hill
2012-02-15 23:01 ` John Paul Adrian Glaubitz