From: Michael Evans <mjevans1983@gmail.com>
To: "Majed B." <majedb@gmail.com>
Cc: Carl Karsten <carl@personnelware.com>, linux-raid@vger.kernel.org
Subject: Re: reconstruct raid superblock
Date: Thu, 17 Dec 2009 03:22:54 -0800 [thread overview]
Message-ID: <4877c76c0912170322oda9d5f0tdb13c37517bb5e31@mail.gmail.com> (raw)
In-Reply-To: <70ed7c3e0912170235m3af05859x9c0472d4c7d2f370@mail.gmail.com>
On Thu, Dec 17, 2009 at 2:35 AM, Majed B. <majedb@gmail.com> wrote:
> I have misread the information you've provided, so allow me to correct myself:
>
> You're running a RAID6 array, with 2 disks lost/failed. Any disk loss
> after that will cause data loss since you have no redundancy (2 disks
> died).
>
> I believe it's still possible to reassemble the array; you only
> need to remove grub's boot code from the MBR. See this page for details:
> http://www.cyberciti.biz/faq/linux-how-to-uninstall-grub/
> dd if=/dev/zero of=/dev/sdX bs=446 count=1
>
> Before proceeding, provide the output of cat /proc/mdstat
> Is the array currently running degraded or is it suspended?
> What happened to the spare disk assigned? Did it finish resyncing
> before you installed grub on the wrong disk?
>
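A note on that dd line: grub's boot code occupies only the first 446 bytes of
the MBR sector; bytes 446-511 hold the partition table and the boot signature,
and a write with bs=446 count=1 leaves them alone. If in doubt, you can
convince yourself on a scratch file first (disk.img below is just a throwaway
image standing in for the disk, not a real device):

```shell
# Build a 1 MiB scratch image standing in for the disk (not a real device).
dd if=/dev/urandom of=disk.img bs=1M count=1 2>/dev/null
# Save the partition-table region (bytes 446-511: 64-byte table + 2-byte signature).
dd if=disk.img of=table_before bs=1 skip=446 count=66 2>/dev/null
# Wipe only the boot-code area, exactly as you would on the real disk.
# conv=notrunc keeps dd from truncating the file (irrelevant on a block device).
dd if=/dev/zero of=disk.img bs=446 count=1 conv=notrunc 2>/dev/null
dd if=disk.img of=table_after bs=1 skip=446 count=66 2>/dev/null
cmp -s table_before table_after && echo "partition table untouched"
```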
> On Thu, Dec 17, 2009 at 8:21 AM, Majed B. <majedb@gmail.com> wrote:
>> If your other disks are sane and you are able to run a degraded array, then
>> you can remove grub using dd then re-add the disk to the array.
>>
>> To clear the first 1MB of the disk:
>> dd if=/dev/zero of=/dev/sdx bs=1M count=1
>> Replace sdx with the disk name that has grub.
>>
>> On Dec 17, 2009 6:53 AM, "Carl Karsten" <carl@personnelware.com> wrote:
>>
>> I took over a box that had 1 IDE boot drive and 6 SATA raid drives (4
>> internal, 2 external). I believe the 2 externals were redundant, so
>> they could be removed. So I did, and mkfs-ed them. Then I installed
>> Ubuntu to the IDE drive, and installed grub to sda, which turns out to
>> be the first SATA disk. That would be fine if the raid were on sda1,
>> but it is on sda, and now the raid won't assemble. No surprise, and I
>> do have a backup of the data spread across 5 external drives. But
>> before I abandon the array, I am wondering if I can fix it by
>> recreating mdadm's metadata on sda, given I have sd[bcd] to work with.
>>
>> any suggestions?
>>
>> root@dhcp128:~# mdadm --examine /dev/sd[abcd]
>> mdadm: No md superblock detected on /dev/sda.
>> /dev/sdb:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>> Creation Time : Wed Mar 25 21:04:08 2009
>> Raid Level : raid6
>> Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>> Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>> Raid Devices : 6
>> Total Devices : 6
>> Preferred Minor : 0
>>
>> Update Time : Tue Mar 31 23:08:02 2009
>> State : clean
>> Active Devices : 5
>> Working Devices : 6
>> Failed Devices : 1
>> Spare Devices : 1
>> Checksum : a4fbb93a - correct
>> Events : 8430
>>
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 6 8 16 6 spare /dev/sdb
>>
>> 0 0 8 0 0 active sync /dev/sda
>> 1 1 8 64 1 active sync /dev/sde
>> 2 2 8 32 2 active sync /dev/sdc
>> 3 3 8 48 3 active sync /dev/sdd
>> 4 4 0 0 4 faulty removed
>> 5 5 8 80 5 active sync
>> 6 6 8 16 6 spare /dev/sdb
>> /dev/sdc:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>> Creation Time : Wed Mar 25 21:04:08 2009
>> Raid Level : raid6
>> Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>> Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>> Raid Devices : 6
>> Total Devices : 4
>> Preferred Minor : 0
>>
>> Update Time : Sun Jul 12 11:31:47 2009
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 2
>> Spare Devices : 0
>> Checksum : a59452db - correct
>> Events : 580158
>>
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 2 8 32 2 active sync /dev/sdc
>>
>> 0 0 8 0 0 active sync /dev/sda
>> 1 1 0 0 1 faulty removed
>> 2 2 8 32 2 active sync /dev/sdc
>> 3 3 8 48 3 active sync /dev/sdd
>> 4 4 0 0 4 faulty removed
>> 5 5 8 96 5 active sync
>> /dev/sdd:
>> Magic : a92b4efc
>> Version : 00.90.00
>> UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>> Creation Time : Wed Mar 25 21:04:08 2009
>> Raid Level : raid6
>> Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>> Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>> Raid Devices : 6
>> Total Devices : 4
>> Preferred Minor : 0
>>
>> Update Time : Sun Jul 12 11:31:47 2009
>> State : clean
>> Active Devices : 4
>> Working Devices : 4
>> Failed Devices : 2
>> Spare Devices : 0
>> Checksum : a59452ed - correct
>> Events : 580158
>>
>> Chunk Size : 64K
>>
>> Number Major Minor RaidDevice State
>> this 3 8 48 3 active sync /dev/sdd
>>
>> 0 0 8 0 0 active sync /dev/sda
>> 1 1 0 0 1 faulty removed
>> 2 2 8 32 2 active sync /dev/sdc
>> 3 3 8 48 3 active sync /dev/sdd
>> 4 4 0 0 4 faulty removed
>> 5 5 8 96 5 active sync
>>
>> --
>> Carl K
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>
>
>
>
> --
> Majed B.
He's using v0.90 superblocks, which don't record a recovery checkpoint the
way v1.x superblocks do: even if the replaced device's resync was 99%
complete, it would still have to restart from 0% when re-added.
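As for the original question (recreating the metadata on sda), the usual
last-resort route is to recreate the array in place with --assume-clean,
which writes fresh superblocks without rewriting the data blocks. This is a
sketch only: the geometry comes from the --examine output above (raid6, 6
devices, 64K chunks, 0.90 metadata), but the device names and slot order
must be verified against the RaidDevice column first; a wrong order or
chunk size will scramble the data. The slot-5 name below is an assumption.

```shell
# Sketch only -- verify slot order with "mdadm --examine" before running.
# Slots 1 and 4 were faulty/removed, so they are given as "missing";
# slot 5's current device name is uncertain here (/dev/sde is a guess).
mdadm --create /dev/md0 --assume-clean \
    --metadata=0.90 --level=6 --raid-devices=6 --chunk=64 \
    /dev/sda missing /dev/sdc /dev/sdd missing /dev/sde
# Then check the filesystem read-only before trusting anything on it:
fsck -n /dev/md0
```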
Thread overview: 13+ messages
2009-12-17 3:53 reconstruct raid superblock Carl Karsten
[not found] ` <70ed7c3e0912162117n3617556p3a8decef94f33a1c@mail.gmail.com>
[not found] ` <70ed7c3e0912162121v5df1b972x6d9176bdf7e27401@mail.gmail.com>
2009-12-17 6:18 ` Carl Karsten
[not found] ` <4877c76c0912162226w3dfbdbb2t4b13e016f53728a0@mail.gmail.com>
[not found] ` <549053140912162236l134c38a9v490ba172231e6b8c@mail.gmail.com>
2009-12-17 7:35 ` Michael Evans
2009-12-17 10:35 ` Majed B.
2009-12-17 11:22 ` Michael Evans [this message]
2009-12-17 11:45 ` Majed B.
2009-12-17 14:15 ` Carl Karsten
2009-12-17 14:39 ` Majed B.
2009-12-17 15:06 ` Carl Karsten
2009-12-17 15:40 ` Majed B.
2009-12-17 16:17 ` Carl Karsten
2009-12-17 18:07 ` Majed B.
2009-12-17 19:18 ` Michael Evans