From: Michael Evans <mjevans1983@gmail.com>
To: Carl Karsten <carl@personnelware.com>, linux-raid@vger.kernel.org
Subject: Re: reconstruct raid superblock
Date: Wed, 16 Dec 2009 23:35:36 -0800 [thread overview]
Message-ID: <4877c76c0912162335v63c52ef0jbc21bba17542c7b1@mail.gmail.com> (raw)
In-Reply-To: <549053140912162236l134c38a9v490ba172231e6b8c@mail.gmail.com>
On Wed, Dec 16, 2009 at 10:36 PM, Carl Karsten <carl@personnelware.com> wrote:
> On Thu, Dec 17, 2009 at 12:26 AM, Michael Evans <mjevans1983@gmail.com> wrote:
>> On Wed, Dec 16, 2009 at 10:18 PM, Carl Karsten <carl@personnelware.com> wrote:
>>> A degraded array is just missing the redundant data, not needed data, right?
>>>
>>> I am pretty sure I need all 4 disks.
>>>
>>> Is there any reason to zero out the bytes I want replaced with good bytes?
>>>
>>> On Wed, Dec 16, 2009 at 11:21 PM, Majed B. <majedb@gmail.com> wrote:
>>>> If your other disks are sane and you are able to run a degraded array, then
>>>> you can remove grub using dd then re-add the disk to the array.
>>>>
>>>> To clear the first 1MB of the disk:
>>>> dd if=/dev/zero of=/dev/sdx bs=1M count=1
>>>> Replace sdx with the disk name that has grub.
>>>>
>>>> On Dec 17, 2009 6:53 AM, "Carl Karsten" <carl@personnelware.com> wrote:
>>>>
>>>> I took over a box that had 1 IDE boot drive and 6 SATA raid drives (4
>>>> internal, 2 external). I believe the 2 externals were redundant, so
>>>> they could be removed. So I did, and mkfs-ed them. Then I installed
>>>> Ubuntu to the IDE drive, and installed grub to sda, which turns out to be
>>>> the first SATA disk. That would be fine if the raid were on sda1, but it
>>>> is on sda, and now the raid won't assemble. No surprise, and I do
>>>> have a backup of the data spread across 5 external drives. But before
>>>> I abandon the array, I am wondering if I can fix it by recreating
>>>> mdadm's metadata on sda, given I have sd[bcd] to work with.
>>>>
>>>> Any suggestions?
>>>>
>>>> root@dhcp128:~# mdadm --examine /dev/sd[abcd]
>>>> mdadm: No md superblock detected on /dev/sda.
>>>> /dev/sdb:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 6
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Tue Mar 31 23:08:02 2009
>>>>           State : clean
>>>>  Active Devices : 5
>>>> Working Devices : 6
>>>>  Failed Devices : 1
>>>>   Spare Devices : 1
>>>>        Checksum : a4fbb93a - correct
>>>>          Events : 8430
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     6       8       16        6      spare   /dev/sdb
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       8       64        1      active sync   /dev/sde
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       80        5      active sync
>>>>    6     6       8       16        6      spare   /dev/sdb
>>>> /dev/sdc:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 4
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Sun Jul 12 11:31:47 2009
>>>>           State : clean
>>>>  Active Devices : 4
>>>> Working Devices : 4
>>>>  Failed Devices : 2
>>>>   Spare Devices : 0
>>>>        Checksum : a59452db - correct
>>>>          Events : 580158
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     2       8       32        2      active sync   /dev/sdc
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       0        0        1      faulty removed
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       96        5      active sync
>>>> /dev/sdd:
>>>>           Magic : a92b4efc
>>>>         Version : 00.90.00
>>>>            UUID : 8d0cf436:3fc2d2ef:93d71b24:b036cc6b
>>>>   Creation Time : Wed Mar 25 21:04:08 2009
>>>>      Raid Level : raid6
>>>>   Used Dev Size : 1465137408 (1397.26 GiB 1500.30 GB)
>>>>      Array Size : 5860549632 (5589.06 GiB 6001.20 GB)
>>>>    Raid Devices : 6
>>>>   Total Devices : 4
>>>> Preferred Minor : 0
>>>>
>>>>     Update Time : Sun Jul 12 11:31:47 2009
>>>>           State : clean
>>>>  Active Devices : 4
>>>> Working Devices : 4
>>>>  Failed Devices : 2
>>>>   Spare Devices : 0
>>>>        Checksum : a59452ed - correct
>>>>          Events : 580158
>>>>
>>>>      Chunk Size : 64K
>>>>
>>>>       Number   Major   Minor   RaidDevice State
>>>> this     3       8       48        3      active sync   /dev/sdd
>>>>
>>>>    0     0       8        0        0      active sync   /dev/sda
>>>>    1     1       0        0        1      faulty removed
>>>>    2     2       8       32        2      active sync   /dev/sdc
>>>>    3     3       8       48        3      active sync   /dev/sdd
>>>>    4     4       0        0        4      faulty removed
>>>>    5     5       8       96        5      active sync
>>>>
>>>> --
>>>> Carl K
>>>>
>>>
>>>
>>>
>>> --
>>> Carl K
>>>
>>
>> You may want to recreate the array anyway to gain the benefits from
>> the 1.x metadata format (such as storing resync resume info).
>>
>> It would also be a good idea to look at what you need to do. As long
>> as you still have at least one parity device you can (assuming no
>> other hardware error) --fail any single device in the array, --remove
>> it, --zero-superblock that device, then re-add it as a fresh spare.
>>
>>
>
> Do I have one parity device?
>
> btw - all I need to do is get the array assembled and the fs mounted
> one more time so I can copy the data onto some externals and drive it
> over to the data centre where it will be uploaded into crazy raid
> land. So no point in adding hardware or any steps that are not needed
> to just read the files.
>
> --
> Carl K
>
Sorry, I forgot to hit reply-to-all last time (gmail has buttons on
top and bottom, but I know of no way to tell it I'm on a list so that
the default action becomes reply-to-all instead of reply).

Looking at it, you seem to have one STALE disk and four in your
current array. It looks like you have ZERO spares and zero spare
parity devices (you appear to have started with 6 devices and 2 parity
devices, and have since lost two devices). Since there is no other
copy of the data left to compare against, your array could be
accumulating unrecoverable or silently failed sectors on the drives
without you knowing it, if I understand what information is stored
correctly.

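If it helps to double-check that, something like the following (just a
grep over the same --examine output you already posted) should show at
a glance which disk's metadata is stale:

  mdadm --examine /dev/sd[abcd] | egrep '^/dev/|Update Time|Events'
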
cat /proc/mdstat will give you more information about which devices
are in what state. However, it looks like you could re-add one of the
devices you listed to the array, let it resync, and then you would
have a parity device again.

Of course, if the device in question is the one you want to alter, then
you should do so before re-adding it.
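
For reference, roughly the sequence I have in mind (only a sketch; the
device and array names below are placeholders, so check /proc/mdstat
and mdadm --examine before running anything like this):

  dd if=/dev/zero of=/dev/sdX bs=1M count=1   # only if grub needs wiping first
  mdadm --zero-superblock /dev/sdX            # clear the stale md superblock
  mdadm /dev/mdN --add /dev/sdX               # re-add; md resyncs it as a fresh member

Once that resync completes you would have one device's worth of parity
back.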
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html