* md: kicking non-fresh sdf3 from array!
From: Mark Knecht @ 2012-12-27 18:48 UTC
To: Linux-RAID
Hi,
I've got a home compute server with a transitional setup:
1) A completely working Gentoo build where root is a 3-disk RAID1
(md126) using metadata-0.9 and no initramfs. It boots, works and is
where I'm writing this email.
2) A new Gentoo build done in a chroot which has two configs:
2a) RAID6 using gentoo-sources-3.2.1 with a separate initramfs. This
works, or did an hour ago.
2b) RAID6 using gentoo-sources-3.6.11 with the initramfs built into
the kernel. This failed its first boot.
I attempted to boot config 2b above but it hung somewhere in the mdadm
startup. I didn't think to try the magic SysRq keys and just hit reset.
Following the failure I booted back into config 1 and saw the
following messages in dmesg:
[ 7.313458] md: kicking non-fresh sdf3 from array!
[ 7.313461] md: unbind<sdf3>
[ 7.329149] md: export_rdev(sdf3)
[ 7.329688] md/raid:md3: device sdc3 operational as raid disk 1
[ 7.329690] md/raid:md3: device sdd3 operational as raid disk 2
[ 7.329691] md/raid:md3: device sdb3 operational as raid disk 0
[ 7.329693] md/raid:md3: device sde3 operational as raid disk 3
[ 7.329914] md/raid:md3: allocated 5352kB
[ 7.329929] md/raid:md3: raid level 6 active with 4 out of 5 devices, algorithm 2
and mdstat tells me that md3, which is root for 2a & 2b above, is degraded:
mark@c2stable ~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md6 : active raid5 sdc6[1] sdd6[2] sdb6[0]
494833664 blocks super 1.1 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
md3 : active raid6 sdc3[1] sdd3[2] sdb3[0] sde3[3]
157305168 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/4] [UUUU_]
md7 : active raid6 sdc7[1] sdd7[2] sdb7[0] sde2[3] sdf2[4]
395387904 blocks super 1.2 level 6, 16k chunk, algorithm 2 [5/5] [UUUUU]
md126 : active raid1 sdd5[2] sdc5[1] sdb5[0]
52436032 blocks [3/3] [UUU]
unused devices: <none>
mark@c2stable ~ $
I'd like to check that the following commands would be the recommended
way to get the RAID6 back into a good state.
/sbin/mdadm /dev/md3 --fail /dev/sdf3 --remove /dev/sdf3
/sbin/mdadm /dev/md3 --add /dev/sdf3
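I'm assuming that once it's added back I can just watch the rebuild
with something like:
watch cat /proc/mdstat
and let the recovery finish before I try booting 2b again.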
My overall goal here is to move the machine to config 2b with / on
RAID6 and then eventually delete config 1 to reclaim disk space. This
machine has been my RAID learning vehicle, where I started with RAID1
and then added more as I went along.
I'll have to study why config 2b failed to boot but first I want
to get everything back in good shape.
Thanks in advance,
Mark
* Re: md: kicking non-fresh sdf3 from array!
From: Phil Turmel @ 2012-12-27 19:57 UTC
To: Mark Knecht; +Cc: Linux-RAID
Hi Mark,
On 12/27/2012 01:48 PM, Mark Knecht wrote:
[trim /]
> I'd like to check that the following commands would be the recommended
> way to get the RAID6 back into a good state.
>
> /sbin/mdadm /dev/md3 --fail /dev/sdf3 --remove /dev/sdf3
Shouldn't need the above, /dev/sdf3 isn't in the array.
> /sbin/mdadm /dev/md3 --add /dev/sdf3
mdadm may whine about the superblock, but should let you proceed. But
yes, this is correct.
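(If it refuses outright because of the stale superblock, you can wipe
that first with:
mdadm --zero-superblock /dev/sdf3
and then do the --add. Just be certain you only zero the kicked member.)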
If you had a bitmap on the array, you would have been able to --re-add
without a full rebuild. (On newer versions of mdadm, --add implies
--re-add if all the conditions are met.)
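(For reference, a re-add is just:
mdadm /dev/md3 --re-add /dev/sdf3
but without a bitmap mdadm will most likely reject it for an
out-of-date member, which is why --add and the full rebuild is the
path here.)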
Phil
* Re: md: kicking non-fresh sdf3 from array!
From: Phil Turmel @ 2012-12-27 20:25 UTC
To: Mark Knecht; +Cc: linux-raid
Whoops! The list was dropped...
On 12/27/2012 03:24 PM, Phil Turmel wrote:
> On 12/27/2012 03:20 PM, Mark Knecht wrote:
>> Thanks Phil,
>> I've added the device back in and the rebuild is proceeding.
>>
>> So the new piece of info here for me is this idea of a bitmap. I'll
>> have to go look into that, but a quick question - is that something
>> that can be added to the RAID6 at this time, or can you only do that
>> when you first build it? As I have some issue with my (possibly)
>> initramfs I expect I'll kick the devices a few more times before I get
>> it worked out. Would be nice if putting the disk back in didn't take
>> so long.
>
> You can add a bitmap to any v1.x array after the fact with "--grow". It
> does add a little overhead to the write path, so you might not want it
> in a heavily loaded server. For a test-bed or other system prone to
> crashes, it's a godsend.
>
> Phil
>
* Re: md: kicking non-fresh sdf3 from array!
From: Mark Knecht @ 2012-12-27 20:52 UTC
To: Phil Turmel; +Cc: linux-raid
On Thu, Dec 27, 2012 at 12:25 PM, Phil Turmel <philip@turmel.org> wrote:
> Whoops! The list was dropped...
>
> On 12/27/2012 03:24 PM, Phil Turmel wrote:
[trim /]
>> You can add a bitmap to any v1.x array after the fact with "--grow". It
>> does add a little overhead to the write path, so you might not want it
>> in a heavily loaded server. For a test-bed or other system prone to
>> crashes, it's a godsend.
>>
>> Phil
So it was. Thanks for catching that.
OK, so adding the bitmap looks straightforward:
mdadm --grow --bitmap=internal /dev/md3
and then wait, although I think I can continue to use the RAID while
this is progressing.
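I assume I can confirm it took with something like:
mdadm --detail /dev/md3 | grep -i bitmap
or by looking for a new bitmap line in /proc/mdstat.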
I can't see that someone with my usage model (a home compute server
running 2 or 3 fairly heavily loaded Windows VMs most of the day)
would want anything other than an internal bitmap, correct?
Note that the VM files actually reside on a different RAID, which is
rather unfortunately part of the same set of disks but different
partitions. As I said, this has been a learning process. (!) My hope
is that over time I can move all of these different things to this one
RAID6, reclaim disk space, grow the RAID6, etc., until I get the whole
disk back.
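(For completeness - I gather the alternative would be an external
bitmap file kept on some other filesystem, roughly
mdadm --grow --bitmap=/some/other/fs/md3-bitmap /dev/md3
with a made-up path there, but that seems like more moving parts than
I want.)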
Thanks,
Mark
* Re: md: kicking non-fresh sdf3 from array!
From: Mark Knecht @ 2012-12-27 21:38 UTC
To: Phil Turmel; +Cc: linux-raid
On Thu, Dec 27, 2012 at 12:52 PM, Mark Knecht <markknecht@gmail.com> wrote:
[trim /]
> OK, so adding the bitmap looks straightforward:
>
> mdadm --grow --bitmap=internal /dev/md3
>
> and then wait, although I think I can continue to use the RAID while
> this is progressing.
[trim /]
So I went ahead and tried adding it, first to md6, which sees
different usage, and then to md3. It only took a couple of seconds to
add each one, and no problems so far.
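That was just the same command on each array, i.e.
mdadm --grow --bitmap=internal /dev/md6
mdadm --grow --bitmap=internal /dev/md3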
- Mark