* reducing the number of disks a RAID1 expects
From: J. David Beutel @ 2007-09-10 1:10 UTC
To: linux-raid
My /dev/hdd started failing its SMART check, so I removed it from a RAID1:
# mdadm /dev/md5 -f /dev/hdd2 -r /dev/hdd2
Now when I boot it looks like this in /proc/mdstat:
md5 : active raid1 hdc8[2] hdg8[1]
58604992 blocks [3/2] [_UU]
and on every boot I get a "DegradedArray event on /dev/md5" email from
mdadm monitoring. I only need 2 disks in md5 now. How can I stop it
from being considered "degraded"? I added a 3rd disk a while ago just
because I got a new disk with plenty of space, and little /dev/hdd was
getting old.
mdadm - v1.6.0 - 4 June 2004
Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
GNU/Linux
Cheers,
11011011
* Re: reducing the number of disks a RAID1 expects
From: Richard Scobie @ 2007-09-10 2:29 UTC
To: Linux RAID Mailing List
J. David Beutel wrote:
> My /dev/hdd started failing its SMART check, so I removed it from a RAID1:
>
> # mdadm /dev/md5 -f /dev/hdd2 -r /dev/hdd2
>
> Now when I boot it looks like this in /proc/mdstat:
>
> md5 : active raid1 hdc8[2] hdg8[1]
> 58604992 blocks [3/2] [_UU]
>
> and I get a "DegradedArray event on /dev/md5" email on every boot from
> mdadm monitoring. I only need 2 disks in md5 now. How can I stop it
> from being considered "degraded"? I added a 3rd disk a while ago just
> because I got a new disk with plenty of space, and little /dev/hdd was
> getting old.
>
> mdadm - v1.6.0 - 4 June 2004
> Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
> GNU/Linux
>
Have a look at the "Grow Mode" section of the mdadm man page.
It looks as though you should just need to use the same command you used
to grow it to 3 drives, except specify only 2 this time.
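Something like this (untested here, from memory, so check the man page
for your version):

# mdadm --grow /dev/md5 --raid-devices=2

--raid-devices (-n for short) sets the number of active devices the
array expects, so after that it should stop counting the missing third
slot as a failure.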
Regards,
Richard
* Re: reducing the number of disks a RAID1 expects
From: J. David Beutel @ 2007-09-10 7:31 UTC
To: Richard Scobie; +Cc: Linux RAID Mailing List
Richard Scobie wrote:
> Have a look at the "Grow Mode" section of the mdadm man page.
Thanks! I overlooked that, although I did look at the man page before
posting.
> It looks as though you should just need to use the same command you
> used to grow it to 3 drives, except specify only 2 this time.
I think I hot-added it. Anyway, --grow looks like what I need, but I'm
having some difficulty with it. The man page says, "Change the size or
shape of an active array." But I got:
[root@samue ~]# mdadm --grow /dev/md5 -n2
mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
[root@samue ~]# umount /dev/md5
[root@samue ~]# mdadm --grow /dev/md5 -n2
mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
So I tried stopping it, but got:
[root@samue ~]# mdadm --stop /dev/md5
[root@samue ~]# mdadm --grow /dev/md5 -n2
mdadm: Cannot get array information for /dev/md5: No such device
[root@samue ~]# mdadm --query /dev/md5 --scan
/dev/md5: is an md device which is not active
/dev/md5: is too small to be an md component.
[root@samue ~]# mdadm --grow /dev/md5 --scan -n2
mdadm: option s not valid in grow mode
Am I trying the right thing, but running into some limitation of my
version of mdadm or the kernel? Or am I overlooking something
fundamental yet again? md5 looked like this in /proc/mdstat before I
stopped it:
md5 : active raid1 hdc8[2] hdg8[1]
58604992 blocks [3/2] [_UU]
For -n the man page says, "This number can only be changed using --grow
for RAID1 arrays, and only on kernels which provide necessary support."
Grow mode says, "Various types of growth may be added during 2.6
development, possibly including restructuring a raid5 array to have
more active devices. Currently the only support available is to change
the "size" attribute for arrays with redundancy, and the raid-disks
attribute of RAID1 arrays. ... When reducing the number of devices in
a RAID1 array, the slots which are to be removed from the array must
already be vacant. That is, the devices which were in those slots
must be failed and removed."
I don't know how I overlooked all that the first time, but I can't see
what I'm overlooking now.
mdadm - v1.6.0 - 4 June 2004
Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
GNU/Linux
Cheers,
11011011
* Re: reducing the number of disks a RAID1 expects
From: Iustin Pop @ 2007-09-10 9:55 UTC
To: J. David Beutel; +Cc: Richard Scobie, Linux RAID Mailing List
On Sun, Sep 09, 2007 at 09:31:54PM -1000, J. David Beutel wrote:
> [root@samue ~]# mdadm --grow /dev/md5 -n2
> mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
>
> mdadm - v1.6.0 - 4 June 2004
> Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
> GNU/Linux
I'm not sure that such an old kernel supports reshaping an array. The
mdadm version should not be a problem, as that message is probably
generated by the kernel.
I'd recommend trying to boot with a newer kernel, even if only for the
duration of the reshape.
regards,
iustin
* Re: reducing the number of disks a RAID1 expects
From: Bill Davidsen @ 2007-09-11 13:33 UTC
To: J. David Beutel, Richard Scobie, Linux RAID Mailing List
Iustin Pop wrote:
> On Sun, Sep 09, 2007 at 09:31:54PM -1000, J. David Beutel wrote:
>
>> [root@samue ~]# mdadm --grow /dev/md5 -n2
>> mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
>>
>> mdadm - v1.6.0 - 4 June 2004
>> Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
>> GNU/Linux
>>
>
> I'm not sure that such an old kernel supports reshaping an array. The
> mdadm version should not be a problem, as that message is probably
> generated by the kernel.
>
Well, it supported making it larger, but you could be right; he's going
down a less-tested code path.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: reducing the number of disks a RAID1 expects
From: Neil Brown @ 2007-09-11 14:04 UTC
To: Iustin Pop; +Cc: J. David Beutel, Richard Scobie, Linux RAID Mailing List
On Monday September 10, iusty@k1024.org wrote:
> On Sun, Sep 09, 2007 at 09:31:54PM -1000, J. David Beutel wrote:
> > [root@samue ~]# mdadm --grow /dev/md5 -n2
> > mdadm: Cannot set device size/shape for /dev/md5: Device or resource busy
> >
> > mdadm - v1.6.0 - 4 June 2004
> > Linux 2.6.12-1.1381_FC3 #1 Fri Oct 21 03:46:55 EDT 2005 i686 athlon i386
> > GNU/Linux
>
> I'm not sure that such an old kernel supports reshaping an array. The
> mdadm version should not be a problem, as that message is probably
> generated by the kernel.
2.6.12 does support reducing the number of drives in a raid1, but it
will only remove drives from the end of the list. e.g. if the
state was
58604992 blocks [3/2] [UU_]
then it would work. But as it is
58604992 blocks [3/2] [_UU]
it won't. You could fail the last drive (hdc8) and then add it back
in again. This would move it to the first slot, but it would cause a
full resync which is a bit of a waste.
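Roughly, and untested, so double-check the device names against
/proc/mdstat first:

# mdadm /dev/md5 --fail /dev/hdc8 --remove /dev/hdc8
# mdadm /dev/md5 --add /dev/hdc8
  (wait for the resync to finish; the array should then show [3/2] [UU_])
# mdadm --grow /dev/md5 --raid-devices=2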
Since commit 6ea9c07c6c6d1c14d9757dd8470dc4c85bbe9f28 (about
2.6.13-rc4) raid1 will repack the devices to the start of the
list when trying to change the number of devices.
>
> I'd recommend trying to boot with a newer kernel, even if only for the
> duration of the reshape.
>
Yes, a kernel upgrade would do the trick.
NeilBrown
* Re: reducing the number of disks a RAID1 expects
From: J. David Beutel @ 2007-09-15 21:13 UTC
To: Neil Brown; +Cc: Iustin Pop, Richard Scobie, Linux RAID Mailing List
Neil Brown wrote:
> 2.6.12 does support reducing the number of drives in a raid1, but it
> will only remove drives from the end of the list. e.g. if the
> state was
>
> 58604992 blocks [3/2] [UU_]
>
> then it would work. But as it is
>
> 58604992 blocks [3/2] [_UU]
>
> it won't. You could fail the last drive (hdc8) and then add it back
> in again. This would move it to the first slot, but it would cause a
> full resync which is a bit of a waste.
>
Thanks for your help! That's the route I took. It worked ([2/2]
[UU]). The only hiccup was that when I rebooted, hdd2 was back in the
first slot by itself ([3/1] [U__]). I guess there was some contention
in discovery. But all I had to do was physically remove hdd and the
remaining two were back to [2/2] [UU].
> Since commit 6ea9c07c6c6d1c14d9757dd8470dc4c85bbe9f28 (about
> 2.6.13-rc4) raid1 will repack the devices to the start of the
> list when trying to change the number of devices.
>
I couldn't find a newer kernel RPM for FC3, and I was nervous about
building a new kernel myself and screwing up my system, so I went the
slot rotate route instead. It only took about 20 minutes to resync (a
lot faster than trying to build a new kernel).
My main concern was that it would discover an unreadable sector while
resyncing from the last remaining drive and I would lose the whole
array. (That didn't happen, though.) I looked for some mdadm command
to check the remaining drive before I failed the last one, to help avoid
that worst-case scenario, but couldn't find any. Is there some way to
do that, for future reference?
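(The best I could come up with on my own was a raw read test of the
remaining member, e.g.

# dd if=/dev/hdg8 of=/dev/null bs=1M

just to confirm every sector is still readable, but I don't know
whether md itself offers anything smarter.)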
Cheers,
11011011
* Re: reducing the number of disks a RAID1 expects
From: Goswin von Brederlow @ 2007-09-16 22:09 UTC
To: J. David Beutel
Cc: Neil Brown, Iustin Pop, Richard Scobie, Linux RAID Mailing List
"J. David Beutel" <jdb@getsu.com> writes:
> Neil Brown wrote:
>> 2.6.12 does support reducing the number of drives in a raid1, but it
>> will only remove drives from the end of the list. e.g. if the
>> state was
>>
>> 58604992 blocks [3/2] [UU_]
>>
>> then it would work. But as it is
>>
>> 58604992 blocks [3/2] [_UU]
>>
>> it won't. You could fail the last drive (hdc8) and then add it back
>> in again. This would move it to the first slot, but it would cause a
>> full resync which is a bit of a waste.
>>
>
> Thanks for your help! That's the route I took. It worked ([2/2]
> [UU]). The only hiccup was that when I rebooted, hdd2 was back in the
> first slot by itself ([3/1] [U__]). I guess there was some contention
> in discovery. But all I had to do was physically remove hdd and the
> remaining two were back to [2/2] [UU].
mdadm --zero-superblock /dev/hdd2
Never forget that when removing a disk. It sucks when you reboot and
your / is suddenly on the removed disk instead of the remaining raid.
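If you're unsure whether the old disk still carries a superblock, you
can check before zeroing, e.g.:

mdadm --examine /dev/hdd2

If that still reports an md superblock, zero it before the next reboot.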
MfG
Goswin