Subject: problems with faulty disks and superblocks 1.0, 1.1 and 1.2
From: Hubert Verstraete
Date: 2007-05-31 9:20 UTC
To: linux-raid
Hello
I'm having problems with a RAID-1 configuration. I cannot re-add a disk
that I've failed, because each time I do this, the re-added disk is
still seen as failed.
After some investigation, I found that this problem only occurs when I
create the RAID array with superblocks 1.0, 1.1 or 1.2.
With superblock 0.90 I don't encounter this issue.
Here are the commands to reproduce the issue easily:
mdadm -C /dev/md_d0 -e 1.0 -l 1 -n 2 -b internal -R /dev/sda /dev/sdb
mdadm /dev/md_d0 -f /dev/sda
mdadm /dev/md_d0 -r /dev/sda
mdadm /dev/md_d0 -a /dev/sda
cat /proc/mdstat
The output of /proc/mdstat is:
Personalities : [raid1]
md_d0 : active raid1 sda[0](F) sdb[1]
104849 blocks super 1.2 [2/1] [_U]
bitmap: 0/7 pages [0KB], 8KB chunk
unused devices: <none>
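In case it's useful, the per-device state can also be checked with mdadm itself; something like the following should show sda still flagged as faulty (I'm omitting the output here):

mdadm --detail /dev/md_d0      # array-level view of member states
mdadm --examine /dev/sda       # superblock contents on the re-added disk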
I'm wondering if the way I'm failing and re-adding a disk is correct.
Did I do something wrong?
If I change the superblock to "-e 0.90", there's no problem with this
set of commands.
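For clarity, that means creating the array with exactly the same options and only the metadata version changed, roughly:

mdadm -C /dev/md_d0 -e 0.90 -l 1 -n 2 -b internal -R /dev/sda /dev/sdb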
For now, I've found a work-around with superblock 1.0, which consists of
zeroing the superblock before re-adding the disk. But I suppose that
doing so forces a full rebuild of the re-added disk, and I don't
want that, because I'm using write-intent bitmaps.
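Roughly, the work-around looks like this (just a sketch of what I'm doing; as said, I suppose it triggers a full resync rather than a bitmap-based catch-up):

mdadm /dev/md_d0 -f /dev/sda          # mark the disk faulty, as before
mdadm /dev/md_d0 -r /dev/sda          # remove it from the array
mdadm --zero-superblock /dev/sda      # wipe the stale 1.0 superblock
mdadm /dev/md_d0 -a /dev/sda          # the disk is now accepted, but a full rebuild starts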
I'm using mdadm v2.5.6 on Debian Etch with kernel 2.6.18-4.
Is this a bug, or a misunderstanding on my part? Any help would be appreciated :)
Thanks
Hubert
Subject: Re: problems with faulty disks and superblocks 1.0, 1.1 and 1.2
From: Hubert Verstraete
Date: 2007-06-04 16:43 UTC
To: linux-raid
Hubert Verstraete wrote:
> Hello
>
> I'm having problems with a RAID-1 configuration. I cannot re-add a
> disk that I've failed, because each time I do this, the re-added disk
> is still seen as failed.
> After some investigation, I found that this problem only occurs when I
> create the RAID array with superblocks 1.0, 1.1 or 1.2.
> With superblock 0.90 I don't encounter this issue.
> [...]

The kernel 2.6.20 Changelog says:
- restarting device recovery after a clean shutdown (version-1 metadata only) didn't work as intended (or at all).
That might have been my problem, and I can confirm that 2.6.20.12 works correctly.
Hubert