* very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
@ 2012-06-12 21:08 Iordan Iordanov
2012-06-18 9:34 ` Alexander Lyakas
0 siblings, 1 reply; 4+ messages in thread
From: Iordan Iordanov @ 2012-06-12 21:08 UTC (permalink / raw)
To: Linux RAID
Hello,
On Ubuntu 12.04 with a standard kernel (3.2) we've been seeing very
strange behavior with our RAID1 sets, both with superblock 1.2, and with
0.9. The system has been instructed to come up with a degraded array in
initrd, in case this is relevant. Here is an example of what is
happening. We have 5 RAID1 sets on a server. They live on partitions on
/dev/sda and /dev/sdb. The server comes up with 2 out of 5 sets degraded,
and the others just fine.
Trying to re-add or add the partitions into the arrays fails like this:
# mdadm /dev/md2 --re-add /dev/sda6
mdadm: --re-add for /dev/sda6 to /dev/md2 is not possible
# mdadm /dev/md2 --add /dev/sda6
mdadm: /dev/sda6 reports being an active member for /dev/md2, but a --re-add fails.
mdadm: not performing --add as that would convert /dev/sda6 in to a spare.
mdadm: To make this a spare, use "mdadm --zero-superblock /dev/sda6" first.
Here is some more information from /proc/mdstat, dmesg, and syslog.
# cat /proc/mdstat
md2 : active raid1 sdb6[1]
20479872 blocks [2/1] [_U]
md3 : active raid1 sdb7[0]
10239872 blocks [2/1] [U_]
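For anyone less used to /proc/mdstat, the bracketed fields above can be read as follows. This is only an illustrative sketch of the standard md notation ([raid-disks/active-disks], then one character per slot), not a general mdstat parser:

```shell
# Decode the "[2/1] [_U]" status shown for md2 above.
# Assumed meaning: [raid-disks/active-disks]; U = slot up, _ = slot missing.
status='[2/1] [_U]'
pair=${status%%]*}   # -> "[2/1"
pair=${pair#?}       # -> "2/1"
total=${pair%/*}     # -> "2"
active=${pair#*/}    # -> "1"
if [ "$total" -ne "$active" ]; then
  echo "degraded: $active of $total members active"
fi
```

So both arrays are running with one of two mirrors; only the up/missing slot differs (`[_U]` vs `[U_]`).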
# dmesg | grep md2
[ 4.087037] md/raid1:md2: active with 1 out of 2 mirrors
[ 4.087147] md2: detected capacity change from 0 to 20971388928
[ 4.119168] md2: unknown partition table
[ 12.383035] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: (null)
# dmesg | grep md3
[ 4.083084] md/raid1:md3: active with 1 out of 2 mirrors
[ 4.083230] md3: detected capacity change from 0 to 10485628928
[ 4.180986] md3: unknown partition table
[ 9.631814] EXT4-fs (md3): mounted filesystem with ordered data mode. Opts: (null)
# ls -l /dev/sda6
brw-rw---- 1 root disk 8, 6 Jun 12 16:54 /dev/sda6
# ls -l /dev/sda7
brw-rw---- 1 root disk 8, 7 Jun 12 16:54 /dev/sda7
# grep md2 /var/log/syslog
Jun 12 16:54:32 ps2 kernel: [ 4.087037] md/raid1:md2: active with 1 out of 2 mirrors
Jun 12 16:54:32 ps2 kernel: [ 4.087147] md2: detected capacity change from 0 to 20971388928
Jun 12 16:54:32 ps2 kernel: [ 4.119168] md2: unknown partition table
Jun 12 16:54:32 ps2 kernel: [ 12.383035] EXT4-fs (md2): mounted filesystem with ordered data mode. Opts: (null)
Jun 12 16:54:38 ps2 mdadm[1181]: DegradedArray event detected on md device /dev/md2
* Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
From: Alexander Lyakas @ 2012-06-18 9:34 UTC (permalink / raw)
To: Iordan Iordanov; +Cc: Linux RAID
Iordan,
you may be hitting an issue I recently discussed with Neil here:
http://www.spinics.net/lists/raid/msg39137.html
Please check (using mdadm --examine) whether the drive you are trying
to re-add has a valid "Recovery Offset" in the superblock. In other
words, the drive was recovering before the reboot. If yes, then this
is the issue. Hopefully, we can convince (somebody) to backport it to
ubuntu-precise...
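The check described above can be sketched roughly like this. The `--examine` excerpt is invented for illustration (field names and values are stand-ins, not output from the affected machine):

```shell
# Look for a "Recovery Offset" line in the v1.x superblock dump, which
# would indicate the member was mid-recovery when the box went down.
# $examine_output stands in for `mdadm --examine /dev/sda6` output;
# the lines below are made up for the example.
examine_output='Device Role : Active device 0
Recovery Offset : 2048 sectors
Array State : AA'
if printf '%s\n' "$examine_output" | grep -q '^Recovery Offset'; then
  echo "member was recovering before the reboot"
else
  echo "no recovery offset recorded"
fi
```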
Alex.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
From: Iordan Iordanov @ 2012-06-18 21:04 UTC (permalink / raw)
To: Alexander Lyakas; +Cc: Linux RAID
Hi Alexander,
In our case, we saw this behavior on three systems with RAID1, none of
which was rebuilding its RAID1 array when the system was rebooted. We
also witnessed this on a RAID6 system with 6 drives. We failed and
removed a drive, and moved it to another slot on the machine (to test
something else). Trying to add the drive back into the RAID6 array
(which triggered a re-add) caused the same error to be output, namely:
mdadm: /dev/sdf2 reports being an active member for /dev/md2, but a --re-add fails.
So our cases do seem to be somewhat different.
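Besides a pending recovery offset, another common reason a re-add is refused is a stale per-member event counter. A rough sketch of that comparison, with made-up numbers standing in for the "Events" values from `mdadm --examine` on the member and on a current array member:

```shell
# Hypothetical comparison of the "Events" counters from two superblocks.
# Both values are invented for the example; in practice they would come
# from `mdadm --examine /dev/sdX | grep Events` on each device.
events_member=4711   # the device being re-added
events_array=4890    # a device still active in the array
if [ "$events_member" -lt "$events_array" ]; then
  echo "member superblock is stale by $((events_array - events_member)) events"
fi
```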
Cheers,
Iordan
* Re: very strange behavior with RAID1 arrays on Ubuntu 12.04 (kernel 3.2)
From: Iordan Iordanov @ 2012-06-19 17:48 UTC (permalink / raw)
To: Linux RAID
Hey guys,
Just a bit of extra information: it looks like this mdadm bug was the
cause of our problem (at least the people on Launchpad consider it a
confirmed bug):
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/925280
What do you think?
Cheers and thanks!
Iordan