* Another report of a raid6 array being maintained by _raid5 in ps.
@ 2007-03-22 4:41 Mr. James W. Laferriere
2007-03-22 5:30 ` Neil Brown
2007-03-24 19:26 ` mdadm: RUN_ARRAY failed: Cannot allocate memory Mr. James W. Laferriere
0 siblings, 2 replies; 6+ messages in thread
From: Mr. James W. Laferriere @ 2007-03-22 4:41 UTC (permalink / raw)
To: linux-raid maillist
Hello Neil, someone else reported this before, but I'd thought it
was under an older kernel than 2.6.21-rc4. Hth, JimL
root 2936 0.0 0.0 2948 1760 tts/0 Ss 04:30 0:00 -bash
root 2965 0.3 0.0 0 0 ? S< 04:34 0:00 [md3_raid5]
root 2977 0.0 0.0 2380 912 tts/0 R+ 04:38 0:00 ps -auxww
root@(none):~# uname -a
Linux (none) 2.6.21-rc4 #2 SMP Thu Mar 22 04:19:35 UTC 2007 i686 pentium4 i386 GNU/Linux
root@(none):~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md3 : active raid6 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
573905664 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
bitmap: 0/137 pages [0KB], 512KB chunk
--
+-----------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | 663 Beaumont Blvd | Give me Linux |
| babydr@baby-dragons.com | Pacifica, CA. 94044 | only on AXP |
+-----------------------------------------------------------------+
* Re: Another report of a raid6 array being maintained by _raid5 in ps.
2007-03-22 4:41 Another report of a raid6 array being maintained by _raid5 in ps Mr. James W. Laferriere
@ 2007-03-22 5:30 ` Neil Brown
2007-03-22 13:45 ` Bill Davidsen
2007-03-24 19:26 ` mdadm: RUN_ARRAY failed: Cannot allocate memory Mr. James W. Laferriere
1 sibling, 1 reply; 6+ messages in thread
From: Neil Brown @ 2007-03-22 5:30 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: linux-raid maillist
On Wednesday March 21, babydr@baby-dragons.com wrote:
> Hello Neil, someone else reported this before, but I'd thought it
> was under an older kernel than 2.6.21-rc4. Hth, JimL
>
> root 2936 0.0 0.0 2948 1760 tts/0 Ss 04:30 0:00 -bash
> root 2965 0.3 0.0 0 0 ? S< 04:34 0:00 [md3_raid5]
> root 2977 0.0 0.0 2380 912 tts/0 R+ 04:38 0:00 ps -auxww
>
> root@(none):~# uname -a
> Linux (none) 2.6.21-rc4 #2 SMP Thu Mar 22 04:19:35 UTC 2007 i686 pentium4 i386 GNU/Linux
>
> root@(none):~# cat /proc/mdstat
> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
> md3 : active raid6 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
> 573905664 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
> bitmap: 0/137 pages [0KB], 512KB chunk
It's just a name....
Given that the module is raid456.ko, how about this?
There are lots of error messages that say 'raid5' too....
NeilBrown
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2007-03-13 10:38:33.000000000 +1100
+++ ./drivers/md/raid5.c 2007-03-22 16:28:37.000000000 +1100
@@ -4209,7 +4209,7 @@ static int run(mddev_t *mddev)
}
{
- mddev->thread = md_register_thread(raid5d, mddev, "%s_raid5");
+ mddev->thread = md_register_thread(raid5d, mddev, "%s_raid456");
if (!mddev->thread) {
printk(KERN_ERR
"raid5: couldn't allocate thread for %s\n",
* Re: Another report of a raid6 array being maintained by _raid5 in ps.
2007-03-22 5:30 ` Neil Brown
@ 2007-03-22 13:45 ` Bill Davidsen
0 siblings, 0 replies; 6+ messages in thread
From: Bill Davidsen @ 2007-03-22 13:45 UTC (permalink / raw)
To: Neil Brown; +Cc: Mr. James W. Laferriere, linux-raid maillist
Neil Brown wrote:
> On Wednesday March 21, babydr@baby-dragons.com wrote:
>
>> Hello Neil, someone else reported this before, but I'd thought it
>> was under an older kernel than 2.6.21-rc4. Hth, JimL
>>
>> root 2936 0.0 0.0 2948 1760 tts/0 Ss 04:30 0:00 -bash
>> root 2965 0.3 0.0 0 0 ? S< 04:34 0:00 [md3_raid5]
>> root 2977 0.0 0.0 2380 912 tts/0 R+ 04:38 0:00 ps -auxww
>>
>> root@(none):~# uname -a
>> Linux (none) 2.6.21-rc4 #2 SMP Thu Mar 22 04:19:35 UTC 2007 i686 pentium4 i386 GNU/Linux
>>
>> root@(none):~# cat /proc/mdstat
>> Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
>> md3 : active raid6 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
>> 573905664 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
>> bitmap: 0/137 pages [0KB], 512KB chunk
>>
>
> It's just a name....
> Given that the module is raid456.ko, how about this?
>
> There are lots of error messages that say 'raid5' too....
>
Yes, and lots of log watching programs which know about them. I bet they
don't know raid456, though, and will either miss them or report them as
"unknown error" on the scan. Do we really need a cosmetic change which
will require a functional change with no gain in function?
> NeilBrown
>
> Signed-off-by: Neil Brown <neilb@suse.de>
>
> ### Diffstat output
> ./drivers/md/raid5.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
> --- .prev/drivers/md/raid5.c 2007-03-13 10:38:33.000000000 +1100
> +++ ./drivers/md/raid5.c 2007-03-22 16:28:37.000000000 +1100
> @@ -4209,7 +4209,7 @@ static int run(mddev_t *mddev)
> }
>
> {
> - mddev->thread = md_register_thread(raid5d, mddev, "%s_raid5");
> + mddev->thread = md_register_thread(raid5d, mddev, "%s_raid456");
> if (!mddev->thread) {
> printk(KERN_ERR
> "raid5: couldn't allocate thread for %s\n",
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* mdadm: RUN_ARRAY failed: Cannot allocate memory
2007-03-22 4:41 Another report of a raid6 array being maintained by _raid5 in ps Mr. James W. Laferriere
2007-03-22 5:30 ` Neil Brown
@ 2007-03-24 19:26 ` Mr. James W. Laferriere
2007-03-29 7:54 ` Neil Brown
1 sibling, 1 reply; 6+ messages in thread
From: Mr. James W. Laferriere @ 2007-03-24 19:26 UTC (permalink / raw)
To: linux-raid maillist
Hello Neil, I found the problem that caused the 'cannot allocate
memory': DON'T use '--bitmap='.
But that said, hmmm, shouldn't mdadm just stop and say
'md: bitmaps not supported for this level.'
like it puts into dmesg?
Also, I think this message in dmesg is interesting:
'raid0: bad disk number -1 - aborting!'
Hth, JimL
ps:
# mdadm -C /dev/md6 -l 0 -n 3 --bitmap=internal /dev/md[3-5]
mdadm: /dev/md4 appears to be part of a raid array:
level=raid0 devices=3 ctime=Sat Mar 24 18:13:38 2007
mdadm: /dev/md5 appears to be part of a raid array:
level=raid0 devices=3 ctime=Sat Mar 24 18:13:38 2007
Continue creating array? y
mdadm: RUN_ARRAY failed: Cannot allocate memory
mdadm: stopped /dev/md6
# mdadm --version
mdadm - v2.6.1 - 22nd February 2007
# uname -a
Linux filesrv1b 2.6.21-rc4 #2 SMP Thu Mar 22 04:19:35 UTC 2007 i686 pentium4 i386 GNU/Linux
More info at:
http://www.baby-dragons.com/filesrv2-raid-and-scsi-device-LettersnNumbers.txt
--
+-----------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | 663 Beaumont Blvd | Give me Linux |
| babydr@baby-dragons.com | Pacifica, CA. 94044 | only on AXP |
+-----------------------------------------------------------------+
* Re: mdadm: RUN_ARRAY failed: Cannot allocate memory
2007-03-24 19:26 ` mdadm: RUN_ARRAY failed: Cannot allocate memory Mr. James W. Laferriere
@ 2007-03-29 7:54 ` Neil Brown
2007-03-30 16:37 ` Bill Davidsen
0 siblings, 1 reply; 6+ messages in thread
From: Neil Brown @ 2007-03-29 7:54 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: linux-raid maillist
On Saturday March 24, babydr@baby-dragons.com wrote:
> Hello Neil, I found the problem that caused the 'cannot allocate
> memory': DON'T use '--bitmap='.
> But that said, hmmm, shouldn't mdadm just stop and say
> 'md: bitmaps not supported for this level.'
> like it puts into dmesg?
>
> Also, I think this message in dmesg is interesting:
> 'raid0: bad disk number -1 - aborting!'
>
> Hth, JimL
Yeah.... mdadm should be fixed too, but this kernel patch should make
it behave a bit better. I'll queue it for 2.6.22.
Thanks,
NeilBrown
Move test for whether level supports bitmap to correct place.
We need to check for internal-consistency of superblock in
load_super. validate_super is for inter-device consistency.
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 42 ++++++++++++++++++++++++++----------------
1 file changed, 26 insertions(+), 16 deletions(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2007-03-29 16:42:18.000000000 +1000
+++ ./drivers/md/md.c 2007-03-29 16:49:26.000000000 +1000
@@ -695,6 +695,17 @@ static int super_90_load(mdk_rdev_t *rde
rdev->data_offset = 0;
rdev->sb_size = MD_SB_BYTES;
+ if (sb->state & (1<<MD_SB_BITMAP_PRESENT)) {
+ if (sb->level != 1 && sb->level != 4
+ && sb->level != 5 && sb->level != 6
+ && sb->level != 10) {
+ /* FIXME use a better test */
+ printk(KERN_WARNING
+ "md: bitmaps not supported for this level.\n");
+ goto abort;
+ }
+ }
+
if (sb->level == LEVEL_MULTIPATH)
rdev->desc_nr = -1;
else
@@ -793,16 +804,8 @@ static int super_90_validate(mddev_t *md
mddev->max_disks = MD_SB_DISKS;
if (sb->state & (1<<MD_SB_BITMAP_PRESENT) &&
- mddev->bitmap_file == NULL) {
- if (mddev->level != 1 && mddev->level != 4
- && mddev->level != 5 && mddev->level != 6
- && mddev->level != 10) {
- /* FIXME use a better test */
- printk(KERN_WARNING "md: bitmaps not supported for this level.\n");
- return -EINVAL;
- }
+ mddev->bitmap_file == NULL)
mddev->bitmap_offset = mddev->default_bitmap_offset;
- }
} else if (mddev->pers == NULL) {
/* Insist on good event counter while assembling */
@@ -1059,6 +1062,18 @@ static int super_1_load(mdk_rdev_t *rdev
bdevname(rdev->bdev,b));
return -EINVAL;
}
+ if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET)) {
+ if (sb->level != cpu_to_le32(1) &&
+ sb->level != cpu_to_le32(4) &&
+ sb->level != cpu_to_le32(5) &&
+ sb->level != cpu_to_le32(6) &&
+ sb->level != cpu_to_le32(10)) {
+ printk(KERN_WARNING
+ "md: bitmaps not supported for this level.\n");
+ return -EINVAL;
+ }
+ }
+
rdev->preferred_minor = 0xffff;
rdev->data_offset = le64_to_cpu(sb->data_offset);
atomic_set(&rdev->corrected_errors, le32_to_cpu(sb->cnt_corrected_read));
@@ -1142,14 +1157,9 @@ static int super_1_validate(mddev_t *mdd
mddev->max_disks = (4096-256)/2;
if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET) &&
- mddev->bitmap_file == NULL ) {
- if (mddev->level != 1 && mddev->level != 5 && mddev->level != 6
- && mddev->level != 10) {
- printk(KERN_WARNING "md: bitmaps not supported for this level.\n");
- return -EINVAL;
- }
+ mddev->bitmap_file == NULL )
mddev->bitmap_offset = (__s32)le32_to_cpu(sb->bitmap_offset);
- }
+
if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_RESHAPE_ACTIVE)) {
mddev->reshape_position = le64_to_cpu(sb->reshape_position);
mddev->delta_disks = le32_to_cpu(sb->delta_disks);
* Re: mdadm: RUN_ARRAY failed: Cannot allocate memory
2007-03-29 7:54 ` Neil Brown
@ 2007-03-30 16:37 ` Bill Davidsen
0 siblings, 0 replies; 6+ messages in thread
From: Bill Davidsen @ 2007-03-30 16:37 UTC (permalink / raw)
To: Neil Brown; +Cc: Mr. James W. Laferriere, linux-raid maillist, Andrew Morton
Neil Brown wrote:
> On Saturday March 24, babydr@baby-dragons.com wrote:
>
>> Hello Neil, I found the problem that caused the 'cannot allocate
>> memory': DON'T use '--bitmap='.
>> But that said, hmmm, shouldn't mdadm just stop and say
>> 'md: bitmaps not supported for this level.'
>> like it puts into dmesg?
>>
>> Also, I think this message in dmesg is interesting:
>> 'raid0: bad disk number -1 - aborting!'
>>
>> Hth, JimL
>>
>
> Yeah.... mdadm should be fixed too, but this kernel patch should make
> it behave a bit better. I'll queue it for 2.6.22.
>
Given the release cycle, this might fit 2.6.21-rc6 (as it is a fix), or
stable 2.6.21.1 if 2.6.21 comes out soon. In any case it could go into -mm
for testing, to make sure it is pushed at an appropriate time.
> Thanks,
> NeilBrown
>
>
> Move test for whether level supports bitmap to correct place.
>
> We need to check for internal-consistency of superblock in
> load_super. validate_super is for inter-device consistency.
>
>
> Signed-off-by: Neil Brown <neilb@suse.de>
>
> ### Diffstat output
> ./drivers/md/md.c | 42 ++++++++++++++++++++++++++----------------
> 1 file changed, 26 insertions(+), 16 deletions(-)
>
> diff .prev/drivers/md/md.c ./drivers/md/md.c
> --- .prev/drivers/md/md.c 2007-03-29 16:42:18.000000000 +1000
> +++ ./drivers/md/md.c 2007-03-29 16:49:26.000000000 +1000
> @@ -695,6 +695,17 @@ static int super_90_load(mdk_rdev_t *rde
> rdev->data_offset = 0;
> rdev->sb_size = MD_SB_BYTES;
>
> + if (sb->state & (1<<MD_SB_BITMAP_PRESENT)) {
> + if (sb->level != 1 && sb->level != 4
> + && sb->level != 5 && sb->level != 6
> + && sb->level != 10) {
> + /* FIXME use a better test */
> + printk(KERN_WARNING
> + "md: bitmaps not supported for this level.\n");
> + goto abort;
> + }
> + }
> +
> if (sb->level == LEVEL_MULTIPATH)
> rdev->desc_nr = -1;
> else
> @@ -793,16 +804,8 @@ static int super_90_validate(mddev_t *md
> mddev->max_disks = MD_SB_DISKS;
>
> if (sb->state & (1<<MD_SB_BITMAP_PRESENT) &&
> - mddev->bitmap_file == NULL) {
> - if (mddev->level != 1 && mddev->level != 4
> - && mddev->level != 5 && mddev->level != 6
> - && mddev->level != 10) {
> - /* FIXME use a better test */
> - printk(KERN_WARNING "md: bitmaps not supported for this level.\n");
> - return -EINVAL;
> - }
> + mddev->bitmap_file == NULL)
> mddev->bitmap_offset = mddev->default_bitmap_offset;
> - }
>
> } else if (mddev->pers == NULL) {
> /* Insist on good event counter while assembling */
> @@ -1059,6 +1062,18 @@ static int super_1_load(mdk_rdev_t *rdev
> bdevname(rdev->bdev,b));
> return -EINVAL;
> }
> + if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET)) {
> + if (sb->level != cpu_to_le32(1) &&
> + sb->level != cpu_to_le32(4) &&
> + sb->level != cpu_to_le32(5) &&
> + sb->level != cpu_to_le32(6) &&
> + sb->level != cpu_to_le32(10)) {
> + printk(KERN_WARNING
> + "md: bitmaps not supported for this level.\n");
> + return -EINVAL;
> + }
> + }
> +
> rdev->preferred_minor = 0xffff;
> rdev->data_offset = le64_to_cpu(sb->data_offset);
> atomic_set(&rdev->corrected_errors, le32_to_cpu(sb->cnt_corrected_read));
> @@ -1142,14 +1157,9 @@ static int super_1_validate(mddev_t *mdd
> mddev->max_disks = (4096-256)/2;
>
> if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_BITMAP_OFFSET) &&
> - mddev->bitmap_file == NULL ) {
> - if (mddev->level != 1 && mddev->level != 5 && mddev->level != 6
> - && mddev->level != 10) {
> - printk(KERN_WARNING "md: bitmaps not supported for this level.\n");
> - return -EINVAL;
> - }
> + mddev->bitmap_file == NULL )
> mddev->bitmap_offset = (__s32)le32_to_cpu(sb->bitmap_offset);
> - }
> +
> if ((le32_to_cpu(sb->feature_map) & MD_FEATURE_RESHAPE_ACTIVE)) {
> mddev->reshape_position = le64_to_cpu(sb->reshape_position);
> mddev->delta_disks = le32_to_cpu(sb->delta_disks);
>
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979