linux-raid.vger.kernel.org archive mirror
* BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
@ 2005-05-03 11:15 Tyler
  2005-05-03 11:38 ` Tyler
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Tyler @ 2005-05-03 11:15 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid

When I try to create any raid5/raid6 array with --metadata 1 or 
--metadata 1.0, mdadm simply reports that /dev/hdX is busy, where X is 
whichever drive I listed first as an array member.  mdadm 1.x.0 doesn't 
support version 1 superblocks, but I used it in the example below 
without the metadata option to show that creating the raid works fine; 
the same is true with mdadm v2.0-devel, which creates the array fine 
until you add the metadata option.  Kernel is 2.6.12-rc3-mm2.

First I create an array using v1.9.0 successfully (from system path):

root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -C -l 5 -n 3 /dev/md0 
/dev/hdb /dev/hdc /dev/hdd
mdadm: array /dev/md0 started.
root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -S /dev/md0

Then successfully create an array with default superblock (0.90?) using 
v2.0-devel (from current dir):

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 /dev/md0 
/dev/hdb /dev/hdc /dev/hdd
mdadm: /dev/hdb appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
mdadm: /dev/hdc appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
mdadm: /dev/hdd appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
Continue creating array? y
VERS = 9002
mdadm: array /dev/md0 started.
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -S /dev/md0

Then try creating a raid with version 1 superblock, which fails:

root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -C -l 5 -n 3 --metadata 1 
/dev/md0 /dev/hdb /dev/hdc /dev/hdd
mdadm: /dev/hdb appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
mdadm: /dev/hdc appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
mdadm: /dev/hdd appears to be part of a raid array:
    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
Continue creating array? y
VERS = 9002
mdadm: ADD_NEW_DISK for /dev/hdb failed: Device or resource busy

I believe it *could* have something to do with the fact that mdadm 
2.0-devel doesn't detect previously written (and/or blanked) raid 
superblocks; see the previous bug report I filed a few minutes ago.  
Maybe it's seeing what the --examine feature shows, that there's still 
an "active" array on the drive, and is therefore unable to create a new 
one.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-03 11:15 BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport Tyler
@ 2005-05-03 11:38 ` Tyler
  2005-05-03 23:54 ` Neil Brown
  2 siblings, 0 replies; 10+ messages in thread
From: Tyler @ 2005-05-03 11:38 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid

Some possibly useful information I forgot to include: the following 
appeared in dmesg just after the attempt to create the array with the 
version 1 superblock.  cat /proc/mdstat showed no raid devices active:

md: could not bd_claim hdb.
md: md_import_device returned -16

Tyler wrote:

> When I try to create any raid5/raid6 array with --metadata 1 or 
> --metadata 1.0, mdadm simply reports that /dev/hdX is busy, where X is 
> whichever drive I listed first as an array member.  mdadm 1.x.0 
> doesn't support version 1 superblocks, but I used it in the example 
> below without the metadata option to show that creating the raid works 
> fine; the same is true with mdadm v2.0-devel, which creates the array 
> fine until you add the metadata option.  Kernel is 2.6.12-rc3-mm2.
>
> First I create an array using v1.9.0 successfully (from system path):
>
> root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -C -l 5 -n 3 /dev/md0 
> /dev/hdb /dev/hdc /dev/hdd
> mdadm: array /dev/md0 started.
> root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -S /dev/md0
>
> Then successfully create an array with default superblock (0.90?) 
> using v2.0-devel (from current dir):
>
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 /dev/md0 
> /dev/hdb /dev/hdc /dev/hdd
> mdadm: /dev/hdb appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
> mdadm: /dev/hdc appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
> mdadm: /dev/hdd appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:42:28 2005
> Continue creating array? y
> VERS = 9002
> mdadm: array /dev/md0 started.
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -S /dev/md0
>
> Then try creating a raid with version 1 superblock, which fails:
>
> root@localhost:~/dev/mdadm-2.0-devel-1# mdadm -C -l 5 -n 3 --metadata 
> 1 /dev/md0 /dev/hdb /dev/hdc /dev/hdd
> mdadm: /dev/hdb appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
> mdadm: /dev/hdc appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
> mdadm: /dev/hdd appears to be part of a raid array:
>    level=5 devices=3 ctime=Tue May  3 11:43:00 2005
> Continue creating array? y
> VERS = 9002
> mdadm: ADD_NEW_DISK for /dev/hdb failed: Device or resource busy
>
> I believe it *could* have something to do with the fact that mdadm 
> 2.0-devel doesn't detect previously written (and/or blanked) raid 
> superblocks; see the previous bug report I filed a few minutes ago.  
> Maybe it's seeing what the --examine feature shows, that there's still 
> an "active" array on the drive, and is therefore unable to create a 
> new one.


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-03 11:15 BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport Tyler
  2005-05-03 11:38 ` Tyler
@ 2005-05-03 23:54 ` Neil Brown
  2005-05-04  1:36   ` Tyler
  2 siblings, 1 reply; 10+ messages in thread
From: Neil Brown @ 2005-05-03 23:54 UTC (permalink / raw)
  To: Tyler; +Cc: linux-raid

On Tuesday May 3, pml@dtbb.net wrote:
> When I try to create any raid5/raid6 array with --metadata 1 or 
> --metadata 1.0, mdadm simply reports that /dev/hdX is busy, where X 
> is whichever drive I listed first as an array member.

Yes .... thanks for the very clear bug report.  Some clumsy developer
forgot to close the file, didn't they :-(
Patch below.

NeilBrown



### Diffstat output
 ./super1.c |    9 +++++++--
 1 files changed, 7 insertions(+), 2 deletions(-)

diff ./super1.c~current~ ./super1.c
--- ./super1.c~current~	2005-05-04 09:45:24.000000000 +1000
+++ ./super1.c	2005-05-04 09:52:34.000000000 +1000
@@ -496,11 +496,15 @@ static int write_init_super1(struct supe
 		free(refsb);
 	}
     
-	if (ioctl(fd, BLKGETSIZE, &size))
+	if (ioctl(fd, BLKGETSIZE, &size)) {
+		close(fd);
 		return 1;
+	}
 
-	if (size < 24)
+	if (size < 24) {
+		close(fd);
 		return 2;
+	}
 
 
 	/*
@@ -540,6 +544,7 @@ static int write_init_super1(struct supe
 	rv = store_super1(fd, sb);
 	if (rv)
 		fprintf(stderr, Name ": failed to write superblock to %s\n", devname);
+	close(fd);
 	return rv;
 }
 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-03 23:54 ` Neil Brown
@ 2005-05-04  1:36   ` Tyler
  2005-05-04  2:17     ` Neil Brown
  0 siblings, 1 reply; 10+ messages in thread
From: Tyler @ 2005-05-04  1:36 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Hi Neil,

I've gotten past the device being busy using the patch, and onto a new 
error message and set of problems:

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 -e 1 
/dev/md0 /dev/hdb /dev/hdc /dev/hdd
VERS = 9002
mdadm: ADD_NEW_DISK for /dev/hdb failed: Invalid argument
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
/dev/hdb:
          Magic : a92b4efc
        Version : 01.00
     Array UUID : 02808c44c0:1a8f4351:d5dc7f68:fe2e4c
           Name :
  Creation Time : Wed May  4 02:31:48 2005
     Raid Level : raid5
   Raid Devices : 3

    Device Size : 390721952 (186.31 GiB 200.05 GB)
   Super Offset : 390721952 sectors
          State : active
    Device UUID : 02808c44c0:1a8f4351:d5dc7f68:fe2e4c
    Update Time : Wed May  4 02:31:48 2005
       Checksum : 7462e130 - correct
         Events : 0

         Layout : -unknown-
     Chunk Size : 64K

   Array State : Uu_ 380 spares 2 failed
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd

And the following in Dmesg:
md: hdb has invalid sb, not importing!
md: md_import_device returned -22

So it would seem that a bad superblock gets written to the first 
device, then the program bails out, leaving the bad superblock on a 
device that was blank before, and doesn't finish.  mdadm is unable to 
zero this superblock either, as I posted earlier, even after multiple 
attempts.  I did manage to erase it using 'dd if=/dev/zero of=/dev/hdb 
bs=64k seek=3050000' to seek near the end of the drive and overwrite 
it, but when I then re-ran the steps above I hit the same problem each 
time.
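
(For reference, a more targeted wipe is possible, since -E reports 
where the superblock lives.  Assuming the Super Offset of 390721952 
sectors shown above and 512-byte sectors, something like

  dd if=/dev/zero of=/dev/hdb bs=512 seek=390721952 count=8

should zero just the superblock area (4 KiB here).  Double-check the 
offset and the device name before running it, of course, since it 
overwrites the metadata directly.)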

Thanks,
Tyler.

Neil Brown wrote:

>On Tuesday May 3, pml@dtbb.net wrote:
>  
>
>>When I try to create any raid5/raid6 array with --metadata 1 or 
>>--metadata 1.0, mdadm simply reports that /dev/hdX is busy, where X 
>>is whichever drive I listed first as an array member.
>>    
>>
>
>Yes .... thanks for the very clear bug report.  Some clumsy developer
>forgot to close the file, didn't they :-(
>Patch below.
>
>NeilBrown
>
>
>
>### Diffstat output
> ./super1.c |    9 +++++++--
> 1 files changed, 7 insertions(+), 2 deletions(-)
>
>diff ./super1.c~current~ ./super1.c
>--- ./super1.c~current~	2005-05-04 09:45:24.000000000 +1000
>+++ ./super1.c	2005-05-04 09:52:34.000000000 +1000
>@@ -496,11 +496,15 @@ static int write_init_super1(struct supe
> 		free(refsb);
> 	}
>     
>-	if (ioctl(fd, BLKGETSIZE, &size))
>+	if (ioctl(fd, BLKGETSIZE, &size)) {
>+		close(fd);
> 		return 1;
>+	}
> 
>-	if (size < 24)
>+	if (size < 24) {
>+		close(fd);
> 		return 2;
>+	}
> 
> 
> 	/*
>@@ -540,6 +544,7 @@ static int write_init_super1(struct supe
> 	rv = store_super1(fd, sb);
> 	if (rv)
> 		fprintf(stderr, Name ": failed to write superblock to %s\n", devname);
>+	close(fd);
> 	return rv;
> }
> 


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-04  1:36   ` Tyler
@ 2005-05-04  2:17     ` Neil Brown
  2005-05-04  5:08       ` Tyler
  0 siblings, 1 reply; 10+ messages in thread
From: Neil Brown @ 2005-05-04  2:17 UTC (permalink / raw)
  To: Tyler; +Cc: linux-raid

On Tuesday May 3, pml@dtbb.net wrote:
> Hi Neil,
> 
> I've gotten past the device being busy using the patch, and onto a new 
> error message and set of problems:
> 
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd
> root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 -e 1 
> /dev/md0 /dev/hdb /dev/hdc /dev/hdd
> VERS = 9002
> mdadm: ADD_NEW_DISK for /dev/hdb failed: Invalid argument
....
> 
> And the following in Dmesg:
> md: hdb has invalid sb, not importing!
> md: md_import_device returned -22
> 

Hey, I got that too!
You must be running an -mm kernel (I cannot remember what kernel you
said you were using).

Look in include/linux/raid/md_p.h near line 205.
If it has
	__u32	chunksize;	/* in 512byte sectors */
	__u32	raid_disks;
	__u32	bitmap_offset;	/* sectors after start of superblock that bitmap starts
				 * NOTE: signed, so bitmap can be before superblock
				 * only meaningful of feature_map[0] is set.
				 */
	__u8	pad1[128-96];	/* set to 0 when written */

then change the '96' to '100'.  (It should have been changed when
bitmap_offset was added).
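
As a diff, that is just the one-line change (a sketch against the 
fragment quoted above; the exact position in your md_p.h may differ):

-	__u8	pad1[128-96];	/* set to 0 when written */
+	__u8	pad1[128-100];	/* set to 0 when written */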

You will then need to patch mdadm some more.  In super1.c near line 400,

	sb->ctime = __cpu_to_le64((unsigned long long)time(0));
	sb->level = __cpu_to_le32(info->level);
	sb->layout = __cpu_to_le32(info->level);
	sb->size = __cpu_to_le64(info->size*2ULL);

notice that 'layout' is being set to 'level'.  This is wrong.  That
line should be

	sb->layout = __cpu_to_le32(info->layout);
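
Or, expressed as a diff against that fragment (just a sketch; the 
surrounding lines are as quoted above):

-	sb->layout = __cpu_to_le32(info->level);
+	sb->layout = __cpu_to_le32(info->layout);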

With these changes, I can create a 56 device raid6 array. (I only have
14 drives, but I partitioned each into 4 equal parts!).

I'll try to do another mdadm-2 release in the next week.

Thanks for testing this stuff...

NeilBrown

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-04  2:17     ` Neil Brown
@ 2005-05-04  5:08       ` Tyler
  2005-05-04  5:59         ` Neil Brown
  2005-05-04  6:00         ` Neil Brown
  0 siblings, 2 replies; 10+ messages in thread
From: Tyler @ 2005-05-04  5:08 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

What kernel are you using Neil, and what patches to the kernel if any, 
and which patches to mdadm 2.0-devel?  I'm still having difficulty 
here. :(  I even tried compiling a 2.6.11-rc3-mm2 kernel like your 
2.0-devel announcement suggested, and put your patches from 02-18 
against it, and still no love.  Included is some detailed output from 
trying this all over again with 2.6.11-rc3-mm2 plus your patches, and 
mdadm 2.0-devel plus the patch you put on this list a few messages ago, 
plus the suggested changes to include/linux/raid/md_p.h and to super1.c 
(sb->layout set from info->layout instead of info->level, from your 
reply below; no patch supplied).  I tried both with and without the 96 
-> 100 change, since the 2.6.11-rc3-mm2 kernel didn't have the 
bitmap_offset patch; I am assuming that would be there if I applied the 
only patch on your site that has come out since 2.0-devel, the one that 
adds bitmap support (that patch only mentions bitmaps for the 0.90.0 
superblock, but that is neither here nor there).

root@localhost:~/dev/mdadm-2.0-devel-1# uname -a
Linux localhost 2.6.11-rc3-mm2 #1 SMP Wed May 4 04:57:08 CEST 2005 i686 
GNU/Linux

First I check the superblocks on each drive, then create a v0.90.0 
superblock based array successfully:

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 /dev/md0 
/dev/hdb /dev/hdc /dev/hdd
VERS = 9001
mdadm: array /dev/md0 started.

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.01
  Creation Time : Wed May  4 05:06:04 2005
     Raid Level : raid5
     Array Size : 390721792 (372.62 GiB 400.10 GB)
    Device Size : 195360896 (186.31 GiB 200.05 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed May  4 05:06:04 2005
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 3edff19d:53f64b6f:1cef039c:1f60b157
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       3       64        0      active sync   /dev/hdb
       1      22        0        1      active sync   /dev/hdc
       2       0        0        -      removed

       3      22       64        2      spare rebuilding   /dev/hdd

root@localhost:~/dev/mdadm-2.0-devel-1# cat /proc/mdstat
Personalities : [raid5]
Event: 4                  
md0 : active raid5 hdd[3] hdc[1] hdb[0]
      390721792 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.2% (511944/195360896) 
finish=126.8min speed=25597K/sec
     
unused devices: <none>

I then stop the array and zero the superblocks (0.90.0 superblocks seem 
to erase okay, and there aren't any version 1 superblocks on the 
devices yet, as shown above):

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -S /dev/md0
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdb
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdb
mdadm: Unrecognised md component device - /dev/hdb

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdc
mdadm: Unrecognised md component device - /dev/hdc

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdd
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdd
mdadm: Unrecognised md component device - /dev/hdd

I then try creating the same array with a version 1 superblock, again 
unsuccessfully, but one difference now, with all the patches applied, 
is that it successfully writes a superblock to all three devices:

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 -e 1 
/dev/md0 /dev/hdb /dev/hdc /dev/hdd
VERS = 9001
mdadm: RUN_ARRAY failed: Input/output error
root@localhost:~/dev/mdadm-2.0-devel-1# cat /proc/mdstat
Personalities : [raid5]
Event: 6
unused devices: <none>

First we see the successful creation and then stopping of a v0.90.0 
superblock raid:

root@localhost:~/dev/mdadm-2.0-devel-1# dmesg |tail -55
md: bind<hdb>
md: bind<hdc>
md: bind<hdd>
raid5: device hdc operational as raid disk 1
raid5: device hdb operational as raid disk 0
raid5: allocated 3165kB for md0
raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:hdb
 disk 1, o:1, dev:hdc
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:hdb
 disk 1, o:1, dev:hdc
 disk 2, o:1, dev:hdd
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwith (but not more than 200000 
KB/sec) for reconstruction.
md: using 128k window, over a total of 195360896 blocks.
md: md0: sync done.
md: md0 stopped.
md: unbind<hdd>
md: export_rdev(hdd)
md: unbind<hdc>
md: export_rdev(hdc)
md: unbind<hdb>
md: export_rdev(hdb)

Then following that is the attempt with version 1 superblock:

md: bind<hdb>
md: bind<hdc>
md: bind<hdd>
md: md0: raid array is not clean -- starting background reconstruction
raid5: device hdc operational as raid disk 1
raid5: device hdb operational as raid disk 0
raid5: cannot start dirty degraded array for md0
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:hdb
 disk 1, o:1, dev:hdc
raid5: failed to run raid set md0
md: pers->run() failed ...
md: md0 stopped.
md: unbind<hdd>
md: export_rdev(hdd)
md: unbind<hdc>
md: export_rdev(hdc)
md: unbind<hdb>
md: export_rdev(hdb)

All three drives now show a version 1 superblock (further than before, 
when just the first drive got one):

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
/dev/hdb:
          Magic : a92b4efc
        Version : 01.00
     Array UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
           Name :
  Creation Time : Wed May  4 05:15:24 2005
     Raid Level : raid5
   Raid Devices : 3

    Device Size : 390721952 (186.31 GiB 200.05 GB)
   Super Offset : 390721952 sectors
          State : active
    Device UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
    Update Time : Wed May  4 05:15:24 2005
       Checksum : af8dc3da - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : Uu_ 380 spares 2 failed
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
/dev/hdc:
          Magic : a92b4efc
        Version : 01.00
     Array UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
           Name :
  Creation Time : Wed May  4 05:15:24 2005
     Raid Level : raid5
   Raid Devices : 3

    Device Size : 390721952 (186.31 GiB 200.05 GB)
   Super Offset : 390721952 sectors
          State : active
    Device UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
    Update Time : Wed May  4 05:15:24 2005
       Checksum : 695cc5cc - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : uU_ 380 spares 2 failed
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd
/dev/hdd:
          Magic : a92b4efc
        Version : 01.00
     Array UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
           Name :
  Creation Time : Wed May  4 05:15:24 2005
     Raid Level : raid5
   Raid Devices : 3

    Device Size : 390721952 (186.31 GiB 200.05 GB)
   Super Offset : 390721952 sectors
          State : active
    Device UUID : 8867ea01e1:8c59144b:b76f2ccb:52e94f
    Update Time : Wed May  4 05:15:24 2005
       Checksum : c71279d8 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Array State : uu_ 380 spares 2 failed

The attempt to use --zero-superblock fails to remove version 1 
superblocks; it should report that there was no superblock the second 
time you run it on a device, as it did above (an mdadm -E /dev/hdX 
still shows the version 1 superblocks intact):

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdb
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdb
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdc
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdd
root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm --zero-superblock /dev/hdd

An mdadm -E on each drive still shows the same info as just above, from 
before the zero-superblock.  Below is a sample after recreating a 
version 0.90.0 superblock array again; the bottom of the output is very 
different from the version 1 superblock information, giving a lot more 
detail about the devices in the raid than the version 1 display does 
(not that the version 1 array has successfully started, mind you, and 
maybe the version 1 superblocks don't include all the same information, 
as they are smaller, correct?).

root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
/dev/hdb:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 2ae086b7:780fa0af:5e5171e9:20ba5aa5
  Creation Time : Wed May  4 05:32:20 2005
     Raid Level : raid5
    Device Size : 195360896 (186.31 GiB 200.05 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0

    Update Time : Wed May  4 05:32:20 2005
          State : clean
 Active Devices : 2
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 1
       Checksum : 5bbdc17c - correct
         Events : 0.1

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       3       64        0      active sync   /dev/hdb

   0     0       3       64        0      active sync   /dev/hdb
   1     1      22        0        1      active sync   /dev/hdc
   2     2       0        0        2      faulty
   3     3      22       64        3      spare   /dev/hdd


So at this point, I feel I'm close... but still no cigar, sorry to be 
such a pain .. heh.

Thanks,
Tyler.

Neil Brown wrote:

>On Tuesday May 3, pml@dtbb.net wrote:
>  
>
>>Hi Neil,
>>
>>I've gotten past the device being busy using the patch, and onto a new 
>>error message and set of problems:
>>
>>root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdb
>>root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdc
>>root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -E /dev/hdd
>>root@localhost:~/dev/mdadm-2.0-devel-1# ./mdadm -C -l 5 -n 3 -e 1 
>>/dev/md0 /dev/hdb /dev/hdc /dev/hdd
>>VERS = 9002
>>mdadm: ADD_NEW_DISK for /dev/hdb failed: Invalid argument
>>    
>>
>....
>  
>
>>And the following in Dmesg:
>>md: hdb has invalid sb, not importing!
>>md: md_import_device returned -22
>>
>>    
>>
>
>Hey, I got that too!
>You must be running an -mm kernel (I cannot remember what kernel you
>said you were using).
>
>Look in include/linux/raid/md_p.h near line 205.
>If it has
>	__u32	chunksize;	/* in 512byte sectors */
>	__u32	raid_disks;
>	__u32	bitmap_offset;	/* sectors after start of superblock that bitmap starts
>				 * NOTE: signed, so bitmap can be before superblock
>				 * only meaningful of feature_map[0] is set.
>				 */
>	__u8	pad1[128-96];	/* set to 0 when written */
>
>then change the '96' to '100'.  (It should have been changed when
>bitmap_offset was added).
>
>You will then need to patch mdadm some more.  In super1.c near line 400,
>
>	sb->ctime = __cpu_to_le64((unsigned long long)time(0));
>	sb->level = __cpu_to_le32(info->level);
>	sb->layout = __cpu_to_le32(info->level);
>	sb->size = __cpu_to_le64(info->size*2ULL);
>
>notice that 'layout' is being set to 'level'.  This is wrong.  That
>line should be
>
>	sb->layout = __cpu_to_le32(info->layout);
>
>With these changes, I can create a 56 device raid6 array. (I only have
>14 drives, but I partitioned each into 4 equal parts!).
>
>I'll try to do another mdadm-2 release in the next week.
>
>Thanks for testing this stuff...
>
>NeilBrown


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-04  5:08       ` Tyler
@ 2005-05-04  5:59         ` Neil Brown
  2005-05-04 12:13           ` Tyler
  2005-05-04  6:00         ` Neil Brown
  1 sibling, 1 reply; 10+ messages in thread
From: Neil Brown @ 2005-05-04  5:59 UTC (permalink / raw)
  To: Tyler; +Cc: linux-raid

On Tuesday May 3, pml@dtbb.net wrote:
> What kernel are you using Neil, and what patches to the kernel if any, 
> and which patches to mdadm 2.0-devel?  

2.6.12-rc2-mm1  and a few patches to mdadm, but none significant to
your current issue.

The reason it worked for me is that I tried raid6 and you tried raid5.
To make it work with raid5 you need the following patch.  I haven't
actually tested it as my test machine has had odd hardware issues for
ages (only causing problems at reboot, but for a test machine, that is
often..) and it is finally being looked at.

Let me know if this gets you further.

NeilBrown


 ----------- Diffstat output ------------
 ./super1.c |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff ./super1.c~current~ ./super1.c
--- ./super1.c~current~	2005-05-04 12:06:33.000000000 +1000
+++ ./super1.c	2005-05-04 15:54:59.000000000 +1000
@@ -411,7 +411,7 @@ static int init_super1(void **sbp, mdu_a
 
 	sb->utime = sb->ctime;
 	sb->events = __cpu_to_le64(1);
-	if (info->state & MD_SB_CLEAN)
+	if (info->state & (1<<MD_SB_CLEAN))
 		sb->resync_offset = ~0ULL;
 	else
 		sb->resync_offset = 0;
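
To see why that matters: MD_SB_CLEAN is a bit *number* (0 in md_p.h), 
not a mask, so the old test never fired:

	info->state & MD_SB_CLEAN	/* always 0                        */
	info->state & (1<<MD_SB_CLEAN)	/* actually tests the clean bit    */

With the old test, resync_offset stayed 0, the superblock said the new 
array was dirty, and a degraded raid5 create would then hit the 
"cannot start dirty degraded array" refusal seen in the dmesg output 
earlier in the thread.  (A sketch of the reasoning, not tested code.)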

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-04  5:08       ` Tyler
  2005-05-04  5:59         ` Neil Brown
@ 2005-05-04  6:00         ` Neil Brown
  1 sibling, 0 replies; 10+ messages in thread
From: Neil Brown @ 2005-05-04  6:00 UTC (permalink / raw)
  To: Tyler; +Cc: linux-raid


You might find this useful too....

---
Increase max-devs on type-1 superblocks


Signed-off-by: Neil Brown <neilb@cse.unsw.edu.au>

### Diffstat output
 ./Detail.c |    7 +++++--
 ./Grow.c   |    2 +-
 ./mdadm.c  |    6 ++++--
 ./mdadm.h  |    1 +
 ./super0.c |    2 ++
 ./super1.c |    4 +++-
 6 files changed, 16 insertions(+), 6 deletions(-)

diff ./Detail.c~current~ ./Detail.c
--- ./Detail.c~current~	2005-05-04 09:44:37.000000000 +1000
+++ ./Detail.c	2005-05-04 10:05:54.000000000 +1000
@@ -51,6 +51,7 @@ int Detail(char *dev, int brief, int tes
 	int is_rebuilding = 0;
 	int failed = 0;
 	struct supertype *st = NULL;
+	int max_disks = MD_SB_DISKS;
 
 	void *super = NULL;
 	int rv = test ? 4 : 1;
@@ -89,8 +90,10 @@ int Detail(char *dev, int brief, int tes
 		stb.st_rdev = 0;
 	rv = 0;
 
+	if (st) max_disks = st->max_devs;
+
 	/* try to load a superblock */
-	for (d= 0; d<MD_SB_DISKS; d++) {
+	for (d= 0; d<max_disks; d++) {
 		mdu_disk_info_t disk;
 		char *dv;
 		disk.number = d;
@@ -210,7 +213,7 @@ int Detail(char *dev, int brief, int tes
 
 		printf("    Number   Major   Minor   RaidDevice State\n");
 	}
-	for (d= 0; d<MD_SB_DISKS; d++) {
+	for (d= 0; d < max_disks; d++) {
 		mdu_disk_info_t disk;
 		char *dv;
 		disk.number = d;

diff ./Grow.c~current~ ./Grow.c
--- ./Grow.c~current~	2005-05-04 09:46:34.000000000 +1000
+++ ./Grow.c	2005-05-04 10:06:32.000000000 +1000
@@ -236,7 +236,7 @@ int Grow_addbitmap(char *devname, int fd
 	}
 	if (strcmp(file, "internal") == 0) {
 		int d;
-		for (d=0; d< MD_SB_DISKS; d++) {
+		for (d=0; d< st->max_devs; d++) {
 			mdu_disk_info_t disk;
 			char *dv;
 			disk.number = d;

diff ./mdadm.c~current~ ./mdadm.c
--- ./mdadm.c~current~	2005-05-04 09:46:34.000000000 +1000
+++ ./mdadm.c	2005-05-04 10:03:23.000000000 +1000
@@ -50,6 +50,7 @@ int main(int argc, char *argv[])
 	int level = UnSet;
 	int layout = UnSet;
 	int raiddisks = 0;
+	int max_disks = MD_SB_DISKS;
 	int sparedisks = 0;
 	struct mddev_ident_s ident;
 	char *configfile = NULL;
@@ -302,6 +303,7 @@ int main(int argc, char *argv[])
 				fprintf(stderr, Name ": unrecognised metadata identifier: %s\n", optarg);
 				exit(2);
 			}
+			max_disks = ss->max_devs;
 			continue;
 
 		case O(GROW,'z'):
@@ -425,7 +427,7 @@ int main(int argc, char *argv[])
 				exit(2);
 			}
 			raiddisks = strtol(optarg, &c, 10);
-			if (!optarg[0] || *c || raiddisks<=0 || raiddisks > MD_SB_DISKS) {
+			if (!optarg[0] || *c || raiddisks<=0 || raiddisks > max_disks) {
 				fprintf(stderr, Name ": invalid number of raid devices: %s\n",
 					optarg);
 				exit(2);
@@ -451,7 +453,7 @@ int main(int argc, char *argv[])
 				exit(2);
 			}
 			sparedisks = strtol(optarg, &c, 10);
-			if (!optarg[0] || *c || sparedisks < 0 || sparedisks > MD_SB_DISKS - raiddisks) {
+			if (!optarg[0] || *c || sparedisks < 0 || sparedisks > max_disks - raiddisks) {
 				fprintf(stderr, Name ": invalid number of spare-devices: %s\n",
 					optarg);
 				exit(2);

diff ./mdadm.h~current~ ./mdadm.h
--- ./mdadm.h~current~	2005-05-04 09:46:34.000000000 +1000
+++ ./mdadm.h	2005-05-04 10:01:20.000000000 +1000
@@ -194,6 +194,7 @@ extern struct superswitch {
 struct supertype {
 	struct superswitch *ss;
 	int minor_version;
+	int max_devs;
 };
 
 extern struct supertype *super_by_version(int vers, int minor);

diff ./super0.c~current~ ./super0.c
--- ./super0.c~current~	2005-05-04 09:46:40.000000000 +1000
+++ ./super0.c	2005-05-04 10:08:47.000000000 +1000
@@ -582,6 +582,7 @@ static int load_super0(struct supertype 
 	if (st->ss == NULL) {
 		st->ss = &super0;
 		st->minor_version = 90;
+		st->max_devs = MD_SB_DISKS;
 	}
 
 	return 0;
@@ -594,6 +595,7 @@ static struct supertype *match_metadata_
 
 	st->ss = &super0;
 	st->minor_version = 90;
+	st->max_devs = MD_SB_DISKS;
 	if (strcmp(arg, "0") == 0 ||
 	    strcmp(arg, "0.90") == 0 ||
 	    strcmp(arg, "default") == 0

diff ./super1.c~current~ ./super1.c
--- ./super1.c~current~	2005-05-04 09:52:34.000000000 +1000
+++ ./super1.c	2005-05-04 12:06:33.000000000 +1000
@@ -399,7 +399,7 @@ static int init_super1(void **sbp, mdu_a
 
 	sb->ctime = __cpu_to_le64((unsigned long long)time(0));
 	sb->level = __cpu_to_le32(info->level);
-	sb->layout = __cpu_to_le32(info->level);
+	sb->layout = __cpu_to_le32(info->layout);
 	sb->size = __cpu_to_le64(info->size*2ULL);
 	sb->chunksize = __cpu_to_le32(info->chunk_size>>9);
 	sb->raid_disks = __cpu_to_le32(info->raid_disks);
@@ -616,6 +616,7 @@ static int load_super1(struct supertype 
 			int rv;
 			st->minor_version = bestvers;
 			st->ss = &super1;
+			st->max_devs = 384;
 			rv = load_super1(st, fd, sbp, devname);
 			if (rv) st->ss = NULL;
 			return rv;
@@ -714,6 +715,7 @@ static struct supertype *match_metadata_
 	if (!st) return st;
 
 	st->ss = &super1;
+	st->max_devs = 384;
 	if (strcmp(arg, "1") == 0 ||
 	    strcmp(arg, "1.0") == 0) {
 		st->minor_version = 0;

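With this applied, the -n limit for a --metadata 1 array comes from the 
superblock type (384 for version 1 here) rather than MD_SB_DISKS, so a 
create along the lines of the 56-way raid6 mentioned earlier should be 
accepted.  Purely as an illustration, with made-up device names, and 
with -e given before -n since the patch only raises the limit once it 
has parsed --metadata:

  ./mdadm -C /dev/md0 -e 1 -l 6 -n 56 /dev/sd[a-n][1-4]
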
^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport
  2005-05-04  5:59         ` Neil Brown
@ 2005-05-04 12:13           ` Tyler
  0 siblings, 0 replies; 10+ messages in thread
From: Tyler @ 2005-05-04 12:13 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid

Hi Neil,

The last two patches got me going.  However, I tried raid5 for the heck 
of it (I was just using it for testing earlier, and figured I would get 
the lesser of two evils working, then go for raid6): it creates the 
array fine, I can make a filesystem on it, and I can mount it, but it 
is listed as clean, degraded in mdadm -D /dev/md0, and cat 
/proc/mdstat doesn't show any rebuilding/resyncing going on.  It 
doesn't seem to start the resync, so the array never gains redundancy.  
Raid6 seems to be working just fine, thanks :).  Possibly another patch 
is still needed for raid5.  Also, is there going to be more detail 
available about the array, like there was with the older mdadm tools?  
Right now when you do a -D /dev/mdX with a version 1 superblock, there 
doesn't seem to be much information about which drives are in the 
array, etc.  I also posted a bug a few days ago regarding mdadm v1.9.0 
(or maybe 1.11; I forget if I tried that one as well) where, with a 
large number of drives (I tested with 27), the bottom of the -D 
/dev/mdX output seemed to be cut off and didn't show things like 
spares, the removed drive, etc.

Currently I'm using a 2.6.11.8 vanilla kernel (md v0.90.01).  I did 
*not* change "pad1[128-96]" to "pad1[128-100]", since 2.6.11.8 vanilla 
doesn't have bitmap_offset added yet.  I did patch super1.c to use 
"info->layout" near line 400 (this change was also present in one of 
your other patches), and I also applied the one patch that came out on 
your web page after 2.0-devel was released (bitmap support for the 
v0.90.0 superblock, I believe), the raid5 superblock version 1 support 
patch, the "disk busy" patch, and the greater-than-27 MD superblock 
devices patch.  I think that's it :)

Not that it should matter, but I applied them in this order:
patch.greater.than.27.superblock.devices (this patch includes the 
change to super1.c near line 400, info->layout)
patch.raid5.to.support.superblock.version.1
patch.bitmap.support.for.v0.90.0.superblocks
patch.disk.busy

I ran a diff against it with the above patches, and have posted it at 
http://www.dtbb.net/~tyler/linux.troubleshoot/

I almost forgot to mention that one of the hunks against Grow.c failed 
to apply (maybe I'm missing another patch against Grow.c that you've 
done?  Mine only has 194 lines):

root@localhost:~/dev/mdadm-2.0-devel-1# cat Grow.c.rej
***************
*** 236,242 ****
        }
        if (strcmp(file, "internal") == 0) {
                int d;
-               for (d=0; d< MD_SB_DISKS; d++) {
                        mdu_disk_info_t disk;
                        char *dv;
                        disk.number = d;
--- 236,242 ----
        }
        if (strcmp(file, "internal") == 0) {
                int d;
+               for (d=0; d< st->max_devs; d++) {
                        mdu_disk_info_t disk;
                        char *dv;
                        disk.number = d;

Regards,
Tyler.

Neil Brown wrote:

>On Tuesday May 3, pml@dtbb.net wrote:
>  
>
>>What kernel are you using Neil, and what patches to the kernel if any, 
>>and which patches to mdadm 2.0-devel?  
>>    
>>
>
>2.6.12-rc2-mm1  and a few patches to mdadm, but none significant to
>your current issue.
>
>The reason it worked for me is that I tried raid6 and you tried raid5.
>To make it work with raid5 you need the following patch.  I haven't
>actually tested it as my test machine has had odd hardware issues for
>ages (only causing problems at reboot, but for a test machine, that is
>often..) and it is finally being looked at.
>
>Let me know if this gets you further.
>
>NeilBrown
>
>
> ----------- Diffstat output ------------
> ./super1.c |    2 +-
> 1 files changed, 1 insertion(+), 1 deletion(-)
>
>diff ./super1.c~current~ ./super1.c
>--- ./super1.c~current~	2005-05-04 12:06:33.000000000 +1000
>+++ ./super1.c	2005-05-04 15:54:59.000000000 +1000
>@@ -411,7 +411,7 @@ static int init_super1(void **sbp, mdu_a
> 
> 	sb->utime = sb->ctime;
> 	sb->events = __cpu_to_le64(1);
>-	if (info->state & MD_SB_CLEAN)
>+	if (info->state & (1<<MD_SB_CLEAN))
> 		sb->resync_offset = ~0ULL;
> 	else
> 		sb->resync_offset = 0;


^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2005-05-04 12:13 UTC | newest]

Thread overview (links below jump to the message on this page):
2005-05-03 11:15 BUGREPORT: mdadm v2.0-devel - can't create array using version 1 superblock, possibly related to previous bugreport Tyler
2005-05-03 11:38 ` Tyler
2005-05-03 23:54 ` Neil Brown
2005-05-04  1:36   ` Tyler
2005-05-04  2:17     ` Neil Brown
2005-05-04  5:08       ` Tyler
2005-05-04  5:59         ` Neil Brown
2005-05-04 12:13           ` Tyler
2005-05-04  6:00         ` Neil Brown
