* mdadm assemble question
@ 2006-12-08 13:27 Jacob Schmidt Madsen
  2006-12-08 22:59 ` Neil Brown
  0 siblings, 1 reply; 3+ messages in thread
From: Jacob Schmidt Madsen @ 2006-12-08 13:27 UTC (permalink / raw)
  To: linux-raid

Hey,

I've added 2 new disks to an existing raid5 array and started the grow 
process.
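For reference, that was the usual add-then-grow sequence, roughly along these lines - the device names below are placeholders, not the exact commands I typed:

# mdadm /dev/md5 --add /dev/sdX1 /dev/sdY1
# mdadm --grow /dev/md5 --raid-devices=8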

The grow process was unsuccessful: it stalled at 98.1%, and the system 
log shows a long list of "compute_blocknr: map not correct" messages.

I'm not able to mount the array or stop it at all; the 'mount' and 'mdadm' 
commands just stall in the middle of their execution.

So now I want to recover as much data as possible, but I can't read from the 
filesystem on the array because 'mount' stalls.

After rebooting I want to assemble the array without restarting the reshape 
process, but I can't find such an option in the mdadm manual.
I suspect the reshape process is the problem, which is why I don't want it 
to start.
I'm hoping it will be possible to use the array and read from it if the 
reshape is not started.

Am I just blind or is it not possible to start an array without starting the 
reshape process?

Thanks!


* Re: mdadm assemble question
  2006-12-08 13:27 mdadm assemble question Jacob Schmidt Madsen
@ 2006-12-08 22:59 ` Neil Brown
  2006-12-09  0:47   ` Jacob Schmidt Madsen
  0 siblings, 1 reply; 3+ messages in thread
From: Neil Brown @ 2006-12-08 22:59 UTC (permalink / raw)
  To: Jacob Schmidt Madsen; +Cc: linux-raid

On Friday December 8, jacob@mungo.dk wrote:
> Hey,
> 
> I've added 2 new disks to an existing raid5 array and started the grow 
> process.
> 
> The grow process was unsuccessful: it stalled at 98.1%, and the system 
> log shows a long list of "compute_blocknr: map not correct" messages.

Not good!

> 
> Am I just blind or is it not possible to start an array without starting the 
> reshape process?

Normally you wouldn't want to....

Can you post the output of "mdadm --examine" on each of the component
devices please.  And tell me what version of the Linux kernel you are
using, and what version of mdadm?  I'll see if I can figure out what
happened and what the best way to fix it is.

Thanks,
NeilBrown


* Re: mdadm assemble question
  2006-12-08 22:59 ` Neil Brown
@ 2006-12-09  0:47   ` Jacob Schmidt Madsen
  0 siblings, 0 replies; 3+ messages in thread
From: Jacob Schmidt Madsen @ 2006-12-09  0:47 UTC (permalink / raw)
  To: linux-raid

I'm using kernel-2.6.19 and mdadm-2.5.5.

I figured out that the error occurred because large block device support 
(CONFIG_LBD) wasn't enabled in the kernel, and the array is now bigger than 2 TB.
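A quick way to confirm that, assuming the kernel config is available under 
/boot (the exact path varies by distribution; /proc/config.gz works too when 
CONFIG_IKCONFIG_PROC is enabled), is something like:

# grep CONFIG_LBD /boot/config-$(uname -r)

If the option is missing this prints the usual "# CONFIG_LBD is not set" line; 
without it a 32-bit kernel uses a 32-bit sector_t, so block devices are 
limited to 2 TiB.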

If it's possible to change, I'd suggest replacing the "compute_blocknr: 
map not correct" message (from the reshape process) with a hint or something 
more informative.
Also, mdadm could print a warning before someone tries to cross the 2 TB limit 
in a grow operation, which requires large block device support - or at least 
check whether it has been enabled.
It would at least have saved me the trouble :-)
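Something as simple as this back-of-the-envelope check (purely illustrative, 
not an existing mdadm option) would have flagged it before the reshape started:

# devsize=312568576                 # per-device size in KiB, from mdadm -E
# echo $(( (8 - 1) * devsize )) KiB # 8 = raid devices after the grow
2187980032 KiB

That is about 2086.6 GiB - just past the 2048 GiB (2 TiB) that a kernel 
without large block device support can address.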

You can check out the recent "Trouble when growing a raid5 array" thread, 
where I describe the experience in more detail.



# mdadm -D /dev/md5
/dev/md5:
        Version : 00.90.03
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
    Device Size : 312568576 (298.09 GiB 320.07 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5
    Persistence : Superblock is persistent

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean, degraded, recovering
 Active Devices : 7
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 46% complete

           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
         Events : 0.22

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       2       8      113        2      active sync   /dev/sdh1
       3       8      129        3      active sync   /dev/sdi1
       4       8       65        4      active sync   /dev/sde1
       5       8       49        5      active sync   /dev/sdd1
       6       8       33        6      active sync   /dev/sdc1
       8       8       17        7      spare rebuilding   /dev/sdb1



# mdadm -E /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed130785 - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     8       8       17        8      spare   /dev/sdb1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed130797 - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     6       8       33        6      active sync   /dev/sdc1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdd1
/dev/sdd1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307a5 - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     5       8       49        5      active sync   /dev/sdd1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sde1
/dev/sde1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307b3 - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     4       8       65        4      active sync   /dev/sde1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdf1
/dev/sdf1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307bb - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       81        0      active sync   /dev/sdf1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdg1
/dev/sdg1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307cd - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       97        1      active sync   /dev/sdg1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdh1
/dev/sdh1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307df - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8      113        2      active sync   /dev/sdh1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



# mdadm -E /dev/sdi1
/dev/sdi1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a24c9a1d:6ff2910a:9e2ad3b1:f5e7c6a5
  Creation Time : Fri Dec  8 19:07:26 2006
     Raid Level : raid5
    Device Size : 312568576 (298.09 GiB 320.07 GB)
     Array Size : 2187980032 (2086.62 GiB 2240.49 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 5

    Update Time : Fri Dec  8 22:34:03 2006
          State : clean
 Active Devices : 7
Working Devices : 8
 Failed Devices : 1
  Spare Devices : 1
       Checksum : ed1307f1 - correct
         Events : 0.22

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8      129        3      active sync   /dev/sdi1

   0     0       8       81        0      active sync   /dev/sdf1
   1     1       8       97        1      active sync   /dev/sdg1
   2     2       8      113        2      active sync   /dev/sdh1
   3     3       8      129        3      active sync   /dev/sdi1
   4     4       8       65        4      active sync   /dev/sde1
   5     5       8       49        5      active sync   /dev/sdd1
   6     6       8       33        6      active sync   /dev/sdc1
   7     7       0        0        7      faulty removed
   8     8       8       17        8      spare   /dev/sdb1



