linux-raid.vger.kernel.org archive mirror
* problem in grow of raid5 3 -> 4 disks
From: Redeeman @ 2009-02-17 20:48 UTC
  To: linux-raid

Hello.

A friend of mine's RAID5 array just stalled in the reshape process while
growing from 3 to 4 disks.

The kernel is 2.6.27.

dmesg spews insane amounts of:
compute_blocknr: map not correct
compute_blocknr: map not correct
compute_blocknr: map not correct
compute_blocknr: map not correct

/proc/mdstat:
md0 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      1953519872 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [==============>......]  reshape = 73.2% (715827840/976759936) finish=15504.9min speed=280K/sec

# mdadm --detail:
/dev/md0:
        Version : 0.91
  Creation Time : Sun Feb  1 03:30:50 2009
     Raid Level : raid5
     Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent
 
    Update Time : Tue Feb 17 23:26:37 2009
          State : clean, recovering
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
 
         Layout : left-symmetric
     Chunk Size : 64K
 
 Reshape Status : 73% complete
  Delta Devices : 1, (3->4)
 
           UUID : e55b3e10:456492af:b4421a5d:cd497c91
         Events : 0.476018
 
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1


What should we do about this? Has anyone experienced something similar?
SMART reports no errors on any of the disks.
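
(For reference, a typical per-disk check with smartmontools, using the
device names from the array above, would be along these lines - run as
root, and likewise for sdb, sdc and sdd:)

  smartctl -H /dev/sda   (overall health self-assessment)
  smartctl -a /dev/sda   (full SMART attributes and error log)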

---
Thanks.
Kasper Sandberg



* RE: problem in grow of raid5 3 -> 4 disks
From: Redeeman @ 2009-02-17 21:12 UTC
  To: linux-raid

(Sorry it's not properly threaded, but the message hasn't shown up via
the mailing list yet.)

To add some information: I think the culprit is that he forgot to enable
support for large block devices in the kernel (it's 32-bit x86).

What is the recommendation based on this? Enable the support and reboot?
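
(A quick way to check whether the running kernel has it enabled - on
32-bit 2.6.x the option is CONFIG_LBD, if I recall correctly, and the
config paths below are just the usual locations:)

  zgrep CONFIG_LBD /proc/config.gz           (needs CONFIG_IKCONFIG_PROC)
  grep CONFIG_LBD /boot/config-$(uname -r)   (config installed with the kernel)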


Thanks again.



* RE: problem in grow of raid5 3 -> 4 disks
From: Kasper Sandberg @ 2009-02-18  1:52 UTC
  To: linux-raid

Hello again

It seems he was impatient and just recompiled the kernel and rebooted -
and it worked: the reshape continued and all is well :)
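
(For anyone hitting the same thing, checking that the reshape really did
pick up where it left off is just a matter of something like this,
assuming the array comes back up as md0:)

  cat /proc/mdstat
  mdadm --detail /dev/md0 | grep -i reshape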

On Tue, 2009-02-17 at 22:12 +0100, Redeeman wrote:
> (Sorry it's not properly threaded, but the message hasn't shown up via
> the mailing list yet.)
> 
> To add some information: I think the culprit is that he forgot to enable
> support for large block devices in the kernel (it's 32-bit x86).
> 
> What is the recommendation based on this? Enable the support and reboot?
> 
> 
> Thanks again.
> 

