* raid5 reshape bug with XFS
From: Bill Cizek @ 2006-11-05  1:59 UTC
  To: linux-raid, Neil Brown

Hi,

I'm setting up a raid5 system and I ran across a bug when reshaping an 
array with a mounted XFS filesystem on it.  This is under Linux 2.6.18.2 
and mdadm 2.5.5.

I have a test array with three 10 GB disks plus a fourth 10 GB disk as 
a spare, and a mounted XFS filesystem on it:

root@localhost $ mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sat Nov  4 18:58:59 2006
     Raid Level : raid5
     Array Size : 20964480 (19.99 GiB 21.47 GB)
    Device Size : 10482240 (10.00 GiB 10.73 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 4
    Persistence : Superblock is persistent
[snip]
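
Roughly how the array and filesystem were put together (device names as 
in /proc/mdstat below; the mount point is just an example):

  mdadm --create /dev/md4 --level=5 --chunk=64 --raid-devices=3 \
        --spare-devices=1 /dev/dm-64 /dev/dm-65 /dev/dm-66 /dev/dm-67
  mkfs.xfs /dev/md4
  mount /dev/md4 /mnt/test
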
------------------------------------
...I Grow it:

root@localhost $ mdadm -G /dev/md4 -n4
mdadm: Need to backup 384K of critical section..
mdadm: ... critical section passed.
root@localhost $ mdadm --detail /dev/md4
/dev/md4:
        Version : 00.91.03
  Creation Time : Sat Nov  4 18:58:59 2006
     Raid Level : raid5
     Array Size : 20964480 (19.99 GiB 21.47 GB)
    Device Size : 10482240 (10.00 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 4
    Persistence : Superblock is persistent
-----------------------------------

It goes along and reshapes fine (from /proc/mdstat):

md4 : active raid5 dm-67[3] dm-66[2] dm-65[1] dm-64[0]
      20964480 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [====>................]  reshape = 22.0% (2314624/10482240) finish=16.7min speed=8128K/sec

------------------------------------

When the reshape completes, the reported array size gets corrupted.
From /proc/mdstat:
md4 : active raid5 dm-67[3] dm-66[2] dm-65[1] dm-64[0]
      31446720 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

-- looks good, but --

root@localhost $ mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sat Nov  4 18:58:59 2006
     Raid Level : raid5
 >>
 >>    Array Size : 2086592 (2038.03 MiB 2136.67 MB)
 >>
    Device Size : 10482240 (10.00 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 4
    Persistence : Superblock is persistent

(2086592 != 31446720 -- Bad, much too small)

---------------------------------
xfs_growfs /dev/md4 barfs horribly -- something about reading past the 
end of the device.

If I unmount the XFS filesystem, things work ok:

root@localhost $ umount /dev/md4

root@localhost $ mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sat Nov  4 18:58:59 2006
     Raid Level : raid5
     Array Size : 31446720 (29.99 GiB 32.20 GB)
    Device Size : 10482240 (10.00 GiB 10.73 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 4
    Persistence : Superblock is persistent

(31446720 == 31446720 -- Good)

If I remount the fs, I can use xfs_growfs with no ill effects.

It's an easy enough work-around to not have the fs mounted during the 
resize, but it doesn't seem right for the array size to get borked like 
this.  If there's anything I can provide to help debug this, let me know.
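
For reference, the sequence that avoids the problem entirely (mount 
point again just an example):

  umount /dev/md4
  mdadm -G /dev/md4 -n4
  # ...watch /proc/mdstat and wait for the reshape to finish...
  mount /dev/md4 /mnt/test
  xfs_growfs /mnt/test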

Thanks,
Bill

* Re: raid5 reshape bug with XFS
From: Neil Brown @ 2006-11-05 22:56 UTC
  To: Bill Cizek; +Cc: linux-raid

On Saturday November 4, cizek@rcn.com wrote:
> Hi,
> 
> I'm setting up a raid5 system and I ran across a bug when reshaping an 
> array with a mounted XFS filesystem on it.  This is under Linux 2.6.18.2 
> and mdadm 2.5.5.
> 
...
> root@localhost $ mdadm --detail /dev/md4
> /dev/md4:
>         Version : 00.90.03
>   Creation Time : Sat Nov  4 18:58:59 2006
>      Raid Level : raid5
>  >>
>  >>    Array Size : 2086592 (2038.03 MiB 2136.67 MB)
>  >>
>     Device Size : 10482240 (10.00 GiB 10.73 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 4
>     Persistence : Superblock is persistent
> 
> (2086592 != 31446720 -- Bad, much too small)


You have CONFIG_LBD=n, don't you?

Thanks for the report.  This should fix it.  Please let me know if it does.

NeilBrown

Signed-off-by: Neil Brown <neilb@suse.de>

### Diffstat output
 ./drivers/md/raid5.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c	2006-11-03 15:11:52.000000000 +1100
+++ ./drivers/md/raid5.c	2006-11-06 09:55:20.000000000 +1100
@@ -3909,7 +3909,7 @@ static void end_reshape(raid5_conf_t *co
 		bdev = bdget_disk(conf->mddev->gendisk, 0);
 		if (bdev) {
 			mutex_lock(&bdev->bd_inode->i_mutex);
-			i_size_write(bdev->bd_inode, conf->mddev->array_size << 10);
+			i_size_write(bdev->bd_inode, (loff_t)conf->mddev->array_size << 10);
 			mutex_unlock(&bdev->bd_inode->i_mutex);
 			bdput(bdev);
 		}
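
For the record, the arithmetic: with CONFIG_LBD=n, sector_t is a 32-bit
unsigned long, so the old line did the "<< 10" in 32 bits and truncated
the result before it was widened to loff_t.  31446720 KiB << 10 is
32201441280 bytes, which wraps to 2136670208 bytes, i.e. 2086592 KiB --
exactly the bogus Array Size in your report.  Here is a little
userspace sketch of the same thing (plain stdint types standing in for
sector_t/loff_t; an illustration, not the kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* array size in KiB, as md keeps it; 32 bits wide here, like
	 * sector_t on a 32-bit build with CONFIG_LBD=n */
	uint32_t array_size = 31446720;

	/* old code: the shift happens in 32 bits, so the high bits
	 * are already gone by the time the value is widened */
	int64_t wrong = (int64_t)(array_size << 10);

	/* patched code: widen to 64 bits first, then shift */
	int64_t right = (int64_t)array_size << 10;

	printf("wrong: %lld bytes = %lld KiB\n",
	       (long long)wrong, (long long)(wrong / 1024));
	printf("right: %lld bytes = %lld KiB\n",
	       (long long)right, (long long)(right / 1024));
	return 0;
}

This prints 2136670208 bytes (2086592 KiB) for the old expression and
32201441280 bytes (31446720 KiB) for the patched one.  Presumably mdadm
--detail reads the size back via BLKGETSIZE64, which returns the block
device inode's i_size -- that is why the truncated value shows up while
the filesystem holds the device open, and why a fresh open after
unmount, which re-derives i_size from the gendisk capacity, reports the
right size again.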


* Re: raid5 reshape bug with XFS
From: Bill Cizek @ 2006-11-06  5:48 UTC
  To: Neil Brown; +Cc: linux-raid

Neil Brown wrote:
> On Saturday November 4, cizek@rcn.com wrote:
>   
>> Hi,
>>
>> I'm setting up a raid5 system and I ran across a bug when reshaping an 
>> array with a mounted XFS filesystem on it.  This is under Linux 2.6.18.2 
>> and mdadm 2.5.5.
> > You have CONFIG_LBD=n, don't you?
>   
Yes,

I have CONFIG_LBD=n

...and the patch fixed the problem.

Side Note: I just converted 2 raid0 drives into a 4 drive raid5 array 
in-place, with relative ease.
I couldn't have done it without the work you (and I'm sure others) have 
done. Thanks.

-Bill

* Re: raid5 reshape bug with XFS
From: Neil Brown @ 2006-11-07  4:55 UTC
  To: Bill Cizek; +Cc: linux-raid

On Sunday November 5, cizek@rcn.com wrote:
> Neil Brown wrote:
> > On Saturday November 4, cizek@rcn.com wrote:
> >   
> >> Hi,
> >>
> >> I'm setting up a raid5 system and I ran across a bug when reshaping an 
> >> array with a mounted XFS filesystem on it.  This is under Linux 2.6.18.2 
> >> and mdadm 2.5.5.
> > > You have CONFIG_LBD=n, don't you?
> >   
> Yes,
> 
> I have CONFIG_LBD=n
> 
> ...and the patch fixed the problem.

Cool, thanks.
> 
> Side Note: I just converted 2 raid0 drives into a 4 drive raid5 array 
> in-place, with relative ease.
> I couldn't have done it without the work you (and I'm sure others) have 
> done. Thanks.

And without bug reports like yours, others would have more problems.

Thanks.
NeilBrown

