linux-raid.vger.kernel.org archive mirror
* [MD PATCH 1/1] Raid5 should update rdev->sectors after reshape
@ 2017-07-04  6:29 Xiao Ni
  2017-07-05  8:51 ` Guoqing Jiang
  0 siblings, 1 reply; 3+ messages in thread
From: Xiao Ni @ 2017-07-04  6:29 UTC (permalink / raw)
  To: linux-raid; +Cc: ncroxon, tbskyd, shli

The raid5 md device is created from disks of which the full size is not used. For example,
each device is 5G but only 3G of it is used to create the raid5 device.
Then change the chunk size and wait for the reshape to finish. After the reshape
finishes, stop the array and assemble it again. The assembly fails:
mdadm -CR /dev/md0 -l5 -n3 /dev/loop[0-2] --size=3G --chunk=32 --assume-clean
mdadm /dev/md0 --grow --chunk=64
wait for the reshape to finish
mdadm -S /dev/md0
mdadm -As
The error messages:
[197519.814302] md: loop1 does not have a valid v1.2 superblock, not importing!
[197519.821686] md: md_import_device returned -22

After the reshape the data offset is changed; the backwards direction is selected in
this condition. In super_1_load the available space of the underlying device is
compared with sb->data_size. The new data offset is bigger after the reshape, so
super_1_load returns -EINVAL. sb->data_size is updated in md_finish_reshape, so add
a call to md_finish_reshape in end_reshape.

Signed-off-by: Xiao Ni <xni@redhat.com>
---
 drivers/md/raid5.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index ec0f951..e7f527c 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7947,12 +7947,10 @@ static void end_reshape(struct r5conf *conf)
 {
 
 	if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) {
-		struct md_rdev *rdev;
 
 		spin_lock_irq(&conf->device_lock);
 		conf->previous_raid_disks = conf->raid_disks;
-		rdev_for_each(rdev, conf->mddev)
-			rdev->data_offset = rdev->new_data_offset;
+		md_finish_reshape(conf->mddev);
 		smp_wmb();
 		conf->reshape_progress = MaxSector;
 		conf->mddev->reshape_position = MaxSector;
-- 
2.7.4



* Re: [MD PATCH 1/1] Raid5 should update rdev->sectors after reshape
  2017-07-04  6:29 [MD PATCH 1/1] Raid5 should update rdev->sectors after reshape Xiao Ni
@ 2017-07-05  8:51 ` Guoqing Jiang
  2017-07-05  9:27   ` Xiao Ni
  0 siblings, 1 reply; 3+ messages in thread
From: Guoqing Jiang @ 2017-07-05  8:51 UTC (permalink / raw)
  To: Xiao Ni, linux-raid; +Cc: ncroxon, tbskyd, shli



On 07/04/2017 02:29 PM, Xiao Ni wrote:
> The raid5 md device is created from disks of which the full size is not used. For example,
> each device is 5G but only 3G of it is used to create the raid5 device.
> Then change the chunk size and wait for the reshape to finish. After the reshape
> finishes, stop the array and assemble it again. The assembly fails:
> mdadm -CR /dev/md0 -l5 -n3 /dev/loop[0-2] --size=3G --chunk=32 --assume-clean
> mdadm /dev/md0 --grow --chunk=64
> wait for the reshape to finish
> mdadm -S /dev/md0
> mdadm -As
> The error messages:
> [197519.814302] md: loop1 does not have a valid v1.2 superblock, not importing!
> [197519.821686] md: md_import_device returned -22
>
> After the reshape the data offset is changed; the backwards direction is selected in
> this condition. In super_1_load the available space of the underlying device is
> compared with sb->data_size. The new data offset is bigger after the reshape, so
> super_1_load returns -EINVAL. sb->data_size is updated in md_finish_reshape, so add
> a call to md_finish_reshape in end_reshape.

IMO, md_finish_reshape doesn't update sb->data_size directly; it updates
rdev->sectors, and super_1_sync then sets sb->data_size based on rdev->sectors.

Acked-by: Guoqing Jiang <gqjiang@suse.com>

Thanks,
Guoqing

> Signed-off-by: Xiao Ni <xni@redhat.com>
> ---
>   drivers/md/raid5.c | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index ec0f951..e7f527c 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -7947,12 +7947,10 @@ static void end_reshape(struct r5conf *conf)
>   {
>   
>   	if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) {
> -		struct md_rdev *rdev;
>   
>   		spin_lock_irq(&conf->device_lock);
>   		conf->previous_raid_disks = conf->raid_disks;
> -		rdev_for_each(rdev, conf->mddev)
> -			rdev->data_offset = rdev->new_data_offset;
> +		md_finish_reshape(conf->mddev);
>   		smp_wmb();
>   		conf->reshape_progress = MaxSector;
>   		conf->mddev->reshape_position = MaxSector;



* Re: [MD PATCH 1/1] Raid5 should update rdev->sectors after reshape
  2017-07-05  8:51 ` Guoqing Jiang
@ 2017-07-05  9:27   ` Xiao Ni
  0 siblings, 0 replies; 3+ messages in thread
From: Xiao Ni @ 2017-07-05  9:27 UTC (permalink / raw)
  To: Guoqing Jiang; +Cc: linux-raid, ncroxon, tbskyd, shli



----- Original Message -----
> From: "Guoqing Jiang" <gqjiang@suse.com>
> To: "Xiao Ni" <xni@redhat.com>, linux-raid@vger.kernel.org
> Cc: ncroxon@redhat.com, tbskyd@gmail.com, shli@kernel.org
> Sent: Wednesday, July 5, 2017 4:51:59 PM
> Subject: Re: [MD PATCH 1/1] Raid5 should update rdev->sectors after reshape
> 
> 
> 
> On 07/04/2017 02:29 PM, Xiao Ni wrote:
> > The raid5 md device is created from disks of which the full size is not
> > used. For example, each device is 5G but only 3G of it is used to create
> > the raid5 device. Then change the chunk size and wait for the reshape to
> > finish. After the reshape finishes, stop the array and assemble it again.
> > The assembly fails:
> > mdadm -CR /dev/md0 -l5 -n3 /dev/loop[0-2] --size=3G --chunk=32
> > --assume-clean
> > mdadm /dev/md0 --grow --chunk=64
> > wait for the reshape to finish
> > mdadm -S /dev/md0
> > mdadm -As
> > The error messages:
> > [197519.814302] md: loop1 does not have a valid v1.2 superblock, not
> > importing!
> > [197519.821686] md: md_import_device returned -22
> >
> > After the reshape the data offset is changed; the backwards direction is
> > selected in this condition. In super_1_load the available space of the
> > underlying device is compared with sb->data_size. The new data offset is
> > bigger after the reshape, so super_1_load returns -EINVAL. sb->data_size is
> > updated in md_finish_reshape, so add a call to md_finish_reshape in
> > end_reshape.
> 
> IMO, md_finish_reshape doesn't update sb->data_size directly; it updates
> rdev->sectors, and super_1_sync then sets sb->data_size based on rdev->sectors.

Ah yes, thanks for pointing this out. It should say that rdev->sectors is updated
in md_finish_reshape.

Regards
Xiao
> 
> Acked-by: Guoqing Jiang <gqjiang@suse.com>
> 
> Thanks,
> Guoqing
> 
> > Signed-off-by: Xiao Ni <xni@redhat.com>
> > ---
> >   drivers/md/raid5.c | 4 +---
> >   1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> > index ec0f951..e7f527c 100644
> > --- a/drivers/md/raid5.c
> > +++ b/drivers/md/raid5.c
> > @@ -7947,12 +7947,10 @@ static void end_reshape(struct r5conf *conf)
> >   {
> >   
> >   	if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) {
> > -		struct md_rdev *rdev;
> >   
> >   		spin_lock_irq(&conf->device_lock);
> >   		conf->previous_raid_disks = conf->raid_disks;
> > -		rdev_for_each(rdev, conf->mddev)
> > -			rdev->data_offset = rdev->new_data_offset;
> > +		md_finish_reshape(conf->mddev);
> >   		smp_wmb();
> >   		conf->reshape_progress = MaxSector;
> >   		conf->mddev->reshape_position = MaxSector;
> 

