linux-raid.vger.kernel.org archive mirror
* [PATCH 0/2] reshape fixlets for 2.6.37
@ 2010-10-22 18:03 Dan Williams
  2010-10-22 18:03 ` [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case Dan Williams
  2010-10-22 18:03 ` [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks Dan Williams
  0 siblings, 2 replies; 9+ messages in thread
From: Dan Williams @ 2010-10-22 18:03 UTC (permalink / raw)
  To: neilb
  Cc: linux-raid, wojciech.neubauer, adam.kwolek, marcin.labun,
	ed.ciechanowski

Hi Neil,

Here are two fixes that we discovered while implementing the external
reshape code.  The second one is tagged for -stable as current versions
of mdadm may not assemble a native array that is being expanded by more
than 1 disk.  Incremental assembly, however, does work in this case.

---

Dan Williams (2):
      md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case
      md/raid5: initialize ->recovery_offset when growing raid_disks

 drivers/md/raid5.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)



* [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case
  2010-10-22 18:03 [PATCH 0/2] reshape fixlets for 2.6.37 Dan Williams
@ 2010-10-22 18:03 ` Dan Williams
  2010-10-26  5:22   ` Neil Brown
  2010-10-22 18:03 ` [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks Dan Williams
  1 sibling, 1 reply; 9+ messages in thread
From: Dan Williams @ 2010-10-22 18:03 UTC (permalink / raw)
  To: neilb
  Cc: linux-raid, wojciech.neubauer, adam.kwolek, marcin.labun,
	ed.ciechanowski

mdmon is orchestrating the reshape process and we rely on it to manage the
reshape position and metadata updates.

Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/md/raid5.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 69b0a16..9e8ecd5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
 		set_bit(MD_CHANGE_DEVS, &mddev->flags);
 		md_wakeup_thread(mddev->thread);
 		wait_event(mddev->sb_wait, mddev->flags == 0 ||
-			   kthread_should_stop());
+			   !mddev->persistent || kthread_should_stop());
 		spin_lock_irq(&conf->device_lock);
 		conf->reshape_safe = mddev->reshape_position;
 		spin_unlock_irq(&conf->device_lock);
@@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
 		set_bit(MD_CHANGE_DEVS, &mddev->flags);
 		md_wakeup_thread(mddev->thread);
 		wait_event(mddev->sb_wait,
-			   !test_bit(MD_CHANGE_DEVS, &mddev->flags)
-			   || kthread_should_stop());
+			   !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
+			   !mddev->persistent || kthread_should_stop());
 		spin_lock_irq(&conf->device_lock);
 		conf->reshape_safe = mddev->reshape_position;
 		spin_unlock_irq(&conf->device_lock);

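For context, the checkpoint sequence these hunks touch, condensed and
annotated (a sketch, not the verbatim source):

	/* Condensed from reshape_request(): record the checkpoint,
	 * request a metadata update, and wait for it to be acknowledged
	 * before advancing reshape_safe.  With external (non-persistent)
	 * metadata the update is written and acknowledged by mdmon from
	 * userspace, so the kernel thread must not block here waiting
	 * for md_update_sb(). */
	mddev->reshape_position = conf->reshape_progress;
	set_bit(MD_CHANGE_DEVS, &mddev->flags);
	md_wakeup_thread(mddev->thread);
	wait_event(mddev->sb_wait,
		   !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
		   !mddev->persistent || kthread_should_stop());
	spin_lock_irq(&conf->device_lock);
	conf->reshape_safe = mddev->reshape_position;
	spin_unlock_irq(&conf->device_lock);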

^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
  2010-10-22 18:03 [PATCH 0/2] reshape fixlets for 2.6.37 Dan Williams
  2010-10-22 18:03 ` [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case Dan Williams
@ 2010-10-22 18:03 ` Dan Williams
  2010-10-26  5:35   ` Neil Brown
  1 sibling, 1 reply; 9+ messages in thread
From: Dan Williams @ 2010-10-22 18:03 UTC (permalink / raw)
  To: neilb
  Cc: linux-raid, wojciech.neubauer, adam.kwolek, marcin.labun,
	ed.ciechanowski

We mark the disk in-sync, and also need to init ->recovery_offset lest
we confuse older versions of mdadm that don't consider this case at
assembly (i.e. that when growing we assume the disk is in-sync).

mdadm: /dev/md0 assembled from 3 drives and  1 rebuilding - not enough to start the array.

Cc: <stable@kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/md/raid5.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 9e8ecd5..f8a27d4 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
 			if (raid5_add_disk(mddev, rdev) == 0) {
 				char nm[20];
 				if (rdev->raid_disk >= conf->previous_raid_disks) {
+					rdev->recovery_offset = MaxSector;
 					set_bit(In_sync, &rdev->flags);
 					added_devices++;
 				} else



* Re: [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case
  2010-10-22 18:03 ` [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case Dan Williams
@ 2010-10-26  5:22   ` Neil Brown
  2010-10-26  7:20     ` Dan Williams
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2010-10-26  5:22 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-raid, wojciech.neubauer, adam.kwolek, marcin.labun,
	ed.ciechanowski

On Fri, 22 Oct 2010 11:03:54 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> mdmon is orchestrating the reshape process and we rely on it to manage the
> reshape position and metadata updates.
> 
> Reported-by: Adam Kwolek <adam.kwolek@intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/md/raid5.c |    6 +++---
>  1 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 69b0a16..9e8ecd5 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>  		set_bit(MD_CHANGE_DEVS, &mddev->flags);
>  		md_wakeup_thread(mddev->thread);
>  		wait_event(mddev->sb_wait, mddev->flags == 0 ||
> -			   kthread_should_stop());
> +			   !mddev->persistent || kthread_should_stop());
>  		spin_lock_irq(&conf->device_lock);
>  		conf->reshape_safe = mddev->reshape_position;
>  		spin_unlock_irq(&conf->device_lock);
> @@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>  		set_bit(MD_CHANGE_DEVS, &mddev->flags);
>  		md_wakeup_thread(mddev->thread);
>  		wait_event(mddev->sb_wait,
> -			   !test_bit(MD_CHANGE_DEVS, &mddev->flags)
> -			   || kthread_should_stop());
> +			   !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
> +			   !mddev->persistent || kthread_should_stop());
>  		spin_lock_irq(&conf->device_lock);
>  		conf->reshape_safe = mddev->reshape_position;
>  		spin_unlock_irq(&conf->device_lock);


Do we really need this?

If we set MD_CHANGE_DEVS and wake up the thread, the thread will call
md_update_sb which, in the !persistent case, will clear the bit and wake up
sb_wait.  So we shouldn't block on those wait_events anyway...
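
That is, the path I mean looks roughly like this (paraphrased from
md_update_sb(), details elided):

	if (!mddev->persistent) {
		/* nothing for the kernel to write; clear the flags
		 * in-core and release anyone sleeping on sb_wait */
		clear_bit(MD_CHANGE_CLEAN, &mddev->flags);
		clear_bit(MD_CHANGE_DEVS, &mddev->flags);
		wake_up(&mddev->sb_wait);
		return;
	}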

Am I missing something?

And as ->persistent doesn't change, I would rather have that test outside the
wait_event() to make the meaning more explicit.  So if I have missed
something and this really is needed, I'll make that change.
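
That is, something like:

	/* sketch: hoist the invariant test out of the wait condition */
	if (mddev->persistent)
		wait_event(mddev->sb_wait,
			   !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
			   kthread_should_stop());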

Thanks,
NeilBrown


* Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
  2010-10-22 18:03 ` [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks Dan Williams
@ 2010-10-26  5:35   ` Neil Brown
  2010-10-26  7:26     ` Dan Williams
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2010-10-26  5:35 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-raid, wojciech.neubauer, adam.kwolek, marcin.labun,
	ed.ciechanowski

On Fri, 22 Oct 2010 11:03:59 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> We mark the disk in-sync, and also need to init ->recovery_offset lest
> we confuse older versions of mdadm that don't consider this case at
> assembly (i.e. that when growing we assume the disk is insync).
> 
> mdadm: /dev/md0 assembled from 3 drives and  1 rebuilding - not enough to start the array.
> 
> Cc: <stable@kernel.org>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/md/raid5.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 9e8ecd5..f8a27d4 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>  			if (raid5_add_disk(mddev, rdev) == 0) {
>  				char nm[20];
>  				if (rdev->raid_disk >= conf->previous_raid_disks) {
> +					rdev->recovery_offset = MaxSector;
>  					set_bit(In_sync, &rdev->flags);
>  					added_devices++;
>  				} else


Sorry, but I'm not getting this one....

rdev->recovery_offset is only ever used when In_sync is clear.  So it makes
no sense to give it a value when In_sync is set.
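
For reference, super_1_sync() only publishes a recovery offset for
devices that are not In_sync -- roughly (paraphrased, not verbatim):

	if (rdev->raid_disk >= 0 &&
	    !test_bit(In_sync, &rdev->flags)) {
		sb->feature_map |= cpu_to_le32(MD_FEATURE_RECOVERY_OFFSET);
		sb->recovery_offset = cpu_to_le64(rdev->recovery_offset);
	}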

Maybe there are some places where we clear In_sync that need to have
recovery_offset set to zero, but it isn't obvious to me that that would
explain your symptom.

Can you give a bit more detail of the problem you are seeing please?

Thanks,
NeilBrown


* Re: [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case
  2010-10-26  5:22   ` Neil Brown
@ 2010-10-26  7:20     ` Dan Williams
  0 siblings, 0 replies; 9+ messages in thread
From: Dan Williams @ 2010-10-26  7:20 UTC (permalink / raw)
  To: Neil Brown
  Cc: linux-raid@vger.kernel.org, Neubauer, Wojciech, Kwolek, Adam,
	Labun, Marcin, Ciechanowski, Ed

On 10/25/2010 10:22 PM, Neil Brown wrote:
> On Fri, 22 Oct 2010 11:03:54 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
>> mdmon is orchestrating the reshape process and we rely on it to manage the
>> reshape position and metadata updates.
>>
>> Reported-by: Adam Kwolek <adam.kwolek@intel.com>
>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>> ---
>>   drivers/md/raid5.c |    6 +++---
>>   1 files changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 69b0a16..9e8ecd5 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -4243,7 +4243,7 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>>   		set_bit(MD_CHANGE_DEVS, &mddev->flags);
>>   		md_wakeup_thread(mddev->thread);
>>   		wait_event(mddev->sb_wait, mddev->flags == 0 ||
>> -			   kthread_should_stop());
>> +			   !mddev->persistent || kthread_should_stop());
>>   		spin_lock_irq(&conf->device_lock);
>>   		conf->reshape_safe = mddev->reshape_position;
>>   		spin_unlock_irq(&conf->device_lock);
>> @@ -4344,8 +4344,8 @@ static sector_t reshape_request(mddev_t *mddev, sector_t sector_nr, int *skipped
>>   		set_bit(MD_CHANGE_DEVS, &mddev->flags);
>>   		md_wakeup_thread(mddev->thread);
>>   		wait_event(mddev->sb_wait,
>> -			   !test_bit(MD_CHANGE_DEVS, &mddev->flags)
>> -			   || kthread_should_stop());
>> +			   !test_bit(MD_CHANGE_DEVS, &mddev->flags) ||
>> +			   !mddev->persistent || kthread_should_stop());
>>   		spin_lock_irq(&conf->device_lock);
>>   		conf->reshape_safe = mddev->reshape_position;
>>   		spin_unlock_irq(&conf->device_lock);
>
>
> Do we really need this?

Sorry, not anymore: this patch predated the md_update_sb() rework and is 
no longer needed as far as I can see.

--
Dan


* Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
  2010-10-26  5:35   ` Neil Brown
@ 2010-10-26  7:26     ` Dan Williams
  2010-10-26  7:54       ` Neil Brown
  0 siblings, 1 reply; 9+ messages in thread
From: Dan Williams @ 2010-10-26  7:26 UTC (permalink / raw)
  To: Neil Brown
  Cc: linux-raid@vger.kernel.org, Neubauer, Wojciech, Kwolek, Adam,
	Labun, Marcin, Ciechanowski, Ed

On 10/25/2010 10:35 PM, Neil Brown wrote:
> On Fri, 22 Oct 2010 11:03:59 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
>> We mark the disk in-sync, and also need to init ->recovery_offset lest
>> we confuse older versions of mdadm that don't consider this case at
>> assembly (i.e. that when growing we assume the disk is in-sync).
>>
>> mdadm: /dev/md0 assembled from 3 drives and  1 rebuilding - not enough to start the array.
>>
>> Cc: <stable@kernel.org>
>> Signed-off-by: Dan Williams<dan.j.williams@intel.com>
>> ---
>>   drivers/md/raid5.c |    1 +
>>   1 files changed, 1 insertions(+), 0 deletions(-)
>>
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 9e8ecd5..f8a27d4 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>>   			if (raid5_add_disk(mddev, rdev) == 0) {
>>   				char nm[20];
>>   				if (rdev->raid_disk >= conf->previous_raid_disks) {
>> +					rdev->recovery_offset = MaxSector;
>>   					set_bit(In_sync, &rdev->flags);
>>   					added_devices++;
>>   				} else
>
>
> Sorry, but I'm not getting this one....
>
> rdev->recovery_offset is only ever used when In_sync is clear.  So it makes
> no sense to give it a value when In_sync is set.
>
> Maybe there are some places where we clear In_sync that need to have
> recovery_offset set to zero, but it isn't obvious to me that that would
> explain your symptom.
>
> Can you give a bit more detail of the problem you are seeing please?

mdadm -A an array growing raid disks by more than max_degraded.

Here is the related mdadm fix, but I figured it was worthwhile to also 
address this in the kernel for the corner case of new kernel + old mdadm.

> commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
> Author: Dan Williams <dan.j.williams@intel.com>
> Date:   Mon Jul 12 12:03:13 2010 -0700
>
>     Assemble: fix assembly in the delta_disks > max_degraded case
>
>     Incremental assembly works on such an array because the kernel sees the
>     disk as in-sync and that the array is reshaping.  Teach Assemble() the
>     same assumptions.
>
>     This is only needed on kernels that do not initialize ->recovery_offset
>     when activating spares for reshape.
>
>     Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>
> diff --git a/Assemble.c b/Assemble.c
> index afd4e60..409f0d7 100644
> --- a/Assemble.c
> +++ b/Assemble.c
> @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
>                     devices[most_recent].i.events) {
>                         devices[j].uptodate = 1;
>                         if (i < content->array.raid_disks) {
> -                               if (devices[j].i.recovery_start == MaxSector) {
> +                               if (devices[j].i.recovery_start == MaxSector ||
> +                                   (content->reshape_active &&
> +                                    j >= content->array.raid_disks - content->delta_disks)) {
>                                         okcnt++;
>                                         avail[i]=1;
>                                 } else

--
Dan


* Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
  2010-10-26  7:26     ` Dan Williams
@ 2010-10-26  7:54       ` Neil Brown
  2010-10-26  8:43         ` Dan Williams
  0 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2010-10-26  7:54 UTC (permalink / raw)
  To: Dan Williams
  Cc: linux-raid@vger.kernel.org, Neubauer, Wojciech, Kwolek, Adam,
	Labun, Marcin, Ciechanowski, Ed

On Tue, 26 Oct 2010 00:26:33 -0700
Dan Williams <dan.j.williams@intel.com> wrote:

> On 10/25/2010 10:35 PM, Neil Brown wrote:
> > On Fri, 22 Oct 2010 11:03:59 -0700
> > Dan Williams <dan.j.williams@intel.com> wrote:
> >
> >> We mark the disk in-sync, and also need to init ->recovery_offset lest
> >> we confuse older versions of mdadm that don't consider this case at
> >> assembly (i.e. that when growing we assume the disk is in-sync).
> >>
> >> mdadm: /dev/md0 assembled from 3 drives and  1 rebuilding - not enough to start the array.
> >>
> >> Cc: <stable@kernel.org>
> >> Signed-off-by: Dan Williams<dan.j.williams@intel.com>
> >> ---
> >>   drivers/md/raid5.c |    1 +
> >>   1 files changed, 1 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> >> index 9e8ecd5..f8a27d4 100644
> >> --- a/drivers/md/raid5.c
> >> +++ b/drivers/md/raid5.c
> >> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
> >>   			if (raid5_add_disk(mddev, rdev) == 0) {
> >>   				char nm[20];
> >>   				if (rdev->raid_disk >= conf->previous_raid_disks) {
> >> +					rdev->recovery_offset = MaxSector;
> >>   					set_bit(In_sync, &rdev->flags);
> >>   					added_devices++;
> >>   				} else
> >
> >
> > Sorry, but I'm not getting this one....
> >
> > rdev->recovery_offset is only ever used when In_sync is clear.  So it makes
> > no sense to give it a value when In_sync is set.
> >
> > Maybe there are some places where we clear In_sync that need to have
> > recovery_offset set to zero, but it isn't obvious to me that that would
> > explain your symptom.
> >
> > Can you give a bit more detail of the problem you are seeing please?
> 
> mdadm -A an array growing raid disks by more than max_degraded.
> 
> > Here is the related mdadm fix, but I figured it was worthwhile to also 
> address this in the kernel for the corner case of new kernel + old mdadm.
> 
> > commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
> > Author: Dan Williams <dan.j.williams@intel.com>
> > Date:   Mon Jul 12 12:03:13 2010 -0700
> >
> >     Assemble: fix assembly in the delta_disks > max_degraded case
> >
> >     Incremental assembly works on such an array because the kernel sees the
> >     disk as in-sync and that the array is reshaping.  Teach Assemble() the
> >     same assumptions.
> >
> >     This is only needed on kernels that do not initialize ->recovery_offset
> >     when activating spares for reshape.
> >
> >     Signed-off-by: Dan Williams <dan.j.williams@intel.com>
> >
> > diff --git a/Assemble.c b/Assemble.c
> > index afd4e60..409f0d7 100644
> > --- a/Assemble.c
> > +++ b/Assemble.c
> > @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
> >                     devices[most_recent].i.events) {
> >                         devices[j].uptodate = 1;
> >                         if (i < content->array.raid_disks) {
> > -                               if (devices[j].i.recovery_start == MaxSector) {
> > +                               if (devices[j].i.recovery_start == MaxSector ||
> > +                                   (content->reshape_active &&
> > +                                    j >= content->array.raid_disks - content->delta_disks)) {
> >                                         okcnt++;
> >                                         avail[i]=1;
> >                                 } else
> 
> --
> Dan


I think the issue here is probably that the metadata handler isn't setting
recovery_start "properly" for devices that are in-sync just as far as the
the current reshape has progressed.

I really don't think the kernel has anything to do with it as it doesn't
report any recovery_start value at all when the device is 'in_sync'.

So I think this patch is wrong and you need to fix the getinfo_super method
to set recovery_start to MaxSector when the device is a new device in an
array being reshaped.

Could that make sense?

NeilBrown

* Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
  2010-10-26  7:54       ` Neil Brown
@ 2010-10-26  8:43         ` Dan Williams
  0 siblings, 0 replies; 9+ messages in thread
From: Dan Williams @ 2010-10-26  8:43 UTC (permalink / raw)
  To: Neil Brown
  Cc: linux-raid@vger.kernel.org, Neubauer, Wojciech, Kwolek, Adam,
	Labun, Marcin, Ciechanowski, Ed

On 10/26/2010 12:54 AM, Neil Brown wrote:
> On Tue, 26 Oct 2010 00:26:33 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
>> On 10/25/2010 10:35 PM, Neil Brown wrote:
>>> On Fri, 22 Oct 2010 11:03:59 -0700
>>> Dan Williams <dan.j.williams@intel.com> wrote:
>>>
>>>> We mark the disk in-sync, and also need to init ->recovery_offset lest
>>>> we confuse older versions of mdadm that don't consider this case at
>>>> assembly (i.e. that when growing we assume the disk is in-sync).
>>>>
>>>> mdadm: /dev/md0 assembled from 3 drives and  1 rebuilding - not enough to start the array.
>>>>
>>>> Cc: <stable@kernel.org>
>>>> Signed-off-by: Dan Williams<dan.j.williams@intel.com>
>>>> ---
>>>>    drivers/md/raid5.c |    1 +
>>>>    1 files changed, 1 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>>> index 9e8ecd5..f8a27d4 100644
>>>> --- a/drivers/md/raid5.c
>>>> +++ b/drivers/md/raid5.c
>>>> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>>>>    			if (raid5_add_disk(mddev, rdev) == 0) {
>>>>    				char nm[20];
>>>>    				if (rdev->raid_disk >= conf->previous_raid_disks) {
>>>> +					rdev->recovery_offset = MaxSector;
>>>>    					set_bit(In_sync, &rdev->flags);
>>>>    					added_devices++;
>>>>    				} else
>>>
>>>
>>> Sorry, but I'm not getting this one....
>>>
>>> rdev->recovery_offset is only ever used when In_sync is clear.  So it makes
>>> no sense to give it a value when In_sync is set.
>>>
>>> Maybe there are some places where we clear In_sync that need to have
>>> recovery_offset set to zero, but it isn't obvious to me that that would
>>> explain your symptom.
>>>
>>> Can you give a bit more detail of the problem you are seeing please?
>>
>> mdadm -A an array growing raid disks by more than max_degraded.
>>
>> Here is the related mdadm fix, but I figured it was worthwhile to also
>> address this in the kernel for the corner case of new kernel + old mdadm.
>>
>>> commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
>>> Author: Dan Williams <dan.j.williams@intel.com>
>>> Date:   Mon Jul 12 12:03:13 2010 -0700
>>>
>>>      Assemble: fix assembly in the delta_disks > max_degraded case
>>>
>>>      Incremental assembly works on such an array because the kernel sees the
>>>      disk as in-sync and that the array is reshaping.  Teach Assemble() the
>>>      same assumptions.
>>>
>>>      This is only needed on kernels that do not initialize ->recovery_offset
>>>      when activating spares for reshape.
>>>
>>>      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>>>
>>> diff --git a/Assemble.c b/Assemble.c
>>> index afd4e60..409f0d7 100644
>>> --- a/Assemble.c
>>> +++ b/Assemble.c
>>> @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
>>>                      devices[most_recent].i.events) {
>>>                          devices[j].uptodate = 1;
>>>                      if (i < content->array.raid_disks) {
>>> -                               if (devices[j].i.recovery_start == MaxSector) {
>>> +                               if (devices[j].i.recovery_start == MaxSector ||
>>> +                                   (content->reshape_active &&
>>> +                                    j >= content->array.raid_disks - content->delta_disks)) {
>>>                                          okcnt++;
>>>                                          avail[i]=1;
>>>                                  } else
>>
>> --
>> Dan
>
>
> I think the issue here is probably that the metadata handler isn't setting
> recovery_start "properly" for devices that are in-sync just as far as the
> the current reshape has progressed.
>
> I really don't think the kernel has anything to do with it as it doesn't
> report any recovery_start value at all when the device is 'in_sync'.

This was a few months back, but it looks like I took the mdadm fix and 
thought I could derive a kernel fix from it.  Yes, the real problem is in 
the mdadm metadata handler.

> So I think this patch is wrong

The kernel patch, not the mdadm patch.

> and you need to fix the getinfo_super method
> to set recovery_start to MaxSector when the device is a new device in an
> array being reshaped.
>
> Could that make sense?

Yes, I can push this fix down to super1.c to avoid:

# mdadm -A /dev/md1 /dev/loop[0-3]
mdadm: /dev/md1 assembled from 3 drives and  1 rebuilding - not enough 
to start the array.
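
Something along these lines in the super1 getinfo_super method -- an
untested sketch reusing the mdinfo fields from the Assemble.c hunk
above, so the condition in the final fix may well differ:

	/* sketch: a device added by an active raid_disks-growing reshape
	 * is in-sync by construction, so report it as fully recovered
	 * rather than rebuilding */
	if (info->reshape_active &&
	    info->disk.raid_disk >=
	    info->array.raid_disks - info->delta_disks)
		info->recovery_start = MaxSector;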

--
Dan


