From: Dan Williams <dan.j.williams@intel.com>
To: Neil Brown <neilb@suse.de>
Cc: "linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
"Neubauer, Wojciech" <Wojciech.Neubauer@intel.com>,
"Kwolek, Adam" <adam.kwolek@intel.com>,
"Labun, Marcin" <Marcin.Labun@intel.com>,
"Ciechanowski, Ed" <ed.ciechanowski@intel.com>
Subject: Re: [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks
Date: Tue, 26 Oct 2010 01:43:10 -0700
Message-ID: <4CC6949E.8050605@intel.com>
In-Reply-To: <20101026185424.6d13f43b@notabene>
On 10/26/2010 12:54 AM, Neil Brown wrote:
> On Tue, 26 Oct 2010 00:26:33 -0700
> Dan Williams <dan.j.williams@intel.com> wrote:
>
>> On 10/25/2010 10:35 PM, Neil Brown wrote:
>>> On Fri, 22 Oct 2010 11:03:59 -0700
>>> Dan Williams <dan.j.williams@intel.com> wrote:
>>>
>>>> We mark the disk in-sync, and also need to init ->recovery_offset lest
>>>> we confuse older versions of mdadm that don't consider this case at
>>>> assembly (i.e. that when growing we assume the disk is in-sync).
>>>>
>>>> mdadm: /dev/md0 assembled from 3 drives and 1 rebuilding - not enough to start the array.
>>>>
>>>> Cc:<stable@kernel.org>
>>>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>>>> ---
>>>> drivers/md/raid5.c | 1 +
>>>> 1 files changed, 1 insertions(+), 0 deletions(-)
>>>>
>>>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>>>> index 9e8ecd5..f8a27d4 100644
>>>> --- a/drivers/md/raid5.c
>>>> +++ b/drivers/md/raid5.c
>>>> @@ -5580,6 +5580,7 @@ static int raid5_start_reshape(mddev_t *mddev)
>>>>  		if (raid5_add_disk(mddev, rdev) == 0) {
>>>>  			char nm[20];
>>>>  			if (rdev->raid_disk >= conf->previous_raid_disks) {
>>>> +				rdev->recovery_offset = MaxSector;
>>>>  				set_bit(In_sync, &rdev->flags);
>>>>  				added_devices++;
>>>>  			} else
>>>
>>>
>>> Sorry, but I'm not getting this one....
>>>
>>> rdev->recovery_offset is only ever used when In_sync is clear. So it makes
>>> no sense to give it a value when In_sync is set.
>>>
>>> Maybe there are some places where we clear In_sync that need to have
>>> recovery_offset set to zero, but it isn't obvious to me that that would
>>> explain your symptom.
>>>
>>> Can you give a bit more detail of the problem you are seeing please?
>>
>> Assembling (mdadm -A) an array that is growing raid_disks by more than max_degraded.
>>
>> Here is the related mdadm fix, but I figured it was worthwhile to also
>> address this in the kernel for the corner case of new kernel + old mdadm.
>>
>>> commit 156a33719d956fe90cbb1625b13beb52a0d6fd87
>>> Author: Dan Williams <dan.j.williams@intel.com>
>>> Date: Mon Jul 12 12:03:13 2010 -0700
>>>
>>> Assemble: fix assembly in the delta_disks > max_degraded case
>>>
>>> Incremental assembly works on such an array because the kernel reports
>>> the disk as in-sync and the array as reshaping. Teach Assemble() the
>>> same assumptions.
>>>
>>> This is only needed on kernels that do not initialize ->recovery_offset
>>> when activating spares for reshape.
>>>
>>> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
>>>
>>> diff --git a/Assemble.c b/Assemble.c
>>> index afd4e60..409f0d7 100644
>>> --- a/Assemble.c
>>> +++ b/Assemble.c
>>> @@ -804,7 +804,9 @@ int Assemble(struct supertype *st, char *mddev,
>>>  		    devices[most_recent].i.events) {
>>>  			devices[j].uptodate = 1;
>>>  			if (i < content->array.raid_disks) {
>>> -				if (devices[j].i.recovery_start == MaxSector) {
>>> +				if (devices[j].i.recovery_start == MaxSector ||
>>> +				    (content->reshape_active &&
>>> +				     j >= content->array.raid_disks - content->delta_disks)) {
>>>  					okcnt++;
>>>  					avail[i]=1;
>>>  				} else
>>
>> --
>> Dan
>
>
> I think the issue here is probably that the metadata handler isn't setting
> recovery_start "properly" for devices that are in-sync just as far as the
> current reshape has progressed.
>
> I really don't think the kernel has anything to do with it as it doesn't
> report any recovery_start value at all when the device is 'in_sync'.
This was a few months back, but it looks like I started from the mdadm
fix and thought I could derive a kernel fix from it. Yes, the real
problem is in the mdadm metadata handler.
> So I think this patch is wrong
The kernel patch, not the mdadm patch.
> and you need to fix the getinfo_super method
> to set recovery_start to MaxSector when the device is a new device in an
> array being reshaped.
>
> Could that make sense?
Yes, I can push this fix down to super1.c to avoid:
# mdadm -A /dev/md1 /dev/loop[0-3]
mdadm: /dev/md1 assembled from 3 drives and 1 rebuilding - not enough
to start the array.
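
A minimal sketch of the check getinfo_super needs (standalone, with
illustrative names rather than the exact super1.c fields and helpers,
so treat it as an assumption about the shape of the fix, not the patch
itself):

#include <stdbool.h>
#include <stdint.h>

#define MaxSector (~(uint64_t)0)

/*
 * During a reshape the superblock's raid_disks already holds the new
 * (post-grow) count, so roles at or beyond raid_disks - delta_disks
 * belong to the newly added devices.  Report those as fully recovered
 * even though the on-disk recovery offset is 0: they are in-sync as
 * far as the reshape has progressed.
 */
static uint64_t effective_recovery_start(bool reshape_active, int role,
					 int raid_disks, int delta_disks,
					 bool have_recovery_offset,
					 uint64_t recovery_offset)
{
	if (reshape_active && role >= raid_disks - delta_disks)
		return MaxSector;
	return have_recovery_offset ? recovery_offset : MaxSector;
}

With recovery_start reported as MaxSector for the grown disks, the
existing Assemble() test above counts them toward okcnt and the array
starts.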
--
Dan
Thread overview: 9+ messages
2010-10-22 18:03 [PATCH 0/2] reshape fixlets for 2.6.37 Dan Williams
2010-10-22 18:03 ` [PATCH 1/2] md/raid5: skip wait for MD_CHANGE_DEVS acknowledgement in the external case Dan Williams
2010-10-26 5:22 ` Neil Brown
2010-10-26 7:20 ` Dan Williams
2010-10-22 18:03 ` [PATCH 2/2] md/raid5: initialize ->recovery_offset when growing raid_disks Dan Williams
2010-10-26 5:35 ` Neil Brown
2010-10-26 7:26 ` Dan Williams
2010-10-26 7:54 ` Neil Brown
2010-10-26 8:43 ` Dan Williams [this message]