linux-block.vger.kernel.org archive mirror
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Manjunath Patil <manjunath.b.patil@oracle.com>,
	jgross@suse.com, konrad.wilk@oracle.com, roger.pau@citrix.com,
	axboe@kernel.dk
Cc: linux-block@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen-blkfront: use old rinfo after enomem during migration
Date: Fri, 30 Nov 2018 17:33:46 -0500	[thread overview]
Message-ID: <3da66993-a044-c65c-88a6-c0672ab8814f@oracle.com> (raw)
In-Reply-To: <1dafcf3d-c3b6-e6c5-f5d4-fbdb549aaa9c@oracle.com>

On 11/30/18 4:49 PM, Manjunath Patil wrote:
> Thank you, Boris, for your comments. I have removed my faulty email
> address.
>
> replies inline.
> On 11/30/2018 12:42 PM, Boris Ostrovsky wrote:
>> On 11/29/18 12:17 AM, Manjunath Patil wrote:
>>> Hi,
>>> Feel free to suggest/comment on this.
>>>
>>> I am trying to do the following at the destination during migration:
>>> 1. Don't clear the old rinfo in blkif_free(). Instead, just clean it.
>>> 2. Store the old rinfo and nr_rings into temp variables in
>>> negotiate_mq()
>>> 3. let nr_rings get re-calculated based on backend data
>>> 4. try allocating new memory based on new nr_rings
>> Since I suspect the number of rings will likely be the same, why
>> not reuse the rings in the common case?
> I thought attaching devices would happen more often than migration,
> so I did not want to add extra checks for
>   - whether I am inside the migration code path, and
>   - whether the new nr_rings is equal to the old nr_rings
>
> Sure, adding such a check would avoid the memory allocation
> altogether in the migration path, but it would add a little overhead
> to normal device addition.
>
> Do you think it's worth adding that change?


IMO a couple of extra checks are not going to make much difference.

I wonder though --- have you actually seen a case where the allocation
failed and the changes provided in this patch made things work? I am
asking because right after negotiate_mq() we will call setup_blkring(),
which will want to allocate a bunch of memory. A failure there is fatal
(to ring setup). So it seems to me that you will survive negotiate_mq()
but then will likely fail soon after.


>>
>>
>>> 5.
>>>    a. If memory allocation is a success
>>>       - free the old rinfo and proceed to use the new rinfo
>>>    b. If memory allocation is a failure
>>>       - use the old rinfo
>>>       - adjust nr_rings to the lower of the new and old nr_rings
>>
>>> @@ -1918,10 +1936,24 @@ static int negotiate_mq(struct blkfront_info
>>> *info)
>>>                     sizeof(struct blkfront_ring_info),
>>>                     GFP_KERNEL);
>>>       if (!info->rinfo) {
>>> -        xenbus_dev_fatal(info->xbdev, -ENOMEM, "allocating
>>> ring_info structure");
>>> -        info->nr_rings = 0;
>>> -        return -ENOMEM;
>>> -    }
>>> +        if (unlikely(nr_rings_old)) {
>>> +            /* We might waste some memory if
>>> +             * info->nr_rings < nr_rings_old
>>> +             */
>>> +            info->rinfo = rinfo_old;
>>> +            if (info->nr_rings > nr_rings_old)
>>> +                info->nr_rings = nr_rings_old;
>>> +            xenbus_dev_fatal(info->xbdev, -ENOMEM,
>>
>> Why xenbus_dev_fatal()?
> I wanted to make sure that this message is seen on the console by
> default, so that we know an ENOMEM event happened and we recovered
> from it. What do you suggest instead? xenbus_dev_error()?

Neither. xenbus_dev_fatal() is going to change the connection state, so
it is certainly not what we want. And even xenbus_dev_error() doesn't
look like the right thing to do, since as far as block device setup is
concerned there are no errors.

Maybe pr_warn().

-boris


>>
>> -boris
>>
>>
>>> +            "reusing old ring_info structure (new ring size=%d)",
>>> +                info->nr_rings);
>>> +        } else {
>>> +            xenbus_dev_fatal(info->xbdev, -ENOMEM,
>>> +                "allocating ring_info structure");
>>> +            info->nr_rings = 0;
>>> +            return -ENOMEM;
>>> +        }
>>> +    } else if (unlikely(nr_rings_old))
>>> +        kfree(rinfo_old);
>>>         for (i = 0; i < info->nr_rings; i++) {
>>>           struct blkfront_ring_info *rinfo;
>


Thread overview: 11+ messages
2018-11-29  5:17 [PATCH] xen-blkfront: use old rinfo after enomem during migration Manjunath Patil
2018-11-30 20:42 ` [Xen-devel] " Boris Ostrovsky
2018-11-30 21:49   ` Manjunath Patil
2018-11-30 22:33     ` Boris Ostrovsky [this message]
2018-12-02 20:31       ` Manjunath Patil
2018-12-03 16:07         ` Boris Ostrovsky
2018-12-04  1:14           ` Dongli Zhang
2018-12-04  2:16             ` Boris Ostrovsky
2018-12-04  2:49               ` Manjunath Patil
2018-12-04  3:17                 ` Dongli Zhang
2018-12-04  5:57             ` Juergen Gross
