From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	zhiyuan.lv@intel.com, xen-devel@lists.xen.org
Subject: Re: [PATCH RFC] x86/ioreq server: Optimize p2m cleaning up code in p2m_finish_type_change().
Date: Wed, 10 May 2017 13:27:33 +0800	[thread overview]
Message-ID: <5912A4C5.8050600@linux.intel.com> (raw)
In-Reply-To: <59120A6F0200007800158530@prv-mh.provo.novell.com>



On 5/10/2017 12:29 AM, Jan Beulich wrote:
>>>> On 05.04.17 at 10:59, <yu.c.zhang@linux.intel.com> wrote:
>> --- a/xen/arch/x86/hvm/dm.c
>> +++ b/xen/arch/x86/hvm/dm.c
>> @@ -411,14 +411,17 @@ static int dm_op(domid_t domid,
>>               while ( read_atomic(&p2m->ioreq.entry_count) &&
>>                       first_gfn <= p2m->max_mapped_pfn )
>>               {
>> +                bool changed = false;
>> +
>>                   /* Iterate p2m table for 256 gfns each time. */
>>                   p2m_finish_type_change(d, _gfn(first_gfn), 256,
>> -                                       p2m_ioreq_server, p2m_ram_rw);
>> +                                       p2m_ioreq_server, p2m_ram_rw, &changed);
>>   
>>                   first_gfn += 256;
>>   
>>                   /* Check for continuation if it's not the last iteration. */
>>                   if ( first_gfn <= p2m->max_mapped_pfn &&
>> +                     changed &&
>>                        hypercall_preempt_check() )
>>                   {
>>                       rc = -ERESTART;
> I appreciate and support the intention, but you're opening up a
> long lasting loop here in case very little or no changes need to
> be done. You need to check for preemption every so many
> iterations even if you've never seen "changed" come back set.

Thanks for your comments, Jan.
Indeed, this patch is problematic. Another thought: since the current
p2m sweeping implementation disables live migration when there are
ioreq server entries left, and George had previously proposed a generic
p2m change solution, I'd like to defer this optimization and handle it
together with the generic solution in a future Xen release. :-)
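
For reference, the kind of bound Jan describes might look roughly like
the sketch below (same loop in dm_op()). This is untested and purely
illustrative: the iter counter, the threshold of 16 iterations and the
break are additions of mine, not part of the posted patch.

    unsigned int iter = 0;

    while ( read_atomic(&p2m->ioreq.entry_count) &&
            first_gfn <= p2m->max_mapped_pfn )
    {
        bool changed = false;

        /* Iterate p2m table for 256 gfns each time. */
        p2m_finish_type_change(d, _gfn(first_gfn), 256,
                               p2m_ioreq_server, p2m_ram_rw, &changed);

        first_gfn += 256;
        iter++;

        /*
         * Check for continuation if it's not the last iteration, and
         * also on every 16th iteration even if nothing changed, so a
         * mostly-unchanged p2m cannot keep us in this loop unpreempted.
         */
        if ( first_gfn <= p2m->max_mapped_pfn &&
             (changed || !(iter & 0xf)) &&
             hypercall_preempt_check() )
        {
            rc = -ERESTART;
            break;
        }
    }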


Yu

> Jan
>
>

Thread overview: 7+ messages
2017-04-05  8:59 [PATCH RFC] x86/ioreq server: Optimize p2m cleaning up code in p2m_finish_type_change() Yu Zhang
2017-04-05 15:10 ` George Dunlap
2017-04-05 15:11   ` George Dunlap
2017-04-05 16:28     ` Yu Zhang
2017-05-09 16:29 ` Jan Beulich
2017-05-10  5:27   ` Yu Zhang [this message]
2017-05-09 16:30 ` Jan Beulich
