From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: George Dunlap <george.dunlap@citrix.com>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>,
	Jan Beulich <JBeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, "Lv, Zhiyuan" <zhiyuan.lv@intel.com>,
	"jun.nakajima@intel.com" <jun.nakajima@intel.com>
Subject: Re: [PATCH v2 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server
Date: Thu, 14 Apr 2016 18:45:08 +0800	[thread overview]
Message-ID: <570F74B4.7040209@linux.intel.com> (raw)
In-Reply-To: <570B8738.8010303@linux.intel.com>

On 4/11/2016 7:15 PM, Yu, Zhang wrote:
>
>
> On 4/8/2016 7:01 PM, George Dunlap wrote:
>> On 08/04/16 11:10, Yu, Zhang wrote:
>> [snip]
>>> BTW, I noticed your reply has not been CCed to the mailing list, and I
>>> also wonder if we should raise this last question in the community?
>>
>> Oops -- that was a mistake on my part.  :-)  I appreciate the
>> discretion; just so you know in the future, if I'm purposely changing
>> the CC list (removing xen-devel and/or adding extra people), I'll almost
>> always say so at the top of the mail.
>>
>>>> And then of course there's the p2m_ioreq_server -> p2m_ram_logdirty
>>>> transition -- I assume that live migration is incompatible with this
>>>> functionality?  Is there anything that prevents a live migration from
>>>> being started when there are outstanding p2m_ioreq_server entries?
>>>>
>>>
>>> Another good question, and the answer is unfortunately yes. :-)
>>>
>>> If live migration happens during the normal emulation process, entries
>>> marked with p2m_ioreq_server will be changed to p2m_ram_logdirty in
>>> resolve_misconfig(), and later write operations will change them to
>>> p2m_ram_rw; thereafter, accesses to these pages can no longer be
>>> forwarded to the device model. From this point of view, this
>>> functionality is incompatible with live migration.
>>>
>>> But for XenGT, I think this is acceptable, because if live migration
>>> is to be supported in the future, intervention from the backend device
>>> model will be necessary. At that time, we can guarantee from the device
>>> model side that there are no outdated p2m_ioreq_server entries, hence no
>>> need to reset the p2m type back to p2m_ram_rw (and no need to include
>>> p2m_ioreq_server in P2M_CHANGEABLE_TYPES). By "outdated", I mean that
>>> entries marked with p2m_ioreq_server should be regarded as outdated once
>>> an ioreq server is detached from this type, or before an ioreq server
>>> is attached to it.
>>>
>>> Is this acceptable to you? Any suggestions?
>>
>> So the question is, as of this series, what happens if someone tries to
>> initiate a live migration while there are outstanding p2m_ioreq_server
>> entries?
>>
>> If the answer is "the ioreq server suddenly loses all control of the
>> memory", that's something that needs to be changed.
>>
>
> Sorry, I'm afraid that for this patch series, the above description is
> the answer.
>
> Besides, I find it hard to change the current code to support both the
> deferred resetting of p2m_ioreq_server and live migration at the same
> time. One reason is that a page with p2m_ioreq_server behaves differently
> in different situations.
>
> My assumption for XenGT is that, for live migration to work, the device
> model should guarantee there are no outstanding p2m_ioreq_server pages
> in the hypervisor (so there is no need for the deferred recalculation),
> and it is the device model that should be responsible for copying the
> write-protected guest pages later.
>
> Another solution I can think of: when unmapping the ioreq server, we
> walk the p2m table and reset entries with p2m_ioreq_server back to
> p2m_ram_rw directly, instead of deferring the reset. Of course, this
> has a performance impact, but since mapping and unmapping an ioreq
> server is not a frequent operation, the penalty may be acceptable.
> What do you think of this approach?
>

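Just to restate the transition discussed above in a self-contained way,
here is a toy C model (NOT Xen code; it only loosely mirrors what
resolve_misconfig() and the write-fault path do):

/* Toy model of the p2m type transitions during live migration. */
typedef enum {
    p2m_ram_rw,
    p2m_ram_logdirty,
    p2m_ioreq_server,
} p2m_type_t;

/* Global logdirty recalculation: an outstanding p2m_ioreq_server entry
 * is swept into logdirty, so the ioreq server silently loses the page. */
static p2m_type_t recalc_for_logdirty(p2m_type_t t)
{
    return (t == p2m_ram_rw || t == p2m_ioreq_server) ? p2m_ram_logdirty : t;
}

/* A later guest write then turns the page into plain RAM; writes are no
 * longer forwarded to the device model. */
static p2m_type_t on_guest_write(p2m_type_t t)
{
    return (t == p2m_ram_logdirty) ? p2m_ram_rw : t;
}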
George, sorry to bother you. Any comments on the above option? :)
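To make that option concrete, here is a rough sketch of the synchronous
reset, reusing the toy p2m_type_t above. It is only an illustration: a
real implementation would walk the p2m trie under the p2m lock rather
than a flat array, and the function name is made up:

/* On unmap, eagerly reset every p2m_ioreq_server entry to p2m_ram_rw,
 * instead of leaving the reset to the deferred recalculation. */
void reset_ioreq_server_entries(p2m_type_t *table, unsigned long nr_gfns)
{
    unsigned long gfn;

    for ( gfn = 0; gfn < nr_gfns; gfn++ )
        if ( table[gfn] == p2m_ioreq_server )
            table[gfn] = p2m_ram_rw;  /* O(nr_gfns), paid once per unmap */
}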

Another choice might be to let live migration fail if there are
outstanding p2m_ioreq_server entries. But I'm not inclined to do so,
because:
1> I'd still like to keep the live migration feature for XenGT.
2> It is not easy to know whether there are outstanding p2m_ioreq_server
entries. Since a p2m type change is not only triggered by hypercall,
keeping a counter of the remaining p2m_ioreq_server entries would mean a
lot of code changes (see the sketch after this list).
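Just to illustrate the bookkeeping that option would require (the
structure and field below are hypothetical, not existing Xen
interfaces):

#include <stdbool.h>

/* A per-domain count of live p2m_ioreq_server entries. The intrusive
 * part is that it would have to be updated on every type change, not
 * just in the hypercall path. */
struct ioreq_mem_stats {
    unsigned long nr_ioreq_server_entries;
};

static bool logdirty_enable_allowed(const struct ioreq_mem_stats *s)
{
    /* Fail the live migration request while entries remain outstanding. */
    return s->nr_ioreq_server_entries == 0;
}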

Besides, I wonder whether the requirement to reset p2m_ioreq_server
entries is really indispensable; could we make the device model side
responsible for this? The worst case I can imagine if the device model
fails to do so is that operations on a gfn might be delivered to the
wrong device model. I'm not clear what kind of damage this would cause
to the hypervisor or to other VMs.

Do any other maintainers have any suggestions?
Thanks in advance! :)
>> If the answer is, "everything just works", that's perfect.
>>
>> If the answer is, "Before logdirty mode is set, the ioreq server has the
>> opportunity to detach, removing the p2m_ioreq_server entries, and
>> operating without that functionality", that's good too.
>>
>> If the answer is, "the live migration request fails and the guest
>> continues to run", that's also acceptable.  If you want this series to
>> be checked in today (the last day for 4.7), this is probably your best
>> bet.
>>
>>   -George
>>
>>
>>
>

Regards
Yu

