From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Paul Durrant <Paul.Durrant@citrix.com>, Jan Beulich <JBeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
George Dunlap <George.Dunlap@citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
"zhiyuan.lv@intel.com" <zhiyuan.lv@intel.com>,
Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [PATCH v7 2/5] x86/ioreq server: Add DMOP to map guest ram with p2m_ioreq_server to an ioreq server.
Date: Tue, 14 Mar 2017 20:03:32 +0800 [thread overview]
Message-ID: <58C7DC14.3050603@linux.intel.com> (raw)
In-Reply-To: <23d843519d0447289e5627c02b348beb@AMSPEX02CL03.citrite.net>
On 3/14/2017 6:40 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 14 March 2017 09:53
>> To: Paul Durrant <Paul.Durrant@citrix.com>; Jan Beulich
>> <JBeulich@suse.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
>> <George.Dunlap@citrix.com>; Jun Nakajima <jun.nakajima@intel.com>;
>> Kevin Tian <kevin.tian@intel.com>; zhiyuan.lv@intel.com; xen-
>> devel@lists.xen.org; Tim (Xen.org) <tim@xen.org>
>> Subject: Re: [PATCH v7 2/5] x86/ioreq server: Add DMOP to map guest ram
>> with p2m_ioreq_server to an ioreq server.
>>
>>
>>
>> On 3/14/2017 5:40 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>> [snip]
>>>>>>>> +struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain
>> *d,
>>>>>>>> + unsigned int *flags)
>>>>>>>> +{
>>>>>>>> + struct p2m_domain *p2m = p2m_get_hostp2m(d);
>>>>>>>> + struct hvm_ioreq_server *s;
>>>>>>>> +
>>>>>>>> + spin_lock(&p2m->ioreq.lock);
>>>>>>>> +
>>>>>>>> + s = p2m->ioreq.server;
>>>>>>>> + *flags = p2m->ioreq.flags;
>>>>>>>> +
>>>>>>>> + spin_unlock(&p2m->ioreq.lock);
>>>>>>>> + return s;
>>>>>>>> +}
>>>>>>> I'm afraid this question was asked before, but since there's no
>>>>>>> comment here or anywhere, I can't recall if there was a reason why
>>>>>>> s potentially being stale by the time the caller looks at it is not a
>>>>>>> problem.
>>>>>> Well, it is possible that s is stale. I did not take it as a problem
>>>>>> because the device model will later discard such an io request. And I
>>>>>> believe the current hvm_select_ioreq_server() has the same issue - the
>>>>>> returned s should be considered stale if the MMIO/PIO address is
>>>>>> removed from the ioreq server's rangeset.
>>> An enabled emulator has to be prepared to receive ioreqs for ranges it
>>> has unmapped since there is no domain_pause() to prevent a race.
>>
>> Thank you for the reply, Paul.
>> So you mean using the ioreq server lock around this process will not
>> prevent this either? Why?
>>
> Well, if emulation has already sampled the value on another vcpu then that emulation request may race with the range being disabled. Remember that (for good reason) the lock is not held by hvm_select_ioreq_server() and is only held by hvm_unmap_io_range_from_ioreq_server() to protect against other invocations of similar manipulation functions.
Oh. So that's why you mentioned the domain_pause() to prevent the race,
right?
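To make the race concrete, here is a minimal sketch in plain C with pthread mutexes and trivial mock structures standing in for the Xen types (names are illustrative only, not the real Xen code). The lock guarantees only that (server, flags) is read as a consistent snapshot; nothing stops another vcpu from unmapping the server once the lock is dropped, which is exactly why the returned pointer can be stale:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Mock stand-ins for the Xen structures. */
struct hvm_ioreq_server { int id; };

struct p2m_ioreq {
    pthread_mutex_t lock;
    struct hvm_ioreq_server *server;
    unsigned int flags;
};

/* Analogous to p2m_get_ioreq_server(): the lock makes the read of
 * (server, flags) a consistent snapshot, nothing more. */
static struct hvm_ioreq_server *
get_ioreq_server(struct p2m_ioreq *p2m, unsigned int *flags)
{
    struct hvm_ioreq_server *s;

    pthread_mutex_lock(&p2m->lock);
    s = p2m->server;
    *flags = p2m->flags;
    pthread_mutex_unlock(&p2m->lock);

    /* From here on, another vcpu may unmap the server; the caller is
     * left holding a potentially stale snapshot. */
    return s;
}

/* Analogous to the unmap path, racing with the reader above. */
static void unmap_ioreq_server(struct p2m_ioreq *p2m)
{
    pthread_mutex_lock(&p2m->lock);
    p2m->server = NULL;
    p2m->flags = 0;
    pthread_mutex_unlock(&p2m->lock);
}
```

Holding the lock across the entire selection-and-dispatch would close the window, but as noted above that is what hvm_select_ioreq_server() deliberately avoids.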
>>>>>> Another thought is, if you think it is inappropriate for the device
>>>>>> model to do the check, we can use spin_lock_recursive on
>>>>>> ioreq_server.lock to protect the whole ioreq server selection, and
>>>>>> release the lock after the ioreq is sent out.
>>>>> Well, let's first ask Paul as to what his perspective here is, both
>>>>> specifically for this change and more generally regarding what
>>>>> you say above.
>>>> Paul, any suggestions on this and the above one? :)
>>> Well, as I said above, the device model has to check whether it is
>>> willing to handle an ioreq it is passed and terminate it appropriately
>>> under all circumstances. There is no option for it to reject the I/O.
>>> This may be ok for MMIO regions coming and going, but is there anything
>>> more to consider here if we change a page type from ioreq_server back to
>>> RAM? Clearly we need to make sure that there is no scope for I/O to that
>>> page being incorrectly handled during the transition.
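The check-and-terminate duty described above might be sketched like this on the device model side (a hypothetical handler in plain C; the ioreq shape, the all-ones read completion, and the function names are illustrative assumptions, not the real emulator or Xen public-header code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative ioreq shape; the real structure lives in Xen's
 * public headers. */
struct ioreq {
    uint64_t addr;
    uint64_t data;
    bool     is_read;
};

struct range { uint64_t start, end; };

/* Does the emulator still claim this address?  Ranges may have been
 * unmapped after Xen selected this server, so this can legitimately
 * fail for an ioreq Xen dispatched to us. */
static bool claims_addr(const struct range *r, size_t n, uint64_t addr)
{
    for (size_t i = 0; i < n; i++)
        if (addr >= r[i].start && addr <= r[i].end)
            return true;
    return false;
}

/* The emulator may not reject an ioreq; it must terminate it.  Here a
 * stale read completes with all-ones, as unclaimed MMIO typically
 * reads, and a stale write is silently dropped - one plausible policy,
 * chosen for illustration. */
static uint64_t handle_ioreq(const struct range *r, size_t n,
                             const struct ioreq *req)
{
    if (!claims_addr(r, n, req->addr))
        return req->is_read ? ~0ULL : 0;   /* terminate stale request */
    return req->is_read ? req->data : 0;   /* normal emulation path */
}
```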
>>
>> I don't think we need to worry about the p2m type change; patch 1
>> prevents this. The s returned by p2m_get_ioreq_server() may be stale
>> (and is then discarded by the device model in our current code), but the
>> p2m type will not be stale. I agree the device model has the
>> responsibility to do such a check, but I also wonder if it is possible
>> for the hypervisor to provide some kind of insurance.
>>
> Not in a cheap way. More locking code could be added, but it's likely to be convoluted and have a detrimental effect on performance.
Yep. Sounds reasonable. :)
So, Jan. Could we draw a conclusion here that, although the selected
ioreq server may be stale, it should be the device model that does the
check?
Thanks
Yu
> Paul
>
>> Thanks
>> Yu
>>
>>> Paul
>>>
>>>> Thanks
>>>> Yu
>>>>> Jan
>>>>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel