From: Yu Zhang <yu.c.zhang@linux.intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Paul Durrant <paul.durrant@citrix.com>,
zhiyuan.lv@intel.com, xen-devel@lists.xen.org
Subject: Re: [PATCH v7 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
Date: Tue, 14 Mar 2017 15:42:24 +0800 [thread overview]
Message-ID: <58C79EE0.7070600@linux.intel.com> (raw)
In-Reply-To: <58C691420200007800142726@prv-mh.provo.novell.com>
On 3/13/2017 7:32 PM, Jan Beulich wrote:
>>>> On 11.03.17 at 09:42, <yu.c.zhang@linux.intel.com> wrote:
>> On 3/11/2017 12:59 AM, Andrew Cooper wrote:
>>> On 08/03/17 15:33, Yu Zhang wrote:
>>>> --- a/xen/arch/x86/hvm/dm.c
>>>> +++ b/xen/arch/x86/hvm/dm.c
>>>> @@ -288,6 +288,7 @@ static int inject_event(struct domain *d,
>>>> return 0;
>>>> }
>>>>
>>>> +#define DMOP_op_mask 0xff
>>>> static int dm_op(domid_t domid,
>>>> unsigned int nr_bufs,
>>>> xen_dm_op_buf_t bufs[])
>>>> @@ -315,10 +316,8 @@ static int dm_op(domid_t domid,
>>>> }
>>>>
>>>> rc = -EINVAL;
>>>> - if ( op.pad )
>>>> - goto out;
>>>>
>>>> - switch ( op.op )
>>>> + switch ( op.op & DMOP_op_mask )
>>> Nack to changes like this. HVMop continuations only existed in this
>>> form because we had to retrofit it to longterm-stable ABIs in the past,
>>> and there were literally no free bits anywhere else.
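(For context, the encoding this diff relies on would look roughly like the sketch below; it is only an illustration of the idea, not the code from the patch.)

#define DMOP_op_mask 0xff

/*
 * With op.op widened to 64 bits, the low byte selects the sub-op and the
 * upper bits are free to carry continuation state, e.g. the gfn at which
 * a preempted p2m sweep should resume (0 on the first invocation).
 */
unsigned int subop  = op.op & DMOP_op_mask;
uint64_t     resume = op.op >> 8;

/*
 * On preemption, a handler would re-encode the state before asking for a
 * hypercall continuation:
 *
 *     op.op = (next_gfn << 8) | subop;
 */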
>> Thank you, Andrew. Frankly, I'm not surprised to see disagreement from
>> you and Jan on this interface. :)
>>
>> Using an HVMop-like continuation was a hard choice for me. I saw you
>> removed these from the DMops, and I agree that the new interface is
>> much cleaner.
>>
>> I noticed that the other two continuable DMops, set_mem_type and
>> modified_mem, both have fields like first_gfn and nr in their
>> structures, which can be updated and reused across the hypercall
>> continuation. For map_mem_type_to_ioreq_server, however, we do not
>> need a gfn exposed in this interface (and I do not think exposing one
>> would be correct); it is only needed when an ioreq server detaches
>> from p2m_ioreq_server and the p2m sweep has not yet finished. So I
>> changed the definition of xen_dm_op, removing the pad field and
>> extending op to 64 bits (so that we have free bits for the
>> continuation). But I do not think this is an optimal solution either.
>>
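(As an aside, the first_gfn/nr pattern those two ops rely on can be sketched roughly as below. The structure, the handler name and the batch size of 256 are made up for illustration; the real set_mem_type/modified_mem handlers differ in detail.)

/* Illustrative stand-in for a continuable DMOP's data; not the real layout. */
struct dmop_range_sketch {
    uint64_t first_gfn;   /* IN/OUT - advanced as work progresses */
    uint64_t nr;          /* IN/OUT - number of gfns still to process */
};

static int dm_op_set_mem_type_sketch(struct domain *d,
                                     struct dmop_range_sketch *data)
{
    unsigned long done = 0;
    int rc = 0;

    while ( data->nr )
    {
        /* ... change the type of gfn data->first_gfn here ... */

        data->first_gfn++;
        data->nr--;

        /* Stop after a bounded batch so the hypercall can be preempted. */
        if ( ++done == 256 && data->nr )
        {
            rc = -ERESTART;
            break;
        }
    }

    /*
     * On -ERESTART the updated first_gfn/nr are copied back to the guest
     * buffer and a continuation is created, so the next invocation resumes
     * exactly where the previous one stopped.
     */
    return rc;
}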
>>> Currently, the DMOP ABI isn't set in stone, so you have until code
>>> freeze in 4.9 to make changes. (i.e. soon now.) We should also
>>> consider which other DMops might potentially need to become continuable,
>>> and take preventative action before the freeze.
>> I can only see set_mem_type and modified_mem needing to become
>> continuable, and they already do this quite elegantly.
>>
>>> If we need to make similar changes once the ABI truly is frozen, there
>>> are plenty of free bits in the end of the union which can be used
>>> without breaking the ABI.
>> Another proposal is to change the definition of
>> xen_dm_op_map_mem_type_to_ioreq_server itself: extend the flags field
>> from uint32_t to uint64_t and use the upper bits to store the gfn.
> Well, you introduce a brand new sub-op in patch 2. Why would
> you even try to re-use part of some other field in such a
> situation, when you can right away add an opaque 64-bit field
> you require the caller to set to zero upon first call? The caller
> not playing by this would - afaict - only harm the guest it controls.
Thanks, Jan.
So you mean changing the definition of
xen_dm_op_map_mem_type_to_ioreq_server to something like this?
struct xen_dm_op_map_mem_type_to_ioreq_server {
    ioservid_t id;     /* IN - ioreq server id */
    uint16_t type;     /* IN - memory type */
    uint32_t flags;    /* IN - types of accesses to be forwarded to the
                          ioreq server. flags with 0 means to unmap the
                          ioreq server */
    uint64_t opaque;   /* only used for hypercall continuation, should
                          be set to zero by the caller */
};
If so, is there any approach in the hypervisor to differentiate the
first call from a continued hypercall? Do we need some form of check on
this opaque field? And yes, a caller not playing by this rule would only
harm the guest the device model controls.
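One way I can imagine it working (only a sketch: sweep_p2m_ioreq_server_sketch() is a made-up helper, and I am assuming the dm_op dispatcher copies the updated op back to the guest buffer and creates a continuation whenever a handler returns -ERESTART): opaque == 0 simply means "start the sweep from gfn 0", and a preempted handler stores the next gfn into opaque, so no separate flag is needed to tell the first call from a continuation.

/*
 * Hypothetical helper: resets a bounded batch of outstanding
 * p2m_ioreq_server entries starting at first_gfn and returns the next gfn
 * still to be processed, or 0 once the sweep is complete.
 */
unsigned long sweep_p2m_ioreq_server_sketch(struct domain *d,
                                            unsigned long first_gfn);

static int map_mem_type_to_ioreq_server_sketch(
    struct domain *d,
    struct xen_dm_op_map_mem_type_to_ioreq_server *data)
{
    unsigned long next_gfn;

    /* flags != 0 means mapping; the mapping work itself is omitted here. */
    if ( data->flags != 0 )
        return 0;

    /*
     * First call: the caller must pass opaque == 0, so the sweep starts
     * from gfn 0.  On a continuation, opaque holds the gfn at which the
     * previous invocation was preempted.
     */
    next_gfn = sweep_p2m_ioreq_server_sketch(d, data->opaque);
    if ( next_gfn != 0 )
    {
        /* Not done yet: record where to resume and request a continuation. */
        data->opaque = next_gfn;
        return -ERESTART;
    }

    return 0;
}

A caller that does not play by the rule and passes a non-zero opaque on its first call would merely skip part of the sweep for its own guest, which matches your point above.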
Thanks
Yu
> Jan
>
>