From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: Paul Durrant <Paul.Durrant@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>,
	"Keir (Xen.org)" <keir@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	"zhiyuan.lv@intel.com" <zhiyuan.lv@intel.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server
Date: Tue, 11 Aug 2015 16:40:40 +0800	[thread overview]
Message-ID: <55C9B508.9040502@linux.intel.com> (raw)
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD02F57DD79@AMSPEX01CL01.citrite.net>



On 8/11/2015 4:25 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 11 August 2015 08:57
>> To: Paul Durrant; Andrew Cooper; Wei Liu
>> Cc: xen-devel@lists.xen.org; Ian Jackson; Stefano Stabellini; Ian Campbell;
>> Keir (Xen.org); jbeulich@suse.com; Kevin Tian; zhiyuan.lv@intel.com
>> Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq
>> server
>>
>>
>>
>> On 8/10/2015 6:57 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>>>> Sent: 10 August 2015 11:56
>>>> To: Paul Durrant; Wei Liu; Yu Zhang
>>>> Cc: xen-devel@lists.xen.org; Ian Jackson; Stefano Stabellini; Ian Campbell;
>>>> Keir (Xen.org); jbeulich@suse.com; Kevin Tian; zhiyuan.lv@intel.com
>>>> Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server
>>>>
>>>> On 10/08/15 09:33, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Wei Liu [mailto:wei.liu2@citrix.com]
>>>>>> Sent: 10 August 2015 09:26
>>>>>> To: Yu Zhang
>>>>>> Cc: xen-devel@lists.xen.org; Paul Durrant; Ian Jackson; Stefano Stabellini;
>>>>>> Ian Campbell; Wei Liu; Keir (Xen.org); jbeulich@suse.com; Andrew Cooper;
>>>>>> Kevin Tian; zhiyuan.lv@intel.com
>>>>>> Subject: Re: [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server
>>>>>>
>>>>>> On Mon, Aug 10, 2015 at 11:33:40AM +0800, Yu Zhang wrote:
>>>>>>> Currently in ioreq server, guest write-protected RAM pages are
>>>>>>> tracked in the same rangeset as device MMIO resources. Yet
>>>>>>> unlike device MMIO, which can be in big chunks, the guest write-
>>>>>>> protected pages may be discrete ranges of 4K bytes each.
>>>>>>>
>>>>>>> This patch uses a separate rangeset for the guest RAM pages,
>>>>>>> and a new ioreq type, IOREQ_TYPE_MEM, is defined.
>>>>>>>
>>>>>>> Note: Previously, a new hypercall or subop was suggested to map
>>>>>>> write-protected pages into ioreq server. However, it turned out
>>>>>>> the handler of this new hypercall would be almost the same as the
>>>>>>> existing pair - HVMOP_[un]map_io_range_to_ioreq_server - and there's
>>>>>>> already a type parameter in this hypercall. So no new hypercall is
>>>>>>> defined; only a new type is introduced.
>>>>>>>
>>>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>>>>> ---
>>>>>>>    tools/libxc/include/xenctrl.h    | 39 +++++++++++++++++++++++---
>>>>>>>    tools/libxc/xc_domain.c          | 59 ++++++++++++++++++++++++++++++++++++++--
>>>>>>
>>>>>> FWIW the hypercall wrappers look correct to me.
>>>>>>
>>>>>>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>>>>>>> index 014546a..9106cb9 100644
>>>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>>>> @@ -329,8 +329,9 @@ struct xen_hvm_io_range {
>>>>>>>        ioservid_t id;               /* IN - server id */
>>>>>>>        uint32_t type;               /* IN - type of range */
>>>>>>>    # define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
>>>>>>> -# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
>>>>>>> +# define HVMOP_IO_RANGE_MMIO   1 /* MMIO range */
>>>>>>>    # define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
>>>>>>> +# define HVMOP_IO_RANGE_MEMORY 3 /* MEMORY range */
>>>>>> This looks problematic. Maybe you can get away with this because
>>>>>> this is a toolstack-only interface?
>>>>>>
>>>>> Indeed, the old name is a bit problematic. Presumably re-use like
>>>>> this would require an interface version change and some if-defery.
>>>>
>>>> I assume it is an interface used by qemu, so this patch in its
>>>> current state will break things.
>>>
>>> If QEMU were re-built against the updated header, yes.
>>
>> Thank you, Andrew & Paul. :)
>> Are you referring to the xen_map/unmap_memory_section routines in
>> QEMU? I noticed they are called by xen_region_add/del in QEMU. And I
>> wonder, are these two routines used to track a memory region or an
>> MMIO region? If the region to be added is MMIO, I guess the new
>> interface should be fine, but if it is a memory region to be added
>> into the ioreq server, maybe a patch in QEMU is necessary (e.g. some
>> if-defery for the new interface version you suggested)?
>>
>
> I was forgetting that QEMU uses libxenctrl, so your change to xc_hvm_map_io_range_to_ioreq_server() means everything will continue to work as before. There is still the (admittedly academic) problem, though, of some unknown emulator out there that rolls its own hypercalls and, after blindly updating to the new version of hvm_op.h, suddenly starts registering memory ranges rather than MMIO ranges. I would leave the existing definitions as-is and come up with a new name.

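I see the hazard now. Using the values from the v3 hunk quoted above
(PORT=0, MMIO=1, PCI=2, MEMORY=3), an out-of-tree emulator that
open-codes the hypercall would be affected roughly like this (just an
illustration, not real code from any tree):

#include <stdint.h>

/* v3 values, as in the hunk quoted above. */
#define HVMOP_IO_RANGE_MMIO   1  /* what used to be called _MEMORY */
#define HVMOP_IO_RANGE_MEMORY 3  /* now means write-protected guest RAM */

/* An emulator that keeps writing
 *     arg.type = HVMOP_IO_RANGE_MEMORY;
 * sent 1 (MMIO) before rebuilding against the new header, but sends 3
 * (guest RAM) afterwards, with no compile-time diagnostic. */
static const uint32_t stale_type = HVMOP_IO_RANGE_MEMORY;
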
So, how about we keep the HVMOP_IO_RANGE_MEMORY name for MMIO, and use
a new one, say HVMOP_IO_RANGE_WP_MEM, for write-protected RAM only? :)
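
Something along these lines, perhaps (just a sketch against the current
hvm_op.h layout; the new name and value are of course up for discussion):

# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range (name and value unchanged) */
# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
# define HVMOP_IO_RANGE_WP_MEM 3 /* write-protected guest RAM (new) */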

Thanks
Yu

>
>    Paul
>
>> Thanks
>> Yu
>>>
>>>     Paul
>>>
>>>>
>>>> ~Andrew
>>>
>


Thread overview: 12+ messages
2015-08-10  3:33 [PATCH v3 0/2] Refactor ioreq server for better performance Yu Zhang
2015-08-10  3:33 ` [PATCH v3 1/2] Differentiate IO/mem resources tracked by ioreq server Yu Zhang
2015-08-10  8:26   ` Wei Liu
2015-08-10  8:33     ` Paul Durrant
2015-08-10 10:56       ` Andrew Cooper
2015-08-10 10:57         ` Paul Durrant
2015-08-11  7:56           ` Yu, Zhang
2015-08-11  8:25             ` Paul Durrant
2015-08-11  8:40               ` Yu, Zhang [this message]
2015-08-11  8:55                 ` Paul Durrant
2015-08-11  7:55     ` Yu, Zhang
2015-08-10  3:33 ` [PATCH v3 2/2] Refactor rangeset structure for better performance Yu Zhang
