xen-devel.lists.xenproject.org archive mirror
From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Lan Tianyu' <tianyu.lan@intel.com>,
	Wei Liu <wei.liu2@citrix.com>, Kevin Tian <kevin.tian@intel.com>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [RFC PATCH 5/23] Tools/libxc: Add viommu operations in libxc
Date: Tue, 18 Apr 2017 14:15:29 +0000	[thread overview]
Message-ID: <dd8f8be83d80418db766cb0557b4f9a7@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <6b86c767-0dc3-7b69-6dcd-829e79815d0e@intel.com>

> -----Original Message-----
[snip]
> >
> > Not quite sure I understand this. The QEMU device model does not 'pass
> DMA requests' as such, it maps guest RAM and reads or writes to emulate
> DMA, right? So, what's needed is a mechanism to map guest RAM by 'bus
> address'... i.e. an address that will need to be translated through the
> vIOMMU mappings. This is just an evolution of the current 'priv mapping'
> operations that allow guest RAM to be mapped by guest physical address. So
> you don't need a vIOMMU 'device model' as such, do you?
> 
> 
> Guests may also enable the DMA protection mechanism in the Linux kernel,
> which limits the address space of emulated devices; this depends on the
> vIOMMU's DMA translation function. The vIOMMU's MMIO emulation part is in
> the Xen hypervisor, and the guest's shadow IO page table will only be in
> the hypervisor. To translate an emulated device's DMA request, it's
> necessary to pass the DMA request to the hypervisor.
> 
> 

What do you mean by DMA request though? Are you intending to make some form of hypercall to read or write guest memory? If so, then why not introduce a call to map the guest memory (via bus address) and read or write it directly?

> So far we don't support DMA translation and so don't pass DMA requests.
> 

Indeed. We map guest memory using guest physical address because, without an emulated IOMMU, guest physical address === bus address. This is why I suggest a new mapping operation rather than 'passing a DMA request' to the hypervisor.

> Mapping/unmapping guest memory is already supported in QEMU, and works
> just like emulated device models' access to guest memory. QEMU also
> provides a vIOMMU hook to receive a DMA request and return the target
> guest address. The vIOMMU framework will read/write the target address.

That's the part I don't get... why have the vIOMMU code do the reads and writes? Why not have it provide a mapping function and then have the device model in QEMU read and write directly as it does now?

> What we need to do is to translate the DMA request to a target address
> according to the shadow IO page table in the hypervisor.
> 

Yes, so the mapping has to be done by the hypervisor (as is the case for priv mapping or grant mapping) but the memory accesses themselves can be done directly by the device model in QEMU.

> 
> 
> >
> >> QEMU is required to use the DMOP hypercall and the
> >> toolstack may use the domctl hypercall. vIOMMU hypercalls will be divided
> >> into two parts.
> >>
> >> Domctl:
> >> 	create, destroy and query.
> >> DMOP:
> >> 	vDev's DMA related operations.
> >
> > Yes, the mapping/unmapping operations should be DMOPs and IMO
> should be designed such that they can be unified with replacements for
> current 'priv map' ops such that QEMU can use the same function call, but
> with different address space identifiers (i.e. bus address, guest physical
> address, etc.). BTW, I say 'etc.' because we should also consider mapping the
> ioreq pages from Xen using the same call - with a dedicated address space
> identifier - as well.
> >
> 
> So you agree to divide vIOMMU's hypercalls into two parts(DMOP and
> Domctl), right?
> 

Yes, I agree with the logic of the split.

  Cheers,

   Paul

> 
> >   Cheers,
> >
> >     Paul
> >
> >>
> >> Is this OK?
> >>
> >>>
> >>> Thanks,
> >>> Chao
> >>>
> >>>>
> >>>>  Paul
> >>>>
> >>>>>>> The following toolstack code is to add XEN_DMOP_viommu_XXX
> >> syscalls:
> >>>>>>
> >>>>>> Hypercalls, not syscalls.
> >>>>>>
> >>>>>>>  - query capabilities of vIOMMU emulated by Xen
> >>>>>>>  - create vIOMMU in Xen hypervisor with base address, capability
> >>>>>>>  - destroy vIOMMU specified by viommu_id
> >>>>>>>
> >>>>>>> Signed-off-by: Chao Gao <chao.gao@intel.com>
> >>>>>>> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> >>>>>>> ---
> >>>>>>>  tools/libs/devicemodel/core.c                   | 69
> >>>>> +++++++++++++++++++++++++
> >>>>>>>  tools/libs/devicemodel/include/xendevicemodel.h | 35
> >> +++++++++++++
> >>>>>>>  tools/libs/devicemodel/libxendevicemodel.map    |  3 ++
> >>>>>>>  tools/libxc/include/xenctrl_compat.h            |  5 ++
> >>>>>>>  tools/libxc/xc_devicemodel_compat.c             | 18 +++++++
> >>>>>>>  5 files changed, 130 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/tools/libs/devicemodel/core.c
> >> b/tools/libs/devicemodel/core.c
> >>>>>>> index a85cb49..aee1150 100644
> >>>>>>> --- a/tools/libs/devicemodel/core.c
> >>>>>>> +++ b/tools/libs/devicemodel/core.c
> >>>>>>
> >>>>>> Bear in mind that this library is stable, so whatever ends up here
> >>>>>> can't change in the future.
> >>>>>>
> >>>>>> This is not saying the following code is problematic. It is just a
> >>>>>> general FYI.
> >>>>>>
> >>>>>> Obviously the toolstack side is going to follow the hypervisor
> >>>>>> interface, so I will do a detailed review later.
> >>>>>
> >>>>> Sure. If the hypervisor interface settles down, we can inform you.
> >>>>>
> >>>>>>
> >>>>>>> +int xendevicemodel_viommu_destroy(
> >>>>>>> +    xendevicemodel_handle *dmod, domid_t dom, uint32_t
> >> viommu_id);
> >>>>>>>  #endif /* __XEN_TOOLS__ */
> >>>>>>>
> >>>>>>>  #endif /* XENDEVICEMODEL_H */
> >>>>>>> diff --git a/tools/libs/devicemodel/libxendevicemodel.map
> >>>>> b/tools/libs/devicemodel/libxendevicemodel.map
> >>>>>>> index 45c773e..c2e0968 100644
> >>>>>>> --- a/tools/libs/devicemodel/libxendevicemodel.map
> >>>>>>> +++ b/tools/libs/devicemodel/libxendevicemodel.map
> >>>>>>> @@ -17,6 +17,9 @@ VERS_1.0 {
> >>>>>>>  		xendevicemodel_modified_memory;
> >>>>>>>  		xendevicemodel_set_mem_type;
> >>>>>>>  		xendevicemodel_inject_event;
> >>>>>>> +		xendevicemodel_viommu_query_cap;
> >>>>>>> +		xendevicemodel_viommu_create;
> >>>>>>> +		xendevicemodel_viommu_destroy;
> >>>>>>>  		xendevicemodel_restrict;
> >>>>>>>  		xendevicemodel_close;
> >>>>>>
> >>>>>> I suppose this series is going to miss 4.9.
> >>>>>>
> >>>>>> Please add these functions to VERS_1.1.
> >>>>>
> >>>>> Yes. We will fix this.
> >>>>>
> >>>>>>
> >>>>>>>  	local: *; /* Do not expose anything by default */
> >>>>>>> diff --git a/tools/libxc/include/xenctrl_compat.h
> >>>>> b/tools/libxc/include/xenctrl_compat.h
> >>>>>>> index 040e7b2..315c45d 100644
> >>>>>>> --- a/tools/libxc/include/xenctrl_compat.h
> >>>>>>> +++ b/tools/libxc/include/xenctrl_compat.h
> >>>>>>> @@ -164,6 +164,11 @@ int xc_hvm_set_mem_type(
> >>>>>>>  int xc_hvm_inject_trap(
> >>>>>>>      xc_interface *xch, domid_t domid, int vcpu, uint8_t vector,
> >>>>>>>      uint8_t type, uint32_t error_code, uint8_t insn_len, uint64_t
> cr2);
> >>>>>>> +int xc_viommu_query_cap(xc_interface *xch, domid_t dom,
> >> uint64_t
> >>>>> *cap);
> >>>>>>> +int xc_viommu_create(
> >>>>>>> +    xc_interface *xch, domid_t dom, uint64_t base_addr, uint64_t
> >> cap,
> >>>>>>> +    uint32_t *viommu_id);
> >>>>>>> +int xc_viommu_destroy(xc_interface *xch, domid_t dom,
> uint32_t
> >>>>> viommu_id);
> >>>>>>>
> >>>>>>>  #endif /* XC_WANT_COMPAT_DEVICEMODEL_API */
> >>>>>>>
> >>>>>>> diff --git a/tools/libxc/xc_devicemodel_compat.c
> >>>>> b/tools/libxc/xc_devicemodel_compat.c
> >>>>>>> index e4edeea..62f703a 100644
> >>>>>>> --- a/tools/libxc/xc_devicemodel_compat.c
> >>>>>>> +++ b/tools/libxc/xc_devicemodel_compat.c
> >>>>>>
> >>>>>> I don't think you need to provide compat wrappers for them. They
> are
> >> new
> >>>>>> APIs.
> >>>>>
> >>>>> OK. Got it.
> >>>>>
> >>>>> Thanks,
> >>>>> Chao
> >>>>>>
> >>>>>> Wei.
> >>>>>
> >>>>> _______________________________________________
> >>>>> Xen-devel mailing list
> >>>>> Xen-devel@lists.xen.org
> >>>>> https://lists.xen.org/xen-devel
> 
> 
> --
> Best regards
> Tianyu Lan
