From: Paul Durrant <Paul.Durrant@citrix.com>
To: Kevin Tian <kevin.tian@intel.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Lan, Tianyu" <tianyu.lan@intel.com>,
"jbeulich@suse.com" <jbeulich@suse.com>,
"sstabellini@kernel.org" <sstabellini@kernel.org>,
Ian Jackson <Ian.Jackson@citrix.com>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
Eddie Dong <eddie.dong@intel.com>,
"Nakajima, Jun" <jun.nakajima@intel.com>,
"yang.zhang.wz@gmail.com" <yang.zhang.wz@gmail.com>,
Anthony Perard <anthony.perard@citrix.com>
Subject: Re: Discussion about virtual iommu support for Xen guest
Date: Fri, 27 May 2016 08:46:57 +0000
Message-ID: <0c833fbe809c4f34bf9defd69144a37b@AMSPEX02CL03.citrite.net>
In-Reply-To: <AADFC41AFE54684AB9EE6CBC0274A5D15F87B5C7@SHSMSX101.ccr.corp.intel.com>
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Tian, Kevin
> Sent: 27 May 2016 09:35
> To: Andrew Cooper; Lan, Tianyu; jbeulich@suse.com; sstabellini@kernel.org;
> Ian Jackson; xen-devel@lists.xensource.com; Eddie Dong; Nakajima, Jun;
> yang.zhang.wz@gmail.com; Anthony Perard
> Subject: Re: [Xen-devel] Discussion about virtual iommu support for Xen
> guest
>
> > From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> > Sent: Thursday, May 26, 2016 7:36 PM
> >
> > On 26/05/16 09:29, Lan Tianyu wrote:
> > > Hi All:
> > > We are trying to add virtual iommu support for Xen guests; several
> > > features are currently blocked on it.
> > >
> > > Motivation:
> > > -----------------------
> > > 1) Add SVM (Shared Virtual Memory) support for Xen guests.
> > > Supporting iGFX pass-through for SVM-enabled devices requires virtual
> > > iommu support to emulate the related registers and to intercept/handle
> > > the guest's SVM configuration in the VMM.
> > >
> > > 2) Increase the maximum number of vcpus supported in one VM.
> > >
> > > So far, the maximum number of vcpus for a Xen HVM guest is 128. HPC
> > > (High Performance Computing) cloud workloads require more vcpus in a
> > > single VM. The usage model is to create just one VM on a machine,
> > > with as many vcpus as there are logical cpus on the host, and to pin
> > > each vcpu to a logical cpu in order to get good compute performance.
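As a concrete illustration of that usage model, a domain config along
the following lines would do it. This is only a sketch: the guest name
and cpu numbers are assumptions, and exact 1:1 pinning can also be
applied after boot, one vcpu at a time, with "xl vcpu-pin".

    # hypothetical xl domain config: one vcpu per host logical cpu
    name  = "hpc-guest"
    vcpus = 288
    cpus  = "0-287"   # hard affinity: confine the vcpus to these pcpus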
> > >
> > > Intel Xeon Phi KNL (Knights Landing) is dedicated to the HPC market
> > > and supports 288 logical cpus, so we hope a VM can support 288 vcpus
> > > to meet the HPC requirement.
> > >
> > > The current Linux kernel requires IR (interrupt remapping) when the
> > > maximum APIC ID is > 255, because without IR interrupts can only be
> > > delivered to cpus with APIC IDs 0~255. IR in a VM relies on virtual
> > > iommu support.
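For reference, a rough sketch of why delivery stops at APIC ID 255: the
xAPIC MSI address format only carries an 8-bit destination ID (bits
19:12 of the address). This is my own illustration rather than Xen or
Linux code, and the identifiers are invented.

    #include <stdint.h>
    #include <stdio.h>

    #define MSI_ADDR_BASE      0xfee00000u
    #define MSI_DEST_ID_SHIFT  12
    #define MSI_DEST_ID_MASK   0xffu  /* only 8 bits of destination ID */

    static uint32_t msi_addr_for_cpu(uint32_t apic_id)
    {
        /* APIC IDs above 255 don't fit and get truncated, so the
         * interrupt would land on the wrong cpu. With IR the address
         * carries a remapping-table handle instead of the APIC ID. */
        return MSI_ADDR_BASE |
               ((apic_id & MSI_DEST_ID_MASK) << MSI_DEST_ID_SHIFT);
    }

    int main(void)
    {
        printf("cpu 255 -> %#x\n", msi_addr_for_cpu(255)); /* fine */
        printf("cpu 287 -> %#x\n", msi_addr_for_cpu(287)); /* wraps to 31 */
        return 0;
    }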
> > >
> > > KVM Virtual iommu support status
> > > ------------------------
> > > Currently, Qemu has a basic virtual iommu that does address
> > > translation for virtual devices, and it only works with the Q35
> > > machine type. KVM reuses it, and Red Hat is adding IR support to
> > > allow more than 255 vcpus.
> > >
> > > How to add virtual iommu for Xen?
> > > -------------------------
> > > The first idea that came to mind is to reuse the Qemu virtual iommu,
> > > but Xen does not support Q35 so far, and enabling Q35 for Xen does
> > > not seem to be a short-term task. Anthony did some related work on
> > > this before.
> > >
> > > I'd like to see your comments about how to implement virtual iommu
> > > for Xen.
> > >
> > > 1) Reuse Qemu virtual iommu or write a separate one for Xen?
> > > 2) Enable Q35 for Xen to reuse Qemu virtual iommu?
> > >
> > > Your comments are much appreciated. Thanks a lot.
> >
> > To be viable going forward, any solution must work with PVH/HVMLite as
> > well as with HVM. This alone rules out qemu as a viable option.
>
> KVM wants things done in Qemu as much as possible, while Xen may now
> move more things into the hypervisor instead for HVMLite. The end
> result is that many new platform features from IHVs will require
> double the effort in the future (nvdimm is another example), which
> means a much longer enabling path to bring those features to customers.
>
> I can understand the importance of covering HVMLite in the Xen
> community, but is it really the only factor ruling out the Qemu option?
>
> >
> > From a design point of view, having Xen need to delegate to qemu to
> > inject an interrupt into a guest seems backwards.
> >
> >
> > A whole lot of this would be easier to reason about if/when we get a
> > basic root port implementation in Xen, which is necessary for HVMLite,
> > and which will make the interaction with qemu rather cleaner. It is
> > probably worth coordinating work in this area.
>
> Would it make Xen too complex? Qemu also has its own root port
> implementation, so you would need some tricks within Qemu to avoid
> using its own root port and instead register with the Xen root port.
> Why would such a move be cleaner?
>
Upstream QEMU already registers PCI BDFs with Xen, and Xen already
handles cf8 and cfc accesses (turning them into single config space
read/write ioreqs). So it really isn't much of a leap to put the root
port implementation in Xen.
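To make the cf8/cfc mechanics concrete, here is a minimal sketch of how
a cf8 write decodes into a BDF plus register offset. It illustrates the
standard PCI config address layout rather than the actual Xen code, and
the names are invented.

    #include <stdint.h>

    struct pci_cfg_addr {
        uint16_t bdf;  /* bus[7:0] dev[4:0] func[2:0], packed */
        uint16_t reg;  /* dword-aligned config space offset */
    };

    static struct pci_cfg_addr decode_cf8(uint32_t cf8)
    {
        /* cf8 layout: bit 31 enable, bits 23:16 bus, 15:11 device,
         * 10:8 function, 7:2 register. The decoded BDF is what lets
         * the hypervisor route the following cfc access as a single
         * config-space ioreq to whichever emulator owns that device. */
        struct pci_cfg_addr a;

        a.bdf = (uint16_t)((cf8 >> 8) & 0xffff);
        a.reg = (uint16_t)(cf8 & 0xfc);
        return a;
    }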
Paul
> >
> >
> > As for the individual issue of 288-vcpu support, there are already
> > issues with 64-vcpu guests at the moment. While it is certainly fine
> > to remove the hard limit at 255 vcpus, there is a lot of other work
> > required to even get 128-vcpu guests stable.
> >
>
> Thanks
> Kevin