From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paul Durrant
Subject: Re: [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
Date: Thu, 4 Feb 2016 13:47:57 +0000
Message-ID: <347fa8fe5fdc4ba8adb6bc4502926838@AMSPEX02CL03.citrite.net>
References: <1454064314-7799-1-git-send-email-yu.c.zhang@linux.intel.com>
 <56B0CE8102000078000CD8D4@prv-mh.provo.novell.com>
 <56B0C485.8070206@linux.intel.com>
 <56B0D7A202000078000CD989@prv-mh.provo.novell.com>
 <56B1A7C9.2010708@linux.intel.com>
 <56B1C93002000078000CDD4B@prv-mh.provo.novell.com>
 <6b6d0558d3c24f9483ad41d88ced9837@AMSPEX02CL03.citrite.net>
 <56B2023E02000078000CE01A@prv-mh.provo.novell.com>
 <7316ea5cb41543d69d7727721368e3c8@AMSPEX02CL03.citrite.net>
 <56B207EA02000078000CE0A8@prv-mh.provo.novell.com>
 <9467b97e15bc4cb1b8d6c948ad4fc926@AMSPEX02CL03.citrite.net>
 <56B20BFA02000078000CE0E7@prv-mh.provo.novell.com>
 <621ce95774ac4742b96ed9d504c08670@AMSPEX02CL03.citrite.net>
 <22194.4639.132613.604758@mariner.uk.xensource.com>
 <56B310D7.7010506@linux.intel.com>
 <44e528cd11744242961d46c6f87d2bb9@AMSPEX02CL03.citrite.net>
 <56B31C1C.3000907@linux.intel.com>
 <56B3373B02000078000CE86C@prv-mh.provo.novell.com>
 <22195.21299.624759.118961@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <22195.21299.624759.118961@mariner.uk.xensource.com>
Content-Language: en-US
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Ian Jackson, Jan Beulich
Cc: Kevin Tian, Wei Liu, Ian Campbell, Andrew Cooper, George Dunlap,
 "xen-devel@lists.xen.org", Zhang Yu, "zhiyuan.lv@intel.com",
 Stefano Stabellini, "Keir (Xen.org)"
List-Id: xen-devel@lists.xenproject.org

> -----Original Message-----
> From: Ian Jackson [mailto:Ian.Jackson@eu.citrix.com]
> Sent: 04 February 2016 13:34
> To: Jan Beulich
> Cc: Zhang Yu; Andrew Cooper; George Dunlap; Ian Campbell;
> Paul Durrant;
> Stefano Stabellini; Wei Liu; Kevin Tian; zhiyuan.lv@intel.com; xen-
> devel@lists.xen.org; Keir (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> max_wp_ram_ranges.
>
> Jan Beulich writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce
> parameter max_wp_ram_ranges."):
> > On 04.02.16 at 10:38, wrote:
> > > So another question is, if the value of this limit really matters,
> > > will a lower one be more acceptable (the current 256 being not
> > > enough)?
> >
> > If you've carefully read George's replies, [...]
>
> Thanks to George for the very clear explanation, and also to him for
> an illuminating in-person discussion.
>
> It is disturbing that, as a result of me as a tools maintainer asking
> questions about what seems to me to be a troublesome user-visible
> control setting in libxl, we are now apparently revisiting lower
> layers of the hypervisor design, which have already been committed.
>
> While I find George's line of argument convincing, neither I nor
> George is a maintainer of the relevant hypervisor code. I am not
> going to insist that anything in the hypervisor is done differently,
> and I am not trying to use my tools maintainer position to that end.
>
> Clearly there has been a failure of our workflow to consider and
> review everything properly together. But given where we are now, I
> think that this discussion about hypervisor internals is probably a
> distraction.
>
> Let me pose again some questions that I still don't have clear
> answers to:
>
> * Is it possible for libxl to somehow tell from the rest of the
> configuration that this larger limit should be applied?
>
> AFAICT there is nothing in libxl directly involving vgpu. How can
> libxl be used to create a guest with vgpu enabled? I had thought
> that this was done merely with the existing PCI passthrough
> configuration, but it now seems that somehow a second device model
> would have to be started. libxl doesn't have code to do that.
>
AIUI if the setting of the increased limit is tied to provisioning a
gvt-g instance for a VM then I don't think there needs to be extra
information in the VM config. This seems like the most sensible thing
to do.

> * In the configurations where a larger number is needed, what larger
> limit is appropriate? How should it be calculated?
>
> AFAICT from the discussion, 8192 is a reasonable bet. Is everyone
> happy with it?
>
> Ian.
>
> PS: Earlier I asked:
>
> * How do we know that this does not itself give an opportunity for
> hypervisor resource exhaustion attacks by guests? (Note: if it
> _does_ give such an opportunity, this should be mentioned more
> clearly in the documentation.)
>
> * If we are talking about mmio ranges for ioreq servers, why do
> guests which do not use this feature have the ability to create
> them at all?
>
> I now understand that these mmio ranges are created by the device
> model. Of course the device model needs to be able to create mmio
> ranges for the guest. And since they consume hypervisor resources,
> the number of these must be limited (device models not necessarily
> being trusted).

...but I think there is still an open question as to whether the
toolstack is allowed to set that limit for a VM or not. IMO the
toolstack should be allowed to set that limit when creating a domain.

  Paul
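To make the exhaustion concern above concrete: the point of the limit is that each range a device model registers consumes hypervisor memory, so the per-ioreq-server count must be capped and further registrations refused once the cap is hit. Below is a minimal Python sketch of that accounting, purely illustrative and not Xen code; the class and method names are invented, and the only figures taken from this thread are the current default of 256 and the proposed 8192.

```python
# Illustrative sketch (not Xen code) of capped per-server range accounting:
# an untrusted device model cannot register unbounded ranges, because the
# hypervisor refuses additions once the per-server cap is reached.

class BoundedRangeSet:
    """A set of [start, end] ranges with a hard cap on how many may exist."""

    def __init__(self, max_ranges):
        self.max_ranges = max_ranges  # e.g. 256 default, 8192 proposed
        self.ranges = []

    def add(self, start, end):
        """Register a range; refuse (like -ENOSPC) once the cap is hit."""
        if len(self.ranges) >= self.max_ranges:
            return False  # cap reached: hypervisor memory stays bounded
        self.ranges.append((start, end))
        return True

# A device model attempting to register 10000 page-sized ranges is
# stopped at the cap rather than exhausting hypervisor memory.
server = BoundedRangeSet(256)
accepted = sum(server.add(i * 4096, i * 4096 + 4095) for i in range(10000))
# accepted == 256; the remaining 9744 attempts are refused
```

The toolstack question in the mail then amounts to who gets to choose `max_ranges` at domain creation time.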