From: Paul Durrant
To: Jan Beulich
Cc: Kevin Tian, Wei Liu, Ian Campbell, Andrew Cooper, Zhang Yu,
 xen-devel@lists.xen.org, Stefano Stabellini, zhiyuan.lv@intel.com,
 Ian Jackson, Keir (Xen.org)
Subject: Re: [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
Date: Wed, 3 Feb 2016 12:50:08 +0000
Message-ID: <7316ea5cb41543d69d7727721368e3c8@AMSPEX02CL03.citrite.net>
In-Reply-To: <56B2023E02000078000CE01A@prv-mh.provo.novell.com>
References: <1454064314-7799-1-git-send-email-yu.c.zhang@linux.intel.com>
 <1454064314-7799-4-git-send-email-yu.c.zhang@linux.intel.com>
 <56ABA26C02000078000CC7CD@prv-mh.provo.novell.com>
 <56ACCAD5.8030503@linux.intel.com>
 <56AF1CE302000078000CCBBD@prv-mh.provo.novell.com>
 <20160201120244.GT25660@citrix.com>
 <56AF5A6402000078000CCEB2@prv-mh.provo.novell.com>
 <20160201124959.GX25660@citrix.com>
 <56AF669F02000078000CCF8D@prv-mh.provo.novell.com>
 <56AF7657.1000200@linux.intel.com>
 <56AF85A3.6010803@linux.intel.com>
 <56AF977602000078000CD1BE@prv-mh.provo.novell.com>
 <56AF89BA.4060105@linux.intel.com>
 <22191.36960.564997.621538@mariner.uk.xensource.com>
 <56B093B802000078000CD5FE@prv-mh.provo.novell.com>
 <56B08B51.2030108@linux.intel.com>
 <56B09D2B02000078000CD696@prv-mh.provo.novell.com>
 <56B0B6D0.60303@linux.intel.com>
 <56B0CE8102000078000CD8D4@prv-mh.provo.novell.com>
 <56B0C485.8070206@linux.intel.com>
 <56B0D7A202000078000CD989@prv-mh.provo.novell.com>
 <56B1A7C9.2010708@linux.intel.com>
 <56B1C93002000078000CDD4B@prv-mh.provo.novell.com>
 <6b6d0558d3c24f9483ad41d88ced9837@AMSPEX02CL03.citrite.net>
 <56B2023E02000078000CE01A@prv-mh.provo.novell.com>
List-Id: xen-devel@lists.xenproject.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 03 February 2016 12:36
> To: Paul Durrant
> Cc: Andrew Cooper; Ian Campbell; Ian Jackson; Stefano Stabellini; Wei Liu;
> Kevin Tian; zhiyuan.lv@intel.com; Zhang Yu; xen-devel@lists.xen.org; Keir
> (Xen.org)
> Subject: RE: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> max_wp_ram_ranges.
>
> >>> On 03.02.16 at 13:20, wrote:
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 03 February 2016 08:33
> >> To: Zhang Yu
> >> Cc: Andrew Cooper; Ian Campbell; Paul Durrant; Wei Liu; Ian Jackson;
> >> Stefano Stabellini; Kevin Tian; zhiyuan.lv@intel.com;
> >> xen-devel@lists.xen.org; Keir (Xen.org)
> >> Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> >> max_wp_ram_ranges.
> >>
> >> >>> On 03.02.16 at 08:10, wrote:
> >> > On 2/2/2016 11:21 PM, Jan Beulich wrote:
> >> >>>>> On 02.02.16 at 16:00, wrote:
> >> >>> The limit of 4G is there to avoid data loss in the uint64 to uint32
> >> >>> assignment. And I can accept the 8K limit for XenGT in practice.
> >> >>> After all, it is vGPU page tables we are trying to trap and emulate,
> >> >>> not normal page frames.
> >> >>>
> >> >>> And I guess the reason that one domain exhausting Xen's memory can
> >> >>> affect another domain is because rangeset uses the Xen heap, instead
> >> >>> of per-domain memory.
> >> >>> So what about we use an 8K limit for now for XenGT, and in the
> >> >>> future, if a per-domain memory allocation solution for rangeset is
> >> >>> ready, we will still need to limit the rangeset size. Does this
> >> >>> sound more acceptable?
> >> >>
> >> >> The lower the limit the better (but no matter how low the limit
> >> >> is, it won't make this a pretty thing). Anyway I'd still like to wait
> >> >> for what Ian may further say on this.
> >> >>
> >> > Hi Jan, I just had a discussion with my colleague. We believe 8K can
> >> > serve as the upper limit for the write-protected ram ranges. If, in
> >> > the future, the number of vGPU page tables exceeds this limit, we
> >> > will modify our back-end device model to find a trade-off, instead of
> >> > extending this limit. If you can accept this value as the upper bound
> >> > of the rangeset, maybe we do not need to add any toolstack parameters,
> >> > but can define a MAX_NR_WR_RAM_RANGES for the write-protected ram
> >> > rangeset. As to the other rangesets, we keep their limit at 256. Does
> >> > this sound OK? :)
> >>
> >> I'm getting the impression that we're moving in circles. A blanket
> >> limit above the 256 one for all domains is _not_ going to be
> >> acceptable; going to 8k will still need host admin consent. With
> >> your rangeset performance improvement patch, each range is
> >> going to be tracked by a 40-byte structure (up from 32), which
> >> already means an overhead increase for all the other ranges. 8k
> >> of wp ranges implies an overhead beyond 448k (including the
> >> xmalloc() overhead), which is not _that_ much, but also not
> >> negligible.
> >>
> >
> > ... which means we are still going to need a toolstack parameter to set
> > the limit. We already have a parameter for VRAM size, so is having a
> > parameter for max. GTT shadow ranges such a bad thing?
>
> It's workable, but not nice (see also Ian's earlier response).
>
> > Is the fact that the memory comes from xenheap rather than domheap the
> > real problem?
>
> Not the primary one, since except on huge memory machines
> both heaps are identical. To me the primary one is the significantly
> higher resource consumption in the first place (I'm not going to
> repeat what I've written in already way too many replies before).

Ok. Well, the only way round tracking specific ranges for emulation (and
consequently suffering the overhead) is tracking by type. For XenGT I guess
it would be possible to live with a situation where a single ioreq server
can register all wp mem emulations for a given VM.

I can't say I particularly like that way of doing things, but if it's the
only way forward then I guess we may have to live with it.

  Paul

>
> Jan
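
For reference, Jan's "beyond 448k" figure works out roughly as below. The
40-byte struct range size and the 8K range count come from the thread; the
~16 bytes of per-allocation xmalloc() bookkeeping assumed here is only an
illustrative guess, not a number taken from the Xen sources.

  #include <stdio.h>

  int main(void)
  {
      /* Figures from the thread: up to 8K write-protected ranges, each
       * tracked by a 40-byte struct range (up from 32) once the rangeset
       * performance patch is applied. */
      unsigned long nr_ranges      = 8192;
      unsigned long struct_size    = 40;
      /* Assumed per-allocation xmalloc() bookkeeping/rounding -- an
       * illustrative guess only. */
      unsigned long alloc_overhead = 16;

      unsigned long total = nr_ranges * (struct_size + alloc_overhead);

      printf("%lu bytes (~%lu KiB) of xenheap per domain for wp ranges\n",
             total, total / 1024);   /* 458752 bytes = 448 KiB */
      return 0;
  }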
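
Paul's comparison is with the existing VRAM knob in the guest config. A
hypothetical xl.cfg fragment under the proposed series might look as
follows; videoram is an existing xl option, whereas the max_wp_ram_ranges
line only takes its name from the patch subject, and its exact syntax and
default are assumptions here.

  # Existing per-guest knob Paul refers to: VRAM size (in MB) for the
  # emulated graphics device.
  videoram = 16

  # Knob proposed by this series; accepted range and default are whatever
  # the final patch defines.
  max_wp_ram_ranges = 8192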
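
A minimal sketch of the trade-off behind tracking by type rather than by
range, assuming nothing about the actual Xen data structures; the names
below are illustrative only and are not the rangeset or ioreq-server code.

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  /* Per-range model: every write-protected gfn range the device model
   * registers costs the hypervisor a tracking structure (the real
   * struct range discussed above is ~40 bytes; this one is smaller and
   * purely illustrative). */
  struct wp_range {
      uint64_t start_gfn;
      uint64_t end_gfn;
      struct wp_range *next;
  };

  /* Per-type model: a single per-domain record saying "writes to
   * write-protected RAM go to this ioreq server".  O(1) memory and no
   * per-page-table bookkeeping, but the registering ioreq server must
   * claim *all* such emulation for the VM. */
  struct wp_by_type {
      bool claimed;
      uint32_t owning_ioreq_server;
  };

  int main(void)
  {
      printf("per-range bookkeeping for 8192 ranges: ~%zu bytes\n",
             (size_t)8192 * sizeof(struct wp_range));
      printf("per-type bookkeeping: %zu bytes\n", sizeof(struct wp_by_type));
      return 0;
  }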