From: "Yu, Zhang"
Subject: Re: [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
Date: Tue, 2 Feb 2016 16:04:14 +0800
Message-ID: <56B062FE.4030907@linux.intel.com>
References: <1454064314-7799-1-git-send-email-yu.c.zhang@linux.intel.com>
 <1454064314-7799-4-git-send-email-yu.c.zhang@linux.intel.com>
 <56ABA26C02000078000CC7CD@prv-mh.provo.novell.com>
 <56ACCAD5.8030503@linux.intel.com>
 <56AF1CE302000078000CCBBD@prv-mh.provo.novell.com>
 <20160201120244.GT25660@citrix.com>
 <56AF5A6402000078000CCEB2@prv-mh.provo.novell.com>
 <20160201124959.GX25660@citrix.com>
 <56AF669F02000078000CCF8D@prv-mh.provo.novell.com>
 <56AF7657.1000200@linux.intel.com>
 <56AF85A3.6010803@linux.intel.com>
 <56AF977602000078000CD1BE@prv-mh.provo.novell.com>
 <56AF89BA.4060105@linux.intel.com>
 <22191.36960.564997.621538@mariner.uk.xensource.com>
In-Reply-To: <22191.36960.564997.621538@mariner.uk.xensource.com>
To: Ian Jackson
Cc: kevin.tian@intel.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
 stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
 xen-devel@lists.xen.org, Paul.Durrant@citrix.com, zhiyuan.lv@intel.com,
 Jan Beulich, keir@xen.org
List-Id: xen-devel@lists.xenproject.org

Thanks for your reply, Ian.

On 2/2/2016 1:05 AM, Ian Jackson wrote:
> Yu, Zhang writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges."):
>> On 2/2/2016 12:35 AM, Jan Beulich wrote:
>>> On 01.02.16 at 17:19, wrote:
>>>> So, we also need to validate this param in hvm_allow_set_param,
>>>> even though hvm_allow_set_param currently performs no validation
>>>> for the other parameters. We need to do this for the new ones.
>>>> Is this understanding correct?
>>>
>>> Yes.
>>>
>>>> Another question is: as to the tool stack side, do you think
>>>> an error message would suffice? Shouldn't xl be terminated?
>>>
>>> I have no idea what consistent behavior in such a case would
>>> be - I'll defer input on this to the tool stack maintainers.
>>
>> Thank you.
>> Wei, which one do you prefer?
>
> I think that arrangements should be made for the hypercall failure to
> be properly reported to the caller, and properly logged.
>
> I don't think it is desirable to duplicate the sanity check in
> xl/libxl/libxc. That would simply result in there being two limits to
> update.
>

Sorry, I do not follow. What does "there being two limits to update"
mean?

> I have to say, though, that the situation with this parameter seems
> quite unsatisfactory. It seems to be a kind of bodge.
>

By "situation with this parameter", do you mean:
a> the introduction of this parameter in the tool stack, or
b> the sanitizing of this parameter? (In fact, I'd prefer not to treat
the check of this parameter as sanitizing, because it only checks the
input against 4G, to avoid data loss in the uint64 to uint32 assignment
in hvm_ioreq_server_alloc_rangesets.)

> The changeable limit is there to prevent excessive resource usage by a
> guest. But the docs suggest that the excessive usage might be
> normal. That sounds like a suboptimal design to me.
>

Yes, there might be situations where this limit is set to some large
value, but I think such situations would be very rare. As the docs
suggest, for XenGT, 8K is big enough for most cases.

> For reference, here are the docs proposed in this patch:
>
> =item B
>
> Limit the maximum number of write-protected ram ranges that can be
> tracked inside one ioreq server rangeset.
>
> An ioreq server uses a group of rangesets to track the I/O or memory
> resources to be emulated. The default limit on the number of ranges
> that one rangeset can allocate is set to a small value, because these
> ranges are allocated in the xen heap.
> Yet for the write-protected ram ranges, there are circumstances under
> which the upper limit inside one rangeset should exceed the default
> one. E.g. in Intel GVT-g, when tracking the PPGTT (per-process graphic
> translation tables) on Intel Broadwell platforms, the number of page
> tables concerned can be several thousand.
>
> For the Intel GVT-g Broadwell platform, 8192 is a suggested value for
> this parameter in most cases. But users who set this item explicitly
> are also supposed to know the specific scenarios that necessitate
> this configuration, especially when this parameter is used:
> 1> for virtual devices other than the vGPU in GVT-g;
> 2> for GVT-g, where there might also be some extreme cases, e.g. too
> many graphics-related applications in one VM, which create a great
> number of per-process graphic translation tables;
> 3> for GVT-g, on future cpu platforms which provide even more
> per-process graphic translation tables.
>
> Having said that, if the hypervisor maintainers are happy with a
> situation where this value is configured explicitly, and the
> configurations where a non-default value is required are expected to
> be rare, then I guess we can live with it.
>
> Ian.
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

Thanks
Yu
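P.S. For completeness, what the docs above amount to for a user is, I
believe, a single line in the xl domain configuration file (option name
taken from this patch series; 8192 is the value suggested for Broadwell
above, not something verified here):

```
# Hypothetical xl cfg snippet: raise the write-protected ram range
# limit for a GVT-g guest on Broadwell.
max_wp_ram_ranges = 8192
```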