From: Andrew Cooper
Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
Date: Tue, 28 Jan 2014 15:21:46 +0000
Message-ID: <52E7CB0A.3000509@citrix.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
References: <1386236334-15410-1-git-send-email-dongxiao.xu@intel.com>
 <1386236334-15410-5-git-send-email-dongxiao.xu@intel.com>
 <52DD38240200007800115102@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
 <52E7BFF8.1090605@citrix.com>
 <52E7D4EC02000078001179FE@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
To: "Xu, Dongxiao"
Cc: "keir@xen.org", "Ian.Campbell@citrix.com", "stefano.stabellini@eu.citrix.com",
 "dario.faggioli@citrix.com", "Ian.Jackson@eu.citrix.com",
 "xen-devel@lists.xen.org", Jan Beulich, "dgdegra@tycho.nsa.gov"

On 28/01/14 15:15, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Tuesday, January 28, 2014 11:04 PM
>> To: Andrew Cooper; Xu, Dongxiao
>> Cc: dario.faggioli@citrix.com; Ian.Campbell@citrix.com;
>> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
>> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
>> keir@xen.org
>> Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
>>
>>>>> On 28.01.14 at 15:34, Andrew Cooper wrote:
>>> On 28/01/14 14:23, Xu, Dongxiao wrote:
>>>>> And finally, I think the total size of the buffer here can easily
>>>>> exceed a page, i.e. this then ends up being a non-order-0
>>>>> allocation, which may _never_ succeed (i.e. the operation is
>>>>> then rendered useless). I guess it'd be better to e.g. vmap()
>>>>> the MFNs underlying the guest buffer.
>>>> Do you mean we check the total size, allocate the MFNs one by one,
>>>> and then vmap them?
>>>
>>> I still think this is barking mad as a method of getting this quantity
>>> of data from Xen to the toolstack in a repeated fashion.
>>>
>>> Xen should allocate a per-socket buffer at the start of day (or perhaps
>>> on first use of CQM), and the CQM monitoring tool gets to map those
>>> per-socket buffers read-only.
>>>
>>> This way, all processing of the CQM data happens in dom0 userspace, not
>>> in Xen in hypercall context; all Xen has to do is periodically dump the
>>> MSRs into the pages.
>> Indeed - if the nature of the data is such that it can be exposed
>> read-only to suitably privileged entities, then this would be the
>> much better interface.
> If the data fetching is not hypercall driven, do you have a recommendation
> on how frequently Xen should dump the MSRs into the shared page?
>
> Thanks,
> Dongxiao

There is nothing preventing a hypercall which synchronously prompts Xen to
dump the data right now, but such a prompt is substantially less overhead
than having the hypercall itself go and rotate a matrix of data so it can be
consumed in a form convenient for userspace.
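For the sake of argument, the per-socket dump could look something like this
(a completely untested sketch, not code from this series; cqm_data,
cqm_max_rmid and cqm_dump_socket() are illustrative names, with the
EVTSEL/CTR encoding taken from the SDM):

#include <xen/types.h>
#include <asm/msr.h>

#define MSR_IA32_QM_EVTSEL      0x00000c8d
#define MSR_IA32_QM_CTR         0x00000c8e
#define QM_EVTSEL_L3_OCCUPANCY  1

/*
 * Illustrative globals: one slice per socket, allocated at start of day
 * (or on first use of CQM) and mapped read-only by the toolstack.
 */
static uint64_t *cqm_data;
static unsigned int cqm_max_rmid;

/*
 * Refill this socket's slice of the shared buffer with the current L3
 * occupancy counts.  Must run on a CPU belonging to 'socket'.
 */
static void cqm_dump_socket(unsigned int socket)
{
    unsigned int rmid;
    uint64_t val;

    for ( rmid = 0; rmid <= cqm_max_rmid; rmid++ )
    {
        /* Select the RMID (bits 41:32) and the L3 occupancy event. */
        wrmsrl(MSR_IA32_QM_EVTSEL,
               ((uint64_t)rmid << 32) | QM_EVTSEL_L3_OCCUPANCY);
        rdmsrl(MSR_IA32_QM_CTR, val);
        cqm_data[socket * (cqm_max_rmid + 1) + rmid] = val;
    }
}

The toolstack, having mapped cqm_data read-only, does all of the per-domain
processing itself; the only work in hypervisor context is the MSR loop above.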
Other solutions involve having a single read/write control page, where the
toolstack sets a bit meaning "please dump the MSRs when next convenient" and
the RMID context-switch code does a test_and_clear_bit() on it.  That even
solves the problem of "which core on some other socket do I decide to
interrupt?".
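Again just a sketch (cqm_ctrl is a made-up name, reusing the hypothetical
cqm_dump_socket() from above):

#include <xen/types.h>
#include <asm/bitops.h>

/*
 * Illustrative: single read/write control page with one "please dump"
 * bit per socket, set by the toolstack through its writable mapping.
 */
static unsigned long *cqm_ctrl;

/*
 * Called from the RMID context-switch path.  Whichever CPU on the socket
 * passes through here first performs the dump, so there is no need to
 * choose a remote core to interrupt.
 */
static void cqm_check_dump_request(unsigned int socket)
{
    if ( test_and_clear_bit(socket, cqm_ctrl) )
        cqm_dump_socket(socket);
}

~Andrew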