From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring feature
Date: Tue, 18 Mar 2014 10:51:17 +0000
Message-ID: <53282525.2040607@citrix.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
 <1392791564-37170-2-git-send-email-dongxiao.xu@intel.com>
 <530B616E020000780011ECD5@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A91195B196@SHSMSX104.ccr.corp.intel.com>
 <531598910200007800120BE4@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A9119746E2@SHSMSX104.ccr.corp.intel.com>
 <5328269102000078001250E3@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A911974E05@SHSMSX104.ccr.corp.intel.com>
 <532829630200007800125106@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A911974E2C@SHSMSX104.ccr.corp.intel.com>
 <53282DC80200007800125135@nat28.tlf.novell.com>
 <40776A41FC278F40B59438AD47D147A911974E88@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <40776A41FC278F40B59438AD47D147A911974E88@SHSMSX104.ccr.corp.intel.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: "Xu, Dongxiao"
Cc: "keir@xen.org", "Ian.Campbell@citrix.com", "stefano.stabellini@eu.citrix.com",
 "Ian.Jackson@eu.citrix.com", "xen-devel@lists.xen.org", Jan Beulich,
 "dgdegra@tycho.nsa.gov"
List-Id: xen-devel@lists.xenproject.org

On 18/03/14 10:46, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Tuesday, March 18, 2014 6:28 PM
>> To: Xu, Dongxiao
>> Cc: andrew.cooper3@citrix.com; Ian.Campbell@citrix.com;
>> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
>> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
>> keir@xen.org
>> Subject: RE: [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring
>> feature
>>
>>>>> On 18.03.14 at 11:15, "Xu, Dongxiao" wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich [mailto:JBeulich@suse.com]
>>>> Sent: Tuesday, March 18, 2014 6:09 PM
>>>> To: Xu, Dongxiao
>>>> Cc: andrew.cooper3@citrix.com; Ian.Campbell@citrix.com;
>>>> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
>>>> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
>>>> keir@xen.org
>>>> Subject: RE: [PATCH v9 1/6] x86: detect and initialize Cache QoS Monitoring
>>>> feature
>>>>
>>>>>>> On 18.03.14 at 11:02, "Xu, Dongxiao" wrote:
>>>>> Previously due to the large amount of data, we use the method to let Xen
>>>>> share the CQM pages to Dom0 in read only way to avoid data copy.
>>>>> However since the data amount is reduced a lot with above approach, do
>>>>> you think whether we still need to use this share way? Or like most of
>>>>> the other sysctl/domctl (e.g., xl list) command, which uses
>>>>> copy_to_guest() to pass the data?
>>>> Iirc we're talking about a square table with each dimension being the
>>>> socket count. Considering an 8-node 4-socket system, that would
>>>> still be 1024 entries, i.e. exceeding a page in size. Hence I would
>>>> think that the read-only sharing approach might still be better. Of
>>>> course, if the data amount was smaller (and by so much that even
>>>> on huge systems it's no more than a page), that would be different.
>>> Okay.
>>>
>>> By using dynamic memory allocation and data sharing mechanism, we may
>>> need two hypercalls when Dom0 tool stack is querying CQM related info.
>>> - 1st hypercall is to let Xen allocate the memory and put CQM data there.
>>> - 2nd hypercall is to indicate Dom0 tool stack already digested the CQM
>>> data and Xen can free the memory.
>> Why would that memory ever need de-allocating?
>>
>> Anyway, could you clarify again what amount of data we're
>> talking about, without me having to dig out the old patch series?
> Originally we statically allocated the memory at initialization time, with
> size "rmid_max * socket_max", which may be a very large value. As proposed
> in today's first mail, we can instead dynamically allocate
> "rmid_inuse * socket_inuse" for the CQM related pages only when the user
> actually issues a query operation. This reduces the allocated memory size,
> because at that point we know the exact number of sockets in the system
> and the exact RMIDs in use, so there is no need to assume their maximum
> values.
>
> Back to the above question of why the memory needs de-allocating:
> since rmid_inuse and socket_inuse may change over time, the allocation
> size will differ between queries. Therefore we need to allocate the memory
> when the user issues the hypercall, and then free it after the data has
> been digested.
>
> Not sure whether I stated the problem clearly for you.
>
> Thanks,
> Dongxiao

There is a sensible upper bound for rmid_max, established in the init
function. There should be a set of pointers (one per socket), allocated on
first use, each able to hold rmid_max entries. Once allocated, they are
large enough for any eventuality, and don't need de/reallocating.

This way, the amount of memory used is sockets_inuse * rmid_max.

~Andrew
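For illustration, a minimal C sketch of that scheme (all names here —
cqm_get_socket_data, cqm_data, NR_SOCKETS — are hypothetical, not the
actual Xen identifiers): one pointer per socket, allocated on first query
and sized for rmid_max, so it never needs freeing or resizing afterwards.

```c
#include <stdint.h>
#include <stdlib.h>

#define NR_SOCKETS 8                      /* upper bound on sockets */
static const unsigned int rmid_max = 64;  /* sensible bound fixed at init */

/* One pointer per socket; NULL until that socket is first queried. */
static uint64_t *cqm_data[NR_SOCKETS];

/*
 * Lazily allocate the buffer for one socket.  Once allocated it can
 * hold a counter for every RMID up to rmid_max, so it is large enough
 * for any eventuality and never needs reallocating or freeing.
 * Total memory consumed is therefore sockets_inuse * rmid_max entries.
 */
uint64_t *cqm_get_socket_data(unsigned int socket)
{
    if (socket >= NR_SOCKETS)
        return NULL;
    if (!cqm_data[socket])
        cqm_data[socket] = calloc(rmid_max, sizeof(uint64_t));
    return cqm_data[socket];
}
```

This avoids the two-hypercall allocate/free dance entirely: a query for a
socket either finds its buffer already present or allocates it once, and
sockets that are never queried cost nothing.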