From: Andrew Cooper <andrew.cooper3@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "keir@xen.org" <keir@xen.org>,
"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
"stefano.stabellini@eu.citrix.com"
<stefano.stabellini@eu.citrix.com>,
"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Jan Beulich <JBeulich@suse.com>,
"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
Date: Tue, 28 Jan 2014 15:21:46 +0000
Message-ID: <52E7CB0A.3000509@citrix.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911923992@SHSMSX104.ccr.corp.intel.com>
On 28/01/14 15:15, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: Tuesday, January 28, 2014 11:04 PM
>> To: Andrew Cooper; Xu, Dongxiao
>> Cc: dario.faggioli@citrix.com; Ian.Campbell@citrix.com;
>> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
>> xen-devel@lists.xen.org; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov;
>> keir@xen.org
>> Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
>>
>>>>> On 28.01.14 at 15:34, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> On 28/01/14 14:23, Xu, Dongxiao wrote:
>>>>> And finally, I think the total size of the buffer here can easily
>>>>> exceed a page, i.e. this then ends up being a non-order-0
>>>>> allocation, which may _never_ succeed (i.e. the operation is
>>>>> then rendered useless). I guess it'd be better to e.g. vmap()
>>>>> the MFNs underlying the guest buffer.
>>>> Do you mean we check the total size, allocate the MFNs one by one, and
>>>> then vmap them?
>>>
>>> I still think this is barking mad as a method of getting this quantity
>>> of data from Xen to the toolstack in a repeated fashion.
>>>
>>> Xen should allocate a per-socket buffer at the start of day (or perhaps
>>> on first use of CQM), and the CQM monitoring tool gets to map those
>>> per-socket buffers read-only.
>>>
>>> This way, all processing of the CQM data happens in dom0 userspace, not
>>> in Xen in hypercall context; all Xen has to do is periodically dump the
>>> MSRs into the pages.
>> Indeed - if the nature of the data is such that it can be exposed
>> read-only to suitably privileged entities, then this would be the
>> much better interface.
> If the data fetching is not hypercall-driven, do you have a recommendation on how frequently Xen should dump the MSRs into the shared page?
>
> Thanks,
> Dongxiao
There is nothing preventing a hypercall from existing which synchronously
prompts Xen to dump the data right now, but that is substantially less
overhead than having the hypercall go and gather the data and then rotate
a matrix of it into a form convenient for userspace.
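As a rough, untested sketch (none of this is in the posted series, and
cqm_max_rmid and the buffer layout are invented purely for illustration),
the per-socket dump itself need not be much more than:

/*
 * Illustration only.  Runs on one CPU of each socket and copies the L3
 * occupancy counter for every RMID into that socket's shared buffer,
 * which the toolstack has mapped read-only.
 */
#define MSR_IA32_QM_EVTSEL      0x0c8d
#define MSR_IA32_QM_CTR         0x0c8e
#define QM_EVTID_L3_OCCUPANCY   0x1

/* Hypothetical layout of the per-socket page(s) the toolstack maps. */
struct cqm_socket_buffer {
    uint64_t last_update;       /* NOW() when the dump last ran */
    uint64_t l3_occupancy[];    /* indexed by RMID */
};

static void cqm_dump_socket(void *arg)
{
    struct cqm_socket_buffer *buf = arg;
    unsigned int rmid;

    for ( rmid = 1; rmid <= cqm_max_rmid; rmid++ ) /* assumed global */
    {
        uint64_t val;

        wrmsrl(MSR_IA32_QM_EVTSEL,
               ((uint64_t)rmid << 32) | QM_EVTID_L3_OCCUPANCY);
        rdmsrl(MSR_IA32_QM_CTR, val);

        /* Bit 63 = error, bit 62 = data unavailable. */
        buf->l3_occupancy[rmid] = (val & (3ULL << 62)) ? 0 : val;
    }

    buf->last_update = NOW();
}

The hypercall (or a periodic timer) then only has to fire that at one CPU
per socket, e.g. via on_selected_cpus(), rather than marshalling anything
back through the hypercall arguments.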
Other solutions involve having a single read/write control page where the
toolstack sets a bit meaning "please dump the MSRs when next convenient"
and the RMID context-switching code does a test_and_clear_bit() on it,
which even solves the problem of "which core on some other socket do I
decide to interrupt".
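Something like the following, again completely untested and with
cqm_control, dump_requested and cqm_socket_buffer() being invented names
(cqm_dump_socket() is the same helper as above):

/*
 * Illustration only.  The toolstack sets the bit for the socket it wants
 * refreshed; whichever CPU on that socket next switches RMID notices and
 * services the request, so nothing ever has to pick a remote core to
 * interrupt.
 */
#define MSR_IA32_PQR_ASSOC      0x0c8f

static void cqm_assoc_rmid(unsigned int rmid)
{
    unsigned int socket = cpu_to_socket(smp_processor_id());

    /* dump_requested: a per-socket bitmap in the shared control page. */
    if ( test_and_clear_bit(socket, cqm_control->dump_requested) )
        cqm_dump_socket(cqm_socket_buffer(socket));

    wrmsrl(MSR_IA32_PQR_ASSOC, rmid);
}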
~Andrew
Thread overview: 33+ messages
2013-12-05 9:38 [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature Dongxiao Xu
2013-12-05 9:38 ` [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring feature Dongxiao Xu
2013-12-05 10:54 ` Andrew Cooper
2013-12-06 8:29 ` Jan Beulich
2014-01-20 13:00 ` Jan Beulich
2014-01-28 14:01 ` Xu, Dongxiao
2014-01-28 14:04 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 2/7] x86: dynamically attach/detach CQM service for a guest Dongxiao Xu
2014-01-20 13:13 ` Jan Beulich
2014-01-20 13:17 ` Andrew Cooper
2014-01-20 14:03 ` Jan Beulich
2014-01-28 14:09 ` Xu, Dongxiao
2014-01-28 14:37 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 3/7] x86: initialize per socket cpu map Dongxiao Xu
2014-01-20 13:26 ` Jan Beulich
2014-01-28 14:12 ` Xu, Dongxiao
2014-01-28 14:41 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 4/7] x86: collect CQM information from all sockets Dongxiao Xu
2014-01-20 13:52 ` Jan Beulich
2014-01-28 14:23 ` Xu, Dongxiao
2014-01-28 14:34 ` Andrew Cooper
2014-01-28 15:03 ` Jan Beulich
2014-01-28 15:15 ` Xu, Dongxiao
2014-01-28 15:21 ` Andrew Cooper [this message]
2014-01-28 15:34 ` Jan Beulich
2014-01-28 14:47 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 5/7] x86: enable CQM monitoring for each domain RMID Dongxiao Xu
2014-01-20 13:58 ` Jan Beulich
2014-01-28 14:24 ` Xu, Dongxiao
2013-12-05 9:38 ` [PATCH v6 6/7] xsm: add platform QoS related xsm policies Dongxiao Xu
2013-12-05 9:38 ` [PATCH v6 7/7] tools: enable Cache QoS Monitoring feature for libxl/libxc Dongxiao Xu
2013-12-07 5:09 ` [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature Xu, Dongxiao
2013-12-09 9:34 ` Jan Beulich