From: Andrew Cooper <andrew.cooper3@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Cc: "keir@xen.org" <keir@xen.org>,
"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
"stefano.stabellini@eu.citrix.com"
<stefano.stabellini@eu.citrix.com>,
"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Jan Beulich <JBeulich@suse.com>,
"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH v6 4/7] x86: collect CQM information from all sockets
Date: Tue, 28 Jan 2014 14:34:32 +0000
Message-ID: <52E7BFF8.1090605@citrix.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A9119237AD@SHSMSX104.ccr.corp.intel.com>
On 28/01/14 14:23, Xu, Dongxiao wrote:
>
>>> + case XEN_SYSCTL_getcqminfo:
>>> + {
>>> + struct xen_socket_cqmdata *info;
>>> + uint32_t num_sockets;
>>> + uint32_t num_rmids;
>>> + cpumask_t cpu_cqmdata_map;
>> Unless absolutely unavoidable, no CPU masks on the stack please.
> Okay, I will allocate it with the "xzalloc" function.
cpumask_var_t mask;

if ( !zalloc_cpumask_var(&mask) )
    return -ENOMEM; /* the allocation can fail for large NR_CPUS builds */
...
free_cpumask_var(mask);
This will switch between a bitmap on the stack and a heap-allocated
structure, depending on whether Xen is compiled with support for more
than 64 pcpus.
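For reference, the type is roughly this (quoting xen/include/xen/cpumask.h
from memory, so treat the exact threshold as an assumption):

/* Sketch only -- exact threshold recalled from memory. */
#if NR_CPUS > 2 * BITS_PER_LONG
typedef cpumask_t *cpumask_var_t;   /* heap-allocated by {z}alloc_cpumask_var() */
#else
typedef cpumask_t cpumask_var_t[1]; /* lives on the caller's stack */
#endif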
>
>>> +
>>> + if ( !system_supports_cqm() )
>>> + {
>>> + ret = -ENODEV;
>>> + break;
>>> + }
>>> +
>>> + select_socket_cpu(&cpu_cqmdata_map);
>>> +
>>> + num_sockets = min((unsigned int)cpumask_weight(&cpu_cqmdata_map) + 1,
>>> + sysctl->u.getcqminfo.num_sockets);
>>> + num_rmids = get_cqm_count();
>>> + info = xzalloc_array(struct xen_socket_cqmdata,
>>> + num_rmids * num_sockets);
>> While unlikely right now, you ought to consider the case of this
>> multiplication overflowing.
>>
>> Also - how does the caller know how big the buffer needs to be?
>> Only num_sockets can be restricted by it...
>>
>> And what's worse - you allow the caller to limit num_sockets and
>> allocate info based on this limited value, but you don't restrict
>> cpu_cqmdata_map to just the socket covered, i.e. if the caller
>> specified a lower number, then you'll corrupt memory.
> Currently the caller (libxc) passes large num_rmid and num_sockets values, i.e. the maximum that could be available inside the hypervisor.
> If you think this approach is not enough to ensure security, what about the caller (libxc) first issuing a hypercall to get the two values from the hypervisor, then allocating a buffer sized from exactly those num_rmid and num_sockets?
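For the overflow, a pre-check before the allocation would suffice.  As a
sketch (reusing the names from the quoted hunk; the exact error handling
is illustrative):

if ( num_sockets == 0 || num_rmids > UINT_MAX / num_sockets )
{
    ret = -EINVAL;
    break;
}
info = xzalloc_array(struct xen_socket_cqmdata,
                     num_rmids * num_sockets);

Note that xmalloc_array() only guards the multiplication by sizeof(),
not the one performed by the caller.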
>
>> And finally, I think the total size of the buffer here can easily
>> exceed a page, i.e. this then ends up being a non-order-0
>> allocation, which may _never_ succeed (i.e. the operation is
>> then rendered useless). I guess it'd be better to e.g. vmap()
>> the MFNs underlying the guest buffer.
> Do you mean we should check the total size, allocate the MFNs one by one, then vmap() them?
I still think this is barking mad as a method of getting this quantity
of data from Xen to the toolstack in a repeated fashion.
Xen should allocate a per-socket buffer at the start of day (or perhaps
on first use of CQM), and the CQM monitoring tool gets to map those
per-socket buffers read-only.
This way, all processing of the CQM data happens in dom0 userspace, not
in Xen in hypercall context; all Xen has to do is periodically dump the
MSRs into the pages.
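Something along these lines, as a sketch only -- cqm_data[], NR_SOCKETS
and cqm_dump_socket() are hypothetical names, not an interface proposal;
the MSR indices are the IA32_QM_EVTSEL/IA32_QM_CTR pair from the SDM:

#define MSR_IA32_QM_EVTSEL  0x00000c8d
#define MSR_IA32_QM_CTR     0x00000c8e
#define QM_EVT_L3_OCCUPANCY 1

/* Hypothetical per-socket buffers, allocated once at start of day. */
static uint64_t *cqm_data[NR_SOCKETS];

/* Run on one CPU of each socket, e.g. from a periodic timer. */
static void cqm_dump_socket(unsigned int socket, unsigned int num_rmids)
{
    unsigned int rmid;
    uint64_t val;

    for ( rmid = 0; rmid < num_rmids; rmid++ )
    {
        /* Select the L3 occupancy event for this RMID, then read it. */
        wrmsrl(MSR_IA32_QM_EVTSEL,
               ((uint64_t)rmid << 32) | QM_EVT_L3_OCCUPANCY);
        rdmsrl(MSR_IA32_QM_CTR, val);
        cqm_data[socket][rmid] = val;
    }
}

The monitoring tool then maps cqm_data[socket] read-only and does all
the per-domain accounting itself.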
~Andrew
Thread overview: 33+ messages
2013-12-05 9:38 [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature Dongxiao Xu
2013-12-05 9:38 ` [PATCH v6 1/7] x86: detect and initialize Cache QoS Monitoring feature Dongxiao Xu
2013-12-05 10:54 ` Andrew Cooper
2013-12-06 8:29 ` Jan Beulich
2014-01-20 13:00 ` Jan Beulich
2014-01-28 14:01 ` Xu, Dongxiao
2014-01-28 14:04 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 2/7] x86: dynamically attach/detach CQM service for a guest Dongxiao Xu
2014-01-20 13:13 ` Jan Beulich
2014-01-20 13:17 ` Andrew Cooper
2014-01-20 14:03 ` Jan Beulich
2014-01-28 14:09 ` Xu, Dongxiao
2014-01-28 14:37 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 3/7] x86: initialize per socket cpu map Dongxiao Xu
2014-01-20 13:26 ` Jan Beulich
2014-01-28 14:12 ` Xu, Dongxiao
2014-01-28 14:41 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 4/7] x86: collect CQM information from all sockets Dongxiao Xu
2014-01-20 13:52 ` Jan Beulich
2014-01-28 14:23 ` Xu, Dongxiao
2014-01-28 14:34 ` Andrew Cooper [this message]
2014-01-28 15:03 ` Jan Beulich
2014-01-28 15:15 ` Xu, Dongxiao
2014-01-28 15:21 ` Andrew Cooper
2014-01-28 15:34 ` Jan Beulich
2014-01-28 14:47 ` Jan Beulich
2013-12-05 9:38 ` [PATCH v6 5/7] x86: enable CQM monitoring for each domain RMID Dongxiao Xu
2014-01-20 13:58 ` Jan Beulich
2014-01-28 14:24 ` Xu, Dongxiao
2013-12-05 9:38 ` [PATCH v6 6/7] xsm: add platform QoS related xsm policies Dongxiao Xu
2013-12-05 9:38 ` [PATCH v6 7/7] tools: enable Cache QoS Monitoring feature for libxl/libxc Dongxiao Xu
2013-12-07 5:09 ` [PATCH v6 0/7] enable Cache QoS Monitoring (CQM) feature Xu, Dongxiao
2013-12-09 9:34 ` Jan Beulich