From: Andrew Cooper <andrew.cooper3@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>,
Jan Beulich <jbeulich@suse.com>,
"george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>
Cc: "Auld, Will" <will.auld@intel.com>,
"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
"Nakajima, Jun" <jun.nakajima@intel.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Xen Platform QoS design discussion
Date: Thu, 29 May 2014 10:13:32 +0100
Message-ID: <5386FA3C.3010201@citrix.com>
In-Reply-To: <40776A41FC278F40B59438AD47D147A911A20944@SHSMSX104.ccr.corp.intel.com>
On 29/05/2014 08:31, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:jbeulich@suse.com]
>> Sent: Thursday, May 29, 2014 3:02 PM
>> To: george.dunlap@eu.citrix.com; Xu, Dongxiao
>> Cc: andrew.cooper3@citrix.com; Ian.Campbell@citrix.com;
>> xen-devel@lists.xen.org
>> Subject: Re: RE: [Xen-devel] Xen Platform QoS design discussion
>>
>>>>> "Xu, Dongxiao" <dongxiao.xu@intel.com> 05/29/14 2:46 AM >>>
>>> I think Jan's opinion here is similar to what I proposed at the beginning of
>>> this thread. The only difference is that Jan prefers to get the CQM data
>>> per-socket and per-domain with data copying, while I proposed to get the CQM
>>> data per-domain for all sockets, which reduces the number of hypercalls.
>> I don't think I ever voiced any preference between these two. All I said is
>> that it depends on prevalent usage models, and to date I don't think I've seen
>> a proper analysis of what the main usage model would be - it all seems
>> guesswork and/or taking random examples.
>>
>> What I did say I'd prefer is to have all this done outside the hypervisor, with the
>> hypervisor just providing fundamental infrastructure (MSR accesses).
> Okay. If I understand correctly, you prefer to implement a pure MSR access hypercall for one CPU, and put all other CQM things in the libxc/libxl layer.
>
> In this case, if libvirt/XenAPI is trying to query a domain's cache utilization in the system (say 2 sockets), then it will trigger _two_ such MSR access hypercalls, for CPUs in the 2 different sockets.
> If you are okay with this idea, I am going to implement it.
>
> Thanks,
> Dongxiao
While I can see the use and attraction of a generic MSR access
hypercall, using this method for getting QoS data is going to have
substantially higher overhead than even the original domctl suggestion.
I do not believe it will be an effective means of getting large
quantities of data from ring0 MSRs into dom0 userspace. This is not to
say that having a generic MSR interface is a bad thing, but I don't
think it should be used for this purpose.
~Andrew
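
To make the trade-off above concrete, here is a minimal sketch (not code from
the thread) of what a toolstack-side CQM query could look like if Xen exposed
only a raw per-CPU MSR access hypercall. The msr_write_on_cpu()/msr_read_on_cpu()
wrappers are hypothetical stand-ins for such a hypercall; the MSR numbers and
event encoding are the architectural cache-monitoring ones from the Intel SDM
(IA32_QM_EVTSEL at 0xC8D, IA32_QM_CTR at 0xC8E, event ID 1 for L3 occupancy).

/*
 * Minimal sketch, assuming a hypothetical per-CPU MSR access hypercall;
 * not part of any existing Xen interface.
 */
#include <stdint.h>
#include <errno.h>

#define MSR_IA32_QM_EVTSEL   0x0c8d
#define MSR_IA32_QM_CTR      0x0c8e
#define QM_EVT_L3_OCCUPANCY  0x01          /* event ID 1: L3 occupancy */
#define QM_CTR_ERROR         (1ULL << 63)
#define QM_CTR_UNAVAILABLE   (1ULL << 62)

/* Hypothetical per-CPU MSR access wrappers, one hypercall per call. */
int msr_write_on_cpu(unsigned int cpu, uint32_t msr, uint64_t val);
int msr_read_on_cpu(unsigned int cpu, uint32_t msr, uint64_t *val);

/*
 * Sample the L3 occupancy counter for one RMID on every socket.
 * socket_cpu[] holds one online CPU per socket.  Note the cost: two
 * hypercalls per socket per RMID per sample, which is the overhead
 * concern raised above for bulk QoS data collection.
 */
static int read_l3_occupancy(uint32_t rmid, const unsigned int *socket_cpu,
                             unsigned int nr_sockets, uint64_t *counts)
{
    unsigned int s;

    for ( s = 0; s < nr_sockets; s++ )
    {
        uint64_t evtsel = ((uint64_t)rmid << 32) | QM_EVT_L3_OCCUPANCY;
        uint64_t ctr;
        int rc;

        /* Select <RMID, event> on a CPU in this socket... */
        rc = msr_write_on_cpu(socket_cpu[s], MSR_IA32_QM_EVTSEL, evtsel);
        if ( rc )
            return rc;

        /* ...then read the counter back from the same socket. */
        rc = msr_read_on_cpu(socket_cpu[s], MSR_IA32_QM_CTR, &ctr);
        if ( rc )
            return rc;

        if ( ctr & (QM_CTR_ERROR | QM_CTR_UNAVAILABLE) )
            return -ENODATA;

        /*
         * Raw counter value; the caller scales it by the conversion
         * factor reported in CPUID leaf 0xf to get bytes.
         */
        counts[s] = ctr;
    }

    return 0;
}

For a whole-host query this loop repeats for every RMID in use, which is why a
single hypercall that copies a pre-filled per-socket buffer out of the
hypervisor scales better, as Andrew argues.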
Thread overview: 46+ messages
2014-04-30 16:47 Xen Platform QoS design discussion Xu, Dongxiao
2014-04-30 17:02 ` Ian Campbell
2014-05-01 0:56 ` Xu, Dongxiao
2014-05-02 9:23 ` Jan Beulich
2014-05-02 12:30 ` Xu, Dongxiao
2014-05-02 12:40 ` Jan Beulich
2014-05-04 0:46 ` Xu, Dongxiao
2014-05-06 9:10 ` Ian Campbell
2014-05-06 1:40 ` Xu, Dongxiao
2014-05-06 7:55 ` Jan Beulich
2014-05-06 10:06 ` Andrew Cooper
2014-05-07 2:08 ` Xu, Dongxiao
2014-05-07 9:10 ` Ian Campbell
2014-05-07 13:26 ` George Dunlap
2014-05-07 21:18 ` Andrew Cooper
2014-05-08 5:21 ` Xu, Dongxiao
2014-05-08 11:25 ` Andrew Cooper
2014-05-09 2:41 ` Xu, Dongxiao
2014-05-13 1:53 ` Xu, Dongxiao
2014-05-16 5:11 ` Xu, Dongxiao
2014-05-19 11:28 ` George Dunlap
2014-05-19 11:45 ` Jan Beulich
2014-05-19 12:13 ` George Dunlap
2014-05-19 12:41 ` Jan Beulich
2014-05-22 8:19 ` Xu, Dongxiao
2014-05-22 8:39 ` Jan Beulich
2014-05-22 9:27 ` George Dunlap
2014-05-26 0:51 ` Xu, Dongxiao
2014-05-29 0:45 ` Xu, Dongxiao
2014-05-29 7:01 ` Jan Beulich
2014-05-29 7:31 ` Xu, Dongxiao
2014-05-29 9:11 ` Jan Beulich
2014-05-30 9:10 ` Ian Campbell
2014-05-30 11:17 ` Jan Beulich
2014-05-30 12:33 ` Ian Campbell
2014-06-05 0:48 ` Xu, Dongxiao
2014-06-05 10:43 ` George Dunlap
2014-05-29 9:13 ` Andrew Cooper [this message]
2014-05-30 1:07 ` Xu, Dongxiao
2014-05-30 6:23 ` Jan Beulich
2014-05-30 7:51 ` Xu, Dongxiao
2014-05-30 11:15 ` Jan Beulich
2014-05-02 12:50 ` Andrew Cooper
2014-05-04 2:34 ` Xu, Dongxiao
2014-05-06 9:12 ` Ian Campbell
2014-05-06 10:00 ` Andrew Cooper