From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 06/10] x86/cpuid: Handle leaf 0x6 in guest_cpuid()
Date: Wed, 22 Feb 2017 09:12:56 +0000
Message-ID: <accd9ac1-3b45-0e13-88a4-93f8526d8f1f@citrix.com>
In-Reply-To: <3d8e066c-a251-d632-97cb-ad553161eeda@citrix.com>
On 22/02/17 08:23, Andrew Cooper wrote:
> On 22/02/17 07:31, Jan Beulich wrote:
>>>>> On 21.02.17 at 18:40, <andrew.cooper3@citrix.com> wrote:
>>> On 21/02/17 17:25, Jan Beulich wrote:
>>>>>>> On 20.02.17 at 12:00, <andrew.cooper3@citrix.com> wrote:
>>>>> The thermal/performance leaf was previously hidden from HVM guests, but
>>>>> fully visible to PV guests. Most of the leaf refers to MSR availability,
>>>>> and there is nothing an unprivileged PV guest can do with the
>>>>> information, so hide the leaf entirely.
>>>>>
>>>>> The PV MSR handling logic as minimal support for some thermal/perf operations
>>>> ... has ...
>>>>
>>>>> from the hardware domain, so leak through the implemented subset of
>>>>> features.
>>>> Does it make sense to continue to special case PV hwdom here?
>>> Being able to play with these MSRs will be actively wrong for HVM
>>> context. It is already fairly wrong for PV context, as nothing prevents
>>> you from being rescheduled across pcpus while in the middle of a
>>> read/write cycle on the MSRs.
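To illustrate the race, here is a hedged pseudo-C sketch of the kind of
read-modify-write sequence a guest performs (the bit name and rdmsr/wrmsr
helpers are illustrative, not the real code):

  /* Guest-side RMW; each MSR access lands on whichever pcpu the
   * vcpu happens to be scheduled on at that instant. */
  uint64_t ctl = rdmsr(MSR_IA32_THERM_CONTROL);  /* runs on pcpu A */
  ctl |= THERM_ODC_ENABLE;      /* illustrative bit; vcpu migrates here */
  wrmsr(MSR_IA32_THERM_CONTROL, ctl);            /* runs on pcpu B, and
                                                  * clobbers B's setting
                                                  * with A's stale value */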
>> So the MSRs in question are, afaics
>> - MSR_IA32_MPERF, MSR_IA32_APERF, MSR_IA32_PERF_CTL (all
>> of which are is_cpufreq_controller() dependent)
>> - MSR_IA32_THERM_CONTROL, MSR_IA32_ENERGY_PERF_BIAS
>> (both of which are is_pinned_vcpu() dependent)
>> For the latter your argument doesn't apply. For the former, I've
>> been wondering for a while whether we shouldn't do away with
>> "cpufreq=dom0-kernel".
> Hmm. All good points. If I can get away without leaking any of this,
> that would be ideal. (Let's see what Linux thinks of such a setup.)
Linux seems fine without any of this leakage. I have checked, and C/P
state information is still propagated up to Xen. I will drop the entire
dynamic adjustment.
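i.e. the v2 leaf 0x6 handling in guest_cpuid() should reduce to something
like this (a sketch, assuming the EMPTY_LEAF idiom used for other hidden
leaves in cpuid.c):

  case 0x6: /* Thermal/Performance management. */
      *res = EMPTY_LEAF;  /* hidden from all guests, PV hwdom included;
                           * no dynamic adjustment */
      break;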
~Andrew