xen-devel.lists.xenproject.org archive mirror
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid hypercall
Date: Thu, 27 Feb 2014 12:11:49 +0000
Message-ID: <530F2B85.6060403@citrix.com>
In-Reply-To: <530F365B020000780011FD03@nat28.tlf.novell.com>

On 27/02/14 11:58, Jan Beulich wrote:
>>>> On 27.02.14 at 12:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> which permits a toolstack to execute an arbitrary cpuid instruction on a
>> specified physical cpu.
> For one - is it a good idea to expose the unprocessed CPUID to
> any guest code? After all, even the Dom0 kernel only gets to see
> processed values, and the fact that without CPUID faulting, apps
> in PV guests can inadvertently use the raw values is known to be
> a problem, not a feature.

Any toolstack which uses this specific hypercall to discover information
normally hidden from dom0 by faulting/masking/policy can only shoot
itself in the foot.

The use case is enumerating the real cache leaves, which are normally
faked up in the policy anyway, so the policy values are of no use here.
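
(For concreteness, enumerating the cache leaves amounts to walking the
sub-leaves of CPUID leaf 4 until the cache-type field reads 0.  A minimal
userspace sketch, purely illustrative, assuming an Intel CPU and
GCC/Clang's <cpuid.h>; AMD exposes the equivalent data via leaf
0x8000001d.  Note it only reads whichever pCPU the thread happens to be
running on, which is the pinning discussion below.)

  /* Illustrative sketch only: walk Intel's deterministic cache parameter
   * leaf (CPUID leaf 4) and print each cache's size. */
  #include <stdio.h>
  #include <cpuid.h>

  int main(void)
  {
      for ( unsigned int subleaf = 0; ; ++subleaf )
      {
          unsigned int eax, ebx, ecx, edx;

          __cpuid_count(4, subleaf, eax, ebx, ecx, edx);

          unsigned int type = eax & 0x1f;          /* 0 => no more caches */
          if ( type == 0 )
              break;

          unsigned int level      = (eax >> 5) & 0x7;
          unsigned int line_size  = (ebx & 0xfff) + 1;
          unsigned int partitions = ((ebx >> 12) & 0x3ff) + 1;
          unsigned int ways       = ((ebx >> 22) & 0x3ff) + 1;
          unsigned int sets       = ecx + 1;

          printf("L%u %s cache: %u bytes\n", level,
                 type == 1 ? "data" : type == 2 ? "instruction" : "unified",
                 line_size * partitions * ways * sets);
      }

      return 0;
  }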

>
> And then - if you already have access to control operations, I
> don't think you need the hypervisor to help you: Limit your
> vCPU's affinity to the particular pCPU you care about, and do
> what you need doing from the kernel (by also setting the
> process's affinity to the particular CPU you could achieve the
> same even from userland).
>
> Jan
>

Having a toolstack rely on being able to pin its vcpus around so some
userspace can enumerate the cache leaves is horrific.

Apart from forcibly disturbing a balanced NUMA setup, what about
cpupools, or toolstack disaggregation where pinning is restricted?
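
(For reference, the kernel/userland alternative being discussed boils
down to something like the sketch below.  This is only an illustration:
Linux's sched_setaffinity(2) and GCC/Clang's <cpuid.h> are the assumed
interfaces, the CPU number is arbitrary, and the values seen are still
whatever the kernel/hypervisor CPUID policy lets through.)

  /* Illustrative sketch only: pin the calling thread to one CPU as seen
   * by the dom0 kernel, then execute CPUID there.  Mapping that CPU to
   * the intended pCPU relies on the vCPU itself being pinned, which is
   * the objection above. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <cpuid.h>

  static int cpuid_on_cpu(int cpu, unsigned int leaf, unsigned int regs[4])
  {
      cpu_set_t mask;

      CPU_ZERO(&mask);
      CPU_SET(cpu, &mask);

      /* Restrict this thread to the chosen CPU before issuing CPUID. */
      if ( sched_setaffinity(0, sizeof(mask), &mask) )
          return -1;

      __cpuid(leaf, regs[0], regs[1], regs[2], regs[3]);

      return 0;
  }

  int main(void)
  {
      unsigned int regs[4];

      if ( cpuid_on_cpu(3, 0, regs) == 0 )      /* e.g. leaf 0 on CPU 3 */
          printf("max basic leaf: %#x\n", regs[0]);

      return 0;
  }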

~Andrew

Thread overview: 13+ messages
2014-02-27 11:11 [PATCH RFC 0/2] Support for hwloc Andrew Cooper
2014-02-27 11:11 ` [PATCH RFC 1/2] tools/libxc: Improved xc_{topology, numa}info functions Andrew Cooper
2014-03-12  8:34   ` Dario Faggioli
2014-03-12 10:41     ` Andrew Cooper
2014-03-12 11:00       ` Dario Faggioli
2014-03-14 14:41       ` Ian Campbell
2014-02-27 11:11 ` [PATCH RFC 2/2] SYSCTL subop to execute cpuid on a specified pcpu Andrew Cooper
2014-02-27 11:11 ` [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid hypercall Andrew Cooper
2014-02-27 11:58   ` Jan Beulich
2014-02-27 12:11     ` Andrew Cooper [this message]
2014-02-27 12:26       ` Jan Beulich
2014-02-27 15:57         ` Andrew Cooper
2014-03-14 14:45   ` Ian Campbell
