From: Andrew Cooper
Subject: Re: [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid hypercall
Date: Thu, 27 Feb 2014 12:11:49 +0000
Message-ID: <530F2B85.6060403@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com> <1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com> <530F365B020000780011FD03@nat28.tlf.novell.com>
In-Reply-To: <530F365B020000780011FD03@nat28.tlf.novell.com>
To: Jan Beulich
Cc: Keir Fraser, Tim Deegan, Ian Jackson, Ian Campbell, Xen-devel
List-Id: xen-devel@lists.xenproject.org

On 27/02/14 11:58, Jan Beulich wrote:
>>>> On 27.02.14 at 12:11, Andrew Cooper wrote:
>> which permits a toolstack to execute an arbitrary cpuid instruction on a
>> specified physical cpu.
> For one - is it a good idea to expose the unprocessed CPUID to
> any guest code? After all, even the Dom0 kernel only gets to see
> processed values, and the fact that without CPUID faulting apps
> in PV guests can inadvertently use the raw values is known to be
> a problem, not a feature.

Any toolstack which uses this specific hypercall to discover information normally hidden from dom0 by faulting/masking/policy can only shoot itself in the foot. The use case is enumerating the real cache leaves, which are normally faked up in the policy anyway, so the processed values are of no use.

>
> And then - if you already have access to control operations, I
> don't think you need the hypervisor to help you: Limit your
> vCPU's affinity to the particular pCPU you care about, and do
> what you need doing from the kernel (by also setting the
> process's affinity to the particular CPU you could achieve the
> same even from user land).
>
> Jan

Having a toolstack rely on being able to pin its vCPUs around just so some userspace can enumerate the cache leaves is horrific. Quite apart from forcibly disturbing a balanced NUMA setup, what about CPU pools, or toolstack disaggregation where pinning is restricted?

~Andrew
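For reference, Jan's userland alternative (pin the calling process to the pCPU of interest, then execute CPUID directly) could be sketched roughly as below. This is an illustrative sketch only, not code from the patch series; it assumes Linux's sched_setaffinity() and GCC's __get_cpuid() helper from <cpuid.h>, and the raw_cpuid_on() name is made up here:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <cpuid.h>

/* Hypothetical helper: pin the calling thread to one pCPU with
 * sched_setaffinity(), then execute CPUID from userland.  On a PV
 * guest without CPUID faulting this observes the raw hardware
 * values, which is exactly the behaviour under discussion.
 * Returns 0 on success, -1 on failure. */
static int raw_cpuid_on(unsigned int cpu, unsigned int leaf,
                        unsigned int regs[4])
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (sched_setaffinity(0, sizeof(set), &set))
        return -1;

    /* __get_cpuid() returns 0 if the leaf is out of range. */
    if (!__get_cpuid(leaf, &regs[0], &regs[1], &regs[2], &regs[3]))
        return -1;

    return 0;
}
```

Note that this is precisely the scheme the reply above objects to: it only works when the toolstack is actually permitted to change affinity, which CPU pools or disaggregated setups may forbid.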