From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4CFF768A.4060006@redhat.com>
Date: Wed, 08 Dec 2010 14:14:02 +0200
From: Avi Kivity
MIME-Version: 1.0
References: <1291202264-3128-1-git-send-email-andre.przywara@amd.com> <4CF63E60.4050309@redhat.com> <4CF645AC.3070609@amd.com>
In-Reply-To: <4CF645AC.3070609@amd.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: [PATCH] kvm/x86: enlarge number of possible CPUID leaves
List-Id: qemu-devel.nongnu.org
To: Andre Przywara
Cc: "qemu-devel@nongnu.org", "kvm@vger.kernel.org"

On 12/01/2010 02:55 PM, Andre Przywara wrote:
> Avi Kivity wrote:
>> On 12/01/2010 01:17 PM, Andre Przywara wrote:
>>> Currently the number of CPUID leaves KVM handles is limited to 40.
>>> My desktop machine (AthlonII) already has 35, and future CPUs will
>>> expand this well beyond the limit. Extend the limit to 80 to make
>>> room for future processors.
>>>
>>> Signed-off-by: Andre Przywara
>>> ---
>>>  arch/x86/include/asm/kvm_host.h |    2 +-
>>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>>
>>> Hi,
>>> I found that either KVM or QEMU (possibly both) is broken with respect
>>> to handling more CPUID entries than the limit dictates. KVM will
>>> return -E2BIG, which is the same error as if the user had not provided
>>> enough space to hold all entries.
>>> Now QEMU will continue to enlarge
>>> the allocated memory until it runs into an out-of-memory condition.
>>> I have tried to fix this by teaching KVM how to deal with a capped
>>> number of entries (there are some bugs in the current code), but this
>>> will limit the number of CPUID entries KVM handles, which will surely
>>> cut off the last-appended PV leaves.
>>> A proper fix would be to make this allocation dynamic. Is this a
>>> feasible way, or will it lead to issues or side effects?
>>>
>>
>> Well, there has to be some limit, or userspace can allocate unbounded
>> kernel memory.
>
> But this limit should not be a compile-time constant; it should be a
> runtime one. The needed size depends on the host CPU (plus the KVM PV
> leaves) and thus could be determined once, for all VMs and vCPUs, at
> module load time. But then we cannot use the static array allocation we
> currently have in struct kvm_vcpu_arch:
>     struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];
> So we would use a kind of dynamic allocation bounded by the host CPU's
> needs. But for the code it does not make much difference from a "real"
> dynamic allocation.
>
> Also, we could implement kvm_dev_ioctl_get_supported_cpuid without the
> vmalloc if we don't care about a few dozen copy_to_user() calls in
> this function. Then we would not need this limit in
> GET_SUPPORTED_CPUID at all, but it will strike us again at
> KVM_SET_CPUID[2], where we may not fulfill the promises we gave earlier.
> Having said this, what about the following:
> kvm_dev_ioctl_get_supported_cpuid is invariant across VMs and vCPUs (it
> is used by a system ioctl), so it could be run once at initialization,
> which would reduce the ioctl implementation to a plain bounded copy.
> Would you want such a patch (removing the vmalloc and maybe even the
> limit)?

Making the GET_SUPPORTED_CPUID data static would be an improvement, yes.

-- 
error compiling committee.c: too many arguments to function