From: Andre Przywara <andre.przywara@amd.com>
Date: Wed, 1 Dec 2010 13:55:08 +0100
Subject: [Qemu-devel] Re: [PATCH] kvm/x86: enlarge number of possible CPUID leaves
To: Avi Kivity
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org
Message-ID: <4CF645AC.3070609@amd.com>
In-Reply-To: <4CF63E60.4050309@redhat.com>

Avi Kivity wrote:
> On 12/01/2010 01:17 PM, Andre Przywara wrote:
>> Currently the number of CPUID leaves KVM handles is limited to 40.
>> My desktop machine (AthlonII) already has 35, and future CPUs will
>> expand this well beyond the limit. Extend the limit to 80 to make
>> room for future processors.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@amd.com>
>> ---
>>  arch/x86/include/asm/kvm_host.h |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> Hi,
>>
>> I found that either KVM or QEMU (possibly both) is broken with
>> respect to handling more CPUID entries than the limit dictates. KVM
>> returns -E2BIG, which is the same error as when the user has not
>> provided enough space to hold all entries, so QEMU keeps enlarging
>> the allocated memory until it runs into an out-of-memory condition.
>> I tried to fix this by teaching KVM how to deal with a capped number
>> of entries (there are some bugs in the current code), but that would
>> limit the number of CPUID entries KVM handles, which would surely
>> cut off the PV leaves appended at the end.
>> A proper fix would be to make this allocation dynamic. Is this a
>> feasible approach, or would it lead to issues or side effects?
>
> Well, there has to be some limit, or userspace can allocate unbounded
> kernel memory.

But this limit should not be a compile-time constant; it should be a
runtime one. The required size depends on the host CPU (plus the KVM PV
leaves) and could therefore be determined once, for all VMs and vCPUs,
at module load time. But then we could no longer use the static array
allocation we currently have in struct kvm_vcpu_arch:

	struct kvm_cpuid_entry2 cpuid_entries[KVM_MAX_CPUID_ENTRIES];

So we would use a kind of dynamic allocation, bounded by the host CPU's
needs, although for the code this does not differ much from a "real"
dynamic allocation.

We could also implement kvm_dev_ioctl_get_supported_cpuid without the
vmalloc if we don't mind a few dozen copy_to_user() calls in that
function. Then we would not need the limit in GET_SUPPORTED_CPUID at
all, but it would strike us again at KVM_SET_CPUID[2], where we might
not be able to fulfill the promises we made earlier.

Having said this, what about the following:
kvm_dev_ioctl_get_supported_cpuid is invariant with respect to the VM
and vCPU (it backs a system ioctl), so it could be run once at
initialization, which would reduce the ioctl implementation to a plain
bounded copy. Would you want such a patch (removing the vmalloc and
maybe even the limit)?
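To make the idea concrete, here is a rough, untested sketch; the cache
variables and kvm_cache_supported_cpuid() are made-up names for
illustration, and the enumeration loop we run today on every
GET_SUPPORTED_CPUID call would simply move into the init path:

/* Filled once at module load time; valid for all VMs and vCPUs. */
static struct kvm_cpuid_entry2 *supported_cpuid;
static int supported_cpuid_nent;

/*
 * Run once from the module init path: enumerate what the host CPU
 * (plus the KVM PV leaves) supports into supported_cpuid[].
 */
static int kvm_cache_supported_cpuid(void)
{
        /* ... the existing enumeration loop goes here ... */
        return 0;
}

/* GET_SUPPORTED_CPUID then degenerates to a plain bounded copy. */
static int kvm_dev_ioctl_get_supported_cpuid(struct kvm_cpuid2 *cpuid,
                        struct kvm_cpuid_entry2 __user *entries)
{
        if (cpuid->nent < supported_cpuid_nent)
                return -E2BIG;  /* userspace retries with a larger buffer */
        if (copy_to_user(entries, supported_cpuid,
                         supported_cpuid_nent * sizeof(*supported_cpuid)))
                return -EFAULT;
        cpuid->nent = supported_cpuid_nent;
        return 0;
}

The per-vCPU cpuid_entries array could then shrink to a pointer sized
by supported_cpuid_nent, and KVM_SET_CPUID[2] could check against that
count instead of KVM_MAX_CPUID_ENTRIES. -E2BIG would then only ever
mean "buffer too small", so QEMU's enlarge-and-retry loop would
eventually succeed instead of growing without bound.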
Regards,
Andre.

--
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712