From: Paolo Bonzini
Date: Tue, 28 May 2013 18:46:24 +0200
Subject: Re: [Qemu-devel] [PATCH] target-i386: Disable CPUID_EXT_MONITOR when KVM is enabled
To: Bandan Das
Cc: Igor Mammedov, Eduardo Habkost, Andreas Färber, qemu-devel@nongnu.org
Message-ID: <51A4DF60.2080600@redhat.com>

On 28/05/2013 18:34, Bandan Das wrote:
> Eduardo Habkost writes:
>
>> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
>>> On 27/05/2013 14:09, Eduardo Habkost wrote:
>>>> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>>>>> On 25/05/2013 03:21, Bandan Das wrote:
>>>>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>>>>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>>>>>> the point: there's no point in having CPU model definitions that would
>>>>>> never work as-is with either TCG or KVM. This patch is changing the
>>>>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>>>>>> was already happening in practice.
>>>>>
>>>>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
>>>>> worth it?
>>>>
>>>> No models match a "real" CPU this way, because neither TCG nor KVM
>>>> supports all features supported by a real CPU. I ask the opposite
>>>> question: is it worth maintaining an "accurate" CPU model definition
>>>> that would never work without feature-bit tweaking on the command line?
>>>
>>> It would work with TCG. Changing TCG to KVM should not change hardware
>>> if you use "-cpu ...,enforce", so it is right that it fails when
>>> starting with KVM.
>>>
>>
>> Changing between KVM and TCG _does_ change hardware, today (with or
>> without check/enforce). All CPU models on TCG have features not
>> supported by TCG automatically removed. See the "if (!kvm_enabled())"
>> block at x86_cpu_realizefn().
>
> Yes, this is exactly why I was inclined to remove the monitor flag.
> We already have uses of kvm_enabled() to set (or remove) KVM-specific stuff,
> and this change is no different.

Do any of these affect something that is part of x86_def_t?

> I can see Paolo's point though, having
> a common definition probably makes sense too.

>> (That's why I argue that we need separate classes/names for TCG and KVM
>> modes. Otherwise our predefined models get less useful as they will
>> require low-level feature-bit fiddling on the libvirt side to make them
>> work as expected.)
>
> Agreed. From a user's perspective, I think the more a CPU model "just works",
> whether it's KVM or TCG, the better.

Yes, that's right.
But I think extending the same expectation to "-cpu ...,enforce" is not
necessary, and perhaps even wrong for "-cpu ...,check", since check only
prints a warning rather than causing a fatal error.

Paolo
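
For reference, the feature filtering discussed above works along these lines.
This is a simplified, standalone sketch rather than the actual code in
target-i386/cpu.c: CPUID_EXT_MONITOR and kvm_enabled() are real QEMU names,
but the X86CPUState struct, the TCG_EXT_FEATURES value, and filter_features()
below are illustrative stand-ins.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CPUID_EXT_MONITOR (1u << 3)   /* CPUID.1:ECX bit 3, MONITOR/MWAIT */

/* Illustrative stand-in for the mask of CPUID.1:ECX bits TCG can emulate. */
#define TCG_EXT_FEATURES  (CPUID_EXT_MONITOR /* | ... other emulated bits */)

typedef struct X86CPUState {
    uint32_t cpuid_ext_features;      /* CPUID.1:ECX as exposed to the guest */
} X86CPUState;

/* Stand-in for QEMU's kvm_enabled(); hard-coded here so the sketch runs. */
static bool kvm_enabled(void)
{
    return true;
}

static void filter_features(X86CPUState *env)
{
    if (!kvm_enabled()) {
        /* TCG: silently drop whatever TCG cannot emulate, as happens today. */
        env->cpuid_ext_features &= TCG_EXT_FEATURES;
    } else {
        /* KVM: the patch under discussion would clear MONITOR here, so
         * "-cpu ...,enforce" no longer trips over the missing bit. */
        env->cpuid_ext_features &= ~CPUID_EXT_MONITOR;
    }
}

int main(void)
{
    X86CPUState env = { .cpuid_ext_features = CPUID_EXT_MONITOR };

    filter_features(&env);
    printf("MONITOR exposed to guest: %s\n",
           (env.cpuid_ext_features & CPUID_EXT_MONITOR) ? "yes" : "no");
    return 0;
}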