Message-ID: <4B45A5B7.4040403@redhat.com>
Date: Thu, 07 Jan 2010 11:13:27 +0200
From: Dor Laor
Reply-To: dlaor@redhat.com
MIME-Version: 1.0
Subject: Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
References: <4B449467.4070606@redhat.com> <4B4494FC.1080907@codemonkey.ws>
 <4B449608.7040102@redhat.com> <4B4496E9.2030201@redhat.com>
 <20100106142231.GF2248@redhat.com> <4B449EE7.4050401@redhat.com>
 <4B44A2C6.4050504@redhat.com> <4B44A965.9040300@codemonkey.ws>
 <4B459550.6000202@redhat.com> <20100107082452.GA16032@redhat.com>
In-Reply-To: <20100107082452.GA16032@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
To: "Daniel P. Berrange"
Cc: kvm-devel, Gleb Natapov, "Michael S. Tsirkin", John Cooper,
 Alexander Graf, qemu-devel@nongnu.org, Avi Kivity

On 01/07/2010 10:24 AM, Daniel P. Berrange wrote:
> On Thu, Jan 07, 2010 at 10:03:28AM +0200, Dor Laor wrote:
>> On 01/06/2010 05:16 PM, Anthony Liguori wrote:
>>> On 01/06/2010 08:48 AM, Dor Laor wrote:
>>>> On 01/06/2010 04:32 PM, Avi Kivity wrote:
>>>>> On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
>>>>>>> We can probably default -enable-kvm to -cpu host, as long as we
>>>>>>> explain very carefully that if users wish to preserve cpu features
>>>>>>> across upgrades, they can't depend on the default.
>>>>>> Hardware upgrades or software upgrades?
>>>>>
>>>>> Yes.
>>>>>
>>>> I just want to remind all that the main motivation for using -cpu
>>>> realModelThatWasOnceShipped is to provide correct cpu emulation for
>>>> the guest. Using a random qemu|kvm64+flag1-flag2 might really cause
>>>> trouble for the guest OS or guest apps.
>>>>
>>>> On top of -cpu nehalem we can always add fancy features like x2apic,
>>>> etc.
>>>
>>> I think it boils down to: how are people going to use this?
>>>
>>> For individuals, code names like Nehalem are too obscure. From my own
>>> personal experience, even power users often have no clue whether their
>>> processor is a Nehalem or not.
>>>
>>> For management tools, Nehalem is a somewhat imprecise target because it
>>> covers a wide range of potential processors. In general, I think what
>>> we really need to do is simplify the process of going from "here's the
>>> output of /proc/cpuinfo for 100 nodes" to "what do I need to pass to
>>> qemu so that migration always works for these systems".
>>>
>>> I don't think -cpu nehalem really helps with that problem. -cpu none
>>> helps a bit, but I hope we can find something nicer.
>>
>> We can debate the exact name/model to represent the Nehalem family; I
>> don't have an issue with that, and actually Intel and AMD should define
>> it.
>>
>> There are two main motivations behind the above approach:
>>
>> 1. Sound guest cpu definition.
>>    Using a predefined model should automatically set all the relevant
>>    vendor/stepping/cpuid flags/cache sizes/etc.
>>    We just can't let every management application deal with it. It
>>    breaks guest OS/apps. For instance, MSI support in Windows guests
>>    relies on the stepping.
>>
>> 2. Simplifying life for end users and mgmt tools.
>>    qemu/kvm has the best knowledge of these low-level details. If we
>>    push it up in the stack, eventually it reaches the user. The end
>>    user, not a 'qemu-devel user', who is actually far better than the
>>    average user.
>>
>>    This means that such users would have to know what popcount is, and
>>    whether or not adding sse4.2 limits which hosts they can migrate to.
>>
>> This is exactly what VMware is doing:
>>  - Intel CPUs:
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>>  - AMD CPUs:
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
>>
>> Why should we reinvent the wheel (qemu64..)? Let's learn from their
>> experience.
>
> NB, be careful to distinguish the different levels of VMware's mgmt
> stack. In terms of guest configuration, the VMware ESX APIs require the
> management app to specify the raw CPUID masks. With VirtualCenter
> VMotion they defined a handful of common Intel/AMD CPU sets, and will
> automatically classify hosts

Live migration is the prime motivation for it. In addition, as we all
know, Windows guests do not like to find a new CPU every time they boot.

> into one of these sets and use that to specify a default CPUID mask,
> in the case that the guest does not have an explicit one in its config.
> This gives them good default, out-of-the-box behaviour, while also
> allowing mgmt apps 100% control over each guest's CPUID should they
> want it.

That's exactly what we need. (A rough command-line sketch follows at the
end of this mail.)

>
> Regards,
> Daniel
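
To make the comparison concrete, here is a rough sketch of the two styles
on the qemu command line. Note this is only a sketch: the 'Nehalem' model
name is what is being proposed in this thread, not something every qemu
binary understands today, the exact feature-flag spellings may differ,
and guest.img is just a placeholder disk image. -cpu host and the
+flag/-flag syntax already exist.

  # today: the user or mgmt tool has to spell out the flags itself
  qemu-system-x86_64 -enable-kvm -m 1024 \
      -cpu qemu64,+sse4.1,+sse4.2,+popcnt guest.img

  # proposed: a named model that sets vendor/family/model/stepping,
  # cache sizes and cpuid flags consistently, plus optional extras
  qemu-system-x86_64 -enable-kvm -m 1024 \
      -cpu Nehalem,+x2apic guest.img

Either way a mgmt app can still override individual bits, but only the
second form gives the guest a sound cpu definition by default.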