kvm.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
       [not found]                         ` <4B44A965.9040300@codemonkey.ws>
@ 2010-01-07  8:03                           ` Dor Laor
  2010-01-07  8:18                             ` Avi Kivity
  2010-01-07  8:24                             ` Daniel P. Berrange
  0 siblings, 2 replies; 20+ messages in thread
From: Dor Laor @ 2010-01-07  8:03 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Gleb Natapov, Michael S. Tsirkin, John Cooper, qemu-devel,
	Alexander Graf, Avi Kivity, kvm-devel

On 01/06/2010 05:16 PM, Anthony Liguori wrote:
> On 01/06/2010 08:48 AM, Dor Laor wrote:
>> On 01/06/2010 04:32 PM, Avi Kivity wrote:
>>> On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
>>>>> We can probably default -enable-kvm to -cpu host, as long as we
>>>>> explain
>>>>> very carefully that if users wish to preserve cpu features across
>>>>> upgrades, they can't depend on the default.
>>>> Hardware upgrades or software upgrades?
>>>
>>> Yes.
>>>
>>
>> I just want to remind all the the main motivation for using -cpu
>> realModelThatWasOnceShiped is to provide correct cpu emulation for the
>> guest. Using a random qemu|kvm64+flag1-flag2 might really cause
>> trouble for the guest OS or guest apps.
>>
>> On top of -cpu nehalem we can always add fancy features like x2apic, etc.
>
> I think it boils down to, how are people going to use this.
>
> For individuals, code names like Nehalem are too obscure. From my own
> personal experience, even power users often have no clue whether there
> processor is a Nehalem or not.
>
> For management tools, Nehalem is a somewhat imprecise target because it
> covers a wide range of potential processors. In general, I think what we
> really need to do is simplify the process of going from, here's the
> output of /proc/cpuinfo for a 100 nodes, what do I need to pass to qemu
> so that migration always works for these systems.
>
> I don't think -cpu nehalem really helps with that problem. -cpu none
> helps a bit, but I hope we can find something nicer.

We can debate about the exact name/model to represent the Nehalem 
family, I don't have an issue with that and actually Intel and Amd 
should define it.

There are two main motivations behind the above approach:
1. Sound guest cpu definition.
    Using a predefined model should automatically set all the relevant
    vendor/stepping/cpuid flags/cache sizes/etc.
    We just can let every management application deal with it. It breaks
    guest OS/apps. For instance there are MSI support in windows guest
    relay on the stepping.

2. Simplifying end user and mgmt tools.
    qemu/kvm have the best knowledge about these low levels. If we push
    it up in the stack, eventually it reaches the user. The end user,
    not a 'qemu-devel user' which is actually far better from the
    average user.

    This means that such users will have to know what is popcount and
    whether or not to limit migration on one host by adding sse4.2 or
    not.

This is exactly what vmware are doing:
  - Intel CPUs : 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
  - AMD CPUs : 
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992

Why should we invent the wheel (qemu64..)? Let's learn from their 
experience.

This is the test description of the original patch by John:


     # Intel
     # -----

     # Management layers remove pentium3 by default.
     # It primarily remains here for testing of 32-bit migration.
     #
     [0:Pentium 3 Intel
             :vmx
             :pentium3;]

     # Core 2, 65nm
     # possible option sets: (+nx,+cx16), (+nx,+cx16,+ssse3)
     #
     1:Merom
             :vmx,sse2
             :qemu64,-nx,+sse2;

     # Core2 45nm
     #
     2:Penryn
             :vmx,sse2,nx,cx16,ssse3,sse4_1
             :qemu64,+sse2,+cx16,+ssse3,+sse4_1;

     # Core i7 45/32nm
     #
     3:Nehalem
             :vmx,sse2,nx,cx16,ssse3,sse4_1,sse4_2,popcnt
             :qemu64,+sse2,+cx16,+ssse3,+sse4_1,+sse4_2,+popcnt;


     # AMD
     # ---

     # Management layers remove pentium3 by default.
     # It primarily remains here for testing of 32-bit migration.
     #
     [0:Pentium 3 AMD
             :svm
             :pentium3;]

     # Opteron 90nm stepping E1/E4/E6
     # possible option sets: (-nx) for 130nm
     #
     1:Opteron G1
             :svm,sse2,nx
             :qemu64,+sse2;

     # Opteron 90nm stepping F2/F3
     #
     2:Opteron G2
             :svm,sse2,nx,cx16,rdtscp
             :qemu64,+sse2,+cx16,+rdtscp;

     # Opteron 65/45nm
     #
     3:Opteron G3
             :svm,sse2,nx,cx16,sse4a,misalignsse,popcnt,abm
             :qemu64,+sse2,+cx16,+sse4a,+misalignsse,+popcnt,+abm;



>
> Regards,
>
> Anthony Liguori
>
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  8:03                           ` [Qemu-devel] cpuid problem in upstream qemu with kvm Dor Laor
@ 2010-01-07  8:18                             ` Avi Kivity
  2010-01-07  9:11                               ` Dor Laor
  2010-01-07  8:24                             ` Daniel P. Berrange
  1 sibling, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2010-01-07  8:18 UTC (permalink / raw)
  To: dlaor
  Cc: Anthony Liguori, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	qemu-devel, Alexander Graf, kvm-devel

On 01/07/2010 10:03 AM, Dor Laor wrote:
>
> We can debate about the exact name/model to represent the Nehalem 
> family, I don't have an issue with that and actually Intel and Amd 
> should define it.

AMD and Intel already defined their names (in cat /proc/cpuinfo).  They 
don't define families, the whole idea is to segment the market.

>
> There are two main motivations behind the above approach:
> 1. Sound guest cpu definition.
>    Using a predefined model should automatically set all the relevant
>    vendor/stepping/cpuid flags/cache sizes/etc.
>    We just can let every management application deal with it. It breaks
>    guest OS/apps. For instance there are MSI support in windows guest
>    relay on the stepping.
>
> 2. Simplifying end user and mgmt tools.
>    qemu/kvm have the best knowledge about these low levels. If we push
>    it up in the stack, eventually it reaches the user. The end user,
>    not a 'qemu-devel user' which is actually far better from the
>    average user.
>
>    This means that such users will have to know what is popcount and
>    whether or not to limit migration on one host by adding sse4.2 or
>    not.
>
> This is exactly what vmware are doing:
>  - Intel CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991 
>
>  - AMD CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992 
>

They don't have to deal with different qemu and kvm versions.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  8:03                           ` [Qemu-devel] cpuid problem in upstream qemu with kvm Dor Laor
  2010-01-07  8:18                             ` Avi Kivity
@ 2010-01-07  8:24                             ` Daniel P. Berrange
  2010-01-07  9:13                               ` Dor Laor
  1 sibling, 1 reply; 20+ messages in thread
From: Daniel P. Berrange @ 2010-01-07  8:24 UTC (permalink / raw)
  To: Dor Laor
  Cc: Anthony Liguori, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	qemu-devel, Alexander Graf, Avi Kivity, kvm-devel

On Thu, Jan 07, 2010 at 10:03:28AM +0200, Dor Laor wrote:
> On 01/06/2010 05:16 PM, Anthony Liguori wrote:
> >On 01/06/2010 08:48 AM, Dor Laor wrote:
> >>On 01/06/2010 04:32 PM, Avi Kivity wrote:
> >>>On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
> >>>>>We can probably default -enable-kvm to -cpu host, as long as we
> >>>>>explain
> >>>>>very carefully that if users wish to preserve cpu features across
> >>>>>upgrades, they can't depend on the default.
> >>>>Hardware upgrades or software upgrades?
> >>>
> >>>Yes.
> >>>
> >>
> >>I just want to remind all the the main motivation for using -cpu
> >>realModelThatWasOnceShiped is to provide correct cpu emulation for the
> >>guest. Using a random qemu|kvm64+flag1-flag2 might really cause
> >>trouble for the guest OS or guest apps.
> >>
> >>On top of -cpu nehalem we can always add fancy features like x2apic, etc.
> >
> >I think it boils down to, how are people going to use this.
> >
> >For individuals, code names like Nehalem are too obscure. From my own
> >personal experience, even power users often have no clue whether there
> >processor is a Nehalem or not.
> >
> >For management tools, Nehalem is a somewhat imprecise target because it
> >covers a wide range of potential processors. In general, I think what we
> >really need to do is simplify the process of going from, here's the
> >output of /proc/cpuinfo for a 100 nodes, what do I need to pass to qemu
> >so that migration always works for these systems.
> >
> >I don't think -cpu nehalem really helps with that problem. -cpu none
> >helps a bit, but I hope we can find something nicer.
> 
> We can debate about the exact name/model to represent the Nehalem 
> family, I don't have an issue with that and actually Intel and Amd 
> should define it.
> 
> There are two main motivations behind the above approach:
> 1. Sound guest cpu definition.
>    Using a predefined model should automatically set all the relevant
>    vendor/stepping/cpuid flags/cache sizes/etc.
>    We just can let every management application deal with it. It breaks
>    guest OS/apps. For instance there are MSI support in windows guest
>    relay on the stepping.
> 
> 2. Simplifying end user and mgmt tools.
>    qemu/kvm have the best knowledge about these low levels. If we push
>    it up in the stack, eventually it reaches the user. The end user,
>    not a 'qemu-devel user' which is actually far better from the
>    average user.
> 
>    This means that such users will have to know what is popcount and
>    whether or not to limit migration on one host by adding sse4.2 or
>    not.
> 
> This is exactly what vmware are doing:
>  - Intel CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>  - AMD CPUs : 
> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
> 
> Why should we invent the wheel (qemu64..)? Let's learn from their 
> experience.

NB, be careful to distinguish the different levels of VMwares mgmt stack. In
terms of guest configuration, VMWare ESX APIs require the management app to
specify the raw CPUID masks. With VirtualCenter VMotion they defined this 
handful of common Intel/AMD CPU sets, and will automatically classify hosts
into one  of these sets and use that to specify a default CPUID mask, in the
case that the guest does not have an explicit one in its config. This gives
them good default, out-of-the-box behaviour, while also allowing mgmt apps
100% control over each guest's CPUID should they want it.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  8:18                             ` Avi Kivity
@ 2010-01-07  9:11                               ` Dor Laor
  2010-01-07  9:24                                 ` Avi Kivity
  0 siblings, 1 reply; 20+ messages in thread
From: Dor Laor @ 2010-01-07  9:11 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Anthony Liguori, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	qemu-devel, Alexander Graf, kvm-devel

On 01/07/2010 10:18 AM, Avi Kivity wrote:
> On 01/07/2010 10:03 AM, Dor Laor wrote:
>>
>> We can debate about the exact name/model to represent the Nehalem
>> family, I don't have an issue with that and actually Intel and Amd
>> should define it.
>
> AMD and Intel already defined their names (in cat /proc/cpuinfo). They
> don't define families, the whole idea is to segment the market.

The idea here is to minimize the number of models we should have the 
following range for Intel for example:
   pentium3 - merom -  penry - Nehalem - host - kvm/qemu64
So we're supplying wide range of cpus, p3 for maximum flexibility and 
migration, nehalem for performance and migration, host for maximum 
performance and qemu/kvm64 for custom maid.

>
>>
>> There are two main motivations behind the above approach:
>> 1. Sound guest cpu definition.
>> Using a predefined model should automatically set all the relevant
>> vendor/stepping/cpuid flags/cache sizes/etc.
>> We just can let every management application deal with it. It breaks
>> guest OS/apps. For instance there are MSI support in windows guest
>> relay on the stepping.
>>
>> 2. Simplifying end user and mgmt tools.
>> qemu/kvm have the best knowledge about these low levels. If we push
>> it up in the stack, eventually it reaches the user. The end user,
>> not a 'qemu-devel user' which is actually far better from the
>> average user.
>>
>> This means that such users will have to know what is popcount and
>> whether or not to limit migration on one host by adding sse4.2 or
>> not.
>>
>> This is exactly what vmware are doing:
>> - Intel CPUs :
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>>
>> - AMD CPUs :
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
>>
>
> They don't have to deal with different qemu and kvm versions.
>

Both our customers - the end users. It's not their problem.
IMO what's missing today is a safe and sound cpu emulation that is 
simply and friendly to represent. qemu64,+popcount is not simple for the 
end user. There is no reason to through it on higher level mgmt.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  8:24                             ` Daniel P. Berrange
@ 2010-01-07  9:13                               ` Dor Laor
  0 siblings, 0 replies; 20+ messages in thread
From: Dor Laor @ 2010-01-07  9:13 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Anthony Liguori, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	qemu-devel, Alexander Graf, Avi Kivity, kvm-devel

On 01/07/2010 10:24 AM, Daniel P. Berrange wrote:
> On Thu, Jan 07, 2010 at 10:03:28AM +0200, Dor Laor wrote:
>> On 01/06/2010 05:16 PM, Anthony Liguori wrote:
>>> On 01/06/2010 08:48 AM, Dor Laor wrote:
>>>> On 01/06/2010 04:32 PM, Avi Kivity wrote:
>>>>> On 01/06/2010 04:22 PM, Michael S. Tsirkin wrote:
>>>>>>> We can probably default -enable-kvm to -cpu host, as long as we
>>>>>>> explain
>>>>>>> very carefully that if users wish to preserve cpu features across
>>>>>>> upgrades, they can't depend on the default.
>>>>>> Hardware upgrades or software upgrades?
>>>>>
>>>>> Yes.
>>>>>
>>>>
>>>> I just want to remind all the the main motivation for using -cpu
>>>> realModelThatWasOnceShiped is to provide correct cpu emulation for the
>>>> guest. Using a random qemu|kvm64+flag1-flag2 might really cause
>>>> trouble for the guest OS or guest apps.
>>>>
>>>> On top of -cpu nehalem we can always add fancy features like x2apic, etc.
>>>
>>> I think it boils down to, how are people going to use this.
>>>
>>> For individuals, code names like Nehalem are too obscure. From my own
>>> personal experience, even power users often have no clue whether there
>>> processor is a Nehalem or not.
>>>
>>> For management tools, Nehalem is a somewhat imprecise target because it
>>> covers a wide range of potential processors. In general, I think what we
>>> really need to do is simplify the process of going from, here's the
>>> output of /proc/cpuinfo for a 100 nodes, what do I need to pass to qemu
>>> so that migration always works for these systems.
>>>
>>> I don't think -cpu nehalem really helps with that problem. -cpu none
>>> helps a bit, but I hope we can find something nicer.
>>
>> We can debate about the exact name/model to represent the Nehalem
>> family, I don't have an issue with that and actually Intel and Amd
>> should define it.
>>
>> There are two main motivations behind the above approach:
>> 1. Sound guest cpu definition.
>>     Using a predefined model should automatically set all the relevant
>>     vendor/stepping/cpuid flags/cache sizes/etc.
>>     We just can let every management application deal with it. It breaks
>>     guest OS/apps. For instance there are MSI support in windows guest
>>     relay on the stepping.
>>
>> 2. Simplifying end user and mgmt tools.
>>     qemu/kvm have the best knowledge about these low levels. If we push
>>     it up in the stack, eventually it reaches the user. The end user,
>>     not a 'qemu-devel user' which is actually far better from the
>>     average user.
>>
>>     This means that such users will have to know what is popcount and
>>     whether or not to limit migration on one host by adding sse4.2 or
>>     not.
>>
>> This is exactly what vmware are doing:
>>   - Intel CPUs :
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>>   - AMD CPUs :
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
>>
>> Why should we invent the wheel (qemu64..)? Let's learn from their
>> experience.
>
> NB, be careful to distinguish the different levels of VMwares mgmt stack. In
> terms of guest configuration, VMWare ESX APIs require the management app to
> specify the raw CPUID masks. With VirtualCenter VMotion they defined this
> handful of common Intel/AMD CPU sets, and will automatically classify hosts

Live migration is the prime motivation for it.
In addition, as we all know windows guest do not like to find a new cpu 
every time they boot.

> into one  of these sets and use that to specify a default CPUID mask, in the
> case that the guest does not have an explicit one in its config. This gives
> them good default, out-of-the-box behaviour, while also allowing mgmt apps
> 100% control over each guest's CPUID should they want it.

That's exactly what we need.

>
> Regards,
> Daniel


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  9:11                               ` Dor Laor
@ 2010-01-07  9:24                                 ` Avi Kivity
  2010-01-07  9:40                                   ` Dor Laor
  0 siblings, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2010-01-07  9:24 UTC (permalink / raw)
  To: dlaor
  Cc: Anthony Liguori, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	qemu-devel, Alexander Graf, kvm-devel

On 01/07/2010 11:11 AM, Dor Laor wrote:
> On 01/07/2010 10:18 AM, Avi Kivity wrote:
>> On 01/07/2010 10:03 AM, Dor Laor wrote:
>>>
>>> We can debate about the exact name/model to represent the Nehalem
>>> family, I don't have an issue with that and actually Intel and Amd
>>> should define it.
>>
>> AMD and Intel already defined their names (in cat /proc/cpuinfo). They
>> don't define families, the whole idea is to segment the market.
>
> The idea here is to minimize the number of models we should have the 
> following range for Intel for example:
>   pentium3 - merom -  penry - Nehalem - host - kvm/qemu64
> So we're supplying wide range of cpus, p3 for maximum flexibility and 
> migration, nehalem for performance and migration, host for maximum 
> performance and qemu/kvm64 for custom maid.

There's no such thing as Nehalem.

>>>
>>> This is exactly what vmware are doing:
>>> - Intel CPUs :
>>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991 
>>>
>>>
>>> - AMD CPUs :
>>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992 
>>>
>>>
>>
>> They don't have to deal with different qemu and kvm versions.
>>
>
> Both our customers - the end users. It's not their problem.
> IMO what's missing today is a safe and sound cpu emulation that is 
> simply and friendly to represent. qemu64,+popcount is not simple for 
> the end user. There is no reason to through it on higher level mgmt.

There's no simple solution except to restrict features to what was 
available on the first processors.



-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  9:24                                 ` Avi Kivity
@ 2010-01-07  9:40                                   ` Dor Laor
  2010-01-07 11:39                                     ` Anthony Liguori
  2010-01-07 11:59                                     ` Avi Kivity
  0 siblings, 2 replies; 20+ messages in thread
From: Dor Laor @ 2010-01-07  9:40 UTC (permalink / raw)
  To: Avi Kivity
  Cc: kvm-devel, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	Alexander Graf, qemu-devel

On 01/07/2010 11:24 AM, Avi Kivity wrote:
> On 01/07/2010 11:11 AM, Dor Laor wrote:
>> On 01/07/2010 10:18 AM, Avi Kivity wrote:
>>> On 01/07/2010 10:03 AM, Dor Laor wrote:
>>>>
>>>> We can debate about the exact name/model to represent the Nehalem
>>>> family, I don't have an issue with that and actually Intel and Amd
>>>> should define it.
>>>
>>> AMD and Intel already defined their names (in cat /proc/cpuinfo). They
>>> don't define families, the whole idea is to segment the market.
>>
>> The idea here is to minimize the number of models we should have the
>> following range for Intel for example:
>> pentium3 - merom - penry - Nehalem - host - kvm/qemu64
>> So we're supplying wide range of cpus, p3 for maximum flexibility and
>> migration, nehalem for performance and migration, host for maximum
>> performance and qemu/kvm64 for custom maid.
>
> There's no such thing as Nehalem.

Intel were ok with it. Again, you can name is corei7 or xeon34234234234, 
I don't care, the principle remains the same.

>
>>>>
>>>> This is exactly what vmware are doing:
>>>> - Intel CPUs :
>>>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991
>>>>
>>>>
>>>> - AMD CPUs :
>>>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1992
>>>>
>>>>
>>>
>>> They don't have to deal with different qemu and kvm versions.
>>>
>>
>> Both our customers - the end users. It's not their problem.
>> IMO what's missing today is a safe and sound cpu emulation that is
>> simply and friendly to represent. qemu64,+popcount is not simple for
>> the end user. There is no reason to through it on higher level mgmt.
>
> There's no simple solution except to restrict features to what was
> available on the first processors.

What's not simple about the above 4 options?
What's a better alternative (that insures users understand it and use it 
and guest msi and even skype application is happy about it)?



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  9:40                                   ` Dor Laor
@ 2010-01-07 11:39                                     ` Anthony Liguori
  2010-01-07 11:44                                       ` Dor Laor
  2010-01-07 11:59                                     ` Avi Kivity
  1 sibling, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-01-07 11:39 UTC (permalink / raw)
  To: dlaor
  Cc: Avi Kivity, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 03:40 AM, Dor Laor wrote:
>> There's no simple solution except to restrict features to what was
>> available on the first processors.
>
> What's not simple about the above 4 options?
> What's a better alternative (that insures users understand it and use 
> it and guest msi and even skype application is happy about it)?

Even if you have -cpu Nehalem, different versions of the KVM kernel 
module may additionally filter cpuid flags.

So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary 
to say:

(2.6.33) qemu -cpu Nehalem,-syscall
(2.6.18) qemu -cpu Nehalem

In order to be compatible.

Regards,

Anthony Liguori


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 11:39                                     ` Anthony Liguori
@ 2010-01-07 11:44                                       ` Dor Laor
  2010-01-07 12:00                                         ` Avi Kivity
  0 siblings, 1 reply; 20+ messages in thread
From: Dor Laor @ 2010-01-07 11:44 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Avi Kivity, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 01:39 PM, Anthony Liguori wrote:
> On 01/07/2010 03:40 AM, Dor Laor wrote:
>>> There's no simple solution except to restrict features to what was
>>> available on the first processors.
>>
>> What's not simple about the above 4 options?
>> What's a better alternative (that insures users understand it and use
>> it and guest msi and even skype application is happy about it)?
>
> Even if you have -cpu Nehalem, different versions of the KVM kernel
> module may additionally filter cpuid flags.
>
> So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
> to say:
>
> (2.6.33) qemu -cpu Nehalem,-syscall
> (2.6.18) qemu -cpu Nehalem

Or let qemu do it automatically for you.

>
> In order to be compatible.
>
> Regards,
>
> Anthony Liguori
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07  9:40                                   ` Dor Laor
  2010-01-07 11:39                                     ` Anthony Liguori
@ 2010-01-07 11:59                                     ` Avi Kivity
  2010-01-07 12:17                                       ` Dor Laor
  1 sibling, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2010-01-07 11:59 UTC (permalink / raw)
  To: dlaor
  Cc: kvm-devel, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	Alexander Graf, qemu-devel

On 01/07/2010 11:40 AM, Dor Laor wrote:
>> There's no such thing as Nehalem.
>
>
> Intel were ok with it. Again, you can name is corei7 or 
> xeon34234234234, I don't care, the principle remains the same.
>

There are several processors belonging to the Nehalem family and each 
have different features.

>
> What's not simple about the above 4 options?

If a qemu/kvm/processor combo doesn't support a feature (say, nx) we 
have to remove it from the migration pool even if the Nehalem processor 
class says it's included.  Or else not admit that combination into the 
migration pool in the first place.

> What's a better alternative (that insures users understand it and use 
> it and guest msi and even skype application is happy about it)?
>

Have management scan new nodes and classify them.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 11:44                                       ` Dor Laor
@ 2010-01-07 12:00                                         ` Avi Kivity
  2010-01-07 12:20                                           ` Dor Laor
  0 siblings, 1 reply; 20+ messages in thread
From: Avi Kivity @ 2010-01-07 12:00 UTC (permalink / raw)
  To: dlaor
  Cc: Anthony Liguori, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 01:44 PM, Dor Laor wrote:
>> So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
>> to say:
>>
>> (2.6.33) qemu -cpu Nehalem,-syscall
>> (2.6.18) qemu -cpu Nehalem
>
>
> Or let qemu do it automatically for you.

qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on 
another node.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 11:59                                     ` Avi Kivity
@ 2010-01-07 12:17                                       ` Dor Laor
  0 siblings, 0 replies; 20+ messages in thread
From: Dor Laor @ 2010-01-07 12:17 UTC (permalink / raw)
  To: Avi Kivity
  Cc: kvm-devel, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	Alexander Graf, qemu-devel

On 01/07/2010 01:59 PM, Avi Kivity wrote:
> On 01/07/2010 11:40 AM, Dor Laor wrote:
>>> There's no such thing as Nehalem.
>>
>>
>> Intel were ok with it. Again, you can name is corei7 or
>> xeon34234234234, I don't care, the principle remains the same.
>>
>
> There are several processors belonging to the Nehalem family and each
> have different features.

We can start with the older one, and once it get's that important add 
the newer ones.
Until that happens users can either use -host or have 
nehalem,+sse4_2,+newFeature

>
>>
>> What's not simple about the above 4 options?
>
> If a qemu/kvm/processor combo doesn't support a feature (say, nx) we
> have to remove it from the migration pool even if the Nehalem processor
> class says it's included. Or else not admit that combination into the
> migration pool in the first place.

It still management role to compute least common denominator for live 
migration sets.

btw: for nx disabled bios, we should just ignore it from product. Qemu 
direct users should add ,-nx

>
>> What's a better alternative (that insures users understand it and use
>> it and guest msi and even skype application is happy about it)?
>>
>
> Have management scan new nodes and classify them.
>

Of course. Just don't let management control stepping automatically, it 
should only be made for very advanced users and only manually.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:00                                         ` Avi Kivity
@ 2010-01-07 12:20                                           ` Dor Laor
  2010-01-07 12:33                                             ` Anthony Liguori
  0 siblings, 1 reply; 20+ messages in thread
From: Dor Laor @ 2010-01-07 12:20 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Anthony Liguori, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 02:00 PM, Avi Kivity wrote:
> On 01/07/2010 01:44 PM, Dor Laor wrote:
>>> So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
>>> to say:
>>>
>>> (2.6.33) qemu -cpu Nehalem,-syscall
>>> (2.6.18) qemu -cpu Nehalem
>>
>>
>> Or let qemu do it automatically for you.
>
> qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on
> another node.
>

We can live with it, either have qemu realize the kernel version out of 
another existing feature or query uname.

Alternatively, the matching libvirt package can be the one adding or 
removing it in the right distribution.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:20                                           ` Dor Laor
@ 2010-01-07 12:33                                             ` Anthony Liguori
  2010-01-07 12:40                                               ` Avi Kivity
  0 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-01-07 12:33 UTC (permalink / raw)
  To: dlaor
  Cc: Avi Kivity, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 06:20 AM, Dor Laor wrote:
> On 01/07/2010 02:00 PM, Avi Kivity wrote:
>> On 01/07/2010 01:44 PM, Dor Laor wrote:
>>>> So if you had a 2.6.18 kernel and a 2.6.33 kernel, it may be necessary
>>>> to say:
>>>>
>>>> (2.6.33) qemu -cpu Nehalem,-syscall
>>>> (2.6.18) qemu -cpu Nehalem
>>>
>>>
>>> Or let qemu do it automatically for you.
>>
>> qemu on 2.6.33 doesn't know that you're running qemu on 2.6.18 on
>> another node.
>>
>
> We can live with it, either have qemu realize the kernel version out 
> of another existing feature or query uname.
>
> Alternatively, the matching libvirt package can be the one adding or 
> removing it in the right distribution.

There's another option.

Make cpuid information part of live migration protocol, and then support 
something like -cpu Xeon-3550.  We would remember the exact cpuid mask 
we present to the guest and then we could validate that we can obtain 
the same mask on the destination.

Regards,

Anthony Liguori

> -- 
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:33                                             ` Anthony Liguori
@ 2010-01-07 12:40                                               ` Avi Kivity
  2010-01-07 12:47                                                 ` Daniel P. Berrange
                                                                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Avi Kivity @ 2010-01-07 12:40 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: dlaor, kvm-devel, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	Alexander Graf, qemu-devel

On 01/07/2010 02:33 PM, Anthony Liguori wrote:
>
> There's another option.
>
> Make cpuid information part of live migration protocol, and then 
> support something like -cpu Xeon-3550.  We would remember the exact 
> cpuid mask we present to the guest and then we could validate that we 
> can obtain the same mask on the destination.

Currently, our policy is to only migrate dynamic (from the guest's point 
of view) state, and specify static state on the command line [1].

I think your suggestion makes a lot of sense, but I'd like to expand it 
to move all guest state, whether dynamic or static.  So '-m 1G' would be 
migrated as well (but not -mem-path).  Similarly, in -drive 
file=...,if=ide,index=1, everything but file=... would be migrated.

This has an advantage wrt hotplug: since qemu is responsible for 
migrating all guest visible information, the migrator is no longer 
responsible for replaying hotplug events in the exact sequence they 
happened.

In short, I think we should apply your suggestion as broadly as possible.

[1] cpuid state is actually dynamic; repeated cpuid instruction 
execution with the same operands can return different results.  kvm 
supports querying and setting this state.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:40                                               ` Avi Kivity
@ 2010-01-07 12:47                                                 ` Daniel P. Berrange
  2010-01-07 12:50                                                   ` Avi Kivity
  2010-01-07 13:14                                                 ` Anthony Liguori
  2010-01-11 13:26                                                 ` Markus Armbruster
  2 siblings, 1 reply; 20+ messages in thread
From: Daniel P. Berrange @ 2010-01-07 12:47 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Anthony Liguori, dlaor, kvm-devel, Gleb Natapov,
	Michael S. Tsirkin, John Cooper, Alexander Graf, qemu-devel

On Thu, Jan 07, 2010 at 02:40:34PM +0200, Avi Kivity wrote:
> On 01/07/2010 02:33 PM, Anthony Liguori wrote:
> >
> >There's another option.
> >
> >Make cpuid information part of live migration protocol, and then 
> >support something like -cpu Xeon-3550.  We would remember the exact 
> >cpuid mask we present to the guest and then we could validate that we 
> >can obtain the same mask on the destination.
> 
> Currently, our policy is to only migrate dynamic (from the guest's point 
> of view) state, and specify static state on the command line [1].
> 
> I think your suggestion makes a lot of sense, but I'd like to expand it 
> to move all guest state, whether dynamic or static.  So '-m 1G' would be 
> migrated as well (but not -mem-path).  Similarly, in -drive 
> file=...,if=ide,index=1, everything but file=... would be migrated.
> 
> This has an advantage wrt hotplug: since qemu is responsible for 
> migrating all guest visible information, the migrator is no longer 
> responsible for replaying hotplug events in the exact sequence they 
> happened.

With the introduction of the new -device spport, there's no need to
replay hotplug events in order any more. Instead just use static
PCI addresses when starting the guest, and the same addresses after
migration. You could argue that QEMU should preserve the addressing
automatically during migration, but apps need to do it manually
already to keep addreses stable across power-offs, so doing it manually
across migration too is no extra burden.

Regards,
Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:47                                                 ` Daniel P. Berrange
@ 2010-01-07 12:50                                                   ` Avi Kivity
  0 siblings, 0 replies; 20+ messages in thread
From: Avi Kivity @ 2010-01-07 12:50 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: Anthony Liguori, dlaor, kvm-devel, Gleb Natapov,
	Michael S. Tsirkin, John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 02:47 PM, Daniel P. Berrange wrote:
>
> With the introduction of the new -device spport, there's no need to
> replay hotplug events in order any more. Instead just use static
> PCI addresses when starting the guest, and the same addresses after
> migration. You could argue that QEMU should preserve the addressing
> automatically during migration, but apps need to do it manually
> already to keep addreses stable across power-offs, so doing it manually
> across migration too is no extra burden.
>
>    

That's true - shutdown and startup are an equivalent problem to live 
migration from that point of view.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:40                                               ` Avi Kivity
  2010-01-07 12:47                                                 ` Daniel P. Berrange
@ 2010-01-07 13:14                                                 ` Anthony Liguori
  2010-01-07 13:42                                                   ` Dor Laor
  2010-01-11 13:26                                                 ` Markus Armbruster
  2 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-01-07 13:14 UTC (permalink / raw)
  To: Avi Kivity
  Cc: dlaor, kvm-devel, Gleb Natapov, Michael S. Tsirkin, John Cooper,
	Alexander Graf, qemu-devel

On 01/07/2010 06:40 AM, Avi Kivity wrote:
> On 01/07/2010 02:33 PM, Anthony Liguori wrote:
>>
>> There's another option.
>>
>> Make cpuid information part of live migration protocol, and then 
>> support something like -cpu Xeon-3550.  We would remember the exact 
>> cpuid mask we present to the guest and then we could validate that we 
>> can obtain the same mask on the destination.
>
> Currently, our policy is to only migrate dynamic (from the guest's 
> point of view) state, and specify static state on the command line [1].
>
> I think your suggestion makes a lot of sense, but I'd like to expand 
> it to move all guest state, whether dynamic or static.  So '-m 1G' 
> would be migrated as well (but not -mem-path).  Similarly, in -drive 
> file=...,if=ide,index=1, everything but file=... would be migrated.

Yes, I agree with this and it should be in the form of an fdt.  This 
means we need full qdev conversion.

But I think cpuid is somewhere in the middle with respect to static vs. 
dynamic.  For instance, -cpu host is very dynamic in that you get very 
difficult results on different systems.  Likewise, because of kvm 
filtering, even -cpu qemu64 can be dynamic.

So if we didn't have filtering and -cpu host, I'd agree that it's 
totally static but I think in the current state, it's dynamic.

> This has an advantage wrt hotplug: since qemu is responsible for 
> migrating all guest visible information, the migrator is no longer 
> responsible for replaying hotplug events in the exact sequence they 
> happened.

Yup, 100% in agreement as a long term goal.

> In short, I think we should apply your suggestion as broadly as possible.
>
> [1] cpuid state is actually dynamic; repeated cpuid instruction 
> execution with the same operands can return different results.  kvm 
> supports querying and setting this state.

Yes, and we save some cpuid state in cpu.  We just don't save all of it.

Regards,

Anthony Liguori


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 13:14                                                 ` Anthony Liguori
@ 2010-01-07 13:42                                                   ` Dor Laor
  0 siblings, 0 replies; 20+ messages in thread
From: Dor Laor @ 2010-01-07 13:42 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Avi Kivity, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, Alexander Graf, qemu-devel

On 01/07/2010 03:14 PM, Anthony Liguori wrote:
> On 01/07/2010 06:40 AM, Avi Kivity wrote:
>> On 01/07/2010 02:33 PM, Anthony Liguori wrote:
>>>
>>> There's another option.
>>>
>>> Make cpuid information part of live migration protocol, and then
>>> support something like -cpu Xeon-3550. We would remember the exact
>>> cpuid mask we present to the guest and then we could validate that we
>>> can obtain the same mask on the destination.

It solves controlling the destination qemu execution all right but does 
not change the initial spawning of the original guest - to know whether 
,-syscall is needed or not.

Anyway, I'm in favor of it too.

>>
>> Currently, our policy is to only migrate dynamic (from the guest's
>> point of view) state, and specify static state on the command line [1].
>>
>> I think your suggestion makes a lot of sense, but I'd like to expand
>> it to move all guest state, whether dynamic or static. So '-m 1G'
>> would be migrated as well (but not -mem-path). Similarly, in -drive
>> file=...,if=ide,index=1, everything but file=... would be migrated.
>
> Yes, I agree with this and it should be in the form of an fdt. This
> means we need full qdev conversion.
>
> But I think cpuid is somewhere in the middle with respect to static vs.
> dynamic. For instance, -cpu host is very dynamic in that you get very
> difficult results on different systems. Likewise, because of kvm
> filtering, even -cpu qemu64 can be dynamic.
>
> So if we didn't have filtering and -cpu host, I'd agree that it's
> totally static but I think in the current state, it's dynamic.
>
>> This has an advantage wrt hotplug: since qemu is responsible for
>> migrating all guest visible information, the migrator is no longer
>> responsible for replaying hotplug events in the exact sequence they
>> happened.
>
> Yup, 100% in agreement as a long term goal.
>
>> In short, I think we should apply your suggestion as broadly as possible.
>>
>> [1] cpuid state is actually dynamic; repeated cpuid instruction
>> execution with the same operands can return different results. kvm
>> supports querying and setting this state.
>
> Yes, and we save some cpuid state in cpu. We just don't save all of it.
>
> Regards,
>
> Anthony Liguori
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [Qemu-devel] cpuid problem in upstream qemu with kvm
  2010-01-07 12:40                                               ` Avi Kivity
  2010-01-07 12:47                                                 ` Daniel P. Berrange
  2010-01-07 13:14                                                 ` Anthony Liguori
@ 2010-01-11 13:26                                                 ` Markus Armbruster
  2 siblings, 0 replies; 20+ messages in thread
From: Markus Armbruster @ 2010-01-11 13:26 UTC (permalink / raw)
  To: Avi Kivity
  Cc: Anthony Liguori, kvm-devel, Gleb Natapov, Michael S. Tsirkin,
	John Cooper, dlaor, Alexander Graf, qemu-devel

Avi Kivity <avi@redhat.com> writes:

> On 01/07/2010 02:33 PM, Anthony Liguori wrote:
>>
>> There's another option.
>>
>> Make cpuid information part of live migration protocol, and then
>> support something like -cpu Xeon-3550.  We would remember the exact
>> cpuid mask we present to the guest and then we could validate that
>> we can obtain the same mask on the destination.
>
> Currently, our policy is to only migrate dynamic (from the guest's
> point of view) state, and specify static state on the command line
> [1].
>
> I think your suggestion makes a lot of sense, but I'd like to expand
> it to move all guest state, whether dynamic or static.  So '-m 1G'
> would be migrated as well (but not -mem-path).  Similarly, in -drive
> file=...,if=ide,index=1, everything but file=... would be migrated.

Becomes a bit clearer with the new way to configure stuff:

  -drive if=none,id=DRIVE-ID,...        <--- host, don't migrate
  -device ide=drive,drive=DRIVE-ID,...  <--- guest, do migrate

[...]

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2010-01-11 13:26 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
     [not found] <4B30EFDF.4060202@codemonkey.ws>
     [not found] ` <4B31F1BA.10005@redhat.com>
     [not found]   ` <4B43D4E2.9050102@codemonkey.ws>
     [not found]     ` <4B4402B1.1030605@redhat.com>
     [not found]       ` <4B448F36.8030605@codemonkey.ws>
     [not found]         ` <4B449467.4070606@redhat.com>
     [not found]           ` <4B4494FC.1080907@codemonkey.ws>
     [not found]             ` <4B449608.7040102@redhat.com>
     [not found]               ` <F67360FA-5DDF-41A1-87AA-726D13A87AD5@suse.de>
     [not found]                 ` <4B4496E9.2030201@redhat.com>
     [not found]                   ` <20100106142231.GF2248@redhat.com>
     [not found]                     ` <4B449EE7.4050401@redhat.com>
     [not found]                       ` <4B44A2C6.4050504@redhat.com>
     [not found]                         ` <4B44A965.9040300@codemonkey.ws>
2010-01-07  8:03                           ` [Qemu-devel] cpuid problem in upstream qemu with kvm Dor Laor
2010-01-07  8:18                             ` Avi Kivity
2010-01-07  9:11                               ` Dor Laor
2010-01-07  9:24                                 ` Avi Kivity
2010-01-07  9:40                                   ` Dor Laor
2010-01-07 11:39                                     ` Anthony Liguori
2010-01-07 11:44                                       ` Dor Laor
2010-01-07 12:00                                         ` Avi Kivity
2010-01-07 12:20                                           ` Dor Laor
2010-01-07 12:33                                             ` Anthony Liguori
2010-01-07 12:40                                               ` Avi Kivity
2010-01-07 12:47                                                 ` Daniel P. Berrange
2010-01-07 12:50                                                   ` Avi Kivity
2010-01-07 13:14                                                 ` Anthony Liguori
2010-01-07 13:42                                                   ` Dor Laor
2010-01-11 13:26                                                 ` Markus Armbruster
2010-01-07 11:59                                     ` Avi Kivity
2010-01-07 12:17                                       ` Dor Laor
2010-01-07  8:24                             ` Daniel P. Berrange
2010-01-07  9:13                               ` Dor Laor

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).