public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* KVM Processor cache size
@ 2010-08-02 11:45 Ricardo Martins
  2010-08-02 12:51 ` Andre Przywara
  0 siblings, 1 reply; 20+ messages in thread
From: Ricardo Martins @ 2010-08-02 11:45 UTC (permalink / raw)
  To: kvm

Hi guys,

I'm having a problem with kvm: my physical machine has two Xeon E5520
processors, each with 8 MB of cache, and when I run "cat /proc/cpuinfo"
Linux shows 16 identical processors.

processor       : 15
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
stepping        : 5
cpu MHz         : 1596.000
cache size      : 8192 KB
physical id     : 1
siblings        : 8
core id         : 3
cpu cores       : 4
apicid          : 23
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall
nx rdtscp lm constant_tsc ida nonstop_tsc pni monitor ds_cpl vmx est
tm2 ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
bogomips        : 4533.28
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management: [8]


The problem is that when I run the same command in the virtual
machine, Linux shows the processors with only 32 KB of cache; I
believe something is wrong.

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 6
model name      : QEMU Virtual CPU version 0.9.1
stepping        : 3
cpu MHz         : 2266.575
cache size      : 32 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 4
wp              : yes
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm up pni
bogomips        : 4540.34
clflush size    : 64
power management:

On the host:
[root@nissan ~]# lsmod | grep kvm
kvm_intel              86920  1
kvm                   226208  2 ksm,kvm_intel

On virtual Machine:
linea:/# x86info
x86info v1.21.  Dave Jones 2001-2007


Found 1 CPU, but found 16d CPUs in MPTable.
--------------------------------------------------------------------------
Family: 6 Model: 6 Stepping: 3 Type: 0 Brand: 0
CPU Model: Celeron / Mobile Pentium II Original OEM
Feature flags:
 fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflsh mmx fxsr sse sse2
Extended feature flags:
 sse3 [31]
 [0] [2] [3] [4] [5] [6] [7] [8] [9] SYSCALL [13] [15] [16] xd [23] [24] em64t
Cache info
 L1 Instruction cache: 32KB, 8-way associative. 64 byte line size.
 L1 Data cache: 32KB, 8-way associative. 64 byte line size.
 L2 unified cache: 2MB, sectored, 8-way associative. 64 byte line size.
TLB info


I searched for someone who had the same problem but found nothing.

Please help.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 11:45 KVM Processor cache size Ricardo Martins
@ 2010-08-02 12:51 ` Andre Przywara
  2010-08-02 13:08   ` Avi Kivity
  2010-08-02 13:49   ` Ulrich Drepper
  0 siblings, 2 replies; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 12:51 UTC (permalink / raw)
  To: Ricardo Martins; +Cc: kvm@vger.kernel.org

Ricardo Martins wrote:
> Hi guys,
> 
> I'm having a problem with kvm: my physical machine has two Xeon
> E5520 processors, each with 8 MB of cache, and when I run "cat
> /proc/cpuinfo" Linux shows 16 identical processors.
> 
> processor       : 15
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 26
> model name      : Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
> stepping        : 5
> cpu MHz         : 1596.000
> cache size      : 8192 KB
 > ....
> The problem is that when I run the same command in the virtual
> machine, Linux shows the processors with only 32 KB of cache; I
> believe something is wrong.
> 
> processor       : 0
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 6
> model name      : QEMU Virtual CPU version 0.9.1
> stepping        : 3
> cpu MHz         : 2266.575
> cache size      : 32 KB
 > ....
> On virtual Machine:
> linea:/# x86info
> x86info v1.21.  Dave Jones 2001-2007
> 
> 
> Found 1 CPU, but found 16d CPUs in MPTable.
> --------------------------------------------------------------------------
> Family: 6 Model: 6 Stepping: 3 Type: 0 Brand: 0
> CPU Model: Celeron / Mobile Pentium II Original OEM
> Feature flags:
>  fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
> clflsh mmx fxsr sse sse2
> Extended feature flags:
>  sse3 [31]
>  [0] [2] [3] [4] [5] [6] [7] [8] [9] SYSCALL [13] [15] [16] xd [23] [24] em64t
> Cache info
>  L1 Instruction cache: 32KB, 8-way associative. 64 byte line size.
>  L1 Data cache: 32KB, 8-way associative. 64 byte line size.
>  L2 unified cache: 2MB, sectored, 8-way associative. 64 byte line size.
> TLB info

KVM (or rather: the QEMU part) injects a bogus CPU model (compare 
family/model/name), which is the same on all host CPUs. This helps with 
migration, because the guest-visible CPU does not change.
The cache size is also the same, as it is part of the cpuid 
instruction's output.
You can use other CPU models (like kvm64) as better base models, but 
the cache size will likely not match your host processor's.
For that purpose there is the "host" CPU model, which will (to some 
degree) simply push your host CPU model through to the guest. For now 
this only affects the family/model/stepping/name values and the 
feature flags.
I sent a patch to include the cache size when using -cpu host, but it 
has been NACKed because the benefit is not clear.
http://www.mail-archive.com/qemu-devel@nongnu.org/msg32718.html

Do you have a use case for the cache size or is this just out of curiosity?


-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 12:51 ` Andre Przywara
@ 2010-08-02 13:08   ` Avi Kivity
  2010-08-02 22:22     ` Anthony Liguori
  2010-08-02 22:25     ` Anthony Liguori
  2010-08-02 13:49   ` Ulrich Drepper
  1 sibling, 2 replies; 20+ messages in thread
From: Avi Kivity @ 2010-08-02 13:08 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

  On 08/02/2010 03:51 PM, Andre Przywara wrote:
> Ricardo Martins wrote:
>> Hi guys,
>>
>> I'm having a problem with kvm: my physical machine has two Xeon
>> E5520 processors, each with 8 MB of cache, and when I run "cat
>> /proc/cpuinfo" Linux shows 16 identical processors.
>>
>> processor       : 15
>> vendor_id       : GenuineIntel
>> cpu family      : 6
>> model           : 26
>> model name      : Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
>> stepping        : 5
>> cpu MHz         : 1596.000
>> cache size      : 8192 KB
> > ....
>> The problem is that when I run the same command in the virtual
>> machine, Linux shows the processors with only 32 KB of cache; I
>> believe something is wrong.
>>
>> processor       : 0
>> vendor_id       : GenuineIntel
>> cpu family      : 6
>> model           : 6
>> model name      : QEMU Virtual CPU version 0.9.1
>> stepping        : 3
>> cpu MHz         : 2266.575
>> cache size      : 32 KB
> > ....
>> On virtual Machine:
>> linea:/# x86info
>> x86info v1.21.  Dave Jones 2001-2007
>>
>>
>> Found 1 CPU, but found 16d CPUs in MPTable.
>> -------------------------------------------------------------------------- 
>>
>> Family: 6 Model: 6 Stepping: 3 Type: 0 Brand: 0
>> CPU Model: Celeron / Mobile Pentium II Original OEM
>> Feature flags:
>>  fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
>> clflsh mmx fxsr sse sse2
>> Extended feature flags:
>>  sse3 [31]
>>  [0] [2] [3] [4] [5] [6] [7] [8] [9] SYSCALL [13] [15] [16] xd [23] 
>> [24] em64t
>> Cache info
>>  L1 Instruction cache: 32KB, 8-way associative. 64 byte line size.
>>  L1 Data cache: 32KB, 8-way associative. 64 byte line size.
>>  L2 unified cache: 2MB, sectored, 8-way associative. 64 byte line size.
>> TLB info
>
> KVM (or better: the QEMU part) injects a bogus CPU model (compare 
> family/model/name), which is the same on all host CPUs. This helps 
> with migration, because the CPU does not change.
> The cache size is also the same, as it is part of the "cpuid" command 
> output.
> You can use other CPU models (like kvm64) for better base models, but 
> the cache size will likely not match your host processor's one.
> For that purpose exists the "host" CPU model, which will (to some 
> degree) simply push your host CPU model to the guest. For now this 
> only affects the family/model/stepping/name values and the feature flags.
> I sent a patch to include the cache size when using -cpu host, but 
> this has been n'acked because the benefit is not clear.

Anthony, why was this NACKed?  First, there are programs which query the 
cache size.  That's why it's exposed!  Second, -cpu host is for exposing 
as many host cpu features as we can, not just those we have an immediate 
use for.  It's like 'cp -a' dropping attributes the author didn't care 
about.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 12:51 ` Andre Przywara
  2010-08-02 13:08   ` Avi Kivity
@ 2010-08-02 13:49   ` Ulrich Drepper
  2010-08-02 18:38     ` Ricardo Martins
                       ` (2 more replies)
  1 sibling, 3 replies; 20+ messages in thread
From: Ulrich Drepper @ 2010-08-02 13:49 UTC (permalink / raw)
  To: Andre Przywara; +Cc: Ricardo Martins, kvm@vger.kernel.org

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 08/02/2010 05:51 AM, Andre Przywara wrote:
> Do you have a use case for the cache size or is this just out of curiosity?

glibc uses the cache size information returned by cpuid to perform
optimizations.  For instance, copy operations which would pollute too
much of the cache because they are large will use non-temporal
instructions.  There are real performance benefits.  Even the synthetic
CPU provided by qemu should have a more realistic value.

- -- 
➧ Ulrich Drepper ➧ Red Hat, Inc. ➧ 444 Castro St ➧ Mountain View, CA ❖
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAkxWzPsACgkQ2ijCOnn/RHQ2JwCeJsgXHxkWG/PYS8JQRiGM1UFF
m78Ani9kmnKJyru/wh764NSgHSQx+WjU
=6qet
-----END PGP SIGNATURE-----

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 13:49   ` Ulrich Drepper
@ 2010-08-02 18:38     ` Ricardo Martins
  2010-08-02 22:24       ` Andre Przywara
  2010-08-02 22:15     ` Andre Przywara
  2010-08-02 22:23     ` Anthony Liguori
  2 siblings, 1 reply; 20+ messages in thread
From: Ricardo Martins @ 2010-08-02 18:38 UTC (permalink / raw)
  To: Ulrich Drepper; +Cc: Andre Przywara, kvm@vger.kernel.org

Thanks for the answers...

I am creating an environment with multiple virtual servers. My main
concern is whether this problem causes a noticeable loss of
performance; if it does, I will look for another solution for my host
server.



2010/8/2 Ulrich Drepper <drepper@redhat.com>:
> On 08/02/2010 05:51 AM, Andre Przywara wrote:
>> Do you have a use case for the cache size or is this just out of curiosity?
>
> glibc uses the cache size information returned by cpuid to perform
> optimizations.  For instance, copy operations which would pollute too
> much of the cache because they are large will use non-temporal
> instructions.  There are real performance benefits.  Even the synthetic
> CPU provided by qemu should have a more realistic value.
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 13:49   ` Ulrich Drepper
  2010-08-02 18:38     ` Ricardo Martins
@ 2010-08-02 22:15     ` Andre Przywara
  2010-08-02 22:23     ` Anthony Liguori
  2 siblings, 0 replies; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 22:15 UTC (permalink / raw)
  To: Ulrich Drepper; +Cc: Ricardo Martins, kvm@vger.kernel.org

Ulrich Drepper wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> On 08/02/2010 05:51 AM, Andre Przywara wrote:
>> Do you have a use case for the cache size or is this just out of curiosity?
> 
> glibc uses the cache size information returned by cpuid to perform
> optimizations.  For instance, copy operations which would pollute too
> much of the cache because they are large will use non-temporal
> instructions.  There are real performance benefits.
Thanks for pointing this out. Do you have an idea which benchmark could 
show the impact of an incorrect cache size? Microbenchmarks would be 
OK, as long as they can convince the maintainers ;-)

 > Even the synthetic
> CPU provided by qemu should have a more realistic value.
What would you recommend as a more realistic value?
For AMD it is currently 64K/64K C/D for L1 and 512 KB for L2; for 
Intel, 32K/32K C/D for L1 and 4 MB for L2.

I think the 32 KB shown in the original post is some kind of bug. 
Ricardo, can you reproduce this? Can you post the command line and the 
qemu version used ($ qemu --version)?

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 13:08   ` Avi Kivity
@ 2010-08-02 22:22     ` Anthony Liguori
  2010-08-02 22:35       ` Andre Przywara
  2010-08-03  5:33       ` Avi Kivity
  2010-08-02 22:25     ` Anthony Liguori
  1 sibling, 2 replies; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 22:22 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 08:08 AM, Avi Kivity wrote:
>> I sent a patch to include the cache size when using -cpu host, but 
>> this has been n'acked because the benefit is not clear.
>
>
> Anthony, why was this NACKed?

I didn't NACK it.

My concern is that we're still not handling live migration with -cpu 
host in any meaningful way.  Exposing more details without addressing 
live migration is going to increase the likelihood of major failure.

We need to add cpuid information to live migration such that we can 
generate a graceful failure during migration.  Really, we shouldn't have 
taken -cpu host in the first place without this.

Regards,

Anthony Liguori

>   First, there are programs which query the cache size.  That's why 
> it's exposed!  Second, -cpu host is for exposing as many host cpu 
> features as we can, not just those we have an immediate use for.  It's 
> like 'cp -a' dropping attributes the author didn't care about.
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 13:49   ` Ulrich Drepper
  2010-08-02 18:38     ` Ricardo Martins
  2010-08-02 22:15     ` Andre Przywara
@ 2010-08-02 22:23     ` Anthony Liguori
  2010-08-02 22:42       ` Andre Przywara
  2 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 22:23 UTC (permalink / raw)
  To: Ulrich Drepper; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 08:49 AM, Ulrich Drepper wrote:
> On 08/02/2010 05:51 AM, Andre Przywara wrote:
>    
>> Do you have a use case for the cache size or is this just out of curiosity?
>>      
> glibc uses the cache size information returned by cpuid to perform
> optimizations.  For instance, copy operations which would pollute too
> much of the cache because they are large will use non-temporal
> instructions.  There are real performance benefits.

I imagine that there would be real performance problems from doing live 
migration with -cpu host too if we don't guarantee these values remain 
stable across migration...

Regards,

Anthony Liguori

>    Even the synthetic
> CPU provided by qemu should have a more realistic value.
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 18:38     ` Ricardo Martins
@ 2010-08-02 22:24       ` Andre Przywara
  0 siblings, 0 replies; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 22:24 UTC (permalink / raw)
  To: Ricardo Martins; +Cc: Ulrich Drepper, kvm@vger.kernel.org

Ricardo Martins wrote:
> Thanks for the answers...
> 
> I am creating an environment with multiple virtual servers, my main
> concern is whether I have a noticeable
> loss of performance because of this problem, if the answer is yes I
> will find another solution for my host server.
For server workloads it is less of a concern; that's why this issue 
hasn't been addressed for such a long time. Usually server workloads are 
not restricted by cache size or cache access patterns, so you will not 
notice a slowdown. Please note that QEMU does not restrict the actual 
cache size of the CPU (which remains fully available to the guest), but 
only the size _reported_ to the guest.
If you are still in doubt, you could do benchmarks with the default CPU 
model and with -cpu host (maybe even with my patch) to determine whether 
there is a difference.

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 13:08   ` Avi Kivity
  2010-08-02 22:22     ` Anthony Liguori
@ 2010-08-02 22:25     ` Anthony Liguori
  2010-08-02 22:54       ` Andre Przywara
  1 sibling, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 22:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 08:08 AM, Avi Kivity wrote:
>  On 08/02/2010 03:51 PM, Andre Przywara wrote:
>> Ricardo Martins wrote:
>>> Hi guys,
>>>
>>> I'm having a problem with kvm: my physical machine has two Xeon
>>> E5520 processors, each with 8 MB of cache, and when I run "cat
>>> /proc/cpuinfo" Linux shows 16 identical processors.
>>>
>>> processor       : 15
>>> vendor_id       : GenuineIntel
>>> cpu family      : 6
>>> model           : 26
>>> model name      : Intel(R) Xeon(R) CPU           E5520  @ 2.27GHz
>>> stepping        : 5
>>> cpu MHz         : 1596.000
>>> cache size      : 8192 KB
>> > ....
>>> The problem is that when I run the same command in the virtual
>>> machine, Linux shows the processors with only 32 KB of cache; I
>>> believe something is wrong.
>>>
>>> processor       : 0
>>> vendor_id       : GenuineIntel
>>> cpu family      : 6
>>> model           : 6
>>> model name      : QEMU Virtual CPU version 0.9.1
>>> stepping        : 3
>>> cpu MHz         : 2266.575
>>> cache size      : 32 KB
>> > ....
>>> On virtual Machine:
>>> linea:/# x86info
>>> x86info v1.21.  Dave Jones 2001-2007
>>>
>>>
>>> Found 1 CPU, but found 16d CPUs in MPTable.
>>> -------------------------------------------------------------------------- 
>>>
>>> Family: 6 Model: 6 Stepping: 3 Type: 0 Brand: 0
>>> CPU Model: Celeron / Mobile Pentium II Original OEM
>>> Feature flags:
>>>  fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
>>> clflsh mmx fxsr sse sse2
>>> Extended feature flags:
>>>  sse3 [31]
>>>  [0] [2] [3] [4] [5] [6] [7] [8] [9] SYSCALL [13] [15] [16] xd [23] 
>>> [24] em64t
>>> Cache info
>>>  L1 Instruction cache: 32KB, 8-way associative. 64 byte line size.
>>>  L1 Data cache: 32KB, 8-way associative. 64 byte line size.
>>>  L2 unified cache: 2MB, sectored, 8-way associative. 64 byte line size.
>>> TLB info
>>
>> KVM (or better: the QEMU part) injects a bogus CPU model (compare 
>> family/model/name), which is the same on all host CPUs. This helps 
>> with migration, because the CPU does not change.
>> The cache size is also the same, as it is part of the "cpuid" command 
>> output.
>> You can use other CPU models (like kvm64) for better base models, but 
>> the cache size will likely not match your host processor's one.
>> For that purpose exists the "host" CPU model, which will (to some 
>> degree) simply push your host CPU model to the guest. For now this 
>> only affects the family/model/stepping/name values and the feature 
>> flags.
>> I sent a patch to include the cache size when using -cpu host, but 
>> this has been n'acked because the benefit is not clear.
>
> Anthony, why was this NACKed?  First, there are programs which query 
> the cache size.  That's why it's exposed!  Second, -cpu host is for 
> exposing as many host cpu features as we can, not just those we have 
> an immediate use for.  It's like 'cp -a' dropping attributes the 
> author didn't care about.

That's exactly what the code does today BTW.  The kernel module filters 
cpuid flags, qemu filters additional flags, and we don't pass everything 
through anyway.

-cpu host is a mess and needs some love.  It's impossible to use 
correctly today in a production environment if you care about reliably 
generating the same guest-visible interface.

Regards,

Anthony Liguori



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:22     ` Anthony Liguori
@ 2010-08-02 22:35       ` Andre Przywara
  2010-08-02 23:38         ` Anthony Liguori
  2010-08-03  5:33       ` Avi Kivity
  1 sibling, 1 reply; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 22:35 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Avi Kivity, Ricardo Martins, kvm@vger.kernel.org

Anthony Liguori wrote:
> On 08/02/2010 08:08 AM, Avi Kivity wrote:
>>> I sent a patch to include the cache size when using -cpu host, but 
>>> this has been n'acked because the benefit is not clear.
>>
>> Anthony, why was this NACKed?
> 
> I didn't NACK it.
You are right. I am sorry if that created a misunderstanding; I 
actually meant: "was not committed".
> 
> My concern is that we're still not handling live migration with -cpu 
> host in any meaningful way.  Exposing more details without addressing 
> live migration is going to increase the likelihood of major failure.
Would you accept a patch that simply disables migration when -cpu host 
is used in the first place?
> 
> We need to add cpuid information to live migration such that we can 
> generate a graceful failure during migration.  Really, we shouldn't have 
> taken -cpu host in the first place without this.
Is there already a way to communicate from the target back to the 
source? That would allow checking for migratability before we transfer 
any data. Or should we handle this in a management application?

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:23     ` Anthony Liguori
@ 2010-08-02 22:42       ` Andre Przywara
  2010-08-02 23:36         ` Anthony Liguori
  0 siblings, 1 reply; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 22:42 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Ulrich Drepper, Ricardo Martins, kvm@vger.kernel.org

Anthony Liguori wrote:
> On 08/02/2010 08:49 AM, Ulrich Drepper wrote:
>> glibc uses the cache size information returned by cpuid to perform
>> optimizations.  For instance, copy operations which would pollute too
>> much of the cache because they are large will use non-temporal
>> instructions.  There are real performance benefits.
> 
> I imagine that there would be real performance problems from doing live 
> migration with -cpu host too if we don't guarantee these values remain 
> stable across migration...
Again, -cpu host is not meant to be migrated. There are other 
virtualization use cases than cloud-like server virtualization. 
Sometimes users don't care about migration (or even the live version), 
but want full CPU exposure for performance reasons (think of 
virtualizing Windows on a Linux desktop).
I agree that -cpu host and migration should be addressed, but only to a 
certain degree, and missing migration support should not be a road 
blocker for -cpu host.

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:25     ` Anthony Liguori
@ 2010-08-02 22:54       ` Andre Przywara
  2010-08-02 23:40         ` Anthony Liguori
  0 siblings, 1 reply; 20+ messages in thread
From: Andre Przywara @ 2010-08-02 22:54 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Avi Kivity, Ricardo Martins, kvm@vger.kernel.org

Anthony Liguori wrote:
> On 08/02/2010 08:08 AM, Avi Kivity wrote:
>>  On 08/02/2010 03:51 PM, Andre Przywara wrote:
>>> I sent a patch to include the cache size when using -cpu host, but 
>>> this has been n'acked because the benefit is not clear.
>> Anthony, why was this NACKed?  First, there are programs which query 
>> the cache size.  That's why it's exposed!  Second, -cpu host is for 
>> exposing as many host cpu features as we can, not just those we have 
>> an immediate use for.  It's like 'cp -a' dropping attributes the 
>> author didn't care about.
> 
> That's exactly what the code does today BTW.  The kernel module filters 
> cpuid flags, qemu filters additional flags, and we don't pass everything 
> through anyway.
> 
> -cpu host is a mess and needs some love.
The mentioned patch addresses this to some degree, in that it creates a 
list of CPUID leaves which can be safely passed through. There are some 
which we should not (and sometimes must not) propagate (think of host 
topology and power management), but these should not block the useful 
ones. CPUID transports a lot of different information, so we should also 
distinguish here.

> It's impossible to use 
> correctly today in a production environment if you care about reliably 
> generating the same guest visible interface.
Do you mean by this that the guest sees a different CPU model on each 
host it is started on? Actually this is the default scenario for native 
machines, and CPUID was introduced precisely so that applications can 
adapt to it.
So if you leave migration aside, -cpu host is actually the natural way.

Regards,
Andre.

-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:42       ` Andre Przywara
@ 2010-08-02 23:36         ` Anthony Liguori
  2010-08-03  6:25           ` Dor Laor
  0 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 23:36 UTC (permalink / raw)
  To: Andre Przywara; +Cc: Ulrich Drepper, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 05:42 PM, Andre Przywara wrote:
> Anthony Liguori wrote:
>> On 08/02/2010 08:49 AM, Ulrich Drepper wrote:
>>> glibc uses the cache size information returned by cpuid to perform
>>> optimizations.  For instance, copy operations which would pollute too
>>> much of the cache because they are large will use non-temporal
>>> instructions.  There are real performance benefits.
>>
>> I imagine that there would be real performance problems from doing 
>> live migration with -cpu host too if we don't guarantee these values 
>> remain stable across migration...
> Again, -cpu host is not meant to be migrated.

Then it needs to prevent migration from happening.  Otherwise, it's a 
bug waiting to happen.

> There are other virtualization use cases than cloud-like server 
> virtualization. Sometimes users don't care about migration (or even 
> the live version), but want full CPU exposure for performance reasons 
> (think of virtualizing Windows on a Linux desktop).
> I agree that -cpu host and migration should be addressed, but only to 
> a certain degree. And missing migration experience should not be a 
> road blocker for -cpu host.

When we can reasonably prevent it, we should prevent users from shooting 
themselves in the foot.  Honestly, I think -cpu host is exactly what you 
would want to use in a cloud.  A lot of private clouds and even public 
clouds are largely based on homogeneous hardware.

I actually think the case where you want to migrate between 
heterogeneous hardware is grossly overstated.

Regards,

Anthony Liguori

>
> Regards,
> Andre.
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:35       ` Andre Przywara
@ 2010-08-02 23:38         ` Anthony Liguori
  2010-08-03  5:38           ` Avi Kivity
  0 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 23:38 UTC (permalink / raw)
  To: Andre Przywara; +Cc: Avi Kivity, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 05:35 PM, Andre Przywara wrote:
> Anthony Liguori wrote:
>> On 08/02/2010 08:08 AM, Avi Kivity wrote:
>>>> I sent a patch to include the cache size when using -cpu host, but 
>>>> this has been n'acked because the benefit is not clear.
>>>
>>> Anthony, why was this NACKed?
>>
>> I didn't NACK it.
> You are right. I am sorry if that created a misunderstanding, I 
> actually meant: "was not committed".
>>
>> My concern is that we're still not handling live migration with -cpu 
>> host in any meaningful way.  Exposing more details without addressing 
>> live migration is going to increase the likelihood of major failure.
> Would you accept a patch simply disabling migration in case -cpu host 
> was used in the first place?

Yes.  The only concern that keeps me from applying it is that it 
increases the likelihood of failures with migration.  I agree that it's 
safer to disable migration until it's fixed properly.

>>
>> We need to add cpuid information to live migration such that we can 
>> generate a graceful failure during migration.  Really, we shouldn't 
>> have taken -cpu host in the first place without this.
> Is there already a way to communicate from the target to the source? 
> This would allow to check for migrate-ability before we transfer any 
> data. Or should we handle this in a management application?

Send the cpuid fields as part of migration state.  Verify they match the 
local cpuid fields on the destination side.  The destination can then 
reject the migration if it can't match those CPUID fields.  That's 
actually the only way to safely do it today because there's no way for a 
management application to query qemu and kvm for the fields that they'll 
mask out.

Regards,

Anthony Liguori

> Regards,
> Andre.
>


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: KVM Processor cache size
  2010-08-02 22:54       ` Andre Przywara
@ 2010-08-02 23:40         ` Anthony Liguori
  2010-08-03  5:41           ` Avi Kivity
  0 siblings, 1 reply; 20+ messages in thread
From: Anthony Liguori @ 2010-08-02 23:40 UTC (permalink / raw)
  To: Andre Przywara; +Cc: Avi Kivity, Ricardo Martins, kvm@vger.kernel.org

On 08/02/2010 05:54 PM, Andre Przywara wrote:
> Anthony Liguori wrote:
>> On 08/02/2010 08:08 AM, Avi Kivity wrote:
>>>  On 08/02/2010 03:51 PM, Andre Przywara wrote:
>>>> I sent a patch to include the cache size when using -cpu host, but 
>>>> this has been NACKed because the benefit is not clear.
>>> Anthony, why was this NACKed?  First, there are programs which query 
>>> the cache size.  That's why it's exposed!  Second, -cpu host is for 
>>> exposing as many host cpu features as we can, not just those we have 
>>> an immediate use for.  It's like 'cp -a' dropping attributes the 
>>> author didn't care about.
>>
>> That's exactly what the code does today BTW.  The kernel module 
>> filters cpuid flags, qemu filters additional flags, and we don't pass 
>> everything through anyway.
>>
>> -cpu host is a mess and needs some love.
> The mentioned patch addresses this to some degree, by creating a 
> list of CPUID leaves which can be safely passed through. There are some 
> which we should not (and sometimes must not) propagate (think of host 
> topology and power management), but these should not block the useful 
> ones. CPUID carries many different kinds of information, so we should 
> distinguish between them here.
>
>> It's impossible to use correctly today in a production environment if 
>> you care about reliably generating the same guest visible interface.
> Do you mean by this that the guest sees a different CPU model on each 
> host it is started?

No, I mean that even if you say -cpu host and record the feature flags of 
the machine on your own, since we mask out certain features in both QEMU 
and KVM, the resulting set of features is not discoverable, which means 
you can't recreate it somewhere else.

> Actually, this is the default scenario on native machines; CPUID was 
> introduced precisely so that applications could adapt to it.

Yes, but virtualization is not supposed to expose the underlying 
hardware.  The vendor_id bit is a harder problem that I'm willing to 
ignore in this discussion.  I understand why we have to do this today.

Regards,

Anthony Liguori

> So if you leave migration aside, -cpu host is actually the natural way.
>
> Regards,
> Andre.
>



* Re: KVM Processor cache size
  2010-08-02 22:22     ` Anthony Liguori
  2010-08-02 22:35       ` Andre Przywara
@ 2010-08-03  5:33       ` Avi Kivity
  1 sibling, 0 replies; 20+ messages in thread
From: Avi Kivity @ 2010-08-03  5:33 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

  On 08/03/2010 01:22 AM, Anthony Liguori wrote:
> On 08/02/2010 08:08 AM, Avi Kivity wrote:
>>> I sent a patch to include the cache size when using -cpu host, but 
>>> this has been NACKed because the benefit is not clear.
>>
>>
>> Anthony, why was this NACKed?
>
> I didn't NACK it.
>
> My concern is that we're still not handling live migration with -cpu 
> host in any meaningful way.  Exposing more details without addressing 
> live migration is going to increase the likelihood of major failure.

-cpu host is never going to be live migratable unless your hosts are 
exactly equal.  Its goal is to get the best performance, not best 
compatibility, similar to device assignment.

>
> We need to add cpuid information to live migration such that we can 
> generate a graceful failure during migration. 

Agreed, esp. as it contains state.

> Really, we shouldn't have taken -cpu host in the first place without 
> this.

Disagreed.  For live migration the user needs to specify cpuid precisely.

We do need to be able to specify the cache size parameters from the -cpu 
description, but that shouldn't stop -cpu host.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: KVM Processor cache size
  2010-08-02 23:38         ` Anthony Liguori
@ 2010-08-03  5:38           ` Avi Kivity
  0 siblings, 0 replies; 20+ messages in thread
From: Avi Kivity @ 2010-08-03  5:38 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

  On 08/03/2010 02:38 AM, Anthony Liguori wrote:
> Is there already a way to communicate from the target to the source? 
> This would allow checking migratability before we transfer any 
> data. Or should we handle this in a management application?

Since this is determined at startup time, it should be done by 
management.  There's no point in starting a live migration that we know 
will fail.

>
> Send the cpuid fields as part of migration state.  Verify they match 
> the local cpuid fields on the destination side.  The destination can 
> then reject the migration if it can't match those CPUID fields. 

I agree with that, as a safety check.

Note it can be determined even earlier, if qemu warns/fails on masked 
features:

    qemu -cpu qemu64,+this,-that,strict


> That's actually the only way to safely do it today because there's no 
> way for a management application to query qemu and kvm for the fields 
> that they'll mask out.

That needs to be part of the qemu capabilities megapatch.  Supporting 
POPCNT isn't very different from supporting cache=unsafe from the 
management point of view.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: KVM Processor cache size
  2010-08-02 23:40         ` Anthony Liguori
@ 2010-08-03  5:41           ` Avi Kivity
  0 siblings, 0 replies; 20+ messages in thread
From: Avi Kivity @ 2010-08-03  5:41 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: Andre Przywara, Ricardo Martins, kvm@vger.kernel.org

  On 08/03/2010 02:40 AM, Anthony Liguori wrote:
>
> Yes, but virtualization is not supposed to expose the underlying 
> hardware.  The vendor_id bit is a harder problem that I'm willing to 
> ignore in this discussion.  I understand why we have to do this today.

Today and forever... unless one of the processor vendors starts 
supporting the other's fast system call instructions in all modes.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



* Re: KVM Processor cache size
  2010-08-02 23:36         ` Anthony Liguori
@ 2010-08-03  6:25           ` Dor Laor
  0 siblings, 0 replies; 20+ messages in thread
From: Dor Laor @ 2010-08-03  6:25 UTC (permalink / raw)
  To: Anthony Liguori
  Cc: Andre Przywara, Ulrich Drepper, Ricardo Martins,
	kvm@vger.kernel.org

On 08/03/2010 02:36 AM, Anthony Liguori wrote:
> On 08/02/2010 05:42 PM, Andre Przywara wrote:
>> Anthony Liguori wrote:
>>> On 08/02/2010 08:49 AM, Ulrich Drepper wrote:
>>>> glibc uses the cache size information returned by cpuid to perform
>>>> optimizations. For instance, copy operations which would pollute too
>>>> much of the cache because they are large will use non-temporal
>>>> instructions. There are real performance benefits.
>>>
>>> I imagine that there would be real performance problems from doing
>>> live migration with -cpu host too if we don't guarantee these values
>>> remain stable across migration...
>> Again, -cpu host is not meant to be migrated.
>
> Then it needs to prevent migration from happening. Otherwise, it's a bug
> waiting to happen.
>
>> There are virtualization use cases other than cloud-like server
>> virtualization. Sometimes users don't care about migration (or even
>> the live version), but want full CPU exposure for performance reasons
>> (think of virtualizing Windows on a Linux desktop).
>> I agree that -cpu host and migration should be addressed, but only to
>> a certain degree. And missing migration support should not be a
>> roadblock for -cpu host.
>
> When we can reasonably prevent it, we should prevent users from shooting
> themselves in the foot. Honestly, I think -cpu host is exactly what you
> would want to use in a cloud. A lot of private clouds and even public
> clouds are largely based on homogeneous hardware.

There are two good solutions for that:
a. Keep adding newer -cpu definitions like Penryn, Nehalem, and
    Opteron_gx, so newer models are abstracted close to the
    physical CPU's properties.
b. Use a strict flag with -cpu host and pass the info with the live
    migration protocol.
    Our live migration protocol can do a better job of validating the
    cmdline and the current set of devices/hw on the src/dst, and of
    failing migration if there is a diff. Today we rely on libvirt for
    that; another mechanism will surely help, especially for -cpu host.
    The upside is that there won't be a need to wait for the non-live
    migration part, and more CPU cycles will be saved.

>
> I actually think the case where you want to migrate between heterogeneous
> hardware is grossly overstated.
>
> Regards,
>
> Anthony Liguori
>
>>
>> Regards,
>> Andre.
>>
>



end of thread, other threads:[~2010-08-03  6:25 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-08-02 11:45 KVM Processor cache size Ricardo Martins
2010-08-02 12:51 ` Andre Przywara
2010-08-02 13:08   ` Avi Kivity
2010-08-02 22:22     ` Anthony Liguori
2010-08-02 22:35       ` Andre Przywara
2010-08-02 23:38         ` Anthony Liguori
2010-08-03  5:38           ` Avi Kivity
2010-08-03  5:33       ` Avi Kivity
2010-08-02 22:25     ` Anthony Liguori
2010-08-02 22:54       ` Andre Przywara
2010-08-02 23:40         ` Anthony Liguori
2010-08-03  5:41           ` Avi Kivity
2010-08-02 13:49   ` Ulrich Drepper
2010-08-02 18:38     ` Ricardo Martins
2010-08-02 22:24       ` Andre Przywara
2010-08-02 22:15     ` Andre Przywara
2010-08-02 22:23     ` Anthony Liguori
2010-08-02 22:42       ` Andre Przywara
2010-08-02 23:36         ` Anthony Liguori
2010-08-03  6:25           ` Dor Laor

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox