public inbox for kvm@vger.kernel.org
From: Avi Kivity <avi@redhat.com>
To: Andrew Theurer <habanero@linux.vnet.ibm.com>
Cc: kvm@vger.kernel.org
Subject: Re: windows workload: many ept_violation and mmio exits
Date: Thu, 03 Dec 2009 16:34:07 +0200	[thread overview]
Message-ID: <4B17CC5F.20101@redhat.com> (raw)
In-Reply-To: <4B17C12B.4020300@linux.vnet.ibm.com>

On 12/03/2009 03:46 PM, Andrew Theurer wrote:
> I am running a windows workload which has 26 windows VMs running many 
> instances of a J2EE workload.  There are 13 pairs of an application 
> server VM and database server VM.  There seem to be quite a few 
> vm_exits, and it looks like over a third of them are mmio_exits:
>
>> efer_relo  0
>> exits      337139
>> fpu_reloa  247321
>> halt_exit  19092
>> halt_wake  18611
>> host_stat  247332
>> hypercall  0
>> insn_emul  184265
>> insn_emul  184265
>> invlpg     0
>> io_exits   69184
>> irq_exits  52953
>> irq_injec  48115
>> irq_windo  2411
>> largepage  19
>> mmio_exit  123554
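
The "over a third" estimate checks out against the counters above; a quick back-of-the-envelope:

```python
# Sanity-check the mmio share of total exits from the kvm_stat dump above.
exits = 337139
mmio_exits = 123554
frac = mmio_exits / exits
print(f"mmio share of exits: {frac:.1%}")  # roughly 36.6%
```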
> I collected a kvmtrace, and below is a very small portion of that.  Is 
> there a way I can figure out what device the mmio's are for? 

We want 'info physical_address_space' in the monitor.

> Also, is it normal to have lots of ept_violations?  This is a 2 socket 
> Nehalem system with SMT on.

So long as pf_fixed is low, these are all mmio or apic accesses.
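
The error_code field in these kvm_page_fault traces is hex and, for EPT violations, is the VMCS exit qualification. A rough decoder (bit names abbreviated from the Intel SDM; this is a sketch, not a complete decode):

```python
# Decode the error_code of the kvm_page_fault tracepoint, which for EPT
# violations is the exit qualification (low bits only; per the Intel SDM).
QUAL_BITS = {
    0: "read",
    1: "write",
    2: "fetch",
    3: "gpa-readable",
    4: "gpa-writable",
    5: "gpa-executable",
    7: "linear-addr-valid",
    8: "final-translation",
}

def decode_qual(q):
    return [name for bit, name in QUAL_BITS.items() if q & (1 << bit)]

print(decode_qual(0x181))  # the traces below: a plain data read to MMIO
```

0x181 decodes as a data read, with a valid guest linear address, to the final translated GPA rather than to a paging-structure entry, exactly what an MMIO load looks like.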

>
>
>> qemu-system-x86-19673 [014] 213577.939624: kvm_page_fault: address 
>> fed000f0 error_code 181
>>  qemu-system-x86-19673 [014] 213577.939627: kvm_mmio: mmio 
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>>  qemu-system-x86-19673 [014] 213577.939629: kvm_mmio: mmio read len 4 
>> gpa 0xfed000f0 val 0xfb8f214d

hpet
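
Until something like 'info physical_address_space' exists, the gpa fields can be matched against the standard PC platform map by hand. A minimal classifier (region bases and sizes are the conventional ones, not read from the guest):

```python
# Classify a guest physical address against the standard PC MMIO map;
# the region list is illustrative, not exhaustive.
PC_MMIO_REGIONS = [
    (0xfed00000, 0x400,  "hpet"),
    (0xfee00000, 0x1000, "local apic"),
    (0xfec00000, 0x1000, "ioapic"),
]

def classify_gpa(gpa):
    for base, size, name in PC_MMIO_REGIONS:
        if base <= gpa < base + size:
            return name
    return "unknown"

print(classify_gpa(0xfed000f0))  # hpet (offset 0xf0: the main counter)
print(classify_gpa(0xfee000b0))  # local apic (offset 0xb0: EOI)
```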

>>  qemu-system-x86-19673 [014] 213577.939631: kvm_entry: vcpu 0
>>  qemu-system-x86-19673 [014] 213577.939633: kvm_exit: reason 
>> ept_violation rip 0xfffff8000160ef8e
>>  qemu-system-x86-19673 [014] 213577.939634: kvm_page_fault: address 
>> fed000f0 error_code 181

hpet - was this the same exit? we ought to skip over the emulated 
instruction.

>>  qemu-system-x86-19673 [014] 213577.939693: kvm_page_fault: address 
>> fed000f0 error_code 181
>>  qemu-system-x86-19673 [014] 213577.939696: kvm_mmio: mmio 
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0

hpet

>>  qemu-system-x86-19332 [008] 213577.939699: kvm_exit: reason 
>> ept_violation rip 0xfffff80001b3af8e
>>  qemu-system-x86-19332 [008] 213577.939700: kvm_page_fault: address 
>> fed000f0 error_code 181
>>  qemu-system-x86-19673 [014] 213577.939702: kvm_mmio: mmio read len 4 
>> gpa 0xfed000f0 val 0xfb8f3da6

hpet

>>  qemu-system-x86-19332 [008] 213577.939706: kvm_mmio: mmio 
>> unsatisfied-read len 4 gpa 0xfed000f0 val 0x0
>>  qemu-system-x86-19563 [010] 213577.939707: kvm_ioapic_set_irq: pin 
>> 11 dst 1 vec=130 (LowPrio|logical|level)
>>  qemu-system-x86-19332 [008] 213577.939713: kvm_mmio: mmio read len 4 
>> gpa 0xfed000f0 val 0x29a105de

hpet ...

>>  qemu-system-x86-19673 [014] 213577.939908: kvm_ioapic_set_irq: pin 
>> 11 dst 1 vec=130 (LowPrio|logical|level)
>>  qemu-system-x86-19673 [014] 213577.939910: kvm_entry: vcpu 0
>>  qemu-system-x86-19673 [014] 213577.939912: kvm_exit: reason 
>> apic_access rip 0xfffff800016a050c
>>  qemu-system-x86-19673 [014] 213577.939914: kvm_mmio: mmio write len 
>> 4 gpa 0xfee000b0 val 0x0

apic eoi
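
The register can be read straight off the trace line: 0xfee000b0 - 0xfee00000 = 0xb0, the EOI register. A throwaway parser (offsets from the SDM's APIC register map; only a few registers shown):

```python
import re

# Pull the gpa out of a kvm_mmio trace line and name the local APIC
# register it hits.
APIC_BASE = 0xfee00000
APIC_REGS = {0x80: "TPR", 0xb0: "EOI", 0x300: "ICR_LOW"}

line = "kvm_mmio: mmio write len 4 gpa 0xfee000b0 val 0x0"
gpa = int(re.search(r"gpa (0x[0-9a-f]+)", line).group(1), 16)
print(APIC_REGS.get(gpa - APIC_BASE, "other"))  # EOI
```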

>>  qemu-system-x86-19332 [008] 213577.939958: kvm_mmio: mmio write len 
>> 4 gpa 0xfee000b0 val 0x0
>>  qemu-system-x86-19673 [014] 213577.939958: kvm_pic_set_irq: chip 1 
>> pin 3 (level|masked)
>>  qemu-system-x86-19332 [008] 213577.939958: kvm_apic: apic_write 
>> APIC_EOI = 0x0

apic eoi

>>  qemu-system-x86-19673 [014] 213577.940010: kvm_exit: reason 
>> cr_access rip 0xfffff800016ee2b2
>>  qemu-system-x86-19673 [014] 213577.940011: kvm_cr: cr_write 4 = 0x678
>>  qemu-system-x86-19673 [014] 213577.940017: kvm_entry: vcpu 0
>>  qemu-system-x86-19673 [014] 213577.940019: kvm_exit: reason 
>> cr_access rip 0xfffff800016ee2b5
>>  qemu-system-x86-19673 [014] 213577.940019: kvm_cr: cr_write 4 = 0x6f8

toggling global pages, we can avoid that with CR4_GUEST_HOST_MASK.
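
The two cr_write values differ in exactly one bit, CR4 bit 7 (PGE):

```python
# The back-to-back CR4 writes in the trace above toggle a single bit.
delta = 0x678 ^ 0x6f8
print(hex(delta))  # 0x80 == CR4.PGE (global pages)
```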

So, tons of hpet and eois.  We can accelerate both by using the hyper-V 
accelerations; we already have some (unmerged) code for eoi, so this 
should improve soon.

>
> Here is oprofile:
>
>> 4117817  62.2029  kvm-intel.ko             kvm-intel.ko             
>> vmx_vcpu_run
>> 338198    5.1087  qemu-system-x86_64       qemu-system-x86_64       
>> /usr/local/qemu/48bb360cc687b89b74dfb1cac0f6e8812b64841c/bin/qemu-system-x86_64 
>>
>> 62449     0.9433  kvm.ko                   kvm.ko                   
>> kvm_arch_vcpu_ioctl_run
>> 56512     0.8537  
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> copy_user_generic_string

We ought to switch to put_user/get_user.  rep movs has quite slow start-up.

>> 52373     0.7911  
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> native_write_msr_safe

hpet in kernel or hyper-V timers will reduce this.

>> 34847     0.5264  
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> schedule
>> 34678     0.5238  
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> fget_light

and this.

>> 29894     0.4516  kvm.ko                   kvm.ko                   
>> paging64_walk_addr
>> 27778     0.4196  kvm.ko                   kvm.ko                   
>> gfn_to_hva
>> 24563     0.3710  kvm.ko                   kvm.ko                   
>> x86_decode_insn
>> 23900     0.3610  
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> vmlinux-2.6.32-rc7-5e8cb552cb8b48244b6d07bff984b3c4080d4bc9-autokern1 
>> do_select
>> 21123     0.3191  libc-2.10.90.so          libc-2.10.90.so          
>> memcpy
>> 20694     0.3126  kvm.ko                   kvm.ko                   
>> x86_emulate_insn

hyper-V APIC and timers will reduce all of the above (except memcpy).

-- 
error compiling committee.c: too many arguments to function



Thread overview: 13+ messages
2009-12-03 13:46 windows workload: many ept_violation and mmio exits Andrew Theurer
2009-12-03 14:34 ` Avi Kivity [this message]
2011-08-26  5:32   ` ya su
2011-08-28  7:42     ` Avi Kivity
2011-08-28 18:54       ` Alexander Graf
2011-08-28 20:42         ` HPET configuration in Seabios (was: Re: windows workload: many ept_violation and mmio exits) Jan Kiszka
2011-08-28 22:14           ` Kevin O'Connor
2011-08-29  5:32             ` HPET configuration in Seabios Avi Kivity
2011-08-29 10:25               ` Jan Kiszka
2011-08-29 11:00                 ` Avi Kivity
2011-08-29 11:05                   ` Jan Kiszka
2011-08-29 11:11                     ` Avi Kivity
2011-08-29 11:12                     ` Jan Kiszka
