From: "Alex Bennée" <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] Cross-posted : Odd QXL/KVM performance issue with a Windows 7 Guest
Date: Tue, 05 Nov 2019 11:32:51 +0000 [thread overview]
Message-ID: <871rumk17g.fsf@linaro.org> (raw)
In-Reply-To: <49b5d60f-b0cd-8d47-5e0f-75fc76b3ee47@fnarfbargle.com>
Brad Campbell <lists2009@fnarfbargle.com> writes:
> On 6/9/19 21:38, Brad Campbell wrote:
>> 7022@1567775824.002106:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c970
>> 7022@1567775824.002115:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c980
>> 7022@1567775824.003122:kvm_vm_ioctl type 0xc008ae67, arg 0x7ffe13b0c970
>
>> Does this look familiar to anyone?
>
> Ugh. System timer.
>
> So with the timer interrupt removed and an added trace on IRQ > 0:
>
> qxl/guest-0: 79096403248: qxldd: DrvCopyBits
> 14955@1567780063.149527:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.150291:qxl_ring_res_put 0 #res=1
> 14955@1567780063.163672:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.163688:qxl_io_write 0 native addr=4 (QXL_IO_NOTIFY_OOM) val=0 size=1 async=0
> 14955@1567780063.163704:qxl_spice_oom 0
> 14955@1567780063.163720:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.163755:qxl_ring_command_check 0 native
> 14956@1567780063.163779:qxl_ring_res_push 0 native s#=0 res#=1 last=0x7f3c0d44b6e0 notify=yes
> 14956@1567780063.163816:qxl_ring_res_push_rest 0 ring 1/8 [326,325]
> 14956@1567780063.163841:qxl_send_events 0 1
> 14956@1567780063.163868:qxl_ring_cursor_check 0 native
> 14956@1567780063.163888:qxl_ring_command_check 0 native
> 14924@1567780063.163879:kvm_set_irq irq 11, level 1, status 1
> 14954@1567780063.163895:kvm_run_exit cpu_index 1, reason 2
> 14954@1567780063.163965:qxl_io_write 0 native addr=3 (QXL_IO_UPDATE_IRQ) val=0 size=1 async=0
> 14954@1567780063.164006:kvm_set_irq irq 11, level 0, status 1
> 14954@1567780063.164029:kvm_vcpu_ioctl cpu_index 1, type 0xae80, arg (nil)
> 14954@1567780063.164065:kvm_run_exit cpu_index 1, reason 2
> 14954@1567780063.164080:qxl_io_write 0 native addr=3 (QXL_IO_UPDATE_IRQ) val=0 size=1 async=0
> 14954@1567780063.164104:kvm_vcpu_ioctl cpu_index 1, type 0xae80, arg (nil)
> 14955@1567780063.266778:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.266790:qxl_io_write 0 native addr=0 (QXL_IO_NOTIFY_CMD) val=0 size=1 async=0
> 14955@1567780063.266809:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
> 14956@1567780063.266822:qxl_ring_cursor_check 0 native
> 14956@1567780063.266842:qxl_ring_command_check 0 native
> 79213750625 qxl-0/cmd: cmd @ 0x10000000104b598 draw: surface_id 0 type copy effect opaque src 100000001fecbf8 (id 9fe0870780 type 0 flags 0 width 1920 height 1080, fmt 8 flags 0 x 1920 y 1080 stride 7680 palette 0 data 100000001fecc28) area 1920x1080+0+0 rop 8
> 14956@1567780063.266983:qxl_ring_command_get 0 native
> 14956@1567780063.267044:qxl_ring_command_check 0 native
> 14956@1567780063.267070:qxl_ring_cursor_check 0 native
> 14956@1567780063.267087:qxl_ring_command_check 0 native
> 14956@1567780063.267109:qxl_ring_command_req_notification 0
> 14955@1567780063.267967:kvm_run_exit cpu_index 2, reason 2
> 14955@1567780063.267987:qxl_io_write 0 native addr=7 (QXL_IO_LOG) val=0 size=1 async=0
> 14955@1567780063.268015:qxl_io_log 0 qxldd: DrvCopyBits
>
> So if I'm not mistaken (for the nth time), we have KVM_RUN on cpu index 2 here:
>
> 14955@1567780063.163720:kvm_vcpu_ioctl cpu_index 2, type 0xae80, arg (nil)
>
> And it returns here :
>
> 14955@1567780063.266778:kvm_run_exit cpu_index 2, reason 2
>
> Does that imply guest code is running for ~100ms on that vcpu?
Yes. In the KVM game vmexits are what kill performance. If QEMU is
involved in doing the emulation you have to exit the guest, go through
the kernel, exit the ioctl and then QEMU does its thing before you
restart the guest.
You can't avoid all exits - indeed VIRTIO is designed to limit the exits
to a single exit per transmission. However, if you are emulating a legacy
device in QEMU, every access to a memory-mapped register will involve an
exit.
If you monitor QEMU with "perf record" and then look at the result with
"perf report" you can see what QEMU is doing all that time. It's likely
there is a legacy device somewhere which the guest kernel is hammering.
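Concretely, something along these lines - the process name and the 30
second sampling window are just examples, adjust to your setup:

```shell
# Attach to the running QEMU and sample with call graphs.
pid=$(pgrep -f qemu-system-x86_64 | head -n1)
perf record -g -p "$pid" -- sleep 30

# Summarise where the time went; lots of samples in device emulation
# paths (e.g. the memory region dispatch helpers) point at a device
# the guest is hammering.
perf report --stdio | head -40
```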
--
Alex.
Thread overview: 9+ messages
[not found] <ed421291-7178-d7bc-5ed3-9863d28ceba9@fnarfbargle.com>
[not found] ` <dd33a398-3c1f-0c92-2318-00ad144e1e5d@fnarfbargle.com>
2019-09-06 8:49 ` [Qemu-devel] Cross-posted : Odd QXL/KVM performance issue with a Windows 7 Guest Brad Campbell
2019-09-06 13:38 ` Brad Campbell
2019-09-06 14:41 ` Brad Campbell
2019-11-05 11:32 ` Alex Bennée [this message]
2019-09-06 19:03 ` Dr. David Alan Gilbert
2019-09-07 0:44 ` Brad Campbell
2019-09-09 15:22 ` Dr. David Alan Gilbert
2019-09-12 14:54 ` [Qemu-devel] [Qemu-discuss] " Brad Campbell
2019-11-05 2:38 ` Brad Campbell