From: "Marcus Hähnel" <marcus.haehnel@kernkonzept.com>
To: Adam Lackorzynski <adam@l4re.org>, qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH] multiboot: Use DMA instead port-based transfer
Date: Thu, 21 Oct 2021 23:55:41 +0200
Message-ID: <2800151.e9J7NaK4W3@amethyst>
In-Reply-To: <da674bf3-fc7f-28c8-7c45-f98754ecb5d1@redhat.com>
On Tuesday, October 19, 2021 6:45:44 PM CEST Paolo Bonzini wrote:
> On my system (a relatively recent laptop) I get 15-20 MiB per second,
> which is slow but not as slow as what you got. Out of curiosity, can
> you test what you get with the following kernel patch?
>
> diff --git a/arch/x86/kvm/kvm_emulate.h b/arch/x86/kvm/kvm_emulate.h
> index 798508e8b6f5..5853ae93bcb2 100644
> --- a/arch/x86/kvm/kvm_emulate.h
> +++ b/arch/x86/kvm/kvm_emulate.h
> @@ -272,7 +272,7 @@ struct fetch_cache {
> };
>
> struct read_cache {
> - u8 data[1024];
> + u8 data[4096];
> unsigned long pos;
> unsigned long end;
> };
Hi Paolo,
Thank you very much for the cleaned-up and improved patch in the other
thread, which solves our issue perfectly! Your work is much appreciated.
Regarding your question above, I made some quick benchmark runs. Using a
195 kB kernel image and measuring from QEMU start until the first complete
line is sent over the serial output, I get the following timings, all
numbers in seconds:
 kvm?   | DMA Multiboot | Old Multiboot |
--------|---------------|---------------|
 no-kvm | 0.209 ± 0.01  | 15.283 ± 0.19 |
 kvm    | 0.207 ± 0.01  | 20.771 ± 0.26 |
 kvm-4k | 0.208 ± 0.01  | 19.878 ± 0.22 |
The tests were run 10 times each using perf stat -r10; the table shows the
averages and standard deviations. While perf does add some overhead, the
general picture is independent of that.
Enlarging the read cache to 4k has only a negligible effect and KVM remains
slower than running without KVM. The numbers for DMA are roughly two orders
of magnitude better in all cases.
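In numbers: 15.283 s / 0.209 s ≈ 73x without KVM and 20.771 s / 0.207 s ≈ 100x
with KVM, so "two orders of magnitude" is meant quite literally.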
Hardware:     Lenovo T480, Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
Software:     Gentoo, custom Linux 5.14.9 kernel
QEMU:         master with your DMA multiboot patches applied
              (v6.1.0-1564-g1a510366d8-dirty), configured without any
              options, only the x86_64 softmmu as target
Command line: qemu-system-x86_64 -enable-kvm -kernel /path/to/kernel
              -serial stdio -nographic -monitor none -m 512
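Each row above was measured with perf wrapped around that invocation,
roughly like this (a minimal sketch: it assumes the guest exits on its own,
whereas the actual runs were stopped after the first complete serial line):

  # perf runs the command 10 times and reports mean and standard deviation.
  perf stat -r10 -- \
      qemu-system-x86_64 -enable-kvm -kernel /path/to/kernel \
          -serial stdio -nographic -monitor none -m 512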
The general ballpark of these results was confirmed on a second T480 with a
different OS and QEMU version, as well as on a server system. As you can see,
when booting through multiboot without DMA we do not get MiB/s but only
KiB/s.
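To put a rough number on that: 195 kB over 15-21 s is roughly 9-13 KiB/s for
the old port-based path, while even charging the full 0.21 s of the DMA runs
to the transfer (it also includes QEMU start-up and kernel boot) would still
come out close to 1 MiB/s.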
- Marcus