From: Stefano Garzarella <sgarzare@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Kevin Wolf" <kwolf@redhat.com>, "Fam Zheng" <fam@euphon.net>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Eduardo Habkost" <ehabkost@redhat.com>,
qemu-block@nongnu.org, "Stefan Weil" <sw@weilnetz.de>,
"Markus Armbruster" <armbru@redhat.com>,
"Max Reitz" <mreitz@redhat.com>,
"Stefan Hajnoczi" <stefanha@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Eric Blake" <eblake@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: [PATCH 0/3] linux-aio: limit the batch size to reduce queue latency
Date: Wed, 7 Jul 2021 17:00:16 +0200
Message-ID: <20210707150019.201442-1-sgarzare@redhat.com>
This series adds a new `aio-max-batch` parameter to IOThread and uses it in the
Linux AIO backend to limit the batch size (the number of requests submitted to
the kernel in a single io_submit(2) call).
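
For illustration, the parameter is set when the IOThread object is created.
The command-line form below is the one used in the benchmark setup later in
this letter; the QMP form is only a hypothetical sketch of the equivalent
object-add call, not taken from the patches:

  # command line
  -object iothread,id=iothread0,aio-max-batch=32

  # QMP (hypothetical sketch)
  { "execute": "object-add",
    "arguments": { "qom-type": "iothread", "id": "iothread0",
                   "aio-max-batch": 32 } }
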
Commit 2558cb8dd4 ("linux-aio: increasing MAX_EVENTS to a larger hardcoded
value") changed MAX_EVENTS from 128 to 1024 to increase the number of
in-flight requests, but it also increased the potential maximum batch size to
1024 elements.
The problem is noticeable when we have a lot of requests in flight and
multiple queues attached to the same AIO context: in that case we can build up
very large batches. With a single queue, instead, the batch stays limited
because io_submit(2) is called every time the queue is unplugged.
In practice, io_submit(2) is called only when there are no more queues plugged
in or when the AIO queue fills up (MAX_EVENTS = 1024).
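
To make the batching idea concrete, here is a small, self-contained C sketch
(not the QEMU implementation) that submits sequential reads through libaio and
flushes them to the kernel whenever the pending batch reaches a limit.
MAX_BATCH = 32, the 4 KiB block size, NUM_REQS, and the device path taken from
argv[1] are all illustrative assumptions:

  /*
   * Illustrative sketch only (not QEMU code): queue NUM_REQS sequential
   * 4 KiB reads and call io_submit(2) whenever the pending batch reaches
   * MAX_BATCH, instead of submitting everything in one syscall.
   *
   * Build: gcc -O2 -o batch batch.c -laio
   * Run:   ./batch /dev/vdb    (any readable block device or large file)
   */
  #define _GNU_SOURCE          /* for O_DIRECT */
  #include <libaio.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  #define NUM_REQS  128
  #define MAX_BATCH 32         /* the limit this series makes configurable */
  #define BLK_SIZE  4096

  int main(int argc, char **argv)
  {
      io_context_t ctx = 0;
      struct iocb iocbs[NUM_REQS];
      struct iocb *pending[MAX_BATCH];
      struct io_event events[NUM_REQS];
      int fd, i, n_pending = 0, completed = 0, ret;

      if (argc != 2) {
          fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
          return 1;
      }
      fd = open(argv[1], O_RDONLY | O_DIRECT);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      ret = io_setup(NUM_REQS, &ctx);
      if (ret < 0) {
          fprintf(stderr, "io_setup: %s\n", strerror(-ret));
          return 1;
      }

      for (i = 0; i < NUM_REQS; i++) {
          void *buf;

          /* O_DIRECT needs aligned buffers; leaked here for brevity. */
          if (posix_memalign(&buf, BLK_SIZE, BLK_SIZE)) {
              return 1;
          }
          io_prep_pread(&iocbs[i], fd, buf, BLK_SIZE, (long long)i * BLK_SIZE);
          pending[n_pending++] = &iocbs[i];

          /* Flush the batch as soon as it reaches the limit (or at the end). */
          if (n_pending == MAX_BATCH || i == NUM_REQS - 1) {
              ret = io_submit(ctx, n_pending, pending);
              if (ret < 0) {
                  fprintf(stderr, "io_submit: %s\n", strerror(-ret));
                  return 1;
              }
              n_pending = 0;
          }
      }

      /* Reap all completions. */
      while (completed < NUM_REQS) {
          ret = io_getevents(ctx, 1, NUM_REQS - completed, events, NULL);
          if (ret < 0) {
              fprintf(stderr, "io_getevents: %s\n", strerror(-ret));
              return 1;
          }
          completed += ret;
      }

      io_destroy(ctx);
      close(fd);
      return 0;
  }

In the Linux AIO backend the same idea caps how many requests can pile up
while the queues are plugged before an io_submit(2) is forced.
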
I ran some benchmarks to choose 32 as the default batch value for Linux AIO.
Below are the kIOPS measured with fio running in the guest (average over 3 runs):
                    |   master  |           with this series applied            |
                    |687f9f7834e| maxbatch=8|maxbatch=16|maxbatch=32|maxbatch=64|
 # queues           | 1q  | 4qs | 1q  | 4qs | 1q  | 4qs | 1q  | 4qs | 1q  | 4qs |
 -- randread tests -|-----------------------------------------------------------|
 bs=4k  iodepth=1   | 193 | 188 | 204 | 198 | 194 | 202 | 201 | 213 | 195 | 201 |
 bs=4k  iodepth=8   | 241 | 265 | 247 | 248 | 249 | 250 | 257 | 269 | 270 | 240 |
 bs=4k  iodepth=64  | 216 | 202 | 257 | 269 | 269 | 256 | 258 | 271 | 254 | 251 |
 bs=4k  iodepth=128 | 212 | 177 | 267 | 253 | 285 | 271 | 245 | 281 | 255 | 269 |
 bs=16k iodepth=1   | 130 | 133 | 137 | 137 | 130 | 130 | 130 | 130 | 130 | 130 |
 bs=16k iodepth=8   | 130 | 137 | 144 | 137 | 131 | 130 | 131 | 131 | 130 | 131 |
 bs=16k iodepth=64  | 130 | 104 | 137 | 134 | 131 | 128 | 131 | 128 | 137 | 128 |
 bs=16k iodepth=128 | 130 | 101 | 137 | 134 | 131 | 129 | 131 | 129 | 138 | 129 |
 1q  = virtio-blk device with a single queue
 4qs = virtio-blk device with multiple queues (one queue per vCPU, i.e. 4 queues)
I reported only the most significant tests, but I also ran other tests to make
sure there were no regressions; here is the full report:
https://docs.google.com/spreadsheets/d/11X3_5FJu7pnMTlf4ZatRDvsnU9K3EPj6Mn3aJIsE4tI
Test environment:
- Disk: Intel Corporation NVMe Datacenter SSD [Optane]
- CPU: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
- QEMU: qemu-system-x86_64 -machine q35,accel=kvm -smp 4 -m 4096 \
... \
-object iothread,id=iothread0,aio-max-batch=${MAX_BATCH} \
-device virtio-blk-pci,iothread=iothread0,num-queues=${NUM_QUEUES}
- benchmark: fio --ioengine=libaio --thread --group_reporting \
--number_ios=200000 --direct=1 --filename=/dev/vdb \
--rw=${TEST} --bs=${BS} --iodepth=${IODEPTH} --numjobs=16
Next steps:
- benchmark io_uring and use `aio-max-batch` also there
- make MAX_EVENTS configurable by adding a new `aio-max-events` parameter
Comments and suggestions are welcome :-)
Thanks,
Stefano
Stefano Garzarella (3):
iothread: generalize iothread_set_param/iothread_get_param
iothread: add aio-max-batch parameter
linux-aio: limit the batch size using `aio-max-batch` parameter
 qapi/misc.json            |  6 ++-
 qapi/qom.json             |  7 +++-
 include/block/aio.h       | 12 ++++++
 include/sysemu/iothread.h |  3 ++
 block/linux-aio.c         |  6 ++-
 iothread.c                | 82 ++++++++++++++++++++++++++++++++++-----
 monitor/hmp-cmds.c        |  2 +
 util/aio-posix.c          | 12 ++++++
 util/aio-win32.c          |  5 +++
 util/async.c              |  2 +
 qemu-options.hx           |  8 +++-
 11 files changed, 131 insertions(+), 14 deletions(-)
--
2.31.1