From: Hanna Czenczek <hreitz@redhat.com>
To: qemu-block@nongnu.org
Cc: qemu-devel@nongnu.org, Hanna Czenczek <hreitz@redhat.com>,
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>,
Eric Blake <eblake@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>, Fam Zheng <fam@euphon.net>
Subject: [PATCH v2 0/4] block: Split padded I/O vectors exceeding IOV_MAX
Date: Tue, 11 Apr 2023 19:34:14 +0200
Message-ID: <20230411173418.19549-1-hreitz@redhat.com>
RFC:
https://lists.nongnu.org/archive/html/qemu-block/2023-03/msg00446.html
v1:
https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg05049.html
As explained in the RFC’s cover letter, the problem this series
addresses is that we pad guest requests that are unaligned to the
underlying storage’s alignment requirements so that they meet those
requirements.  This involves giving them head and/or tail padding.
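For illustration, the head and tail padding for an unaligned request
can be derived from the storage's request alignment roughly like this
(a minimal sketch, not code from this series; the helper name is made
up):

/* Illustrative sketch only -- not the QEMU implementation. */
#include <stdint.h>

/*
 * Hypothetical helper: for a request covering [offset, offset + bytes)
 * with a power-of-two alignment 'align', compute how many bytes of
 * head and tail padding are needed so that the padded request is
 * aligned on both ends.
 */
static void compute_padding(uint64_t offset, uint64_t bytes, uint64_t align,
                            uint64_t *head, uint64_t *tail)
{
    uint64_t end = offset + bytes;

    *head = offset & (align - 1);
    *tail = (align - (end & (align - 1))) & (align - 1);
}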
We generally work with I/O vectors, so this padding is added by
prepending/appending vector elements.  However, there is a maximum
limit on the number of elements (IOV_MAX, i.e. 1024), so padding can
push the vector over this limit, resulting in an unrecoverable I/O
error that is returned to the guest -- for a request that is perfectly
valid as far as the guest is concerned.
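To make the failure mode concrete, here is a minimal sketch of the
over-limit check (IOV_MAX is 1024 on Linux; the function name is made
up for illustration):

/* Illustrative sketch only. */
#include <limits.h>
#include <stdbool.h>

#ifndef IOV_MAX
#define IOV_MAX 1024  /* typical Linux value */
#endif

/*
 * Head and tail padding each add one vector element, so a request that
 * is misaligned on both ends and already uses 1023 or more elements
 * can no longer be submitted as-is.
 */
static bool padded_vector_fits(int guest_niov, bool need_head, bool need_tail)
{
    int padded_niov = guest_niov + (need_head ? 1 : 0) + (need_tail ? 1 : 0);

    return padded_niov <= IOV_MAX;
}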
To fix this, when the limit would be exceeded, this series temporarily
merges some I/O vector elements (up to three) into one to decrease the
number of vector elements as far as necessary.
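Roughly, the idea looks like this (purely illustrative, with made-up
names; the actual implementation is in patch 2 and, for reads, also
copies the bounce buffer contents back into the guest's elements, as
mentioned in the v2 notes below):

/* Illustrative sketch only -- not the code added by this series. */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

/*
 * Merge 'count' adjacent elements starting at iov[first] into a single
 * element backed by a temporary bounce buffer, shrinking the vector by
 * (count - 1) elements.  For writes, the guest data is copied into the
 * buffer up front; for reads, the caller must copy the buffer contents
 * back into the original elements once the request completes.
 * Returns the new element count, or -1 on allocation failure.
 */
static int collapse_iov_elements(struct iovec *iov, int niov,
                                 int first, int count, bool is_write,
                                 void **bounce_out)
{
    size_t total = 0;
    for (int i = 0; i < count; i++) {
        total += iov[first + i].iov_len;
    }

    char *bounce = malloc(total);
    if (!bounce) {
        return -1;
    }

    if (is_write) {
        size_t off = 0;
        for (int i = 0; i < count; i++) {
            memcpy(bounce + off, iov[first + i].iov_base,
                   iov[first + i].iov_len);
            off += iov[first + i].iov_len;
        }
    }

    /* Replace the merged range with the single bounce element and
     * close the gap behind it. */
    iov[first].iov_base = bounce;
    iov[first].iov_len = total;
    memmove(&iov[first + 1], &iov[first + count],
            (size_t)(niov - first - count) * sizeof(iov[0]));

    *bounce_out = bounce;
    return niov - (count - 1);
}

Merging three elements into one reduces the element count by two, which
is exactly enough to compensate for the at most two elements that head
and tail padding can add.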
v2:
- Patch 1: Made a note in the commit message of having renamed
  qiov_slice() -> qemu_iovec_slice()

- Patch 2:
  - Renamed bdrv_padding_destroy() to bdrv_padding_finalize(),
    indicating that the padding is not just simply destroyed, but more
    steps may be taken (i.e. copying back the contents of the temporary
    buffer used for the merged elements)
  - Generally replaced “couple of” by “two or three”
Hanna Czenczek (4):
util/iov: Make qiov_slice() public
block: Collapse padded I/O vecs exceeding IOV_MAX
util/iov: Remove qemu_iovec_init_extended()
iotests/iov-padding: New test
 include/qemu/iov.h                       |   8 +-
 block/io.c                               | 166 +++++++++++++++++++++--
 util/iov.c                               |  89 +++---------
 tests/qemu-iotests/tests/iov-padding     |  85 ++++++++++++
 tests/qemu-iotests/tests/iov-padding.out |  59 ++++++++
 5 files changed, 314 insertions(+), 93 deletions(-)
create mode 100755 tests/qemu-iotests/tests/iov-padding
create mode 100644 tests/qemu-iotests/tests/iov-padding.out
--
2.39.1