* [PATCH v2 0/4] block: Split padded I/O vectors exceeding IOV_MAX
From: Hanna Czenczek @ 2023-04-11 17:34 UTC (permalink / raw)
To: qemu-block
Cc: qemu-devel, Hanna Czenczek, Vladimir Sementsov-Ogievskiy,
Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
RFC:
https://lists.nongnu.org/archive/html/qemu-block/2023-03/msg00446.html
v1:
https://lists.nongnu.org/archive/html/qemu-devel/2023-03/msg05049.html
As explained in the RFC’s cover letter, the problem this series
addresses is that we pad guest requests that are unaligned to the
underlying storage’s alignment requirements, giving them head and/or
tail padding so that they fit.
We generally work with I/O vectors, so this padding is added by
prepending/appending vector elements. There is a maximum limit on the
number of elements, though (1024, specifically), so padding can push
the vector past this limit, resulting in an unrecoverable I/O error
that is returned to the guest -- on a request that is perfectly valid
as far as the guest is concerned.
To fix this, when the limit would be exceeded, this series temporarily
merges some (up to three) I/O vector elements into one, reducing the
number of vector elements as much as necessary. The sketch below
illustrates the arithmetic.
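For illustration, a standalone sketch of that counting (plain C, not
QEMU code):

    #include <stdio.h>

    int main(void)
    {
        const int iov_max = 1024;   /* matches IOV_MAX on Linux */
        int guest_niov = 1024;      /* guest vector already at the limit */
        int head = 1, tail = 1;     /* worst case: both paddings needed */

        int padded_niov = head + guest_niov + tail;   /* 1026 */
        int surplus = padded_niov - iov_max;          /* 2 over the limit */
        /* Merging (surplus + 1) elements into one removes exactly
         * `surplus` elements, bringing the count back to the limit. */
        int collapse_count = surplus + 1;             /* 3 */

        printf("padded: %d, surplus: %d, merge %d elements into one\n",
               padded_niov, surplus, collapse_count);
        return 0;
    }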
v2:
- Patch 1: Made a note in the commit message of having renamed
qiov_slice() -> qemu_iovec_slice()
- Patch 2:
- Renamed bdrv_padding_destroy() to bdrv_padding_finalize(),
indicating that the padding is not simply destroyed, but that further
steps may be taken (i.e. copying back the contents of the temporary
buffer used for the merged elements)
- Generally replaced “couple of” by “two or three”
Hanna Czenczek (4):
util/iov: Make qiov_slice() public
block: Collapse padded I/O vecs exceeding IOV_MAX
util/iov: Remove qemu_iovec_init_extended()
iotests/iov-padding: New test
include/qemu/iov.h | 8 +-
block/io.c | 166 +++++++++++++++++++++--
util/iov.c | 89 +++---------
tests/qemu-iotests/tests/iov-padding | 85 ++++++++++++
tests/qemu-iotests/tests/iov-padding.out | 59 ++++++++
5 files changed, 314 insertions(+), 93 deletions(-)
create mode 100755 tests/qemu-iotests/tests/iov-padding
create mode 100644 tests/qemu-iotests/tests/iov-padding.out
--
2.39.1
* [PATCH v2 1/4] util/iov: Make qiov_slice() public
From: Hanna Czenczek @ 2023-04-11 17:34 UTC (permalink / raw)
To: qemu-block
Cc: qemu-devel, Hanna Czenczek, Vladimir Sementsov-Ogievskiy,
Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
We want to inline qemu_iovec_init_extended() in block/io.c for padding
requests, and having access to qiov_slice() is useful for this. As a
public function, it is renamed to qemu_iovec_slice().
(We will need to count the number of I/O vector elements of a slice
there, and then later process this slice. Without qiov_slice(), we
would need to call qemu_iovec_subvec_niov(), and all further
IOV-processing functions would need to skip prefixing elements to
account for a qiov_offset. Because qemu_iovec_subvec_niov()
internally calls qiov_slice(), we can just have the block/io.c code call
qiov_slice() itself, thus getting the number of elements and also
creating an iovec array with the superfluous prefixing elements
stripped, so that the following processing functions no longer need to
skip them.)
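For intuition, here is a standalone sketch of the slice semantics just
described (an illustration only, not the QEMU implementation; it assumes
len > 0 and that [offset, offset + len) lies within the vector):

    #include <sys/uio.h>

    /* Find the iovec subarray covering [offset, offset + len).  On
     * return, *head is the offset into the first returned element, and
     * *tail is the number of surplus bytes in the last one. */
    static struct iovec *slice(struct iovec *iov, size_t offset, size_t len,
                               size_t *head, size_t *tail, int *niov)
    {
        struct iovec *first = iov, *last;
        size_t covered;

        /* Skip elements that lie entirely before @offset */
        while (offset >= first->iov_len) {
            offset -= first->iov_len;
            first++;
        }
        *head = offset;

        /* Walk forward until the requested range is covered */
        last = first;
        covered = first->iov_len - offset;
        while (covered < len) {
            last++;
            covered += last->iov_len;
        }
        *tail = covered - len;
        *niov = (int)(last - first) + 1;
        return first;
    }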
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
include/qemu/iov.h | 3 +++
util/iov.c | 14 +++++++-------
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index 9330746680..46fadfb27a 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -229,6 +229,9 @@ int qemu_iovec_init_extended(
void *tail_buf, size_t tail_len);
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len);
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+ size_t offset, size_t len,
+ size_t *head, size_t *tail, int *niov);
int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len);
void qemu_iovec_add(QEMUIOVector *qiov, void *base, size_t len);
void qemu_iovec_concat(QEMUIOVector *dst,
diff --git a/util/iov.c b/util/iov.c
index b4be580022..65a70449da 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -378,15 +378,15 @@ static struct iovec *iov_skip_offset(struct iovec *iov, size_t offset,
}
/*
- * qiov_slice
+ * qemu_iovec_slice
*
* Find subarray of iovec's, containing requested range. @head would
* be offset in first iov (returned by the function), @tail would be
* count of extra bytes in last iovec (returned iov + @niov - 1).
*/
-static struct iovec *qiov_slice(QEMUIOVector *qiov,
- size_t offset, size_t len,
- size_t *head, size_t *tail, int *niov)
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+ size_t offset, size_t len,
+ size_t *head, size_t *tail, int *niov)
{
struct iovec *iov, *end_iov;
@@ -411,7 +411,7 @@ int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len)
size_t head, tail;
int niov;
- qiov_slice(qiov, offset, len, &head, &tail, &niov);
+ qemu_iovec_slice(qiov, offset, len, &head, &tail, &niov);
return niov;
}
@@ -439,8 +439,8 @@ int qemu_iovec_init_extended(
}
if (mid_len) {
- mid_iov = qiov_slice(mid_qiov, mid_offset, mid_len,
- &mid_head, &mid_tail, &mid_niov);
+ mid_iov = qemu_iovec_slice(mid_qiov, mid_offset, mid_len,
+ &mid_head, &mid_tail, &mid_niov);
}
total_niov = !!head_len + mid_niov + !!tail_len;
--
2.39.1
* [PATCH v2 2/4] block: Collapse padded I/O vecs exceeding IOV_MAX
From: Hanna Czenczek @ 2023-04-11 17:34 UTC (permalink / raw)
To: qemu-block
Cc: qemu-devel, Hanna Czenczek, Vladimir Sementsov-Ogievskiy,
Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
When processing vectored guest requests that are not aligned to the
storage request alignment, we pad them by adding head and/or tail
buffers for a read-modify-write cycle.
The guest can submit I/O vectors up to IOV_MAX (1024) in length, but
with this padding, the vector can exceed that limit. As of
4c002cef0e9abe7135d7916c51abce47f7fc1ee2 ("util/iov: make
qemu_iovec_init_extended() honest"), we refuse to pad vectors beyond the
limit, instead returning an error to the guest.
To the guest, this appears as a random I/O error. We should not return
an I/O error to the guest when it issued a perfectly valid request.
Before 4c002cef0e9abe7135d7916c51abce47f7fc1ee2, we just made the vector
longer than IOV_MAX, which generally seems to work (because the guest
assumes a smaller alignment than we really have, file-posix's
raw_co_prw() will generally see bdrv_qiov_is_aligned() return false and
thus emulate the request, so the IOV_MAX limit does not matter).
However, that does not seem exactly great.
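(For reference, that alignment check looks roughly like the following
sketch; it is simplified from file-posix and assumes only that each
element's base address and length are tested against the memory
alignment:)

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/uio.h>

    static bool qiov_is_aligned(struct iovec *iov, int niov, size_t align)
    {
        int i;

        for (i = 0; i < niov; i++) {
            if ((uintptr_t)iov[i].iov_base % align ||
                iov[i].iov_len % align)
            {
                return false;
            }
        }
        return true;
    }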
I see two ways to fix this problem:
1. We split such long requests into two requests.
2. We join some elements of the vector into new buffers to make it
shorter.
I am wary of (1), because it seems like it may have unintended side
effects.
(2) on the other hand seems relatively simple to implement, with
hopefully few side effects, so this patch does that.
To do this, the use of qemu_iovec_init_extended() in bdrv_pad_request()
is effectively replaced by the new function bdrv_create_padded_qiov(),
which not only wraps the request IOV with padding head/tail, but also
ensures that the resulting vector will not have more than IOV_MAX
elements. Putting that functionality into qemu_iovec_init_extended() is
infeasible because it requires allocating a bounce buffer; doing so
would require many more parameters (buffer alignment, how to initialize
the buffer, and out parameters like the buffer, its length, and the
original elements), which is not reasonable.
Conversely, it is not difficult to move qemu_iovec_init_extended()'s
functionality into bdrv_create_padded_qiov() by using public
qemu_iovec_* functions, so that is what this patch does.
Because bdrv_pad_request() was the only "serious" user of
qemu_iovec_init_extended(), the next patch will remove the latter
function, so the functionality is not implemented twice.
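A worked example with the names used in this patch: a guest submits a
1024-element vector whose start and end offsets are both unaligned, so
head and tail padding are both needed:

    padded_niov    = 1 + 1024 + 1      = 1026   (exceeds IOV_MAX)
    surplus_count  = 1026 - IOV_MAX    = 2
    collapse_count = surplus_count + 1 = 3

The first three guest elements are merged into one bounce buffer
(collapse_bounce_buf), so the padded vector has
1 + 1 + (1024 - 3) + 1 = 1024 elements, exactly IOV_MAX. For reads,
bdrv_padding_finalize() copies the bounce buffer contents back into the
three original elements, which are retained in pre_collapse_qiov.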
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2141964
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/io.c | 166 ++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 151 insertions(+), 15 deletions(-)
diff --git a/block/io.c b/block/io.c
index 8974d46941..587c3ed1db 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1435,6 +1435,14 @@ out:
* @merge_reads is true for small requests,
* if @buf_len == @head + bytes + @tail. In this case it is possible that both
* head and tail exist but @buf_len == align and @tail_buf == @buf.
+ *
+ * @write is true for write requests, false for read requests.
+ *
+ * If padding makes the vector too long (exceeding IOV_MAX), then we need to
+ * merge existing vector elements into a single one. @collapse_bounce_buf acts
+ * as the bounce buffer in such cases. @pre_collapse_qiov holds the
+ * pre-collapse I/O vector elements, so that for read requests, the data
+ * can be copied back after the read is done.
*/
typedef struct BdrvRequestPadding {
uint8_t *buf;
@@ -1443,11 +1451,17 @@ typedef struct BdrvRequestPadding {
size_t head;
size_t tail;
bool merge_reads;
+ bool write;
QEMUIOVector local_qiov;
+
+ uint8_t *collapse_bounce_buf;
+ size_t collapse_len;
+ QEMUIOVector pre_collapse_qiov;
} BdrvRequestPadding;
static bool bdrv_init_padding(BlockDriverState *bs,
int64_t offset, int64_t bytes,
+ bool write,
BdrvRequestPadding *pad)
{
int64_t align = bs->bl.request_alignment;
@@ -1479,6 +1493,8 @@ static bool bdrv_init_padding(BlockDriverState *bs,
pad->tail_buf = pad->buf + pad->buf_len - align;
}
+ pad->write = write;
+
return true;
}
@@ -1543,8 +1559,23 @@ zero_mem:
return 0;
}
-static void bdrv_padding_destroy(BdrvRequestPadding *pad)
+/**
+ * Free *pad's associated buffers, and perform any necessary finalization steps.
+ */
+static void bdrv_padding_finalize(BdrvRequestPadding *pad)
{
+ if (pad->collapse_bounce_buf) {
+ if (!pad->write) {
+ /*
+ * If padding required elements in the vector to be collapsed into a
+ * bounce buffer, copy the bounce buffer content back
+ */
+ qemu_iovec_from_buf(&pad->pre_collapse_qiov, 0,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+ qemu_vfree(pad->collapse_bounce_buf);
+ qemu_iovec_destroy(&pad->pre_collapse_qiov);
+ }
if (pad->buf) {
qemu_vfree(pad->buf);
qemu_iovec_destroy(&pad->local_qiov);
@@ -1552,6 +1583,101 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
memset(pad, 0, sizeof(*pad));
}
+/*
+ * Create pad->local_qiov by wrapping @iov in the padding head and tail, while
+ * ensuring that the resulting vector will not exceed IOV_MAX elements.
+ *
+ * To ensure this, when necessary, the first two or three elements of @iov are
+ * merged into pad->collapse_bounce_buf and replaced by a reference to that
+ * bounce buffer in pad->local_qiov.
+ *
+ * After performing a read request, the data from the bounce buffer must be
+ * copied back into pad->pre_collapse_qiov (e.g. by bdrv_padding_finalize()).
+ */
+static int bdrv_create_padded_qiov(BlockDriverState *bs,
+ BdrvRequestPadding *pad,
+ struct iovec *iov, int niov,
+ size_t iov_offset, size_t bytes)
+{
+ int padded_niov, surplus_count, collapse_count;
+
+ /* The source vector must not exceed IOV_MAX on its own */
+ assert(niov <= IOV_MAX);
+
+ /*
+ * Cannot pad if resulting length would exceed SIZE_MAX. Returning an error
+ * to the guest is not ideal, but there is little else we can do. At least
+ * this will practically never happen on 64-bit systems.
+ */
+ if (SIZE_MAX - pad->head < bytes ||
+ SIZE_MAX - pad->head - bytes < pad->tail)
+ {
+ return -EINVAL;
+ }
+
+ /* Element count of the resulting vector if we just concatenated everything */
+ padded_niov = !!pad->head + niov + !!pad->tail;
+
+ qemu_iovec_init(&pad->local_qiov, MIN(padded_niov, IOV_MAX));
+
+ if (pad->head) {
+ qemu_iovec_add(&pad->local_qiov, pad->buf, pad->head);
+ }
+
+ /*
+ * If padded_niov > IOV_MAX, we cannot just concatenate everything.
+ * Instead, merge the first two or three elements of @iov to reduce the
+ * number of vector elements as necessary.
+ */
+ if (padded_niov > IOV_MAX) {
+ /*
+ * Only head and tail can lead to the number of entries exceeding
+ * IOV_MAX, so we can exceed it by at most two (head and tail). We need
+ * to reduce the number of elements by `surplus_count`, so we merge that
+ * many elements plus one into one element.
+ */
+ surplus_count = padded_niov - IOV_MAX;
+ assert(surplus_count <= !!pad->head + !!pad->tail);
+ collapse_count = surplus_count + 1;
+
+ /*
+ * Move the elements to collapse into `pad->pre_collapse_qiov`, then
+ * advance `iov` (and associated variables) by those elements.
+ */
+ qemu_iovec_init(&pad->pre_collapse_qiov, collapse_count);
+ qemu_iovec_concat_iov(&pad->pre_collapse_qiov, iov,
+ collapse_count, iov_offset, SIZE_MAX);
+ iov += collapse_count;
+ iov_offset = 0;
+ niov -= collapse_count;
+ bytes -= pad->pre_collapse_qiov.size;
+
+ /*
+ * Construct the bounce buffer to match the length of the to-collapse
+ * vector elements, and for write requests, initialize it with the data
+ * from those elements. Then add it to `pad->local_qiov`.
+ */
+ pad->collapse_len = pad->pre_collapse_qiov.size;
+ pad->collapse_bounce_buf = qemu_blockalign(bs, pad->collapse_len);
+ if (pad->write) {
+ qemu_iovec_to_buf(&pad->pre_collapse_qiov, 0,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+ qemu_iovec_add(&pad->local_qiov,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+
+ qemu_iovec_concat_iov(&pad->local_qiov, iov, niov, iov_offset, bytes);
+
+ if (pad->tail) {
+ qemu_iovec_add(&pad->local_qiov,
+ pad->buf + pad->buf_len - pad->tail, pad->tail);
+ }
+
+ assert(pad->local_qiov.niov == MIN(padded_niov, IOV_MAX));
+ return 0;
+}
+
/*
* bdrv_pad_request
*
@@ -1559,6 +1685,8 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
* read of padding, bdrv_padding_rmw_read() should be called separately if
* needed.
*
+ * @write is true for write requests, false for read requests.
+ *
* Request parameters (@qiov, &qiov_offset, &offset, &bytes) are in-out:
* - on function start they represent original request
* - on failure or when padding is not needed they are unchanged
@@ -1567,26 +1695,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
static int bdrv_pad_request(BlockDriverState *bs,
QEMUIOVector **qiov, size_t *qiov_offset,
int64_t *offset, int64_t *bytes,
+ bool write,
BdrvRequestPadding *pad, bool *padded,
BdrvRequestFlags *flags)
{
int ret;
+ struct iovec *sliced_iov;
+ int sliced_niov;
+ size_t sliced_head, sliced_tail;
bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
- if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
+ if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
if (padded) {
*padded = false;
}
return 0;
}
- ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
- *qiov, *qiov_offset, *bytes,
- pad->buf + pad->buf_len - pad->tail,
- pad->tail);
+ sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
+ &sliced_head, &sliced_tail,
+ &sliced_niov);
+
+ /* Guaranteed by bdrv_check_qiov_request() */
+ assert(*bytes <= SIZE_MAX);
+ ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
+ sliced_head, *bytes);
if (ret < 0) {
- bdrv_padding_destroy(pad);
+ bdrv_padding_finalize(pad);
return ret;
}
*bytes += pad->head + pad->tail;
@@ -1653,8 +1789,8 @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,
flags |= BDRV_REQ_COPY_ON_READ;
}
- ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad,
- NULL, &flags);
+ ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, false,
+ &pad, NULL, &flags);
if (ret < 0) {
goto fail;
}
@@ -1664,7 +1800,7 @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,
bs->bl.request_alignment,
qiov, qiov_offset, flags);
tracked_request_end(&req);
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
fail:
bdrv_dec_in_flight(bs);
@@ -1996,7 +2132,7 @@ bdrv_co_do_zero_pwritev(BdrvChild *child, int64_t offset, int64_t bytes,
/* This flag doesn't make sense for padding or zero writes */
flags &= ~BDRV_REQ_REGISTERED_BUF;
- padding = bdrv_init_padding(bs, offset, bytes, &pad);
+ padding = bdrv_init_padding(bs, offset, bytes, true, &pad);
if (padding) {
assert(!(flags & BDRV_REQ_NO_WAIT));
bdrv_make_request_serialising(req, align);
@@ -2044,7 +2180,7 @@ bdrv_co_do_zero_pwritev(BdrvChild *child, int64_t offset, int64_t bytes,
}
out:
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
return ret;
}
@@ -2112,8 +2248,8 @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
* bdrv_co_do_zero_pwritev() does aligning by itself, so, we do
* alignment only if there is no ZERO flag.
*/
- ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad,
- &padded, &flags);
+ ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, true,
+ &pad, &padded, &flags);
if (ret < 0) {
return ret;
}
@@ -2143,7 +2279,7 @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
ret = bdrv_aligned_pwritev(child, &req, offset, bytes, align,
qiov, qiov_offset, flags);
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
out:
tracked_request_end(&req);
--
2.39.1
* [PATCH v2 3/4] util/iov: Remove qemu_iovec_init_extended()
From: Hanna Czenczek @ 2023-04-11 17:34 UTC (permalink / raw)
To: qemu-block
Cc: qemu-devel, Hanna Czenczek, Vladimir Sementsov-Ogievskiy,
Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
bdrv_pad_request() was the main user of qemu_iovec_init_extended().
HEAD^ has removed that use, so we can remove qemu_iovec_init_extended()
now.
The only remaining user is qemu_iovec_init_slice(), which can easily
inline the small part it really needs.
Note that qemu_iovec_init_extended() offered a memcpy() optimization to
initialize the new I/O vector. qemu_iovec_concat_iov(), which is used
to replace its functionality, does not, but calls qemu_iovec_add() for
every single element. If we decide this optimization was important, we
will need to re-implement it in qemu_iovec_concat_iov(), which might
also benefit its pre-existing users.
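(Should that become necessary, a hypothetical sketch of such a fast path
in qemu_iovec_concat_iov() might look as follows, assuming the
destination's iov array has already been grown to sufficient capacity;
`first`, `head`, `tail`, and `len` are hypothetical locals describing
the sliced source:)

    /* Hypothetical: bulk-copy the source elements, then trim the first
     * and last ones, instead of calling qemu_iovec_add() per element */
    memcpy(&dst->iov[dst->niov], first, niov * sizeof(struct iovec));
    dst->iov[dst->niov].iov_base = (uint8_t *)first[0].iov_base + head;
    dst->iov[dst->niov].iov_len -= head;
    dst->iov[dst->niov + niov - 1].iov_len -= tail;
    dst->niov += niov;
    dst->size += len;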
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
include/qemu/iov.h | 5 ---
util/iov.c | 79 +++++++---------------------------------------
2 files changed, 11 insertions(+), 73 deletions(-)
diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index 46fadfb27a..63a1c01965 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -222,11 +222,6 @@ static inline void *qemu_iovec_buf(QEMUIOVector *qiov)
void qemu_iovec_init(QEMUIOVector *qiov, int alloc_hint);
void qemu_iovec_init_external(QEMUIOVector *qiov, struct iovec *iov, int niov);
-int qemu_iovec_init_extended(
- QEMUIOVector *qiov,
- void *head_buf, size_t head_len,
- QEMUIOVector *mid_qiov, size_t mid_offset, size_t mid_len,
- void *tail_buf, size_t tail_len);
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len);
struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
diff --git a/util/iov.c b/util/iov.c
index 65a70449da..866fb577f3 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -416,70 +416,6 @@ int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len)
return niov;
}
-/*
- * Compile new iovec, combining @head_buf buffer, sub-qiov of @mid_qiov,
- * and @tail_buf buffer into new qiov.
- */
-int qemu_iovec_init_extended(
- QEMUIOVector *qiov,
- void *head_buf, size_t head_len,
- QEMUIOVector *mid_qiov, size_t mid_offset, size_t mid_len,
- void *tail_buf, size_t tail_len)
-{
- size_t mid_head, mid_tail;
- int total_niov, mid_niov = 0;
- struct iovec *p, *mid_iov = NULL;
-
- assert(mid_qiov->niov <= IOV_MAX);
-
- if (SIZE_MAX - head_len < mid_len ||
- SIZE_MAX - head_len - mid_len < tail_len)
- {
- return -EINVAL;
- }
-
- if (mid_len) {
- mid_iov = qemu_iovec_slice(mid_qiov, mid_offset, mid_len,
- &mid_head, &mid_tail, &mid_niov);
- }
-
- total_niov = !!head_len + mid_niov + !!tail_len;
- if (total_niov > IOV_MAX) {
- return -EINVAL;
- }
-
- if (total_niov == 1) {
- qemu_iovec_init_buf(qiov, NULL, 0);
- p = &qiov->local_iov;
- } else {
- qiov->niov = qiov->nalloc = total_niov;
- qiov->size = head_len + mid_len + tail_len;
- p = qiov->iov = g_new(struct iovec, qiov->niov);
- }
-
- if (head_len) {
- p->iov_base = head_buf;
- p->iov_len = head_len;
- p++;
- }
-
- assert(!mid_niov == !mid_len);
- if (mid_niov) {
- memcpy(p, mid_iov, mid_niov * sizeof(*p));
- p[0].iov_base = (uint8_t *)p[0].iov_base + mid_head;
- p[0].iov_len -= mid_head;
- p[mid_niov - 1].iov_len -= mid_tail;
- p += mid_niov;
- }
-
- if (tail_len) {
- p->iov_base = tail_buf;
- p->iov_len = tail_len;
- }
-
- return 0;
-}
-
/*
* Check if the contents of subrange of qiov data is all zeroes.
*/
@@ -511,14 +447,21 @@ bool qemu_iovec_is_zero(QEMUIOVector *qiov, size_t offset, size_t bytes)
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len)
{
- int ret;
+ struct iovec *slice_iov;
+ int slice_niov;
+ size_t slice_head, slice_tail;
assert(source->size >= len);
assert(source->size - len >= offset);
- /* We shrink the request, so we can't overflow neither size_t nor MAX_IOV */
- ret = qemu_iovec_init_extended(qiov, NULL, 0, source, offset, len, NULL, 0);
- assert(ret == 0);
+ slice_iov = qemu_iovec_slice(source, offset, len,
+ &slice_head, &slice_tail, &slice_niov);
+ if (slice_niov == 1) {
+ qemu_iovec_init_buf(qiov, slice_iov[0].iov_base + slice_head, len);
+ } else {
+ qemu_iovec_init(qiov, slice_niov);
+ qemu_iovec_concat_iov(qiov, slice_iov, slice_niov, slice_head, len);
+ }
}
void qemu_iovec_destroy(QEMUIOVector *qiov)
--
2.39.1
* [PATCH v2 4/4] iotests/iov-padding: New test
From: Hanna Czenczek @ 2023-04-11 17:34 UTC (permalink / raw)
To: qemu-block
Cc: qemu-devel, Hanna Czenczek, Vladimir Sementsov-Ogievskiy,
Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
Test that vectored I/O requests with 1024 vector elements succeed even
when they are not aligned to the device's request alignment.
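For reference, qemu-io's vectored commands take the offset followed by
one buffer length per vector element. A small hypothetical invocation
(test.img is a placeholder):

    # Three-element vectored write of pattern 0x2 at offset 4096,
    # using one 512-byte, one 1024-byte, and one 512-byte buffer:
    $ qemu-io -c 'writev -P 2 4096 512 1024 512' test.img

The test below scales this up to 1024 elements by generating the length
list with yes/head/tr.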
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
tests/qemu-iotests/tests/iov-padding | 85 ++++++++++++++++++++++++
tests/qemu-iotests/tests/iov-padding.out | 59 ++++++++++++++++
2 files changed, 144 insertions(+)
create mode 100755 tests/qemu-iotests/tests/iov-padding
create mode 100644 tests/qemu-iotests/tests/iov-padding.out
diff --git a/tests/qemu-iotests/tests/iov-padding b/tests/qemu-iotests/tests/iov-padding
new file mode 100755
index 0000000000..b9604900c7
--- /dev/null
+++ b/tests/qemu-iotests/tests/iov-padding
@@ -0,0 +1,85 @@
+#!/usr/bin/env bash
+# group: rw quick
+#
+# Check the interaction of request padding (to fit alignment restrictions) with
+# vectored I/O from the guest
+#
+# Copyright Red Hat
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+
+seq=$(basename $0)
+echo "QA output created by $seq"
+
+status=1 # failure is the default!
+
+_cleanup()
+{
+ _cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+cd ..
+. ./common.rc
+. ./common.filter
+
+_supported_fmt raw
+_supported_proto file
+
+_make_test_img 1M
+
+IMGSPEC="driver=blkdebug,align=4096,image.driver=file,image.filename=$TEST_IMG"
+
+# Four combinations:
+# - Offset 4096, length 1023 * 512 + 512: Fully aligned to 4k
+# - Offset 4096, length 1023 * 512 + 4096: Head is aligned, tail is not
+# - Offset 512, length 1023 * 512 + 512: Neither head nor tail are aligned
+# - Offset 512, length 1023 * 512 + 4096: Tail is aligned, head is not
+for start_offset in 4096 512; do
+ for last_element_length in 512 4096; do
+ length=$((1023 * 512 + $last_element_length))
+
+ echo
+ echo "== performing 1024-element vectored requests to image (offset: $start_offset; length: $length) =="
+
+ # Fill with data for testing
+ $QEMU_IO -c 'write -P 1 0 1M' "$TEST_IMG" | _filter_qemu_io
+
+ # 1023 512-byte buffers, and then one with length $last_element_length
+ cmd_params="-P 2 $start_offset $(yes 512 | head -n 1023 | tr '\n' ' ') $last_element_length"
+ QEMU_IO_OPTIONS="$QEMU_IO_OPTIONS_NO_FMT" $QEMU_IO \
+ -c "writev $cmd_params" \
+ --image-opts \
+ "$IMGSPEC" \
+ | _filter_qemu_io
+
+ # Read all patterns -- read the part we just wrote with writev twice,
+ # once "normally", and once with a readv, so we see that that works, too
+ QEMU_IO_OPTIONS="$QEMU_IO_OPTIONS_NO_FMT" $QEMU_IO \
+ -c "read -P 1 0 $start_offset" \
+ -c "read -P 2 $start_offset $length" \
+ -c "readv $cmd_params" \
+ -c "read -P 1 $((start_offset + length)) $((1024 * 1024 - length - start_offset))" \
+ --image-opts \
+ "$IMGSPEC" \
+ | _filter_qemu_io
+ done
+done
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/iov-padding.out b/tests/qemu-iotests/tests/iov-padding.out
new file mode 100644
index 0000000000..e07a91fac7
--- /dev/null
+++ b/tests/qemu-iotests/tests/iov-padding.out
@@ -0,0 +1,59 @@
+QA output created by iov-padding
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576
+
+== performing 1024-element vectored requests to image (offset: 4096; length: 524288) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 520192/520192 bytes at offset 528384
+508 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 4096; length: 527872) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 516608/516608 bytes at offset 531968
+504.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 512; length: 524288) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 512/512 bytes at offset 0
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 523776/523776 bytes at offset 524800
+511.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 512; length: 527872) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 512/512 bytes at offset 0
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 520192/520192 bytes at offset 528384
+508 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+*** done
--
2.39.1
* Re: [PATCH v2 2/4] block: Collapse padded I/O vecs exceeding IOV_MAX
From: Vladimir Sementsov-Ogievskiy @ 2023-04-11 18:55 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block
Cc: qemu-devel, Eric Blake, Kevin Wolf, Stefan Hajnoczi, Fam Zheng
On 11.04.23 20:34, Hanna Czenczek wrote:
> When processing vectored guest requests that are not aligned to the
> storage request alignment, we pad them by adding head and/or tail
> buffers for a read-modify-write cycle.
>
> The guest can submit I/O vectors up to IOV_MAX (1024) in length, but
> with this padding, the vector can exceed that limit. As of
> 4c002cef0e9abe7135d7916c51abce47f7fc1ee2 ("util/iov: make
> qemu_iovec_init_extended() honest"), we refuse to pad vectors beyond the
> limit, instead returning an error to the guest.
>
> To the guest, this appears as a random I/O error. We should not return
> an I/O error to the guest when it issued a perfectly valid request.
>
> Before 4c002cef0e9abe7135d7916c51abce47f7fc1ee2, we just made the vector
> longer than IOV_MAX, which generally seems to work (because the guest
> assumes a smaller alignment than we really have, file-posix's
> raw_co_prw() will generally see bdrv_qiov_is_aligned() return false and
> thus emulate the request, so the IOV_MAX limit does not matter).
> However, that does not seem exactly great.
>
> I see two ways to fix this problem:
> 1. We split such long requests into two requests.
> 2. We join some elements of the vector into new buffers to make it
> shorter.
>
> I am wary of (1), because it seems like it may have unintended side
> effects.
>
> (2) on the other hand seems relatively simple to implement, with
> hopefully few side effects, so this patch does that.
>
> To do this, the use of qemu_iovec_init_extended() in bdrv_pad_request()
> is effectively replaced by the new function bdrv_create_padded_qiov(),
> which not only wraps the request IOV with padding head/tail, but also
> ensures that the resulting vector will not have more than IOV_MAX
> elements. Putting that functionality into qemu_iovec_init_extended() is
> infeasible because it requires allocating a bounce buffer; doing so
> would require many more parameters (buffer alignment, how to initialize
> the buffer, and out parameters like the buffer, its length, and the
> original elements), which is not reasonable.
>
> Conversely, it is not difficult to move qemu_iovec_init_extended()'s
> functionality into bdrv_create_padded_qiov() by using public
> qemu_iovec_* functions, so that is what this patch does.
>
> Because bdrv_pad_request() was the only "serious" user of
> qemu_iovec_init_extended(), the next patch will remove the latter
> function, so the functionality is not implemented twice.
>
> Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2141964
> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
--
Best regards,
Vladimir