* [PULL 00/17] Block patches
@ 2023-06-05 15:45 Hanna Czenczek
2023-06-05 15:45 ` [PULL 01/17] util/iov: Make qiov_slice() public Hanna Czenczek
` (17 more replies)
0 siblings, 18 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
The following changes since commit 848a6caa88b9f082c89c9b41afa975761262981d:
Merge tag 'migration-20230602-pull-request' of https://gitlab.com/juan.quintela/qemu into staging (2023-06-02 17:33:29 -0700)
are available in the Git repository at:
https://gitlab.com/hreitz/qemu.git tags/pull-block-2023-06-05
for you to fetch changes up to 42a2890a76f4783cd1c212f27856edcf2b5e8a75:
qcow2: add discard-no-unref option (2023-06-05 13:15:42 +0200)
----------------------------------------------------------------
Block patches
- Fix padding of unaligned vectored requests to match the host alignment
for vectors with 1023 or 1024 buffers
- Refactor and fix bugs in parallels's image check functionality
- Add an option to the qcow2 driver to retain (qcow2-level) allocations
on discard requests from the guest (while still forwarding the discard
to the lower level and marking the range as zero)
----------------------------------------------------------------
Alexander Ivanov (12):
parallels: Out of image offset in BAT leads to image inflation
parallels: Fix high_off calculation in parallels_co_check()
parallels: Fix image_end_offset and data_end after out-of-image check
parallels: create parallels_set_bat_entry_helper() to assign BAT value
parallels: Use generic infrastructure for BAT writing in
parallels_co_check()
parallels: Move check of unclean image to a separate function
parallels: Move check of cluster outside image to a separate function
parallels: Fix statistics calculation
parallels: Move check of leaks to a separate function
parallels: Move statistic collection to a separate function
parallels: Replace qemu_co_mutex_lock by WITH_QEMU_LOCK_GUARD
parallels: Incorrect condition in out-of-image check
Hanna Czenczek (4):
util/iov: Make qiov_slice() public
block: Collapse padded I/O vecs exceeding IOV_MAX
util/iov: Remove qemu_iovec_init_extended()
iotests/iov-padding: New test
Jean-Louis Dupond (1):
qcow2: add discard-no-unref option
qapi/block-core.json | 12 ++
block/qcow2.h | 3 +
include/qemu/iov.h | 8 +-
block/io.c | 166 ++++++++++++++++++--
block/parallels.c | 190 ++++++++++++++++-------
block/qcow2-cluster.c | 32 +++-
block/qcow2.c | 18 +++
util/iov.c | 89 ++---------
qemu-options.hx | 12 ++
tests/qemu-iotests/tests/iov-padding | 85 ++++++++++
tests/qemu-iotests/tests/iov-padding.out | 59 +++++++
11 files changed, 523 insertions(+), 151 deletions(-)
create mode 100755 tests/qemu-iotests/tests/iov-padding
create mode 100644 tests/qemu-iotests/tests/iov-padding.out
--
2.40.1
^ permalink raw reply [flat|nested] 29+ messages in thread
* [PULL 01/17] util/iov: Make qiov_slice() public
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX Hanna Czenczek
` (16 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
We want to inline qemu_iovec_init_extended() in block/io.c for padding
requests, and having access to qiov_slice() is useful for this. As a
public function, it is renamed to qemu_iovec_slice().
(We will need to count the number of I/O vector elements of a slice
there, and then later process this slice. Without qiov_slice(), we
would need to call qemu_iovec_subvec_niov(), and all further
IOV-processing functions may need to skip prefixing elements to
accommodate a qiov_offset.  Because qemu_iovec_subvec_niov()
internally calls qiov_slice(), we can just have the block/io.c code call
qiov_slice() itself, thus get the number of elements, and also create an
iovec array with the superfluous prefixing elements stripped, so the
following processing functions no longer need to skip them.)
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-2-hreitz@redhat.com>
---
include/qemu/iov.h | 3 +++
util/iov.c | 14 +++++++-------
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index 9330746680..46fadfb27a 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -229,6 +229,9 @@ int qemu_iovec_init_extended(
void *tail_buf, size_t tail_len);
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len);
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+ size_t offset, size_t len,
+ size_t *head, size_t *tail, int *niov);
int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len);
void qemu_iovec_add(QEMUIOVector *qiov, void *base, size_t len);
void qemu_iovec_concat(QEMUIOVector *dst,
diff --git a/util/iov.c b/util/iov.c
index b4be580022..65a70449da 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -378,15 +378,15 @@ static struct iovec *iov_skip_offset(struct iovec *iov, size_t offset,
}
/*
- * qiov_slice
+ * qemu_iovec_slice
*
* Find subarray of iovec's, containing requested range. @head would
* be offset in first iov (returned by the function), @tail would be
* count of extra bytes in last iovec (returned iov + @niov - 1).
*/
-static struct iovec *qiov_slice(QEMUIOVector *qiov,
- size_t offset, size_t len,
- size_t *head, size_t *tail, int *niov)
+struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
+ size_t offset, size_t len,
+ size_t *head, size_t *tail, int *niov)
{
struct iovec *iov, *end_iov;
@@ -411,7 +411,7 @@ int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len)
size_t head, tail;
int niov;
- qiov_slice(qiov, offset, len, &head, &tail, &niov);
+ qemu_iovec_slice(qiov, offset, len, &head, &tail, &niov);
return niov;
}
@@ -439,8 +439,8 @@ int qemu_iovec_init_extended(
}
if (mid_len) {
- mid_iov = qiov_slice(mid_qiov, mid_offset, mid_len,
- &mid_head, &mid_tail, &mid_niov);
+ mid_iov = qemu_iovec_slice(mid_qiov, mid_offset, mid_len,
+ &mid_head, &mid_tail, &mid_niov);
}
total_niov = !!head_len + mid_niov + !!tail_len;
--
2.40.1
* [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
2023-06-05 15:45 ` [PULL 01/17] util/iov: Make qiov_slice() public Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-06 8:00 ` Michael Tokarev
2023-06-08 9:52 ` Peter Maydell
2023-06-05 15:45 ` [PULL 03/17] util/iov: Remove qemu_iovec_init_extended() Hanna Czenczek
` (15 subsequent siblings)
17 siblings, 2 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
When processing vectored guest requests that are not aligned to the
storage request alignment, we pad them by adding head and/or tail
buffers for a read-modify-write cycle.
The guest can submit I/O vectors up to IOV_MAX (1024) in length, but
with this padding, the vector can exceed that limit. As of
4c002cef0e9abe7135d7916c51abce47f7fc1ee2 ("util/iov: make
qemu_iovec_init_extended() honest"), we refuse to pad vectors beyond the
limit, instead returning an error to the guest.
To the guest, this appears as a random I/O error. We should not return
an I/O error to the guest when it issued a perfectly valid request.
Before 4c002cef0e9abe7135d7916c51abce47f7fc1ee2, we just made the vector
longer than IOV_MAX, which generally seems to work (because the guest
assumes a smaller alignment than we really have, file-posix's
raw_co_prw() will generally see bdrv_qiov_is_aligned() return false, and
so emulate the request, so that the IOV_MAX does not matter). However,
that does not seem exactly great.
I see two ways to fix this problem:
1. We split such long requests into two requests.
2. We join some elements of the vector into new buffers to make it
shorter.
I am wary of (1), because it seems like it may have unintended side
effects.
(2) on the other hand seems relatively simple to implement, with
hopefully few side effects, so this patch does that.
To do this, the use of qemu_iovec_init_extended() in bdrv_pad_request()
is effectively replaced by the new function bdrv_create_padded_qiov(),
which not only wraps the request IOV with padding head/tail, but also
ensures that the resulting vector will not have more than IOV_MAX
elements. Putting that functionality into qemu_iovec_init_extended() is
infeasible because it requires allocating a bounce buffer; doing so
would require many more parameters (buffer alignment, how to initialize
the buffer, and out parameters like the buffer, its length, and the
original elements), which is not reasonable.
Conversely, it is not difficult to move qemu_iovec_init_extended()'s
functionality into bdrv_create_padded_qiov() by using public
qemu_iovec_* functions, so that is what this patch does.
Because bdrv_pad_request() was the only "serious" user of
qemu_iovec_init_extended(), the next patch will remove the latter
function, so the functionality is not implemented twice.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=2141964
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-3-hreitz@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
---
block/io.c | 166 ++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 151 insertions(+), 15 deletions(-)
diff --git a/block/io.c b/block/io.c
index f2dfc7c405..30748f0b59 100644
--- a/block/io.c
+++ b/block/io.c
@@ -1441,6 +1441,14 @@ out:
* @merge_reads is true for small requests,
* if @buf_len == @head + bytes + @tail. In this case it is possible that both
* head and tail exist but @buf_len == align and @tail_buf == @buf.
+ *
+ * @write is true for write requests, false for read requests.
+ *
+ * If padding makes the vector too long (exceeding IOV_MAX), then we need to
+ * merge existing vector elements into a single one. @collapse_bounce_buf acts
+ * as the bounce buffer in such cases. @pre_collapse_qiov has the pre-collapse
+ * I/O vector elements so for read requests, the data can be copied back after
+ * the read is done.
*/
typedef struct BdrvRequestPadding {
uint8_t *buf;
@@ -1449,11 +1457,17 @@ typedef struct BdrvRequestPadding {
size_t head;
size_t tail;
bool merge_reads;
+ bool write;
QEMUIOVector local_qiov;
+
+ uint8_t *collapse_bounce_buf;
+ size_t collapse_len;
+ QEMUIOVector pre_collapse_qiov;
} BdrvRequestPadding;
static bool bdrv_init_padding(BlockDriverState *bs,
int64_t offset, int64_t bytes,
+ bool write,
BdrvRequestPadding *pad)
{
int64_t align = bs->bl.request_alignment;
@@ -1485,6 +1499,8 @@ static bool bdrv_init_padding(BlockDriverState *bs,
pad->tail_buf = pad->buf + pad->buf_len - align;
}
+ pad->write = write;
+
return true;
}
@@ -1549,8 +1565,23 @@ zero_mem:
return 0;
}
-static void bdrv_padding_destroy(BdrvRequestPadding *pad)
+/**
+ * Free *pad's associated buffers, and perform any necessary finalization steps.
+ */
+static void bdrv_padding_finalize(BdrvRequestPadding *pad)
{
+ if (pad->collapse_bounce_buf) {
+ if (!pad->write) {
+ /*
+ * If padding required elements in the vector to be collapsed into a
+ * bounce buffer, copy the bounce buffer content back
+ */
+ qemu_iovec_from_buf(&pad->pre_collapse_qiov, 0,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+ qemu_vfree(pad->collapse_bounce_buf);
+ qemu_iovec_destroy(&pad->pre_collapse_qiov);
+ }
if (pad->buf) {
qemu_vfree(pad->buf);
qemu_iovec_destroy(&pad->local_qiov);
@@ -1558,6 +1589,101 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
memset(pad, 0, sizeof(*pad));
}
+/*
+ * Create pad->local_qiov by wrapping @iov in the padding head and tail, while
+ * ensuring that the resulting vector will not exceed IOV_MAX elements.
+ *
+ * To ensure this, when necessary, the first two or three elements of @iov are
+ * merged into pad->collapse_bounce_buf and replaced by a reference to that
+ * bounce buffer in pad->local_qiov.
+ *
+ * After performing a read request, the data from the bounce buffer must be
+ * copied back into pad->pre_collapse_qiov (e.g. by bdrv_padding_finalize()).
+ */
+static int bdrv_create_padded_qiov(BlockDriverState *bs,
+ BdrvRequestPadding *pad,
+ struct iovec *iov, int niov,
+ size_t iov_offset, size_t bytes)
+{
+ int padded_niov, surplus_count, collapse_count;
+
+ /* Assert this invariant */
+ assert(niov <= IOV_MAX);
+
+ /*
+ * Cannot pad if resulting length would exceed SIZE_MAX. Returning an error
+ * to the guest is not ideal, but there is little else we can do. At least
+ * this will practically never happen on 64-bit systems.
+ */
+ if (SIZE_MAX - pad->head < bytes ||
+ SIZE_MAX - pad->head - bytes < pad->tail)
+ {
+ return -EINVAL;
+ }
+
+ /* Length of the resulting IOV if we just concatenated everything */
+ padded_niov = !!pad->head + niov + !!pad->tail;
+
+ qemu_iovec_init(&pad->local_qiov, MIN(padded_niov, IOV_MAX));
+
+ if (pad->head) {
+ qemu_iovec_add(&pad->local_qiov, pad->buf, pad->head);
+ }
+
+ /*
+ * If padded_niov > IOV_MAX, we cannot just concatenate everything.
+ * Instead, merge the first two or three elements of @iov to reduce the
+ * number of vector elements as necessary.
+ */
+ if (padded_niov > IOV_MAX) {
+ /*
+ * Only the head and tail can cause the number of entries to exceed
+ * IOV_MAX, so we can exceed it by the head and tail at most. We need
+ * to reduce the number of elements by `surplus_count`, so we merge that
+ * many elements plus one into one element.
+ */
+ surplus_count = padded_niov - IOV_MAX;
+ assert(surplus_count <= !!pad->head + !!pad->tail);
+ collapse_count = surplus_count + 1;
+
+ /*
+ * Move the elements to collapse into `pad->pre_collapse_qiov`, then
+ * advance `iov` (and associated variables) by those elements.
+ */
+ qemu_iovec_init(&pad->pre_collapse_qiov, collapse_count);
+ qemu_iovec_concat_iov(&pad->pre_collapse_qiov, iov,
+ collapse_count, iov_offset, SIZE_MAX);
+ iov += collapse_count;
+ iov_offset = 0;
+ niov -= collapse_count;
+ bytes -= pad->pre_collapse_qiov.size;
+
+ /*
+ * Construct the bounce buffer to match the length of the to-collapse
+ * vector elements, and for write requests, initialize it with the data
+ * from those elements. Then add it to `pad->local_qiov`.
+ */
+ pad->collapse_len = pad->pre_collapse_qiov.size;
+ pad->collapse_bounce_buf = qemu_blockalign(bs, pad->collapse_len);
+ if (pad->write) {
+ qemu_iovec_to_buf(&pad->pre_collapse_qiov, 0,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+ qemu_iovec_add(&pad->local_qiov,
+ pad->collapse_bounce_buf, pad->collapse_len);
+ }
+
+ qemu_iovec_concat_iov(&pad->local_qiov, iov, niov, iov_offset, bytes);
+
+ if (pad->tail) {
+ qemu_iovec_add(&pad->local_qiov,
+ pad->buf + pad->buf_len - pad->tail, pad->tail);
+ }
+
+ assert(pad->local_qiov.niov == MIN(padded_niov, IOV_MAX));
+ return 0;
+}
+
/*
* bdrv_pad_request
*
@@ -1565,6 +1691,8 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
* read of padding, bdrv_padding_rmw_read() should be called separately if
* needed.
*
+ * @write is true for write requests, false for read requests.
+ *
* Request parameters (@qiov, &qiov_offset, &offset, &bytes) are in-out:
* - on function start they represent original request
* - on failure or when padding is not needed they are unchanged
@@ -1573,26 +1701,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
static int bdrv_pad_request(BlockDriverState *bs,
QEMUIOVector **qiov, size_t *qiov_offset,
int64_t *offset, int64_t *bytes,
+ bool write,
BdrvRequestPadding *pad, bool *padded,
BdrvRequestFlags *flags)
{
int ret;
+ struct iovec *sliced_iov;
+ int sliced_niov;
+ size_t sliced_head, sliced_tail;
bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
- if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
+ if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
if (padded) {
*padded = false;
}
return 0;
}
- ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
- *qiov, *qiov_offset, *bytes,
- pad->buf + pad->buf_len - pad->tail,
- pad->tail);
+ sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
+ &sliced_head, &sliced_tail,
+ &sliced_niov);
+
+ /* Guaranteed by bdrv_check_qiov_request() */
+ assert(*bytes <= SIZE_MAX);
+ ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
+ sliced_head, *bytes);
if (ret < 0) {
- bdrv_padding_destroy(pad);
+ bdrv_padding_finalize(pad);
return ret;
}
*bytes += pad->head + pad->tail;
@@ -1659,8 +1795,8 @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,
flags |= BDRV_REQ_COPY_ON_READ;
}
- ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad,
- NULL, &flags);
+ ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, false,
+ &pad, NULL, &flags);
if (ret < 0) {
goto fail;
}
@@ -1670,7 +1806,7 @@ int coroutine_fn bdrv_co_preadv_part(BdrvChild *child,
bs->bl.request_alignment,
qiov, qiov_offset, flags);
tracked_request_end(&req);
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
fail:
bdrv_dec_in_flight(bs);
@@ -2002,7 +2138,7 @@ bdrv_co_do_zero_pwritev(BdrvChild *child, int64_t offset, int64_t bytes,
/* This flag doesn't make sense for padding or zero writes */
flags &= ~BDRV_REQ_REGISTERED_BUF;
- padding = bdrv_init_padding(bs, offset, bytes, &pad);
+ padding = bdrv_init_padding(bs, offset, bytes, true, &pad);
if (padding) {
assert(!(flags & BDRV_REQ_NO_WAIT));
bdrv_make_request_serialising(req, align);
@@ -2050,7 +2186,7 @@ bdrv_co_do_zero_pwritev(BdrvChild *child, int64_t offset, int64_t bytes,
}
out:
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
return ret;
}
@@ -2118,8 +2254,8 @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
* bdrv_co_do_zero_pwritev() does aligning by itself, so, we do
* alignment only if there is no ZERO flag.
*/
- ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, &pad,
- &padded, &flags);
+ ret = bdrv_pad_request(bs, &qiov, &qiov_offset, &offset, &bytes, true,
+ &pad, &padded, &flags);
if (ret < 0) {
return ret;
}
@@ -2149,7 +2285,7 @@ int coroutine_fn bdrv_co_pwritev_part(BdrvChild *child,
ret = bdrv_aligned_pwritev(child, &req, offset, bytes, align,
qiov, qiov_offset, flags);
- bdrv_padding_destroy(&pad);
+ bdrv_padding_finalize(&pad);
out:
tracked_request_end(&req);
--
2.40.1
* [PULL 03/17] util/iov: Remove qemu_iovec_init_extended()
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
2023-06-05 15:45 ` [PULL 01/17] util/iov: Make qiov_slice() public Hanna Czenczek
2023-06-05 15:45 ` [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 04/17] iotests/iov-padding: New test Hanna Czenczek
` (14 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
bdrv_pad_request() was the main user of qemu_iovec_init_extended().
HEAD^ has removed that use, so we can remove qemu_iovec_init_extended()
now.
The only remaining user is qemu_iovec_init_slice(), which can easily
inline the small part it really needs.
Note that qemu_iovec_init_extended() offered a memcpy() optimization to
initialize the new I/O vector. qemu_iovec_concat_iov(), which is used
to replace its functionality, does not, but calls qemu_iovec_add() for
every single element. If we decide this optimization was important, we
will need to re-implement it in qemu_iovec_concat_iov(), which might
also benefit its pre-existing users.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-4-hreitz@redhat.com>
---
include/qemu/iov.h | 5 ---
util/iov.c | 79 +++++++---------------------------------------
2 files changed, 11 insertions(+), 73 deletions(-)
diff --git a/include/qemu/iov.h b/include/qemu/iov.h
index 46fadfb27a..63a1c01965 100644
--- a/include/qemu/iov.h
+++ b/include/qemu/iov.h
@@ -222,11 +222,6 @@ static inline void *qemu_iovec_buf(QEMUIOVector *qiov)
void qemu_iovec_init(QEMUIOVector *qiov, int alloc_hint);
void qemu_iovec_init_external(QEMUIOVector *qiov, struct iovec *iov, int niov);
-int qemu_iovec_init_extended(
- QEMUIOVector *qiov,
- void *head_buf, size_t head_len,
- QEMUIOVector *mid_qiov, size_t mid_offset, size_t mid_len,
- void *tail_buf, size_t tail_len);
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len);
struct iovec *qemu_iovec_slice(QEMUIOVector *qiov,
diff --git a/util/iov.c b/util/iov.c
index 65a70449da..866fb577f3 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -416,70 +416,6 @@ int qemu_iovec_subvec_niov(QEMUIOVector *qiov, size_t offset, size_t len)
return niov;
}
-/*
- * Compile new iovec, combining @head_buf buffer, sub-qiov of @mid_qiov,
- * and @tail_buf buffer into new qiov.
- */
-int qemu_iovec_init_extended(
- QEMUIOVector *qiov,
- void *head_buf, size_t head_len,
- QEMUIOVector *mid_qiov, size_t mid_offset, size_t mid_len,
- void *tail_buf, size_t tail_len)
-{
- size_t mid_head, mid_tail;
- int total_niov, mid_niov = 0;
- struct iovec *p, *mid_iov = NULL;
-
- assert(mid_qiov->niov <= IOV_MAX);
-
- if (SIZE_MAX - head_len < mid_len ||
- SIZE_MAX - head_len - mid_len < tail_len)
- {
- return -EINVAL;
- }
-
- if (mid_len) {
- mid_iov = qemu_iovec_slice(mid_qiov, mid_offset, mid_len,
- &mid_head, &mid_tail, &mid_niov);
- }
-
- total_niov = !!head_len + mid_niov + !!tail_len;
- if (total_niov > IOV_MAX) {
- return -EINVAL;
- }
-
- if (total_niov == 1) {
- qemu_iovec_init_buf(qiov, NULL, 0);
- p = &qiov->local_iov;
- } else {
- qiov->niov = qiov->nalloc = total_niov;
- qiov->size = head_len + mid_len + tail_len;
- p = qiov->iov = g_new(struct iovec, qiov->niov);
- }
-
- if (head_len) {
- p->iov_base = head_buf;
- p->iov_len = head_len;
- p++;
- }
-
- assert(!mid_niov == !mid_len);
- if (mid_niov) {
- memcpy(p, mid_iov, mid_niov * sizeof(*p));
- p[0].iov_base = (uint8_t *)p[0].iov_base + mid_head;
- p[0].iov_len -= mid_head;
- p[mid_niov - 1].iov_len -= mid_tail;
- p += mid_niov;
- }
-
- if (tail_len) {
- p->iov_base = tail_buf;
- p->iov_len = tail_len;
- }
-
- return 0;
-}
-
/*
* Check if the contents of subrange of qiov data is all zeroes.
*/
@@ -511,14 +447,21 @@ bool qemu_iovec_is_zero(QEMUIOVector *qiov, size_t offset, size_t bytes)
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len)
{
- int ret;
+ struct iovec *slice_iov;
+ int slice_niov;
+ size_t slice_head, slice_tail;
assert(source->size >= len);
assert(source->size - len >= offset);
- /* We shrink the request, so we can't overflow neither size_t nor MAX_IOV */
- ret = qemu_iovec_init_extended(qiov, NULL, 0, source, offset, len, NULL, 0);
- assert(ret == 0);
+ slice_iov = qemu_iovec_slice(source, offset, len,
+ &slice_head, &slice_tail, &slice_niov);
+ if (slice_niov == 1) {
+ qemu_iovec_init_buf(qiov, slice_iov[0].iov_base + slice_head, len);
+ } else {
+ qemu_iovec_init(qiov, slice_niov);
+ qemu_iovec_concat_iov(qiov, slice_iov, slice_niov, slice_head, len);
+ }
}
void qemu_iovec_destroy(QEMUIOVector *qiov)
--
2.40.1
* [PULL 04/17] iotests/iov-padding: New test
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (2 preceding siblings ...)
2023-06-05 15:45 ` [PULL 03/17] util/iov: Remove qemu_iovec_init_extended() Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation Hanna Czenczek
` (13 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
Test that even vectored I/O requests with 1024 vector elements that are
not aligned to the device's request alignment will succeed.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
Message-Id: <20230411173418.19549-5-hreitz@redhat.com>
---
tests/qemu-iotests/tests/iov-padding | 85 ++++++++++++++++++++++++
tests/qemu-iotests/tests/iov-padding.out | 59 ++++++++++++++++
2 files changed, 144 insertions(+)
create mode 100755 tests/qemu-iotests/tests/iov-padding
create mode 100644 tests/qemu-iotests/tests/iov-padding.out
diff --git a/tests/qemu-iotests/tests/iov-padding b/tests/qemu-iotests/tests/iov-padding
new file mode 100755
index 0000000000..b9604900c7
--- /dev/null
+++ b/tests/qemu-iotests/tests/iov-padding
@@ -0,0 +1,85 @@
+#!/usr/bin/env bash
+# group: rw quick
+#
+# Check the interaction of request padding (to fit alignment restrictions) with
+# vectored I/O from the guest
+#
+# Copyright Red Hat
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+#
+
+seq=$(basename $0)
+echo "QA output created by $seq"
+
+status=1 # failure is the default!
+
+_cleanup()
+{
+ _cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+cd ..
+. ./common.rc
+. ./common.filter
+
+_supported_fmt raw
+_supported_proto file
+
+_make_test_img 1M
+
+IMGSPEC="driver=blkdebug,align=4096,image.driver=file,image.filename=$TEST_IMG"
+
+# Four combinations:
+# - Offset 4096, length 1023 * 512 + 512: Fully aligned to 4k
+# - Offset 4096, length 1023 * 512 + 4096: Head is aligned, tail is not
+# - Offset 512, length 1023 * 512 + 512: Neither head nor tail are aligned
+# - Offset 512, length 1023 * 512 + 4096: Tail is aligned, head is not
+for start_offset in 4096 512; do
+ for last_element_length in 512 4096; do
+ length=$((1023 * 512 + $last_element_length))
+
+ echo
+ echo "== performing 1024-element vectored requests to image (offset: $start_offset; length: $length) =="
+
+ # Fill with data for testing
+ $QEMU_IO -c 'write -P 1 0 1M' "$TEST_IMG" | _filter_qemu_io
+
+ # 1023 512-byte buffers, and then one with length $last_element_length
+ cmd_params="-P 2 $start_offset $(yes 512 | head -n 1023 | tr '\n' ' ') $last_element_length"
+ QEMU_IO_OPTIONS="$QEMU_IO_OPTIONS_NO_FMT" $QEMU_IO \
+ -c "writev $cmd_params" \
+ --image-opts \
+ "$IMGSPEC" \
+ | _filter_qemu_io
+
+ # Read all patterns -- read the part we just wrote with writev twice,
+ # once "normally", and once with a readv, so we see that that works, too
+ QEMU_IO_OPTIONS="$QEMU_IO_OPTIONS_NO_FMT" $QEMU_IO \
+ -c "read -P 1 0 $start_offset" \
+ -c "read -P 2 $start_offset $length" \
+ -c "readv $cmd_params" \
+ -c "read -P 1 $((start_offset + length)) $((1024 * 1024 - length - start_offset))" \
+ --image-opts \
+ "$IMGSPEC" \
+ | _filter_qemu_io
+ done
+done
+
+# success, all done
+echo "*** done"
+rm -f $seq.full
+status=0
diff --git a/tests/qemu-iotests/tests/iov-padding.out b/tests/qemu-iotests/tests/iov-padding.out
new file mode 100644
index 0000000000..e07a91fac7
--- /dev/null
+++ b/tests/qemu-iotests/tests/iov-padding.out
@@ -0,0 +1,59 @@
+QA output created by iov-padding
+Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576
+
+== performing 1024-element vectored requests to image (offset: 4096; length: 524288) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 4096
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 520192/520192 bytes at offset 528384
+508 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 4096; length: 527872) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 4096/4096 bytes at offset 0
+4 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 4096
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 516608/516608 bytes at offset 531968
+504.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 512; length: 524288) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 512/512 bytes at offset 0
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 524288/524288 bytes at offset 512
+512 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 523776/523776 bytes at offset 524800
+511.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+
+== performing 1024-element vectored requests to image (offset: 512; length: 527872) ==
+wrote 1048576/1048576 bytes at offset 0
+1 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+wrote 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 512/512 bytes at offset 0
+512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 527872/527872 bytes at offset 512
+515.500 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+read 520192/520192 bytes at offset 528384
+508 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
+*** done
--
2.40.1
* [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (3 preceding siblings ...)
2023-06-05 15:45 ` [PULL 04/17] iotests/iov-padding: New test Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-07 6:51 ` Michael Tokarev
2023-06-05 15:45 ` [PULL 06/17] parallels: Fix high_off calculation in parallels_co_check() Hanna Czenczek
` (12 subsequent siblings)
17 siblings, 1 reply; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
The data_end field in BDRVParallelsState is set to the largest offset present
in the BAT. If this offset is outside of the image, any further write will
create a cluster at this offset and/or the image will be truncated to
this offset on close. This is definitely not correct.
Raise an error in parallels_open() if data_end points outside the image
and we are not running a check (let the check repair the image). Set data_end
to the end of the cluster with the last correct offset.
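The intent of this bounds check can be sketched with a standalone model (hypothetical, simplified types; the real code operates on BDRVParallelsState and uses bdrv_nb_sectors() to obtain the file length):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of the parallels_open() bounds check: every BAT
 * offset (in sectors) must lie inside the file. On success, *data_end
 * ends up at the end of the cluster with the largest valid offset;
 * `tracks` is the cluster size in sectors. */
static int check_bat_bounds(const int64_t *bat_sectors, size_t bat_size,
                            int64_t file_nb_sectors, int64_t tracks,
                            int64_t *data_end)
{
    for (size_t i = 0; i < bat_size; i++) {
        int64_t off = bat_sectors[i];
        if (off >= file_nb_sectors) {
            return -1; /* offset points outside the image: refuse to open */
        }
        if (off >= *data_end) {
            *data_end = off + tracks; /* end of cluster with last correct offset */
        }
    }
    return 0;
}
```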
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Message-Id: <20230424093147.197643-2-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/block/parallels.c b/block/parallels.c
index d8a3f13e24..7b6d770f8e 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -733,6 +733,7 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
BDRVParallelsState *s = bs->opaque;
ParallelsHeader ph;
int ret, size, i;
+ int64_t file_nb_sectors;
QemuOpts *opts = NULL;
Error *local_err = NULL;
char *buf;
@@ -742,6 +743,11 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
+ file_nb_sectors = bdrv_nb_sectors(bs->file->bs);
+ if (file_nb_sectors < 0) {
+ return -EINVAL;
+ }
+
ret = bdrv_pread(bs->file, 0, sizeof(ph), &ph, 0);
if (ret < 0) {
goto fail;
@@ -806,6 +812,17 @@ static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i);
+ if (off >= file_nb_sectors) {
+ if (flags & BDRV_O_CHECK) {
+ continue;
+ }
+ error_setg(errp, "parallels: Offset %" PRIi64 " in BAT[%d] entry "
+ "is larger than file size (%" PRIi64 ")",
+ off << BDRV_SECTOR_BITS, i,
+ file_nb_sectors << BDRV_SECTOR_BITS);
+ ret = -EINVAL;
+ goto fail;
+ }
if (off >= s->data_end) {
s->data_end = off + s->tracks;
}
--
2.40.1
* [PULL 06/17] parallels: Fix high_off calculation in parallels_co_check()
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (4 preceding siblings ...)
2023-06-05 15:45 ` [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 07/17] parallels: Fix image_end_offset and data_end after out-of-image check Hanna Czenczek
` (11 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Don't let high_off exceed the file size, even if we don't fix the image.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230424093147.197643-3-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index 7b6d770f8e..204d20685b 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -462,12 +462,12 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
- prev_off = 0;
s->bat_bitmap[i] = 0;
res->corruptions_fixed++;
flush_bat = true;
- continue;
}
+ prev_off = 0;
+ continue;
}
res->bfi.allocated_clusters++;
--
2.40.1
* [PULL 07/17] parallels: Fix image_end_offset and data_end after out-of-image check
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (5 preceding siblings ...)
2023-06-05 15:45 ` [PULL 06/17] parallels: Fix high_off calculation in parallels_co_check() Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 08/17] parallels: create parallels_set_bat_entry_helper() to assign BAT value Hanna Czenczek
` (10 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Set data_end to the end of the last cluster inside the image. This way we
can be sure that corrupted offsets in the BAT cannot affect the image
size. If there are no allocated clusters, set image_end_offset from
data_end.
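The fixed end-of-data computation can be modeled standalone (a sketch; the real code updates BdrvCheckResult and BDRVParallelsState fields):

```c
#include <assert.h>
#include <stdint.h>

#define SECTOR_BITS 9 /* mirrors BDRV_SECTOR_BITS: 512-byte sectors */

/* Hypothetical sketch: if no cluster is allocated (high_off == 0), the
 * image end is taken from the current data_end; otherwise the image ends
 * one cluster past the highest in-image cluster offset, and data_end is
 * updated from that. */
static int64_t image_end_offset(int64_t high_off, int64_t cluster_size,
                                int64_t *data_end_sectors)
{
    int64_t end;
    if (high_off == 0) {
        end = *data_end_sectors << SECTOR_BITS;
    } else {
        end = high_off + cluster_size;
        *data_end_sectors = end >> SECTOR_BITS;
    }
    return end;
}
```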
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20230424093147.197643-4-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/block/parallels.c b/block/parallels.c
index 204d20685b..ea382e8382 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -490,7 +490,13 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
}
}
- res->image_end_offset = high_off + s->cluster_size;
+ if (high_off == 0) {
+ res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
+ } else {
+ res->image_end_offset = high_off + s->cluster_size;
+ s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
+ }
+
if (size > res->image_end_offset) {
int64_t count;
count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
--
2.40.1
* [PULL 08/17] parallels: create parallels_set_bat_entry_helper() to assign BAT value
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (6 preceding siblings ...)
2023-06-05 15:45 ` [PULL 07/17] parallels: Fix image_end_offset and data_end after out-of-image check Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 09/17] parallels: Use generic infrastructure for BAT writing in parallels_co_check() Hanna Czenczek
` (9 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
This helper will be reused in subsequent patches during the
parallels_co_check rework to simplify its code.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230424093147.197643-5-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index ea382e8382..db6426e5f9 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -165,6 +165,13 @@ static int64_t block_status(BDRVParallelsState *s, int64_t sector_num,
return start_off;
}
+static void parallels_set_bat_entry(BDRVParallelsState *s,
+ uint32_t index, uint32_t offset)
+{
+ s->bat_bitmap[index] = cpu_to_le32(offset);
+ bitmap_set(s->bat_dirty_bmap, bat_entry_off(index) / s->bat_dirty_block, 1);
+}
+
static int64_t coroutine_fn GRAPH_RDLOCK
allocate_clusters(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum)
@@ -251,10 +258,8 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
}
for (i = 0; i < to_allocate; i++) {
- s->bat_bitmap[idx + i] = cpu_to_le32(s->data_end / s->off_multiplier);
+ parallels_set_bat_entry(s, idx + i, s->data_end / s->off_multiplier);
s->data_end += s->tracks;
- bitmap_set(s->bat_dirty_bmap,
- bat_entry_off(idx + i) / s->bat_dirty_block, 1);
}
return bat2sect(s, idx) + sector_num % s->tracks;
--
2.40.1
* [PULL 09/17] parallels: Use generic infrastructure for BAT writing in parallels_co_check()
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (7 preceding siblings ...)
2023-06-05 15:45 ` [PULL 08/17] parallels: create parallels_set_bat_entry_helper() to assign BAT value Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 10/17] parallels: Move check of unclean image to a separate function Hanna Czenczek
` (8 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
The BAT is written in the context of conventional operations over the image
inside bdrv_co_flush() when it calls the parallels_co_flush_to_os() callback.
Thus we should not modify the BAT array directly, but call the
parallels_set_bat_entry() helper and bdrv_co_flush() further on. After
that there is no need to manually write the BAT and track its modification.
This makes the code more generic and allows splitting
parallels_set_bat_entry() into independent pieces.
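The set-entry-then-flush flow can be sketched with a toy model (hypothetical sizes; the real code marks blocks in s->bat_dirty_bmap and parallels_co_flush_to_os() writes them):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model: the setter updates the in-memory BAT and marks the
 * containing block dirty; a later flush writes only dirty blocks. */
#define BAT_SIZE 8
#define ENTRIES_PER_BLOCK 4

static uint32_t bat[BAT_SIZE];
static int dirty[BAT_SIZE / ENTRIES_PER_BLOCK];
static int blocks_written;

static void set_bat_entry(uint32_t index, uint32_t offset)
{
    bat[index] = offset;
    dirty[index / ENTRIES_PER_BLOCK] = 1; /* defer the write to flush time */
}

static void flush_bat(void)
{
    for (int b = 0; b < BAT_SIZE / ENTRIES_PER_BLOCK; b++) {
        if (dirty[b]) {
            blocks_written++; /* stand-in for the actual block write */
            dirty[b] = 0;
        }
    }
}
```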
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20230424093147.197643-6-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index db6426e5f9..fa8a56a948 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -427,9 +427,8 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
{
BDRVParallelsState *s = bs->opaque;
int64_t size, prev_off, high_off;
- int ret;
+ int ret = 0;
uint32_t i;
- bool flush_bat = false;
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
@@ -467,9 +466,8 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
if (fix & BDRV_FIX_ERRORS) {
- s->bat_bitmap[i] = 0;
+ parallels_set_bat_entry(s, i, 0);
res->corruptions_fixed++;
- flush_bat = true;
}
prev_off = 0;
continue;
@@ -486,15 +484,6 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
prev_off = off;
}
- ret = 0;
- if (flush_bat) {
- ret = bdrv_co_pwrite_sync(bs->file, 0, s->header_size, s->header, 0);
- if (ret < 0) {
- res->check_errors++;
- goto out;
- }
- }
-
if (high_off == 0) {
res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
} else {
@@ -529,6 +518,14 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
out:
qemu_co_mutex_unlock(&s->lock);
+
+ if (ret == 0) {
+ ret = bdrv_co_flush(bs);
+ if (ret < 0) {
+ res->check_errors++;
+ }
+ }
+
return ret;
}
--
2.40.1
* [PULL 10/17] parallels: Move check of unclean image to a separate function
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (8 preceding siblings ...)
2023-06-05 15:45 ` [PULL 09/17] parallels: Use generic infrastructure for BAT writing in parallels_co_check() Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 11/17] parallels: Move check of cluster outside " Hanna Czenczek
` (7 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
We will add more and more checks, so we need a better code structure
in parallels_co_check. Let each check be performed in a separate loop
in a separate helper.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230424093147.197643-7-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 31 +++++++++++++++++++++----------
1 file changed, 21 insertions(+), 10 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index fa8a56a948..94b7408b6b 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -420,6 +420,25 @@ parallels_co_readv(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
return ret;
}
+static void parallels_check_unclean(BlockDriverState *bs,
+ BdrvCheckResult *res,
+ BdrvCheckMode fix)
+{
+ BDRVParallelsState *s = bs->opaque;
+
+ if (!s->header_unclean) {
+ return;
+ }
+
+ fprintf(stderr, "%s image was not closed correctly\n",
+ fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR");
+ res->corruptions++;
+ if (fix & BDRV_FIX_ERRORS) {
+ /* parallels_close will do the job right */
+ res->corruptions_fixed++;
+ s->header_unclean = false;
+ }
+}
static int coroutine_fn GRAPH_RDLOCK
parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
@@ -437,16 +456,8 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
}
qemu_co_mutex_lock(&s->lock);
- if (s->header_unclean) {
- fprintf(stderr, "%s image was not closed correctly\n",
- fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR");
- res->corruptions++;
- if (fix & BDRV_FIX_ERRORS) {
- /* parallels_close will do the job right */
- res->corruptions_fixed++;
- s->header_unclean = false;
- }
- }
+
+ parallels_check_unclean(bs, res, fix);
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
--
2.40.1
* [PULL 11/17] parallels: Move check of cluster outside image to a separate function
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (9 preceding siblings ...)
2023-06-05 15:45 ` [PULL 10/17] parallels: Move check of unclean image to a separate function Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 12/17] parallels: Fix statistics calculation Hanna Czenczek
` (6 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
We will add more and more checks, so we need a better code structure
in parallels_co_check. Let each check be performed in a separate loop
in a separate helper.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20230424093147.197643-8-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 75 +++++++++++++++++++++++++++++++----------------
1 file changed, 49 insertions(+), 26 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index 94b7408b6b..7f0f72e879 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -440,13 +440,55 @@ static void parallels_check_unclean(BlockDriverState *bs,
}
}
+static int coroutine_fn GRAPH_RDLOCK
+parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
+ BdrvCheckMode fix)
+{
+ BDRVParallelsState *s = bs->opaque;
+ uint32_t i;
+ int64_t off, high_off, size;
+
+ size = bdrv_getlength(bs->file->bs);
+ if (size < 0) {
+ res->check_errors++;
+ return size;
+ }
+
+ high_off = 0;
+ for (i = 0; i < s->bat_size; i++) {
+ off = bat2sect(s, i) << BDRV_SECTOR_BITS;
+ if (off > size) {
+ fprintf(stderr, "%s cluster %u is outside image\n",
+ fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
+ res->corruptions++;
+ if (fix & BDRV_FIX_ERRORS) {
+ parallels_set_bat_entry(s, i, 0);
+ res->corruptions_fixed++;
+ }
+ continue;
+ }
+ if (high_off < off) {
+ high_off = off;
+ }
+ }
+
+ if (high_off == 0) {
+ res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
+ } else {
+ res->image_end_offset = high_off + s->cluster_size;
+ s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
+ }
+
+ return 0;
+}
+
static int coroutine_fn GRAPH_RDLOCK
parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
- int64_t size, prev_off, high_off;
- int ret = 0;
+ int64_t size, prev_off;
+ int ret;
uint32_t i;
size = bdrv_getlength(bs->file->bs);
@@ -459,10 +501,14 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
parallels_check_unclean(bs, res, fix);
+ ret = parallels_check_outside_image(bs, res, fix);
+ if (ret < 0) {
+ goto out;
+ }
+
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
- high_off = 0;
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
@@ -471,23 +517,7 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
continue;
}
- /* cluster outside the image */
- if (off > size) {
- fprintf(stderr, "%s cluster %u is outside image\n",
- fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
- res->corruptions++;
- if (fix & BDRV_FIX_ERRORS) {
- parallels_set_bat_entry(s, i, 0);
- res->corruptions_fixed++;
- }
- prev_off = 0;
- continue;
- }
-
res->bfi.allocated_clusters++;
- if (off > high_off) {
- high_off = off;
- }
if (prev_off != 0 && (prev_off + s->cluster_size) != off) {
res->bfi.fragmented_clusters++;
@@ -495,13 +525,6 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
prev_off = off;
}
- if (high_off == 0) {
- res->image_end_offset = s->data_end << BDRV_SECTOR_BITS;
- } else {
- res->image_end_offset = high_off + s->cluster_size;
- s->data_end = res->image_end_offset >> BDRV_SECTOR_BITS;
- }
-
if (size > res->image_end_offset) {
int64_t count;
count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
--
2.40.1
* [PULL 12/17] parallels: Fix statistics calculation
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (10 preceding siblings ...)
2023-06-05 15:45 ` [PULL 11/17] parallels: Move check of cluster outside " Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 13/17] parallels: Move check of leaks to a separate function Hanna Czenczek
` (5 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Exclude out-of-image clusters from the allocated and fragmented cluster
calculations.
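The fixed statistics loop can be modeled standalone (a sketch with made-up helper names; the real loop fills res->bfi from the BAT):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical model of the fixed statistics loop: unallocated entries
 * (off == 0) and clusters reaching past image_end_offset are skipped,
 * and skipping resets the fragmentation tracking, mirroring the patch. */
static void collect_stats(const int64_t *off_bytes, size_t n,
                          int64_t cluster_size, int64_t image_end_offset,
                          int64_t *allocated, int64_t *fragmented)
{
    int64_t prev_off = 0;
    *allocated = *fragmented = 0;
    for (size_t i = 0; i < n; i++) {
        int64_t off = off_bytes[i];
        if (off == 0 || off + cluster_size > image_end_offset) {
            prev_off = 0;
            continue;
        }
        (*allocated)++;
        /* non-adjacent successor cluster counts as fragmentation */
        if (prev_off != 0 && prev_off + cluster_size != off) {
            (*fragmented)++;
        }
        prev_off = off;
    }
}
```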
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Message-Id: <20230424093147.197643-9-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/parallels.c b/block/parallels.c
index 7f0f72e879..fd1e2860d0 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -512,7 +512,11 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
- if (off == 0) {
+ /*
+ * If BDRV_FIX_ERRORS is not set, out-of-image BAT entries were not
+ * fixed. Skip not allocated and out-of-image BAT entries.
+ */
+ if (off == 0 || off + s->cluster_size > res->image_end_offset) {
prev_off = 0;
continue;
}
--
2.40.1
* [PULL 13/17] parallels: Move check of leaks to a separate function
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (11 preceding siblings ...)
2023-06-05 15:45 ` [PULL 12/17] parallels: Fix statistics calculation Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 14/17] parallels: Move statistic collection " Hanna Czenczek
` (4 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
We will add more and more checks, so we need a better code structure
in parallels_co_check. Let each check be performed in a separate loop
in a separate helper.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Message-Id: <20230424093147.197643-10-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 74 ++++++++++++++++++++++++++++-------------------
1 file changed, 45 insertions(+), 29 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index fd1e2860d0..ba766e5e86 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -483,13 +483,12 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
}
static int coroutine_fn GRAPH_RDLOCK
-parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
- BdrvCheckMode fix)
+parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
+ BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
- int64_t size, prev_off;
+ int64_t size;
int ret;
- uint32_t i;
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
@@ -497,6 +496,43 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
return size;
}
+ if (size > res->image_end_offset) {
+ int64_t count;
+ count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
+ fprintf(stderr, "%s space leaked at the end of the image %" PRId64 "\n",
+ fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
+ size - res->image_end_offset);
+ res->leaks += count;
+ if (fix & BDRV_FIX_LEAKS) {
+ Error *local_err = NULL;
+
+ /*
+ * In order to really repair the image, we must shrink it.
+ * That means we have to pass exact=true.
+ */
+ ret = bdrv_co_truncate(bs->file, res->image_end_offset, true,
+ PREALLOC_MODE_OFF, 0, &local_err);
+ if (ret < 0) {
+ error_report_err(local_err);
+ res->check_errors++;
+ return ret;
+ }
+ res->leaks_fixed += count;
+ }
+ }
+
+ return 0;
+}
+
+static int coroutine_fn GRAPH_RDLOCK
+parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
+ BdrvCheckMode fix)
+{
+ BDRVParallelsState *s = bs->opaque;
+ int64_t prev_off;
+ int ret;
+ uint32_t i;
+
qemu_co_mutex_lock(&s->lock);
parallels_check_unclean(bs, res, fix);
@@ -506,6 +542,11 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
goto out;
}
+ ret = parallels_check_leak(bs, res, fix);
+ if (ret < 0) {
+ goto out;
+ }
+
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
@@ -529,31 +570,6 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
prev_off = off;
}
- if (size > res->image_end_offset) {
- int64_t count;
- count = DIV_ROUND_UP(size - res->image_end_offset, s->cluster_size);
- fprintf(stderr, "%s space leaked at the end of the image %" PRId64 "\n",
- fix & BDRV_FIX_LEAKS ? "Repairing" : "ERROR",
- size - res->image_end_offset);
- res->leaks += count;
- if (fix & BDRV_FIX_LEAKS) {
- Error *local_err = NULL;
-
- /*
- * In order to really repair the image, we must shrink it.
- * That means we have to pass exact=true.
- */
- ret = bdrv_co_truncate(bs->file, res->image_end_offset, true,
- PREALLOC_MODE_OFF, 0, &local_err);
- if (ret < 0) {
- error_report_err(local_err);
- res->check_errors++;
- goto out;
- }
- res->leaks_fixed += count;
- }
- }
-
out:
qemu_co_mutex_unlock(&s->lock);
--
2.40.1
* [PULL 14/17] parallels: Move statistic collection to a separate function
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (12 preceding siblings ...)
2023-06-05 15:45 ` [PULL 13/17] parallels: Move check of leaks to a separate function Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 15/17] parallels: Replace qemu_co_mutex_lock by WITH_QEMU_LOCK_GUARD Hanna Czenczek
` (3 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
We will add more and more checks, so we need a better code structure
in parallels_co_check. Let each check be performed in a separate loop
in a separate helper.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20230424093147.197643-11-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 52 +++++++++++++++++++++++++++--------------------
1 file changed, 30 insertions(+), 22 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index ba766e5e86..bb279ddc70 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -524,35 +524,20 @@ parallels_check_leak(BlockDriverState *bs, BdrvCheckResult *res,
return 0;
}
-static int coroutine_fn GRAPH_RDLOCK
-parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
- BdrvCheckMode fix)
+static void parallels_collect_statistics(BlockDriverState *bs,
+ BdrvCheckResult *res,
+ BdrvCheckMode fix)
{
BDRVParallelsState *s = bs->opaque;
- int64_t prev_off;
- int ret;
+ int64_t off, prev_off;
uint32_t i;
- qemu_co_mutex_lock(&s->lock);
-
- parallels_check_unclean(bs, res, fix);
-
- ret = parallels_check_outside_image(bs, res, fix);
- if (ret < 0) {
- goto out;
- }
-
- ret = parallels_check_leak(bs, res, fix);
- if (ret < 0) {
- goto out;
- }
-
res->bfi.total_clusters = s->bat_size;
res->bfi.compressed_clusters = 0; /* compression is not supported */
prev_off = 0;
for (i = 0; i < s->bat_size; i++) {
- int64_t off = bat2sect(s, i) << BDRV_SECTOR_BITS;
+ off = bat2sect(s, i) << BDRV_SECTOR_BITS;
/*
* If BDRV_FIX_ERRORS is not set, out-of-image BAT entries were not
* fixed. Skip not allocated and out-of-image BAT entries.
@@ -562,13 +547,36 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
continue;
}
- res->bfi.allocated_clusters++;
-
if (prev_off != 0 && (prev_off + s->cluster_size) != off) {
res->bfi.fragmented_clusters++;
}
prev_off = off;
+ res->bfi.allocated_clusters++;
}
+}
+
+static int coroutine_fn GRAPH_RDLOCK
+parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
+ BdrvCheckMode fix)
+{
+ BDRVParallelsState *s = bs->opaque;
+ int ret;
+
+ qemu_co_mutex_lock(&s->lock);
+
+ parallels_check_unclean(bs, res, fix);
+
+ ret = parallels_check_outside_image(bs, res, fix);
+ if (ret < 0) {
+ goto out;
+ }
+
+ ret = parallels_check_leak(bs, res, fix);
+ if (ret < 0) {
+ goto out;
+ }
+
+ parallels_collect_statistics(bs, res, fix);
out:
qemu_co_mutex_unlock(&s->lock);
--
2.40.1
* [PULL 15/17] parallels: Replace qemu_co_mutex_lock by WITH_QEMU_LOCK_GUARD
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (13 preceding siblings ...)
2023-06-05 15:45 ` [PULL 14/17] parallels: Move statistic collection " Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 16/17] parallels: Incorrect condition in out-of-image check Hanna Czenczek
` (2 subsequent siblings)
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Replace the way we use the mutex in parallels_co_check() for simpler
and less error-prone code.
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20230424093147.197643-12-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)
diff --git a/block/parallels.c b/block/parallels.c
index bb279ddc70..84d1d045ad 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -562,30 +562,25 @@ parallels_co_check(BlockDriverState *bs, BdrvCheckResult *res,
BDRVParallelsState *s = bs->opaque;
int ret;
- qemu_co_mutex_lock(&s->lock);
+ WITH_QEMU_LOCK_GUARD(&s->lock) {
+ parallels_check_unclean(bs, res, fix);
- parallels_check_unclean(bs, res, fix);
+ ret = parallels_check_outside_image(bs, res, fix);
+ if (ret < 0) {
+ return ret;
+ }
- ret = parallels_check_outside_image(bs, res, fix);
- if (ret < 0) {
- goto out;
- }
+ ret = parallels_check_leak(bs, res, fix);
+ if (ret < 0) {
+ return ret;
+ }
- ret = parallels_check_leak(bs, res, fix);
- if (ret < 0) {
- goto out;
+ parallels_collect_statistics(bs, res, fix);
}
- parallels_collect_statistics(bs, res, fix);
-
-out:
- qemu_co_mutex_unlock(&s->lock);
-
- if (ret == 0) {
- ret = bdrv_co_flush(bs);
- if (ret < 0) {
- res->check_errors++;
- }
+ ret = bdrv_co_flush(bs);
+ if (ret < 0) {
+ res->check_errors++;
}
return ret;
--
2.40.1
* [PULL 16/17] parallels: Incorrect condition in out-of-image check
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (14 preceding siblings ...)
2023-06-05 15:45 ` [PULL 15/17] parallels: Replace qemu_co_mutex_lock by WITH_QEMU_LOCK_GUARD Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 17/17] qcow2: add discard-no-unref option Hanna Czenczek
2023-06-05 19:03 ` [PULL 00/17] Block patches Richard Henderson
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
All the offsets in the BAT must be lower than the file size.
Fix the condition so the check is correct.
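Why `off > size` was wrong can be shown with a tiny sketch: a cluster may start before the end of the file yet extend past it, so the check must consider the cluster's end, not only its start:

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the fixed condition: a cluster is fully inside the
 * image only if its *end* is within the file, so compare
 * off + cluster_size against the file size, not off alone. */
static int cluster_outside_image(int64_t off, int64_t cluster_size,
                                 int64_t file_size)
{
    return off + cluster_size > file_size;
}
```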
Signed-off-by: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Message-Id: <20230424093147.197643-13-alexander.ivanov@virtuozzo.com>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
block/parallels.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/parallels.c b/block/parallels.c
index 84d1d045ad..7c263d5085 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -457,7 +457,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
high_off = 0;
for (i = 0; i < s->bat_size; i++) {
off = bat2sect(s, i) << BDRV_SECTOR_BITS;
- if (off > size) {
+ if (off + s->cluster_size > size) {
fprintf(stderr, "%s cluster %u is outside image\n",
fix & BDRV_FIX_ERRORS ? "Repairing" : "ERROR", i);
res->corruptions++;
--
2.40.1
* [PULL 17/17] qcow2: add discard-no-unref option
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (15 preceding siblings ...)
2023-06-05 15:45 ` [PULL 16/17] parallels: Incorrect condition in out-of-image check Hanna Czenczek
@ 2023-06-05 15:45 ` Hanna Czenczek
2023-06-05 19:03 ` [PULL 00/17] Block patches Richard Henderson
17 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-05 15:45 UTC (permalink / raw)
To: qemu-block; +Cc: qemu-devel, Hanna Czenczek, Kevin Wolf, Richard Henderson
From: Jean-Louis Dupond <jean-louis@dupond.be>
When we have, for example, a sparse qcow2 image with discard=unmap enabled,
there can be a lot of fragmentation in the image after some time, especially
on VMs that do a lot of writes and deletes.
This can cause the qcow2 image to grow to even over 110% of its virtual size,
because the free gaps in the image become too small to allocate new
contiguous clusters, so new space is allocated at the end of the image.
Disabling discard is not an option, as discard is needed to keep the
incremental backup size as low as possible. Without discard, the
incremental backups would become large, as qemu sees only dirty blocks
and does not know that those blocks are unneeded.
So we need to avoid fragmentation but also 'empty' the unneeded blocks in
the image to have a small incremental backup.
In addition, we also want to send the discards further down the stack, so
the underlying blocks are still discarded.
Therefor we introduce a new qcow2 option "discard-no-unref".
When setting this option to true, discards will no longer have the qcow2
driver relinquish cluster allocations. Other than that, the request is
handled as normal: All clusters in range are marked as zero, and, if
pass-discard-request is true, it is passed further down the stack.
The only difference is that the now-zero clusters are preallocated
instead of being unallocated.
This will avoid fragmentation on the qcow2 image.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621
Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be>
Message-Id: <20230605084523.34134-2-jean-louis@dupond.be>
Reviewed-by: Hanna Czenczek <hreitz@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
---
qapi/block-core.json | 12 ++++++++++++
block/qcow2.h | 3 +++
block/qcow2-cluster.c | 32 ++++++++++++++++++++++++++++----
block/qcow2.c | 18 ++++++++++++++++++
qemu-options.hx | 12 ++++++++++++
5 files changed, 73 insertions(+), 4 deletions(-)
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 4bf89171c6..5dd5f7e4b0 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -3478,6 +3478,17 @@
# @pass-discard-other: whether discard requests for the data source
# should be issued on other occasions where a cluster gets freed
#
+# @discard-no-unref: when enabled, discards from the guest will not cause
+# cluster allocations to be relinquished. This prevents qcow2 fragmentation
+# that would be caused by such discards. Besides potential
+# performance degradation, such fragmentation can lead to increased
+# allocation of clusters past the end of the image file,
+# resulting in image files whose file length can grow much larger
+# than their guest disk size would suggest.
+# If image file length is of concern (e.g. when storing qcow2
+# images directly on block devices), you should consider enabling
+# this option. (since 8.1)
+#
# @overlap-check: which overlap checks to perform for writes to the
# image, defaults to 'cached' (since 2.2)
#
@@ -3516,6 +3527,7 @@
'*pass-discard-request': 'bool',
'*pass-discard-snapshot': 'bool',
'*pass-discard-other': 'bool',
+ '*discard-no-unref': 'bool',
'*overlap-check': 'Qcow2OverlapChecks',
'*cache-size': 'int',
'*l2-cache-size': 'int',
diff --git a/block/qcow2.h b/block/qcow2.h
index 4f67eb912a..ea9adb5706 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -133,6 +133,7 @@
#define QCOW2_OPT_DISCARD_REQUEST "pass-discard-request"
#define QCOW2_OPT_DISCARD_SNAPSHOT "pass-discard-snapshot"
#define QCOW2_OPT_DISCARD_OTHER "pass-discard-other"
+#define QCOW2_OPT_DISCARD_NO_UNREF "discard-no-unref"
#define QCOW2_OPT_OVERLAP "overlap-check"
#define QCOW2_OPT_OVERLAP_TEMPLATE "overlap-check.template"
#define QCOW2_OPT_OVERLAP_MAIN_HEADER "overlap-check.main-header"
@@ -385,6 +386,8 @@ typedef struct BDRVQcow2State {
bool discard_passthrough[QCOW2_DISCARD_MAX];
+ bool discard_no_unref;
+
int overlap_check; /* bitmask of Qcow2MetadataOverlap values */
bool signaled_corruption;
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 39cda7f907..2e76de027c 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1925,6 +1925,10 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
uint64_t new_l2_bitmap = old_l2_bitmap;
QCow2ClusterType cluster_type =
qcow2_get_cluster_type(bs, old_l2_entry);
+ bool keep_reference = (cluster_type != QCOW2_CLUSTER_COMPRESSED) &&
+ !full_discard &&
+ (s->discard_no_unref &&
+ type == QCOW2_DISCARD_REQUEST);
/*
* If full_discard is true, the cluster should not read back as zeroes,
@@ -1943,10 +1947,22 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
new_l2_entry = new_l2_bitmap = 0;
} else if (bs->backing || qcow2_cluster_is_allocated(cluster_type)) {
if (has_subclusters(s)) {
- new_l2_entry = 0;
+ if (keep_reference) {
+ new_l2_entry = old_l2_entry;
+ } else {
+ new_l2_entry = 0;
+ }
new_l2_bitmap = QCOW_L2_BITMAP_ALL_ZEROES;
} else {
- new_l2_entry = s->qcow_version >= 3 ? QCOW_OFLAG_ZERO : 0;
+ if (s->qcow_version >= 3) {
+ if (keep_reference) {
+ new_l2_entry |= QCOW_OFLAG_ZERO;
+ } else {
+ new_l2_entry = QCOW_OFLAG_ZERO;
+ }
+ } else {
+ new_l2_entry = 0;
+ }
}
}
@@ -1960,8 +1976,16 @@ static int discard_in_l2_slice(BlockDriverState *bs, uint64_t offset,
if (has_subclusters(s)) {
set_l2_bitmap(s, l2_slice, l2_index + i, new_l2_bitmap);
}
- /* Then decrease the refcount */
- qcow2_free_any_cluster(bs, old_l2_entry, type);
+ if (!keep_reference) {
+ /* Then decrease the refcount */
+ qcow2_free_any_cluster(bs, old_l2_entry, type);
+ } else if (s->discard_passthrough[type] &&
+ (cluster_type == QCOW2_CLUSTER_NORMAL ||
+ cluster_type == QCOW2_CLUSTER_ZERO_ALLOC)) {
+ /* If we keep the reference, pass on the discard still */
+ bdrv_pdiscard(s->data_file, old_l2_entry & L2E_OFFSET_MASK,
+ s->cluster_size);
+ }
}
qcow2_cache_put(s->l2_table_cache, (void **) &l2_slice);
diff --git a/block/qcow2.c b/block/qcow2.c
index 7f3948360d..e23edd48c2 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -682,6 +682,7 @@ static const char *const mutable_opts[] = {
QCOW2_OPT_DISCARD_REQUEST,
QCOW2_OPT_DISCARD_SNAPSHOT,
QCOW2_OPT_DISCARD_OTHER,
+ QCOW2_OPT_DISCARD_NO_UNREF,
QCOW2_OPT_OVERLAP,
QCOW2_OPT_OVERLAP_TEMPLATE,
QCOW2_OPT_OVERLAP_MAIN_HEADER,
@@ -726,6 +727,11 @@ static QemuOptsList qcow2_runtime_opts = {
.type = QEMU_OPT_BOOL,
.help = "Generate discard requests when other clusters are freed",
},
+ {
+ .name = QCOW2_OPT_DISCARD_NO_UNREF,
+ .type = QEMU_OPT_BOOL,
+ .help = "Do not unreference discarded clusters",
+ },
{
.name = QCOW2_OPT_OVERLAP,
.type = QEMU_OPT_STRING,
@@ -969,6 +975,7 @@ typedef struct Qcow2ReopenState {
bool use_lazy_refcounts;
int overlap_check;
bool discard_passthrough[QCOW2_DISCARD_MAX];
+ bool discard_no_unref;
uint64_t cache_clean_interval;
QCryptoBlockOpenOptions *crypto_opts; /* Disk encryption runtime options */
} Qcow2ReopenState;
@@ -1140,6 +1147,15 @@ static int qcow2_update_options_prepare(BlockDriverState *bs,
r->discard_passthrough[QCOW2_DISCARD_OTHER] =
qemu_opt_get_bool(opts, QCOW2_OPT_DISCARD_OTHER, false);
+ r->discard_no_unref = qemu_opt_get_bool(opts, QCOW2_OPT_DISCARD_NO_UNREF,
+ false);
+ if (r->discard_no_unref && s->qcow_version < 3) {
+ error_setg(errp,
+ "discard-no-unref is only supported since qcow2 version 3");
+ ret = -EINVAL;
+ goto fail;
+ }
+
switch (s->crypt_method_header) {
case QCOW_CRYPT_NONE:
if (encryptfmt) {
@@ -1220,6 +1236,8 @@ static void qcow2_update_options_commit(BlockDriverState *bs,
s->discard_passthrough[i] = r->discard_passthrough[i];
}
+ s->discard_no_unref = r->discard_no_unref;
+
if (s->cache_clean_interval != r->cache_clean_interval) {
cache_clean_timer_del(bs);
s->cache_clean_interval = r->cache_clean_interval;
diff --git a/qemu-options.hx b/qemu-options.hx
index b37eb9662b..b57489d7ca 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1431,6 +1431,18 @@ SRST
issued on other occasions where a cluster gets freed
(on/off; default: off)
+ ``discard-no-unref``
+ When enabled, discards from the guest will not cause cluster
+ allocations to be relinquished. This prevents qcow2 fragmentation
+ that would be caused by such discards. Besides potential
+ performance degradation, such fragmentation can lead to increased
+ allocation of clusters past the end of the image file,
+ resulting in image files whose file length can grow much larger
+ than their guest disk size would suggest.
+ If image file length is of concern (e.g. when storing qcow2
+ images directly on block devices), you should consider enabling
+ this option.
+
``overlap-check``
Which overlap checks to perform for writes to the image
(none/constant/cached/all; default: cached). For details or
--
2.40.1
* Re: [PULL 00/17] Block patches
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
` (16 preceding siblings ...)
2023-06-05 15:45 ` [PULL 17/17] qcow2: add discard-no-unref option Hanna Czenczek
@ 2023-06-05 19:03 ` Richard Henderson
17 siblings, 0 replies; 29+ messages in thread
From: Richard Henderson @ 2023-06-05 19:03 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block; +Cc: qemu-devel, Kevin Wolf
On 6/5/23 08:45, Hanna Czenczek wrote:
> The following changes since commit 848a6caa88b9f082c89c9b41afa975761262981d:
>
> Merge tag 'migration-20230602-pull-request' of https://gitlab.com/juan.quintela/qemu into staging (2023-06-02 17:33:29 -0700)
>
> are available in the Git repository at:
>
> https://gitlab.com/hreitz/qemu.git tags/pull-block-2023-06-05
>
> for you to fetch changes up to 42a2890a76f4783cd1c212f27856edcf2b5e8a75:
>
> qcow2: add discard-no-unref option (2023-06-05 13:15:42 +0200)
>
> ----------------------------------------------------------------
> Block patches
>
> - Fix padding of unaligned vectored requests to match the host alignment
> for vectors with 1023 or 1024 buffers
> - Refactor and fix bugs in parallels's image check functionality
> - Add an option to the qcow2 driver to retain (qcow2-level) allocations
> on discard requests from the guest (while still forwarding the discard
> to the lower level and marking the range as zero)
Applied, thanks. Please update https://wiki.qemu.org/ChangeLog/8.1 as appropriate.
r~
* Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-05 15:45 ` [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX Hanna Czenczek
@ 2023-06-06 8:00 ` Michael Tokarev
2023-06-06 8:45 ` Hanna Czenczek
2023-06-08 9:52 ` Peter Maydell
1 sibling, 1 reply; 29+ messages in thread
From: Michael Tokarev @ 2023-06-06 8:00 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block; +Cc: qemu-devel, Kevin Wolf, Richard Henderson
05.06.2023 18:45, Hanna Czenczek wrote:
> When processing vectored guest requests that are not aligned to the
> storage request alignment, we pad them by adding head and/or tail
> buffers for a read-modify-write cycle.
>
> The guest can submit I/O vectors up to IOV_MAX (1024) in length, but
> with this padding, the vector can exceed that limit. As of
> 4c002cef0e9abe7135d7916c51abce47f7fc1ee2 ("util/iov: make
> qemu_iovec_init_extended() honest"), we refuse to pad vectors beyond the
> limit, instead returning an error to the guest.
>
> To the guest, this appears as a random I/O error. We should not return
> an I/O error to the guest when it issued a perfectly valid request.
>
> Before 4c002cef0e9abe7135d7916c51abce47f7fc1ee2, we just made the vector
> longer than IOV_MAX, which generally seems to work (because the guest
> assumes a smaller alignment than we really have, file-posix's
> raw_co_prw() will generally see bdrv_qiov_is_aligned() return false, and
> so emulate the request, so that the IOV_MAX does not matter). However,
> that does not seem exactly great.
Why is it not "exactly great"? To me, it seems to be the best solution.
I'd say we should be able to tolerate a "slight" increase over IOV_MAX for
"internal purposes", so to speak. We can limit the guest-supplied vector to
IOV_MAX, but internally guard against integer overflow only.
> I see two ways to fix this problem:
> 1. We split such long requests into two requests.
> 2. We join some elements of the vector into new buffers to make it
> shorter.
Both of these seem over-complicated, no?
/mjt
* Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-06 8:00 ` Michael Tokarev
@ 2023-06-06 8:45 ` Hanna Czenczek
2023-06-06 10:47 ` Michael Tokarev
0 siblings, 1 reply; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-06 8:45 UTC (permalink / raw)
To: Michael Tokarev, qemu-block
Cc: qemu-devel, Kevin Wolf, Richard Henderson,
Vladimir Sementsov-Ogievskiy
On 06.06.23 10:00, Michael Tokarev wrote:
> 05.06.2023 18:45, Hanna Czenczek wrote:
>> When processing vectored guest requests that are not aligned to the
>> storage request alignment, we pad them by adding head and/or tail
>> buffers for a read-modify-write cycle.
>>
>> The guest can submit I/O vectors up to IOV_MAX (1024) in length, but
>> with this padding, the vector can exceed that limit. As of
>> 4c002cef0e9abe7135d7916c51abce47f7fc1ee2 ("util/iov: make
>> qemu_iovec_init_extended() honest"), we refuse to pad vectors beyond the
>> limit, instead returning an error to the guest.
>>
>> To the guest, this appears as a random I/O error. We should not return
>> an I/O error to the guest when it issued a perfectly valid request.
>>
>> Before 4c002cef0e9abe7135d7916c51abce47f7fc1ee2, we just made the vector
>> longer than IOV_MAX, which generally seems to work (because the guest
>> assumes a smaller alignment than we really have, file-posix's
>> raw_co_prw() will generally see bdrv_qiov_is_aligned() return false, and
>> so emulate the request, so that the IOV_MAX does not matter). However,
>> that does not seem exactly great.
>
> Why is it not "exactly great"? To me, it seems to be the best solution.
> I'd say we should be able to tolerate a "slight" increase over IOV_MAX for
> "internal purposes", so to speak. We can limit the guest-supplied vector to
> IOV_MAX, but internally guard against integer overflow only.
That’s a good point that may have been worth investigating.
I considered it not great because I assumed that commit
4c002cef0e9abe7135d7916c51abce47f7fc1ee2 was written with intent, i.e.
that the IOV_MAX limit was put in because we just generally assume in
the block layer that it isn’t exceeded. It may have worked out fine
before for one specific protocol driver (file-posix) most of the
time[1], but I think it’s reasonable to assume that code in the block
layer has generally been written under the assumption that vectors will
not exceed the IOV_MAX limit (or otherwise we wouldn’t use that constant
in the block layer). So in addition to file-posix, we’d also need to
inspect other code (like the blkio driver that will submit vectored
requests to an external library) what the implications of exceeding that
limit are in all places.
That is not to say that it might not have been the simpler solution to
agree to exceed the limit internally, but it is to say that the full
implications would need to be investigated first. In contrast, the
solution added here is more complicated in code, but is localized.
[1] It won’t be fine if all buffers are 4k in size and 4k-aligned, which
I admit is unlikely for a request whose offset isn’t aligned, but which
could happen with a partition that isn’t aligned to 4k.
>> I see two ways to fix this problem:
>> 1. We split such long requests into two requests.
>> 2. We join some elements of the vector into new buffers to make it
>> shorter.
>
> This seems to be over-complicated, both of them, no?
I would have preferred to have this discussion while the patch was still
on-list for review (this specific version was for two months, counting
from the first version was three). Do you think it is so complicated
and thus bug-prone that we must revert this series now and try the other
route?
I can agree that perhaps the other route could have been simpler, but
now we already have patches that are reviewed and in master, which solve
the problem. So I won’t spend more time on tackling this issue from
another angle. If you are happy to do so, patches are always welcome.
Hanna
* Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-06 8:45 ` Hanna Czenczek
@ 2023-06-06 10:47 ` Michael Tokarev
0 siblings, 0 replies; 29+ messages in thread
From: Michael Tokarev @ 2023-06-06 10:47 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block
Cc: qemu-devel, Kevin Wolf, Richard Henderson,
Vladimir Sementsov-Ogievskiy
06.06.2023 11:45, Hanna Czenczek wrote:
> On 06.06.23 10:00, Michael Tokarev wrote:
..
>> Both of these seem over-complicated, no?
>
> I would have preferred to have this discussion while the patch was still on-list for review (this specific version was for two months, counting from
> the first version was three). Do you think it is so complicated and thus bug-prone that we must revert this series now and try the other route?
Well, I came across this change only now, when reviewing patches applied to qemu/master.
This one fixes a real bug which people were hitting, one that is also quite difficult to
diagnose and that can lead to data corruption and other "interesting" effects,
so it is quite natural to at least think about back-porting this change to the
previous -stable qemu release. Bugs like this should be fixed in -stable IMHO.
Sadly I did not notice this change earlier; I am sure I would have had exactly the same
thoughts back then, and I would be glad to help analyze other parts of the code for
potential issues with IOV_MAX-exceeding vectors.
> I can agree that perhaps the other route could have been simpler, but now we already have patches that are reviewed and in master, which solve the
> problem. So I won’t spend more time on tackling this issue from another angle. If you are happy to do so, patches are always welcome.
That's a good point too.
Thanks,
/mjt
* Re: [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-05 15:45 ` [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation Hanna Czenczek
@ 2023-06-07 6:51 ` Michael Tokarev
2023-06-07 8:47 ` Michael Tokarev
2023-06-07 15:14 ` Hanna Czenczek
0 siblings, 2 replies; 29+ messages in thread
From: Michael Tokarev @ 2023-06-07 6:51 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block, Alexander Ivanov
Cc: qemu-devel, Kevin Wolf, Richard Henderson
05.06.2023 18:45, Hanna Czenczek wrote:
> From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
>
> data_end field in BDRVParallelsState is set to the biggest offset present
> in BAT. If this offset is outside of the image, any further write will
> create the cluster at this offset and/or the image will be truncated to
> this offset on close. This is definitely not correct.
>
> Raise an error in parallels_open() if data_end points outside the image
> and we are not running a check (let the check repair the image). Set data_end
> to the end of the cluster with the last correct offset.
Hi!
This, and a few other parallels changes in this series:
parallels: Out of image offset in BAT leads to image inflation
parallels: Fix high_off calculation in parallels_co_check()
parallels: Fix image_end_offset and data_end after out-of-image check
parallels: Fix statistics calculation (?)
Should these be applied to -stable too, or is it not important?
Thanks,
/mjt
* Re: [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-07 6:51 ` Michael Tokarev
@ 2023-06-07 8:47 ` Michael Tokarev
2023-06-07 15:14 ` Hanna Czenczek
1 sibling, 0 replies; 29+ messages in thread
From: Michael Tokarev @ 2023-06-07 8:47 UTC (permalink / raw)
To: Hanna Czenczek, qemu-block, Alexander Ivanov
Cc: qemu-devel, Kevin Wolf, Richard Henderson
07.06.2023 09:51, Michael Tokarev wrote:
> 05.06.2023 18:45, Hanna Czenczek wrote:
>> From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
>>
>> data_end field in BDRVParallelsState is set to the biggest offset present
>> in BAT. If this offset is outside of the image, any further write will
>> create the cluster at this offset and/or the image will be truncated to
>> this offset on close. This is definitely not correct.
>>
>> Raise an error in parallels_open() if data_end points outside the image
>> and we are not running a check (let the check repair the image). Set data_end
>> to the end of the cluster with the last correct offset.
>
> Hi!
>
> This, and a few other parallels changes in this series:
>
> parallels: Out of image offset in BAT leads to image inflation
> parallels: Fix high_off calculation in parallels_co_check()
> parallels: Fix image_end_offset and data_end after out-of-image check
> parallels: Fix statistics calculation (?)
And probably also:
parallels: Incorrect condition in out-of-image check
> Should these be applied to -stable too, or is it not important?
>
> Thanks,
>
> /mjt
>
* Re: [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-07 6:51 ` Michael Tokarev
2023-06-07 8:47 ` Michael Tokarev
@ 2023-06-07 15:14 ` Hanna Czenczek
2023-06-09 8:54 ` Denis V. Lunev
1 sibling, 1 reply; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-07 15:14 UTC (permalink / raw)
To: Michael Tokarev, qemu-block, Alexander Ivanov
Cc: qemu-devel, Kevin Wolf, Richard Henderson
On 07.06.23 08:51, Michael Tokarev wrote:
> 05.06.2023 18:45, Hanna Czenczek wrote:
>> From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
>>
>> data_end field in BDRVParallelsState is set to the biggest offset
>> present
>> in BAT. If this offset is outside of the image, any further write will
>> create the cluster at this offset and/or the image will be truncated to
>> this offset on close. This is definitely not correct.
>>
>> Raise an error in parallels_open() if data_end points outside the image
>> and we are not running a check (let the check repair the image). Set data_end
>> to the end of the cluster with the last correct offset.
>
> Hi!
>
> This, and a few other parallels changes in this series:
>
> parallels: Out of image offset in BAT leads to image inflation
> parallels: Fix high_off calculation in parallels_co_check()
> parallels: Fix image_end_offset and data_end after out-of-image check
> parallels: Fix statistics calculation (?)
>
> Should these be applied to -stable too, or is it not important?
Personally, I don’t think they need to be in stable; but I’ll leave the
final judgment to Alexander.
Hanna
* Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-05 15:45 ` [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX Hanna Czenczek
2023-06-06 8:00 ` Michael Tokarev
@ 2023-06-08 9:52 ` Peter Maydell
2023-06-09 7:45 ` Hanna Czenczek
1 sibling, 1 reply; 29+ messages in thread
From: Peter Maydell @ 2023-06-08 9:52 UTC (permalink / raw)
To: Hanna Czenczek; +Cc: qemu-block, qemu-devel, Kevin Wolf, Richard Henderson
On Mon, 5 Jun 2023 at 16:48, Hanna Czenczek <hreitz@redhat.com> wrote:
>
> When processing vectored guest requests that are not aligned to the
> storage request alignment, we pad them by adding head and/or tail
> buffers for a read-modify-write cycle.
Hi; Coverity complains (CID 1512819) that the assert added
in this commit is always true:
> @@ -1573,26 +1701,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
> static int bdrv_pad_request(BlockDriverState *bs,
> QEMUIOVector **qiov, size_t *qiov_offset,
> int64_t *offset, int64_t *bytes,
> + bool write,
> BdrvRequestPadding *pad, bool *padded,
> BdrvRequestFlags *flags)
> {
> int ret;
> + struct iovec *sliced_iov;
> + int sliced_niov;
> + size_t sliced_head, sliced_tail;
>
> bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
>
> - if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
> + if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
> if (padded) {
> *padded = false;
> }
> return 0;
> }
>
> - ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
> - *qiov, *qiov_offset, *bytes,
> - pad->buf + pad->buf_len - pad->tail,
> - pad->tail);
> + sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
> + &sliced_head, &sliced_tail,
> + &sliced_niov);
> +
> + /* Guaranteed by bdrv_check_qiov_request() */
> + assert(*bytes <= SIZE_MAX);
This one. (The Coverity complaint is because SIZE_MAX for it is
UINT64_MAX and an int64_t can't possibly be bigger than that.)
Is this because the assert() is deliberately handling the case
of a 32-bit host where SIZE_MAX might not be UINT64_MAX ? Or was
the bound supposed to be SSIZE_MAX or INT64_MAX ?
Looking at bdrv_check_qiov_request(), it seems to check bytes
against BDRV_MAX_LENGTH, which is defined as something somewhere
near INT64_MAX. So on a 32-bit host I'm not sure that function
does guarantee that the bytes count is going to be less than
SIZE_MAX...
(CID 1512819)
thanks
-- PMM
* Re: [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX
2023-06-08 9:52 ` Peter Maydell
@ 2023-06-09 7:45 ` Hanna Czenczek
0 siblings, 0 replies; 29+ messages in thread
From: Hanna Czenczek @ 2023-06-09 7:45 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-block, qemu-devel, Kevin Wolf, Richard Henderson
On 08.06.23 11:52, Peter Maydell wrote:
> On Mon, 5 Jun 2023 at 16:48, Hanna Czenczek <hreitz@redhat.com> wrote:
>> When processing vectored guest requests that are not aligned to the
>> storage request alignment, we pad them by adding head and/or tail
>> buffers for a read-modify-write cycle.
> Hi; Coverity complains (CID 1512819) that the assert added
> in this commit is always true:
>
>> @@ -1573,26 +1701,34 @@ static void bdrv_padding_destroy(BdrvRequestPadding *pad)
>> static int bdrv_pad_request(BlockDriverState *bs,
>> QEMUIOVector **qiov, size_t *qiov_offset,
>> int64_t *offset, int64_t *bytes,
>> + bool write,
>> BdrvRequestPadding *pad, bool *padded,
>> BdrvRequestFlags *flags)
>> {
>> int ret;
>> + struct iovec *sliced_iov;
>> + int sliced_niov;
>> + size_t sliced_head, sliced_tail;
>>
>> bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
>>
>> - if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
>> + if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
>> if (padded) {
>> *padded = false;
>> }
>> return 0;
>> }
>>
>> - ret = qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
>> - *qiov, *qiov_offset, *bytes,
>> - pad->buf + pad->buf_len - pad->tail,
>> - pad->tail);
>> + sliced_iov = qemu_iovec_slice(*qiov, *qiov_offset, *bytes,
>> + &sliced_head, &sliced_tail,
>> + &sliced_niov);
>> +
>> + /* Guaranteed by bdrv_check_qiov_request() */
>> + assert(*bytes <= SIZE_MAX);
> This one. (The Coverity complaint is because SIZE_MAX for it is
> UINT64_MAX and an int64_t can't possibly be bigger than that.)
>
> Is this because the assert() is deliberately handling the case
> of a 32-bit host where SIZE_MAX might not be UINT64_MAX ? Or was
> the bound supposed to be SSIZE_MAX or INT64_MAX ?
It’s supposed to be SIZE_MAX, because of the subsequent
bdrv_create_padded_qiov() call that takes a size_t. So it is for a
case where SIZE_MAX < UINT64_MAX, i.e. 32-bit hosts, yes.
> Looking at bdrv_check_qiov_request(), it seems to check bytes
> against BDRV_MAX_LENGTH, which is defined as something somewhere
> near INT64_MAX. So on a 32-bit host I'm not sure that function
> does guarantee that the bytes count is going to be less than
> SIZE_MAX...
Ah, crap. I was thinking of BDRV_REQUEST_MAX_BYTES, which is checked by
bdrv_check_request32(), which both callers of bdrv_pad_request()
run/check before calling bdrv_pad_request(). So bdrv_pad_request()
should use the same function.
I’ll send a patch to change the bdrv_check_qiov_request() to
bdrv_check_request32(). Because both callers (bdrv_co_preadv_part(),
bdrv_co_pwritev_part()) already call it (the latter only for non-zero
writes, but bdrv_pad_request() similarly is called only for non-zero
writes), there should be no change in behavior, and the asserted
condition should in practice already be always fulfilled (because of the
callers checking).
Hanna
* Re: [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-07 15:14 ` Hanna Czenczek
@ 2023-06-09 8:54 ` Denis V. Lunev
2023-06-09 9:05 ` Michael Tokarev
0 siblings, 1 reply; 29+ messages in thread
From: Denis V. Lunev @ 2023-06-09 8:54 UTC (permalink / raw)
To: Hanna Czenczek, Michael Tokarev, qemu-block, Alexander Ivanov
Cc: qemu-devel, Kevin Wolf, Richard Henderson
On 6/7/23 17:14, Hanna Czenczek wrote:
> On 07.06.23 08:51, Michael Tokarev wrote:
>> 05.06.2023 18:45, Hanna Czenczek wrote:
>>> From: Alexander Ivanov <alexander.ivanov@virtuozzo.com>
>>>
>>> data_end field in BDRVParallelsState is set to the biggest offset
>>> present
>>> in BAT. If this offset is outside of the image, any further write will
>>> create the cluster at this offset and/or the image will be truncated to
>>> this offset on close. This is definitely not correct.
>>>
>>> Raise an error in parallels_open() if data_end points outside the image
>>> and we are not running a check (let the check repair the image). Set
>>> data_end
>>> to the end of the cluster with the last correct offset.
>>
>> Hi!
>>
>> This, and a few other parallels changes in this series:
>>
>> parallels: Out of image offset in BAT leads to image inflation
>> parallels: Fix high_off calculation in parallels_co_check()
>> parallels: Fix image_end_offset and data_end after out-of-image check
>> parallels: Fix statistics calculation (?)
>>
>> Should these be applied to -stable too, or is it not important?
>
> Personally, I don’t think they need to be in stable; but I’ll leave
> the final judgment to Alexander.
>
> Hanna
>
>
I do not think that this needs to go to stable too.
Thank you in advance,
Den
* Re: [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation
2023-06-09 8:54 ` Denis V. Lunev
@ 2023-06-09 9:05 ` Michael Tokarev
0 siblings, 0 replies; 29+ messages in thread
From: Michael Tokarev @ 2023-06-09 9:05 UTC (permalink / raw)
To: Denis V. Lunev, Hanna Czenczek, qemu-block, Alexander Ivanov
Cc: qemu-devel, Kevin Wolf, Richard Henderson
09.06.2023 11:54, Denis V. Lunev wrote:
> On 6/7/23 17:14, Hanna Czenczek wrote:
..>>> This, and a few other parallels changes in this series:
>>> Should these be applied to -stable too, or is it not important?
>>
>> Personally, I don’t think they need to be in stable; but I’ll leave the final judgment to Alexander.
>>
> I do not think that this needs to go to stable too.
Ok, excellent, thank you!
/mjt
Thread overview: 29+ messages
2023-06-05 15:45 [PULL 00/17] Block patches Hanna Czenczek
2023-06-05 15:45 ` [PULL 01/17] util/iov: Make qiov_slice() public Hanna Czenczek
2023-06-05 15:45 ` [PULL 02/17] block: Collapse padded I/O vecs exceeding IOV_MAX Hanna Czenczek
2023-06-06 8:00 ` Michael Tokarev
2023-06-06 8:45 ` Hanna Czenczek
2023-06-06 10:47 ` Michael Tokarev
2023-06-08 9:52 ` Peter Maydell
2023-06-09 7:45 ` Hanna Czenczek
2023-06-05 15:45 ` [PULL 03/17] util/iov: Remove qemu_iovec_init_extended() Hanna Czenczek
2023-06-05 15:45 ` [PULL 04/17] iotests/iov-padding: New test Hanna Czenczek
2023-06-05 15:45 ` [PULL 05/17] parallels: Out of image offset in BAT leads to image inflation Hanna Czenczek
2023-06-07 6:51 ` Michael Tokarev
2023-06-07 8:47 ` Michael Tokarev
2023-06-07 15:14 ` Hanna Czenczek
2023-06-09 8:54 ` Denis V. Lunev
2023-06-09 9:05 ` Michael Tokarev
2023-06-05 15:45 ` [PULL 06/17] parallels: Fix high_off calculation in parallels_co_check() Hanna Czenczek
2023-06-05 15:45 ` [PULL 07/17] parallels: Fix image_end_offset and data_end after out-of-image check Hanna Czenczek
2023-06-05 15:45 ` [PULL 08/17] parallels: create parallels_set_bat_entry_helper() to assign BAT value Hanna Czenczek
2023-06-05 15:45 ` [PULL 09/17] parallels: Use generic infrastructure for BAT writing in parallels_co_check() Hanna Czenczek
2023-06-05 15:45 ` [PULL 10/17] parallels: Move check of unclean image to a separate function Hanna Czenczek
2023-06-05 15:45 ` [PULL 11/17] parallels: Move check of cluster outside " Hanna Czenczek
2023-06-05 15:45 ` [PULL 12/17] parallels: Fix statistics calculation Hanna Czenczek
2023-06-05 15:45 ` [PULL 13/17] parallels: Move check of leaks to a separate function Hanna Czenczek
2023-06-05 15:45 ` [PULL 14/17] parallels: Move statistic collection " Hanna Czenczek
2023-06-05 15:45 ` [PULL 15/17] parallels: Replace qemu_co_mutex_lock by WITH_QEMU_LOCK_GUARD Hanna Czenczek
2023-06-05 15:45 ` [PULL 16/17] parallels: Incorrect condition in out-of-image check Hanna Czenczek
2023-06-05 15:45 ` [PULL 17/17] qcow2: add discard-no-unref option Hanna Czenczek
2023-06-05 19:03 ` [PULL 00/17] Block patches Richard Henderson