qemu-devel.nongnu.org archive mirror
* [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of requests
@ 2023-04-05 16:17 Paolo Bonzini
  2023-04-05 16:17 ` [PATCH 1/3] aio-posix: disable polling after aio_disable_external() Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-05 16:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, hreitz, kwolf, f.ebner

IDE TRIM is a BB user that wants to elevate its BB's in-flight counter
for a "macro" operation that consists of several actual I/O operations.
Each of those operations is individually started and awaited.  It does
this so that blk_drain() will drain the whole TRIM, and not just a
single one of the many discard operations it may encompass.

When request queuing is enabled, this leads to a deadlock: The currently
ongoing discard is drained, and the next one is queued, waiting for the
drain to stop.  Meanwhile, TRIM still keeps the in-flight counter
elevated, waiting for all discards to stop -- which will never happen,
because with the in-flight counter elevated, the BB is never considered
drained, so the drained section does not begin and cannot end.

Draining has two purposes, granting exclusive access to a BlockDriverState
and waiting for all previous requests to complete.  Request queuing was
introduced mostly to ensure exclusive access to the BlockDriverState.
However, the implementation is stricter: it prevents new requests from
being submitted to the BlockDriverState at all, rather than merely
letting in-flight requests complete before bdrv_drained_begin() returns.

The reason for this was to ensure progress and avoid a livelock
in blk_drain(), blk_drain_all_begin(), bdrv_drained_begin() or
bdrv_drain_all_begin(), if there is an endless stream of requests to
a BlockBackend.  However, as the IDE TRIM testcase shows, the current
implementation of request queuing is prone to deadlocks and hard to fix;
Hanna tried several approaches, and all of them were unsatisfactory in
one way or another.

As the name suggests, deadlocks are worse than livelocks :) so let's
avoid them: turn the request queuing on only after the BlockBackend has
quiesced, and leave the second functionality of bdrv_drained_begin()
to the BQL or to the individual BlockDevOps implementations.

And in fact, this is not really a problem: all of the various users of
BlockBackend can avoid the livelock:

- for a device that runs in the vCPU thread, requests will only be
submitted while holding the big QEMU lock, meaning they _won't_ be
submitted during bdrv_drained_begin() or bdrv_drain_all_begin().

- for anything that is blocked by aio_disable_external(), the iothread
will not be woken up.  There is still the case of polling, which has
to be disabled with patch 1.  This is slightly hackish but anyway
aio_disable_external() is going away, meaning that these cases will
fall under the third bucket...

- ... i.e. BlockBackends that can use a .drained_begin callback in
their BlockDevOps to temporarily stop I/O submissions.  Note that this
callback is not _absolutely_ necessary; in particular, it is not needed
for safety, because the patches do not do away with request queuing.

In the end, request queuing should indeed be unnecessary if .drained_begin
is implemented properly in all BlockDevOps.  It should be possible to warn
if a request comes at the wrong time.  However, this is left for later.

Paolo


Based-on: <20230405101634.10537-1-pbonzini@redhat.com>


Paolo Bonzini (3):
  aio-posix: disable polling after aio_disable_external()
  block-backend: make global properties write-once
  block-backend: delay application of request queuing

 block/block-backend.c             | 61 +++++++++++++++++++++----------
 block/commit.c                    |  4 +-
 block/export/export.c             |  2 +-
 block/mirror.c                    |  4 +-
 block/parallels.c                 |  2 +-
 block/qcow.c                      |  2 +-
 block/qcow2.c                     |  2 +-
 block/qed.c                       |  2 +-
 block/stream.c                    |  4 +-
 block/vdi.c                       |  2 +-
 block/vhdx.c                      |  2 +-
 block/vmdk.c                      |  4 +-
 block/vpc.c                       |  2 +-
 include/sysemu/block-backend-io.h |  6 +--
 nbd/server.c                      |  3 +-
 tests/unit/test-bdrv-drain.c      |  4 +-
 tests/unit/test-block-iothread.c  |  2 +-
 util/aio-posix.c                  |  2 +-
 18 files changed, 66 insertions(+), 44 deletions(-)

-- 
2.39.2




* [PATCH 1/3] aio-posix: disable polling after aio_disable_external()
  2023-04-05 16:17 [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of requests Paolo Bonzini
@ 2023-04-05 16:17 ` Paolo Bonzini
  2023-04-05 16:17 ` [PATCH 2/3] block-backend: make global properties write-once Paolo Bonzini
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-05 16:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, hreitz, kwolf, f.ebner

Polling can cause external requests to be picked up even if the AioContext
is not looking at external file descriptors.  Disable all polling between
aio_disable_external() and aio_enable_external(), since aio_set_fd_poll()
does not distinguish external handlers from those that in principle could
run.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/aio-posix.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index a8be940f760d..0d22e3d6d37c 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -29,7 +29,7 @@
 
 bool aio_poll_disabled(AioContext *ctx)
 {
-    return qatomic_read(&ctx->poll_disable_cnt);
+    return qatomic_read(&ctx->poll_disable_cnt) || qatomic_read(&ctx->external_disable_cnt);
 }
 
 void aio_add_ready_handler(AioHandlerList *ready_list,
-- 
2.39.2




* [PATCH 2/3] block-backend: make global properties write-once
  2023-04-05 16:17 [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of requests Paolo Bonzini
  2023-04-05 16:17 ` [PATCH 1/3] aio-posix: disable polling after aio_disable_external() Paolo Bonzini
@ 2023-04-05 16:17 ` Paolo Bonzini
  2023-04-05 16:30 ` [PATCH] block-backend: delay application of request queuing Paolo Bonzini
  2023-04-05 16:31 ` [PATCH 3/3] " Paolo Bonzini
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-05 16:17 UTC (permalink / raw)
  To: qemu-devel; +Cc: qemu-block, hreitz, kwolf, f.ebner

The three global properties allow_aio_context_change,
disable_request_queuing and allow_write_before_eof are
always set for the whole life of a BlockBackend.  Make
this clear by removing the possibility of clearing them,
and by marking the corresponding functions GLOBAL_STATE_CODE().

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/block-backend.c             | 27 +++++++++++++++------------
 block/commit.c                    |  4 ++--
 block/export/export.c             |  2 +-
 block/mirror.c                    |  4 ++--
 block/parallels.c                 |  2 +-
 block/qcow.c                      |  2 +-
 block/qcow2.c                     |  2 +-
 block/qed.c                       |  2 +-
 block/stream.c                    |  4 ++--
 block/vdi.c                       |  2 +-
 block/vhdx.c                      |  2 +-
 block/vmdk.c                      |  4 ++--
 block/vpc.c                       |  2 +-
 include/sysemu/block-backend-io.h |  6 +++---
 nbd/server.c                      |  3 +--
 tests/unit/test-bdrv-drain.c      |  4 ++--
 tests/unit/test-block-iothread.c  |  2 +-
 17 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 9e0f48692a35..10419f8be91e 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -73,8 +73,13 @@ struct BlockBackend {
     uint64_t shared_perm;
     bool disable_perm;
 
+    /*
+     * Can only become true; should be written before any request is
+     * submitted to the BlockBackend.
+     */
     bool allow_aio_context_change;
     bool allow_write_beyond_eof;
+    bool disable_request_queuing;
 
     /* Protected by BQL */
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
@@ -83,7 +88,6 @@ struct BlockBackend {
     int quiesce_counter; /* atomic: written under BQL, read by other threads */
     QemuMutex queued_requests_lock; /* protects queued_requests */
     CoQueue queued_requests;
-    bool disable_request_queuing; /* atomic */
 
     VMChangeStateEntry *vmsh;
     bool force_allow_inactivate;
@@ -1221,22 +1225,22 @@ void blk_iostatus_set_err(BlockBackend *blk, int error)
     }
 }
 
-void blk_set_allow_write_beyond_eof(BlockBackend *blk, bool allow)
+void blk_allow_write_beyond_eof(BlockBackend *blk)
 {
-    IO_CODE();
-    blk->allow_write_beyond_eof = allow;
+    GLOBAL_STATE_CODE();
+    blk->allow_write_beyond_eof = true;
 }
 
-void blk_set_allow_aio_context_change(BlockBackend *blk, bool allow)
+void blk_allow_aio_context_change(BlockBackend *blk)
 {
-    IO_CODE();
-    blk->allow_aio_context_change = allow;
+    GLOBAL_STATE_CODE();
+    blk->allow_aio_context_change = true;
 }
 
-void blk_set_disable_request_queuing(BlockBackend *blk, bool disable)
+void blk_disable_request_queuing(BlockBackend *blk)
 {
-    IO_CODE();
-    qatomic_set(&blk->disable_request_queuing, disable);
+    GLOBAL_STATE_CODE();
+    blk->disable_request_queuing = true;
 }
 
 static int coroutine_fn GRAPH_RDLOCK
@@ -1275,8 +1279,7 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
     assert(blk->in_flight > 0);
 
-    if (qatomic_read(&blk->quiesce_counter) &&
-        !qatomic_read(&blk->disable_request_queuing)) {
+    if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
         /*
          * Take lock before decrementing in flight counter so main loop thread
          * waits for us to enqueue ourselves before it can leave the drained
diff --git a/block/commit.c b/block/commit.c
index 2b20fd0fd4d2..88e1d7373d36 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -379,7 +379,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     if (ret < 0) {
         goto fail;
     }
-    blk_set_disable_request_queuing(s->base, true);
+    blk_disable_request_queuing(s->base);
     s->base_bs = base;
 
     /* Required permissions are already taken with block_job_add_bdrv() */
@@ -388,7 +388,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     if (ret < 0) {
         goto fail;
     }
-    blk_set_disable_request_queuing(s->top, true);
+    blk_disable_request_queuing(s->top);
 
     s->backing_file_str = g_strdup(backing_file_str);
     s->on_error = on_error;
diff --git a/block/export/export.c b/block/export/export.c
index e3fee6061169..0a1336c07fed 100644
--- a/block/export/export.c
+++ b/block/export/export.c
@@ -155,7 +155,7 @@ BlockExport *blk_exp_add(BlockExportOptions *export, Error **errp)
     blk = blk_new(ctx, perm, BLK_PERM_ALL);
 
     if (!fixed_iothread) {
-        blk_set_allow_aio_context_change(blk, true);
+        blk_allow_aio_context_change(blk);
     }
 
     ret = blk_insert_bs(blk, bs, errp);
diff --git a/block/mirror.c b/block/mirror.c
index 1c46ad51bf50..93eda37660a3 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1787,8 +1787,8 @@ static BlockJob *mirror_start_job(
          * ensure that. */
         blk_set_force_allow_inactivate(s->target);
     }
-    blk_set_allow_aio_context_change(s->target, true);
-    blk_set_disable_request_queuing(s->target, true);
+    blk_allow_aio_context_change(s->target);
+    blk_disable_request_queuing(s->target);
 
     s->replaces = g_strdup(replaces);
     s->on_source_error = on_source_error;
diff --git a/block/parallels.c b/block/parallels.c
index 013684801a61..97a5c629bbab 100644
--- a/block/parallels.c
+++ b/block/parallels.c
@@ -578,7 +578,7 @@ static int coroutine_fn parallels_co_create(BlockdevCreateOptions* opts,
         ret = -EPERM;
         goto out;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* Create image format */
     bat_entries = DIV_ROUND_UP(total_size, cl_size);
diff --git a/block/qcow.c b/block/qcow.c
index 490e4f819ed1..5089dd0c6bf3 100644
--- a/block/qcow.c
+++ b/block/qcow.c
@@ -842,7 +842,7 @@ static int coroutine_fn qcow_co_create(BlockdevCreateOptions *opts,
         ret = -EPERM;
         goto exit;
     }
-    blk_set_allow_write_beyond_eof(qcow_blk, true);
+    blk_allow_write_beyond_eof(qcow_blk);
 
     /* Create image format */
     memset(&header, 0, sizeof(header));
diff --git a/block/qcow2.c b/block/qcow2.c
index f8ea03a34515..761aa7e1555a 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -3643,7 +3643,7 @@ qcow2_co_create(BlockdevCreateOptions *create_options, Error **errp)
         ret = -EPERM;
         goto out;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* Write the header */
     QEMU_BUILD_BUG_ON((1 << MIN_CLUSTER_BITS) < sizeof(*header));
diff --git a/block/qed.c b/block/qed.c
index 0705a7b4e25f..7fec1cabc4f6 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -690,7 +690,7 @@ static int coroutine_fn bdrv_qed_co_create(BlockdevCreateOptions *opts,
         ret = -EPERM;
         goto out;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* Prepare image format */
     header = (QEDHeader) {
diff --git a/block/stream.c b/block/stream.c
index d92a4c99d359..935e109a4e05 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -336,8 +336,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
      * Disable request queuing in the BlockBackend to avoid deadlocks on drain:
      * The job reports that it's busy until it reaches a pause point.
      */
-    blk_set_disable_request_queuing(s->blk, true);
-    blk_set_allow_aio_context_change(s->blk, true);
+    blk_disable_request_queuing(s->blk);
+    blk_allow_aio_context_change(s->blk);
 
     /*
      * Prevent concurrent jobs trying to modify the graph structure here, we
diff --git a/block/vdi.c b/block/vdi.c
index f2434d6153e1..1e4eb6a0bd0b 100644
--- a/block/vdi.c
+++ b/block/vdi.c
@@ -813,7 +813,7 @@ static int coroutine_fn vdi_co_do_create(BlockdevCreateOptions *create_options,
         goto exit;
     }
 
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* We need enough blocks to store the given disk size,
        so always round up. */
diff --git a/block/vhdx.c b/block/vhdx.c
index 81420722a188..7f59b6cb0403 100644
--- a/block/vhdx.c
+++ b/block/vhdx.c
@@ -2003,7 +2003,7 @@ static int coroutine_fn vhdx_co_create(BlockdevCreateOptions *opts,
         ret = -EPERM;
         goto delete_and_exit;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* Create (A) */
 
diff --git a/block/vmdk.c b/block/vmdk.c
index 3f8c731e32e8..08a009f527e1 100644
--- a/block/vmdk.c
+++ b/block/vmdk.c
@@ -2298,7 +2298,7 @@ vmdk_create_extent(const char *filename, int64_t filesize, bool flat,
         goto exit;
     }
 
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     ret = vmdk_init_extent(blk, filesize, flat, compress, zeroed_grain, errp);
 exit:
@@ -2796,7 +2796,7 @@ static BlockBackend * coroutine_fn vmdk_co_create_cb(int64_t size, int idx,
     if (!blk) {
         return NULL;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
     bdrv_unref(bs);
 
     if (size != -1) {
diff --git a/block/vpc.c b/block/vpc.c
index b89b0ff8e275..1dc9a86c6aa2 100644
--- a/block/vpc.c
+++ b/block/vpc.c
@@ -1016,7 +1016,7 @@ static int coroutine_fn vpc_co_create(BlockdevCreateOptions *opts,
         ret = -EPERM;
         goto out;
     }
-    blk_set_allow_write_beyond_eof(blk, true);
+    blk_allow_write_beyond_eof(blk);
 
     /* Get geometry and check that it matches the image size*/
     ret = calculate_rounded_image_size(vpc_opts, &cyls, &heads, &secs_per_cyl,
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index db29c164997d..1a55d25c133a 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -27,9 +27,9 @@ const char *blk_name(const BlockBackend *blk);
 
 BlockDriverState *blk_bs(BlockBackend *blk);
 
-void blk_set_allow_write_beyond_eof(BlockBackend *blk, bool allow);
-void blk_set_allow_aio_context_change(BlockBackend *blk, bool allow);
-void blk_set_disable_request_queuing(BlockBackend *blk, bool disable);
+void blk_allow_write_beyond_eof(BlockBackend *blk);
+void blk_allow_aio_context_change(BlockBackend *blk);
+void blk_disable_request_queuing(BlockBackend *blk);
 bool blk_iostatus_is_enabled(const BlockBackend *blk);
 
 char *blk_get_attached_dev_id(BlockBackend *blk);
diff --git a/nbd/server.c b/nbd/server.c
index cb41b56095ee..423dc2d2517e 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1777,7 +1777,7 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
      * be properly quiesced when entering a drained section, as our coroutines
      * servicing pending requests might enter blk_pread().
      */
-    blk_set_disable_request_queuing(blk, true);
+    blk_disable_request_queuing(blk);
 
     blk_add_aio_context_notifier(blk, blk_aio_attached, blk_aio_detach, exp);
 
@@ -1853,7 +1853,6 @@ static void nbd_export_delete(BlockExport *blk_exp)
     }
     blk_remove_aio_context_notifier(exp->common.blk, blk_aio_attached,
                                     blk_aio_detach, exp);
-    blk_set_disable_request_queuing(exp->common.blk, false);
 
     for (i = 0; i < exp->nr_export_bitmaps; i++) {
         bdrv_dirty_bitmap_set_busy(exp->export_bitmaps[i], false);
diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
index d9d38070621a..9484e194d6f9 100644
--- a/tests/unit/test-bdrv-drain.c
+++ b/tests/unit/test-bdrv-drain.c
@@ -513,7 +513,7 @@ static void test_iothread_common(enum drain_type drain_type, int drain_thread)
                               &error_abort);
     s = bs->opaque;
     blk_insert_bs(blk, bs, &error_abort);
-    blk_set_disable_request_queuing(blk, true);
+    blk_disable_request_queuing(blk);
 
     blk_set_aio_context(blk, ctx_a, &error_abort);
     aio_context_acquire(ctx_a);
@@ -739,7 +739,7 @@ static void test_blockjob_common_drain_node(enum drain_type drain_type,
                                   &error_abort);
     blk_target = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
     blk_insert_bs(blk_target, target, &error_abort);
-    blk_set_allow_aio_context_change(blk_target, true);
+    blk_allow_aio_context_change(blk_target);
 
     aio_context_acquire(ctx);
     tjob = block_job_create("job0", &test_job_driver, NULL, src,
diff --git a/tests/unit/test-block-iothread.c b/tests/unit/test-block-iothread.c
index 3a5e1eb2c413..90b60ce32c68 100644
--- a/tests/unit/test-block-iothread.c
+++ b/tests/unit/test-block-iothread.c
@@ -795,7 +795,7 @@ static void test_propagate_mirror(void)
 
     /* ...unless we explicitly allow it */
     aio_context_acquire(ctx);
-    blk_set_allow_aio_context_change(blk, true);
+    blk_allow_aio_context_change(blk);
     bdrv_try_change_aio_context(target, ctx, NULL, &error_abort);
     aio_context_release(ctx);
 
-- 
2.39.2




* [PATCH] block-backend: delay application of request queuing
  2023-04-05 16:17 [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of requests Paolo Bonzini
  2023-04-05 16:17 ` [PATCH 1/3] aio-posix: disable polling after aio_disable_external() Paolo Bonzini
  2023-04-05 16:17 ` [PATCH 2/3] block-backend: make global properties write-once Paolo Bonzini
@ 2023-04-05 16:30 ` Paolo Bonzini
  2023-04-05 16:31 ` [PATCH 3/3] " Paolo Bonzini
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-05 16:30 UTC (permalink / raw)
  To: qemu-devel; +Cc: Fiona Ebner

Request queuing prevents new requests from being submitted to the
BlockDriverState at all, rather than merely letting in-flight requests
complete before bdrv_drained_begin() returns.

The reason for this was to ensure progress and avoid a livelock
in blk_drain(), blk_drain_all_begin(), bdrv_drained_begin() or
bdrv_drain_all_begin(), if there is an endless stream of requests to
a BlockBackend.  However, this is prone to deadlocks.

In particular, IDE TRIM wants to elevate its BB's in-flight counter for a
"macro" operation that consists of several actual I/O operations.  Each of
those operations is individually started and awaited.  It does this so
that blk_drain() will drain the whole TRIM, and not just a single one
of the many discard operations it may encompass.  When request queuing
is enabled, this leads to a deadlock: The currently ongoing discard is
drained, and the next one is queued, waiting for the drain to stop.
Meanwhile, TRIM still keeps the in-flight counter elevated, waiting
for all discards to stop -- which will never happen, because with the
in-flight counter elevated, the BB is never considered drained, so the
drained section does not begin and cannot end.

Fixing the implementation of request queuing is hard to do in general,
and even harder to do without adding more hacks.  As the name suggests,
deadlocks are worse than livelocks :) so let's avoid them: turn the
request queuing on only after the BlockBackend has quiesced, and leave
the second functionality of bdrv_drained_begin() to the BQL or to the
individual BlockDevOps implementations.

In fact, devices such as IDE that run in the vCPU thread do not suffer
from this livelock, because they only submit requests while they are
allowed to hold the big QEMU lock (i.e., not during bdrv_drained_begin()
or bdrv_drain_all_begin()).  Other devices can avoid it through an
external file descriptor (so that aio_disable_external() will prevent
submission of new requests) or with a .drained_begin callback in their
BlockDevOps.

Note that this change does not affect the safety of bdrv_drained_begin(),
since the patch does not completely do away with request queuing.

Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Fixes: 7e5cdb345f77 ("ide: Increment BB in-flight counter for TRIM BH")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/block-backend.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 10419f8be91e..acb4cb91a5ee 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -42,6 +42,12 @@ typedef struct BlockBackendAioNotifier {
     QLIST_ENTRY(BlockBackendAioNotifier) list;
 } BlockBackendAioNotifier;
 
+typedef enum {
+    BLK_QUEUE_READY,
+    BLK_QUEUE_DISABLED,
+    BLK_QUEUE_QUIESCENT,
+} BlockBackendQueueState;
+
 struct BlockBackend {
     char *name;
     int refcnt;
@@ -79,13 +85,14 @@ struct BlockBackend {
      */
     bool allow_aio_context_change;
     bool allow_write_beyond_eof;
-    bool disable_request_queuing;
 
     /* Protected by BQL */
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
     QLIST_HEAD(, BlockBackendAioNotifier) aio_notifiers;
 
     int quiesce_counter; /* atomic: written under BQL, read by other threads */
+    BlockBackendQueueState request_queuing;
+
     QemuMutex queued_requests_lock; /* protects queued_requests */
     CoQueue queued_requests;
 
@@ -368,6 +375,7 @@ BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm)
     blk->shared_perm = shared_perm;
     blk_set_enable_write_cache(blk, true);
 
+    blk->request_queuing = BLK_QUEUE_READY;
     blk->on_read_error = BLOCKDEV_ON_ERROR_REPORT;
     blk->on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;
 
@@ -1240,7 +1248,7 @@ void blk_allow_aio_context_change(BlockBackend *blk)
 void blk_disable_request_queuing(BlockBackend *blk)
 {
     GLOBAL_STATE_CODE();
-    blk->disable_request_queuing = true;
+    blk->request_queuing = BLK_QUEUE_DISABLED;
 }
 
 static int coroutine_fn GRAPH_RDLOCK
@@ -1279,16 +1287,18 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
     assert(blk->in_flight > 0);
 
-    if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
+    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_QUIESCENT) {
         /*
          * Take lock before decrementing in flight counter so main loop thread
          * waits for us to enqueue ourselves before it can leave the drained
          * section.
          */
         qemu_mutex_lock(&blk->queued_requests_lock);
-        blk_dec_in_flight(blk);
-        qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
-        blk_inc_in_flight(blk);
+        if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_QUIESCENT) {
+            blk_dec_in_flight(blk);
+            qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
+            blk_inc_in_flight(blk);
+        }
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
 }
@@ -2600,7 +2610,14 @@ static bool blk_root_drained_poll(BdrvChild *child)
     if (blk->dev_ops && blk->dev_ops->drained_poll) {
         busy = blk->dev_ops->drained_poll(blk->dev_opaque);
     }
-    return busy || !!blk->in_flight;
+    if (busy || blk->in_flight) {
+        return true;
+    }
+
+    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_READY) {
+        qatomic_set(&blk->request_queuing, BLK_QUEUE_QUIESCENT);
+    }
+    return false;
 }
 
 static void blk_root_drained_end(BdrvChild *child)
@@ -2616,9 +2633,12 @@ static void blk_root_drained_end(BdrvChild *child)
             blk->dev_ops->drained_end(blk->dev_opaque);
         }
         qemu_mutex_lock(&blk->queued_requests_lock);
-        while (qemu_co_enter_next(&blk->queued_requests,
-                                  &blk->queued_requests_lock)) {
-            /* Resume all queued requests */
+        if (qatomic_read(&blk->request_queuing) != BLK_QUEUE_DISABLED) {
+            qatomic_set(&blk->request_queuing, BLK_QUEUE_READY);
+            while (qemu_co_enter_next(&blk->queued_requests,
+                                      &blk->queued_requests_lock)) {
+                /* Resume all queued requests */
+            }
         }
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
-- 
2.39.2




* [PATCH 3/3] block-backend: delay application of request queuing
  2023-04-05 16:17 [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of requests Paolo Bonzini
                   ` (2 preceding siblings ...)
  2023-04-05 16:30 ` [PATCH] block-backend: delay application of request queuing Paolo Bonzini
@ 2023-04-05 16:31 ` Paolo Bonzini
  2023-04-12 11:54   ` Hanna Czenczek
  3 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-05 16:31 UTC (permalink / raw)
  To: qemu-devel; +Cc: Fiona Ebner

Request queuing prevents new requests from being submitted to the
BlockDriverState at all, rather than merely letting in-flight requests
complete before bdrv_drained_begin() returns.

The reason for this was to ensure progress and avoid a livelock
in blk_drain(), blk_drain_all_begin(), bdrv_drained_begin() or
bdrv_drain_all_begin(), if there is an endless stream of requests to
a BlockBackend.  However, this is prone to deadlocks.

In particular, IDE TRIM wants to elevate its BB's in-flight counter for a
"macro" operation that consists of several actual I/O operations.  Each of
those operations is individually started and awaited.  It does this so
that blk_drain() will drain the whole TRIM, and not just a single one
of the many discard operations it may encompass.  When request queuing
is enabled, this leads to a deadlock: The currently ongoing discard is
drained, and the next one is queued, waiting for the drain to stop.
Meanwhile, TRIM still keeps the in-flight counter elevated, waiting
for all discards to stop -- which will never happen, because with the
in-flight counter elevated, the BB is never considered drained, so the
drained section does not begin and cannot end.

Fixing the implementation of request queuing is hard to do in general,
and even harder to do without adding more hacks.  As the name suggests,
deadlocks are worse than livelocks :) so let's avoid them: turn the
request queuing on only after the BlockBackend has quiesced, and leave
the second functionality of bdrv_drained_begin() to the BQL or to the
individual BlockDevOps implementations.

In fact, devices such as IDE that run in the vCPU thread do not suffer
from this livelock, because they only submit requests while they are
allowed to hold the big QEMU lock (i.e., not during bdrv_drained_begin()
or bdrv_drain_all_begin()).  Other devices can avoid it through an
external file descriptor (so that aio_disable_external() will prevent
submission of new requests) or with a .drained_begin callback in their
BlockDevOps.

Note that this change does not affect the safety of bdrv_drained_begin(),
since the patch does not completely do away with request queuing.

Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Fixes: 7e5cdb345f77 ("ide: Increment BB in-flight counter for TRIM BH")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 block/block-backend.c | 40 ++++++++++++++++++++++++++++++----------
 1 file changed, 30 insertions(+), 10 deletions(-)

diff --git a/block/block-backend.c b/block/block-backend.c
index 10419f8be91e..acb4cb91a5ee 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -42,6 +42,12 @@ typedef struct BlockBackendAioNotifier {
     QLIST_ENTRY(BlockBackendAioNotifier) list;
 } BlockBackendAioNotifier;
 
+typedef enum {
+    BLK_QUEUE_READY,
+    BLK_QUEUE_DISABLED,
+    BLK_QUEUE_QUIESCENT,
+} BlockBackendQueueState;
+
 struct BlockBackend {
     char *name;
     int refcnt;
@@ -79,13 +85,14 @@ struct BlockBackend {
      */
     bool allow_aio_context_change;
     bool allow_write_beyond_eof;
-    bool disable_request_queuing;
 
     /* Protected by BQL */
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
     QLIST_HEAD(, BlockBackendAioNotifier) aio_notifiers;
 
     int quiesce_counter; /* atomic: written under BQL, read by other threads */
+    BlockBackendQueueState request_queuing;
+
     QemuMutex queued_requests_lock; /* protects queued_requests */
     CoQueue queued_requests;
 
@@ -368,6 +375,7 @@ BlockBackend *blk_new(AioContext *ctx, uint64_t perm, uint64_t shared_perm)
     blk->shared_perm = shared_perm;
     blk_set_enable_write_cache(blk, true);
 
+    blk->request_queuing = BLK_QUEUE_READY;
     blk->on_read_error = BLOCKDEV_ON_ERROR_REPORT;
     blk->on_write_error = BLOCKDEV_ON_ERROR_ENOSPC;
 
@@ -1240,7 +1248,7 @@ void blk_allow_aio_context_change(BlockBackend *blk)
 void blk_disable_request_queuing(BlockBackend *blk)
 {
     GLOBAL_STATE_CODE();
-    blk->disable_request_queuing = true;
+    blk->request_queuing = BLK_QUEUE_DISABLED;
 }
 
 static int coroutine_fn GRAPH_RDLOCK
@@ -1279,16 +1287,18 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
     assert(blk->in_flight > 0);
 
-    if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
+    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_QUIESCENT) {
         /*
          * Take lock before decrementing in flight counter so main loop thread
          * waits for us to enqueue ourselves before it can leave the drained
          * section.
          */
         qemu_mutex_lock(&blk->queued_requests_lock);
-        blk_dec_in_flight(blk);
-        qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
-        blk_inc_in_flight(blk);
+        if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_QUIESCENT) {
+            blk_dec_in_flight(blk);
+            qemu_co_queue_wait(&blk->queued_requests, &blk->queued_requests_lock);
+            blk_inc_in_flight(blk);
+        }
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
 }
@@ -2600,7 +2610,14 @@ static bool blk_root_drained_poll(BdrvChild *child)
     if (blk->dev_ops && blk->dev_ops->drained_poll) {
         busy = blk->dev_ops->drained_poll(blk->dev_opaque);
     }
-    return busy || !!blk->in_flight;
+    if (busy || blk->in_flight) {
+        return true;
+    }
+
+    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_READY) {
+        qatomic_set(&blk->request_queuing, BLK_QUEUE_QUIESCENT);
+    }
+    return false;
 }
 
 static void blk_root_drained_end(BdrvChild *child)
@@ -2616,9 +2633,12 @@ static void blk_root_drained_end(BdrvChild *child)
             blk->dev_ops->drained_end(blk->dev_opaque);
         }
         qemu_mutex_lock(&blk->queued_requests_lock);
-        while (qemu_co_enter_next(&blk->queued_requests,
-                                  &blk->queued_requests_lock)) {
-            /* Resume all queued requests */
+        if (qatomic_read(&blk->request_queuing) != BLK_QUEUE_DISABLED) {
+            qatomic_set(&blk->request_queuing, BLK_QUEUE_READY);
+            while (qemu_co_enter_next(&blk->queued_requests,
+                                      &blk->queued_requests_lock)) {
+                /* Resume all queued requests */
+            }
         }
         qemu_mutex_unlock(&blk->queued_requests_lock);
     }
-- 
2.39.2



^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] block-backend: delay application of request queuing
  2023-04-05 16:31 ` [PATCH 3/3] " Paolo Bonzini
@ 2023-04-12 11:54   ` Hanna Czenczek
  2023-04-12 12:03     ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Hanna Czenczek @ 2023-04-12 11:54 UTC (permalink / raw)
  To: Paolo Bonzini, qemu-devel; +Cc: Fiona Ebner

On 05.04.23 18:31, Paolo Bonzini wrote:
> Request queuing prevents new requests from being submitted to the
> BlockDriverState, not allowing them to start instead of just letting
> them complete before bdrv_drained_begin() returns.
>
> The reason for this was to ensure progress and avoid a livelock
> in blk_drain(), blk_drain_all_begin(), bdrv_drained_begin() or
> bdrv_drain_all_begin(), if there is an endless stream of requests to
> a BlockBackend.  However, this is prone to deadlocks.
>
> In particular, IDE TRIM wants to elevate its BB's in-flight counter for a
> "macro" operation that consists of several actual I/O operations.  Each of
> those operations is individually started and awaited.  It does this so
> that blk_drain() will drain the whole TRIM, and not just a single one
> of the many discard operations it may encompass.  When request queuing
> is enabled, this leads to a deadlock: The currently ongoing discard is
> drained, and the next one is queued, waiting for the drain to stop.
> Meanwhile, TRIM still keeps the in-flight counter elevated, waiting
> for all discards to stop -- which will never happen, because with the
> in-flight counter elevated, the BB is never considered drained, so the
> drained section does not begin and cannot end.
>
> Fixing the implementation of request queuing is hard to do in general,
> and even harder to do without adding more hacks.  As the name suggests,
> deadlocks are worse than livelocks :) so let's avoid them: turn the
> request queuing on only after the BlockBackend has quiesced, and leave
> the second functionality of bdrv_drained_begin() to the BQL or to the
> individual BlockDevOps implementations.
>
> In fact, devices such as IDE that run in the vCPU thread do not suffer
> from this livelock because they only submit requests while they are
> allowed to hold the big QEMU lock (i.e., not during bdrv_drained_begin()
> or bdrv_drain_all_begin()).  Other devices can avoid it through an external
> file descriptor (so that aio_disable_external() will prevent submission
> of new requests) or with a .drained_begin callback in their BlockDevOps.
>
> Note that this change does not affect the safety of bdrv_drained_begin(),
> since the patch does not completely do away with request queuing.
>
> Reported-by: Fiona Ebner <f.ebner@proxmox.com>
> Fixes: 7e5cdb345f77 ("ide: Increment BB in-flight counter for TRIM BH")
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>   block/block-backend.c | 40 ++++++++++++++++++++++++++++++----------
>   1 file changed, 30 insertions(+), 10 deletions(-)
>
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 10419f8be91e..acb4cb91a5ee 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c

[...]

> @@ -2600,7 +2610,14 @@ static bool blk_root_drained_poll(BdrvChild *child)
>       if (blk->dev_ops && blk->dev_ops->drained_poll) {
>           busy = blk->dev_ops->drained_poll(blk->dev_opaque);
>       }
> -    return busy || !!blk->in_flight;
> +    if (busy || blk->in_flight) {
> +        return true;
> +    }
> +
> +    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_READY) {
> +        qatomic_set(&blk->request_queuing, BLK_QUEUE_QUIESCENT);
> +    }
> +    return false;
>   }

This implicitly relies on nobody increasing blk->in_flight (or 
dev_ops->drained_poll() returning `true` again) while the BB is starting 
to be drained, because if the function were to be called again after it 
has returned `false` once per drained section (not sure if that’s 
possible![1]), then we’d end up in the original situation, with 
in_flight elevated and queuing enabled.

Is that really strictly guaranteed somehow or is it rather a complex 
conglomerate of many cases that in the end happen to work out 
individually?  I mean, I could imagine that running 
BlockDevOps.drained_begin() is supposed to guarantee that, but it can’t, 
because only NBD seems to implement it.  The commit message talks about 
IDE being fine (by accident?) because it needs BQL availability to 
submit new requests.  But that’s very complex and I’d rather have a 
strict requirement to guarantee correctness.

[1] If the blk_root_drained_poll() isn’t called anymore after returning 
`false`, all will be good, but I assume it will be, because we have a 
quiesce_counter, not a quiesce_bool.  We could kind of emulate this by 
continuing to return `false` after blk_root_drained_poll() has returned 
`false` once, until the quiesce_counter becomes 0.

We could also have blk_root_drained_poll(), if it sees in_flight > 0 && 
request_queuing == BLK_QUEUE_QUIESCENT, revert request_queuing to 
BLK_QUEUE_READY and resume all queued requests.

But, admittedly, I’m making a lot of assumptions and leaps by this 
point.  It all hinges on whether we can guarantee that in_flight won’t 
be increased while a drained section starts.

Hanna




* Re: [PATCH 3/3] block-backend: delay application of request queuing
  2023-04-12 11:54   ` Hanna Czenczek
@ 2023-04-12 12:03     ` Paolo Bonzini
  2023-04-12 14:33       ` Hanna Czenczek
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2023-04-12 12:03 UTC (permalink / raw)
  To: Hanna Czenczek; +Cc: qemu-devel, Fiona Ebner

On Wed, Apr 12, 2023 at 1:54 PM Hanna Czenczek <hreitz@redhat.com> wrote:
> On 05.04.23 18:31, Paolo Bonzini wrote:
> > +    if (busy || blk->in_flight) {
> > +        return true;
> > +    }
> > +
> > +    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_READY) {
> > +        qatomic_set(&blk->request_queuing, BLK_QUEUE_QUIESCENT);
> > +    }
> > +    return false;
> >   }
>
> This implicitly relies on nobody increasing blk->in_flight (or
> dev_ops->drained_poll() returning `true` again) while the BB is starting
> to be drained, because if the function were to be called again after it
> has returned `false` once per drained section (not sure if that’s
> possible![1]), then we’d end up in the original situation, with
> in_flight elevated and queuing enabled.

Yes, it does.

> Is that really strictly guaranteed somehow or is it rather a complex
> conglomerate of many cases that in the end happen to work out
> individually?  I mean, I could imagine that running
> BlockDevOps.drained_begin() is supposed to guarantee that, but it can’t,
> because only NBD seems to implement it.  The commit message talks about
> IDE being fine (by accident?) because it needs BQL availability to
> submit new requests.  But that’s very complex and I’d rather have a
> strict requirement to guarantee correctness.

It's a conglomerate of three cases, each of which is sufficient on its
own (BQL, aio_disable_external, bdrv_drained_begin; plus just not using
blk_inc_in_flight could be a fourth, of course). Of these,
aio_disable_external() is going away in favor of the
.bdrv_drained_begin callback, and blk_inc_in_flight() is rarely used
in the first place, so I thought it would not be too hard to have this
requirement.

> [1] If the blk_root_drained_poll() isn’t called anymore after returning
> `false`, all will be good, but I assume it will be, because we have a
> quiesce_counter, not a quiesce_bool.  We could kind of emulate this by
> continuing to return `false` after blk_root_drained_poll() has returned
> `false` once, until the quiesce_counter becomes 0.
> We could also have blk_root_drained_poll(), if it sees in_flight > 0 &&
> request_queuing == BLK_QUEUE_QUIESCENT, revert request_queuing to
> BLK_QUEUE_READY and resume all queued requests.

The intended long-term fix is to remove request queuing and, if a
request is submitted while the BlockBackend is BLK_QUEUE_QUIESCENT,
trigger an assertion failure.

But since the hang requires blk_inc_in_flight() in the device, perhaps
in the short term documenting it in blk_inc_in_flight() may be enough?

Paolo




* Re: [PATCH 3/3] block-backend: delay application of request queuing
  2023-04-12 12:03     ` Paolo Bonzini
@ 2023-04-12 14:33       ` Hanna Czenczek
  0 siblings, 0 replies; 8+ messages in thread
From: Hanna Czenczek @ 2023-04-12 14:33 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, Fiona Ebner

On 12.04.23 14:03, Paolo Bonzini wrote:
> On Wed, Apr 12, 2023 at 1:54 PM Hanna Czenczek <hreitz@redhat.com> wrote:
>> On 05.04.23 18:31, Paolo Bonzini wrote:
>>> +    if (busy || blk->in_flight) {
>>> +        return true;
>>> +    }
>>> +
>>> +    if (qatomic_read(&blk->request_queuing) == BLK_QUEUE_READY) {
>>> +        qatomic_set(&blk->request_queuing, BLK_QUEUE_QUIESCENT);
>>> +    }
>>> +    return false;
>>>    }
>> This implicitly relies on nobody increasing blk->in_flight (or
>> dev_ops->drained_poll() returning `true` again) while the BB is starting
>> to be drained, because if the function were to be called again after it
>> has returned `false` once per drained section (not sure if that’s
>> possible![1]), then we’d end up in the original situation, with
>> in_flight elevated and queuing enabled.
> Yes, it does.
>
>> Is that really strictly guaranteed somehow or is it rather a complex
>> conglomerate of many cases that in the end happen to work out
>> individually?  I mean, I could imagine that running
>> BlockDevOps.drained_begin() is supposed to guarantee that, but it can’t,
>> because only NBD seems to implement it.  The commit message talks about
>> IDE being fine (by accident?) because it needs BQL availability to
>> submit new requests.  But that’s very complex and I’d rather have a
>> strict requirement to guarantee correctness.
> It's a conglomerate of three cases each of which is sufficient (BQL,
> aio_disable_external, bdrv_drained_begin---plus just not using
> blk_inc_in_flight could be a fourth, of course). Of these,
> aio_disable_external() is going away in favor of the
> .bdrv_drained_begin callback; and blk_inc_in_flight() is used rarely
> in the first place so I thought it'd be not too hard to have this
> requirement.

Does IDE’s BQL requirement work for nested drains, though, i.e. when you
have a drained_begin followed by another?  The commit message doesn’t
say whether it’s impossible for IDE to create a new request in between
the two.

I’m a bit afraid that these cases are too complicated for me to fully 
comprehend.

>> [1] If the blk_root_drained_poll() isn’t called anymore after returning
>> `false`, all will be good, but I assume it will be, because we have a
>> quiesce_counter, not a quiesce_bool.  We could kind of emulate this by
>> continuing to return `false` after blk_root_drained_poll() has returned
>> `false` once, until the quiesce_counter becomes 0.
>> We could also have blk_root_drained_poll(), if it sees in_flight > 0 &&
>> request_queuing == BLK_QUEUE_QUIESCENT, revert request_queuing to
>> BLK_QUEUE_READY and resume all queued requests.
> The intended long term fix is to remove request queuing and, if a
> request is submitted while BLK_QUEUE_QUIESCENT, give an assertion
> failure.

Yep, that would be a nice obvious requirement.

> But since the hang requires blk_inc_in_flight() in the device, perhaps
> in the short term documenting it in blk_inc_in_flight() may be enough?

Technically it needs a blk_inc_in_flight() whose blk_dec_in_flight() 
depends on a different request that can be queued (which is only the 
case in IDE), so I suppose we could document exactly that in those 
functions’ interfaces, i.e. that users must take care not to use 
blk_inc_in_flight() while the BlockBackend is (being) drained, when the 
associated blk_dec_in_flight() may depend on an I/O request to the BB.

I think that should be enough, yes.  Well, as long as you can guarantee 
that IDE will indeed fulfill that requirement, because I find it 
difficult to see/prove...

Hanna




end of thread, other threads:[~2023-04-12 14:34 UTC | newest]

Thread overview: 8+ messages
2023-04-05 16:17 [PATCH 0/3] block-backend: avoid deadlocks due to early queuing of request Paolo Bonzini
2023-04-05 16:17 ` [PATCH 1/3] aio-posix: disable polling after aio_disable_external() Paolo Bonzini
2023-04-05 16:17 ` [PATCH 2/3] block-backend: make global properties write-once Paolo Bonzini
2023-04-05 16:30 ` [PATCH] block-backend: delay application of request queuing Paolo Bonzini
2023-04-05 16:31 ` [PATCH 3/3] " Paolo Bonzini
2023-04-12 11:54   ` Hanna Czenczek
2023-04-12 12:03     ` Paolo Bonzini
2023-04-12 14:33       ` Hanna Czenczek
