From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, hreitz@redhat.com, stefanha@redhat.com,
	pkrempa@redhat.com, peterx@redhat.com, farosas@suse.de,
	qemu-devel@nongnu.org
Subject: [PATCH v3 04/16] migration/block-active: Remove global active flag
Date: Tue,  4 Feb 2025 22:13:55 +0100
Message-ID: <20250204211407.381505-5-kwolf@redhat.com>
In-Reply-To: <20250204211407.381505-1-kwolf@redhat.com>

Block devices each have an individual active state, so a single global
flag can't cover this correctly. This becomes more important as we allow
users to manually manage which nodes are active or inactive.

Now that it's allowed to call bdrv_inactivate_all() even when some
nodes are already inactive, we can remove the flag and just
unconditionally call bdrv_inactivate_all() and, more importantly,
bdrv_activate_all() before we make use of the nodes.
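
As an illustration (a hedged sketch of hypothetical call sites, not code
added by this patch), callers that need the nodes in a known state can
now just invoke the helpers without consulting any cached flag first:

    /*
     * Hypothetical call sites, assuming the simplified helpers from the
     * diff below; both are safe to call regardless of the current
     * activation state of the nodes.
     */
    Error *local_err = NULL;

    /* e.g. before switchover on the source: */
    if (!migration_block_inactivate()) {
        /* handle the error, e.g. fail the migration */
    }

    /* e.g. when migration fails and the source resumes the guest: */
    if (!migration_block_activate(&local_err)) {
        error_report_err(local_err);
    }

(error_report_err() is the existing QEMU helper that reports and frees
an Error; actual callers in the tree may handle errors differently.)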

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 migration/migration.h    |  3 ---
 migration/block-active.c | 46 ----------------------------------------
 migration/migration.c    |  8 -------
 3 files changed, 57 deletions(-)

diff --git a/migration/migration.h b/migration/migration.h
index 0df2a187af..9540e8f04e 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -553,7 +553,4 @@ void migration_bitmap_sync_precopy(bool last_stage);
 /* migration/block-dirty-bitmap.c */
 void dirty_bitmap_mig_init(void);
 
-/* migration/block-active.c */
-void migration_block_active_setup(bool active);
-
 #endif
diff --git a/migration/block-active.c b/migration/block-active.c
index d477cf8182..40e986aade 100644
--- a/migration/block-active.c
+++ b/migration/block-active.c
@@ -12,51 +12,12 @@
 #include "qemu/error-report.h"
 #include "trace.h"
 
-/*
- * Migration-only cache to remember the block layer activation status.
- * Protected by BQL.
- *
- * We need this because..
- *
- * - Migration can fail after block devices are invalidated (during
- *   switchover phase).  When that happens, we need to be able to recover
- *   the block drive status by re-activating them.
- *
- * - Currently bdrv_inactivate_all() is not safe to be invoked on top of
- *   invalidated drives (even if bdrv_activate_all() is actually safe to be
- *   called any time!).  It means remembering this could help migration to
- *   make sure it won't invalidate twice in a row, crashing QEMU.  It can
- *   happen when we migrate a PAUSED VM from host1 to host2, then migrate
- *   again to host3 without starting it.  TODO: a cleaner solution is to
- *   allow safe invoke of bdrv_inactivate_all() at anytime, like
- *   bdrv_activate_all().
- *
- * For freshly started QEMU, the flag is initialized to TRUE reflecting the
- * scenario where QEMU owns block device ownerships.
- *
- * For incoming QEMU taking a migration stream, the flag is initialized to
- * FALSE reflecting that the incoming side doesn't own the block devices,
- * not until switchover happens.
- */
-static bool migration_block_active;
-
-/* Setup the disk activation status */
-void migration_block_active_setup(bool active)
-{
-    migration_block_active = active;
-}
-
 bool migration_block_activate(Error **errp)
 {
     ERRP_GUARD();
 
     assert(bql_locked());
 
-    if (migration_block_active) {
-        trace_migration_block_activation("active-skipped");
-        return true;
-    }
-
     trace_migration_block_activation("active");
 
     bdrv_activate_all(errp);
@@ -65,7 +26,6 @@ bool migration_block_activate(Error **errp)
         return false;
     }
 
-    migration_block_active = true;
     return true;
 }
 
@@ -75,11 +35,6 @@ bool migration_block_inactivate(void)
 
     assert(bql_locked());
 
-    if (!migration_block_active) {
-        trace_migration_block_activation("inactive-skipped");
-        return true;
-    }
-
     trace_migration_block_activation("inactive");
 
     ret = bdrv_inactivate_all();
@@ -89,6 +44,5 @@ bool migration_block_inactivate(void)
         return false;
     }
 
-    migration_block_active = false;
     return true;
 }
diff --git a/migration/migration.c b/migration/migration.c
index 2d1da917c7..ae252f24e6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1838,12 +1838,6 @@ void qmp_migrate_incoming(const char *uri, bool has_channels,
         return;
     }
 
-    /*
-     * Newly setup incoming QEMU.  Mark the block active state to reflect
-     * that the src currently owns the disks.
-     */
-    migration_block_active_setup(false);
-
     once = false;
 }
 
@@ -3808,8 +3802,6 @@ static void migration_instance_init(Object *obj)
     ms->state = MIGRATION_STATUS_NONE;
     ms->mbps = -1;
     ms->pages_per_second = -1;
-    /* Freshly started QEMU owns all the block devices */
-    migration_block_active_setup(true);
     qemu_sem_init(&ms->pause_sem, 0);
     qemu_mutex_init(&ms->error_mutex);
 
-- 
2.48.1



