From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Hyman Huang" <yong.huang@smartx.com>,
"Thomas Huth" <thuth@redhat.com>, "Peter Xu" <peterx@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Fabiano Rosas" <farosas@suse.de>,
"Laurent Vivier" <lvivier@redhat.com>,
"Juan Quintela" <quintela@redhat.com>,
"Leonardo Bras" <leobras@redhat.com>,
"Alex Bennée" <alex.bennee@linaro.org>
Subject: [PULL 7/7] migration: Unlock mutex in error case
Date: Fri, 3 Nov 2023 13:04:48 +0100
Message-ID: <20231103120448.58428-8-quintela@redhat.com>
In-Reply-To: <20231103120448.58428-1-quintela@redhat.com>
We were not unlocking the bitmap mutex in the error case. To fix this
for good, enclose the code in WITH_QEMU_LOCK_GUARD(), which releases
the mutex on every exit path.
Coverity CID 1523750.
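For context, here is a minimal sketch of why a scoped guard closes this
whole class of bug. This is not QEMU's implementation (the real macro
lives in include/qemu/lockable.h); it is a toy stand-in built on
GCC/Clang's cleanup attribute, and the names WITH_LOCK_GUARD and
save_pages are made up for illustration. The point is that the unlock
runs on every exit from the block, so an early return or goto can no
longer leak the mutex:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Scope-exit handler; NULL-safe because the normal path has already
 * unlocked and cleared the guard variable. */
static void lock_guard_cleanup(pthread_mutex_t **m)
{
    if (*m) {
        pthread_mutex_unlock(*m);
    }
}

/* Lock on entry; unlock on *any* exit from the block (fallthrough,
 * break, goto or return), via the cleanup attribute. */
#define WITH_LOCK_GUARD(m)                                        \
    for (pthread_mutex_t *guard__                                 \
             __attribute__((cleanup(lock_guard_cleanup)))         \
             = (pthread_mutex_lock(m), (m));                      \
         guard__;                                                 \
         pthread_mutex_unlock(guard__), guard__ = NULL)

static int save_pages(int simulate_error)
{
    WITH_LOCK_GUARD(&bitmap_mutex) {
        if (simulate_error) {
            return -1;   /* the guard still releases bitmap_mutex */
        }
        /* ... work done under the lock ... */
    }
    return 0;
}

int main(void)
{
    int ok = save_pages(0);
    int err = save_pages(1);
    /* trylock succeeding proves the mutex was released on both the
     * success path and the error path */
    printf("ok=%d err=%d unlocked=%d\n", ok, err,
           pthread_mutex_trylock(&bitmap_mutex) == 0);
    pthread_mutex_unlock(&bitmap_mutex);
    return 0;
}

QEMU's WITH_QEMU_LOCK_GUARD() follows the same pattern: unlock
explicitly on the normal path, and rely on the scope-exit cleanup to
cover break, goto, and return.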
Fixes: a2326705e5 ("migration: Stop migration immediately in RDMA error paths")
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231103074245.55166-1-quintela@redhat.com>
---
migration/ram.c | 106 ++++++++++++++++++++++++------------------------
1 file changed, 53 insertions(+), 53 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a0f3b86663..8c7886ab79 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3030,71 +3030,71 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
* MAX_WAIT (if curious, further see commit 4508bd9ed8053ce) below, which
* guarantees that we'll at least release it on a regular basis.
*/
- qemu_mutex_lock(&rs->bitmap_mutex);
- WITH_RCU_READ_LOCK_GUARD() {
- if (ram_list.version != rs->last_version) {
- ram_state_reset(rs);
- }
-
- /* Read version before ram_list.blocks */
- smp_rmb();
-
- ret = rdma_registration_start(f, RAM_CONTROL_ROUND);
- if (ret < 0) {
- qemu_file_set_error(f, ret);
- goto out;
- }
-
- t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
- i = 0;
- while ((ret = migration_rate_exceeded(f)) == 0 ||
- postcopy_has_request(rs)) {
- int pages;
-
- if (qemu_file_get_error(f)) {
- break;
+ WITH_QEMU_LOCK_GUARD(&rs->bitmap_mutex) {
+ WITH_RCU_READ_LOCK_GUARD() {
+ if (ram_list.version != rs->last_version) {
+ ram_state_reset(rs);
}
- pages = ram_find_and_save_block(rs);
- /* no more pages to sent */
- if (pages == 0) {
- done = 1;
- break;
- }
+ /* Read version before ram_list.blocks */
+ smp_rmb();
- if (pages < 0) {
- qemu_file_set_error(f, pages);
- break;
+ ret = rdma_registration_start(f, RAM_CONTROL_ROUND);
+ if (ret < 0) {
+ qemu_file_set_error(f, ret);
+ goto out;
}
- rs->target_page_count += pages;
+ t0 = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
+ i = 0;
+ while ((ret = migration_rate_exceeded(f)) == 0 ||
+ postcopy_has_request(rs)) {
+ int pages;
- /*
- * During postcopy, it is necessary to make sure one whole host
- * page is sent in one chunk.
- */
- if (migrate_postcopy_ram()) {
- compress_flush_data();
- }
+ if (qemu_file_get_error(f)) {
+ break;
+ }
+
+ pages = ram_find_and_save_block(rs);
+ /* no more pages to send */
+ if (pages == 0) {
+ done = 1;
+ break;
+ }
- /*
- * we want to check in the 1st loop, just in case it was the 1st
- * time and we had to sync the dirty bitmap.
- * qemu_clock_get_ns() is a bit expensive, so we only check each
- * some iterations
- */
- if ((i & 63) == 0) {
- uint64_t t1 = (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - t0) /
- 1000000;
- if (t1 > MAX_WAIT) {
- trace_ram_save_iterate_big_wait(t1, i);
+ if (pages < 0) {
+ qemu_file_set_error(f, pages);
break;
}
+
+ rs->target_page_count += pages;
+
+ /*
+ * During postcopy, it is necessary to make sure one whole host
+ * page is sent in one chunk.
+ */
+ if (migrate_postcopy_ram()) {
+ compress_flush_data();
+ }
+
+ /*
+ * we want to check in the 1st loop, just in case it was the 1st
+ * time and we had to sync the dirty bitmap.
+ * qemu_clock_get_ns() is a bit expensive, so we only check once
+ * every few iterations
+ */
+ if ((i & 63) == 0) {
+ uint64_t t1 = (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - t0) /
+ 1000000;
+ if (t1 > MAX_WAIT) {
+ trace_ram_save_iterate_big_wait(t1, i);
+ break;
+ }
+ }
+ i++;
}
- i++;
}
}
- qemu_mutex_unlock(&rs->bitmap_mutex);
/*
* Must occur before EOS (or any QEMUFile operation)
--
2.41.0
Thread overview: 9+ messages
2023-11-03 12:04 [PULL 0/7] Migration 20231103 patches Juan Quintela
2023-11-03 12:04 ` [PULL 1/7] system/dirtylimit: Fix a race situation Juan Quintela
2023-11-03 12:04 ` [PULL 2/7] system/dirtylimit: Drop the reduplicative check Juan Quintela
2023-11-03 12:04 ` [PULL 3/7] tests: Add migration dirty-limit capability test Juan Quintela
2023-11-03 12:04 ` [PULL 4/7] tests/migration: Introduce dirty-ring-size option into guestperf Juan Quintela
2023-11-03 12:04 ` [PULL 5/7] tests/migration: Introduce dirty-limit " Juan Quintela
2023-11-03 12:04 ` [PULL 6/7] docs/migration: Add the dirty limit section Juan Quintela
2023-11-03 12:04 ` Juan Quintela [this message]
2023-11-06 14:23 ` [PULL 0/7] Migration 20231103 patches Stefan Hajnoczi