From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Leonardo Bras Soares Passos <lsoaresp@redhat.com>,
peterx@redhat.com, "Daniel P . Berrange" <berrange@redhat.com>,
Juan Quintela <quintela@redhat.com>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
Manish Mishra <manish.mishra@nutanix.com>
Subject: [PATCH v3 3/4] migration: Add a semaphore to count PONGs
Date: Wed, 8 Feb 2023 15:28:12 -0500
Message-ID: <20230208202813.1363225-4-peterx@redhat.com>
In-Reply-To: <20230208202813.1363225-1-peterx@redhat.com>
This is mostly useless on its own, but it gives us a way to know whether the
main channel has been correctly established, without changing the migration
protocol.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 3 +++
migration/migration.h | 6 ++++++
2 files changed, 9 insertions(+)
diff --git a/migration/migration.c b/migration/migration.c
index fb0ecf5649..a2e362541d 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3025,6 +3025,7 @@ retry:
case MIG_RP_MSG_PONG:
tmp32 = ldl_be_p(buf);
trace_source_return_path_thread_pong(tmp32);
+ qemu_sem_post(&ms->rp_state.rp_pong_acks);
break;
case MIG_RP_MSG_REQ_PAGES:
@@ -4524,6 +4525,7 @@ static void migration_instance_finalize(Object *obj)
qemu_sem_destroy(&ms->postcopy_pause_sem);
qemu_sem_destroy(&ms->postcopy_pause_rp_sem);
qemu_sem_destroy(&ms->rp_state.rp_sem);
+ qemu_sem_destroy(&ms->rp_state.rp_pong_acks);
qemu_sem_destroy(&ms->postcopy_qemufile_src_sem);
error_free(ms->error);
}
@@ -4570,6 +4572,7 @@ static void migration_instance_init(Object *obj)
qemu_sem_init(&ms->postcopy_pause_sem, 0);
qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);
qemu_sem_init(&ms->rp_state.rp_sem, 0);
+ qemu_sem_init(&ms->rp_state.rp_pong_acks, 0);
qemu_sem_init(&ms->rate_limit_sem, 0);
qemu_sem_init(&ms->wait_unplug_sem, 0);
qemu_sem_init(&ms->postcopy_qemufile_src_sem, 0);
diff --git a/migration/migration.h b/migration/migration.h
index c351872360..4cb1cb6fa8 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -276,6 +276,12 @@ struct MigrationState {
*/
bool rp_thread_created;
QemuSemaphore rp_sem;
+ /*
+  * Posted once for each PONG received from the destination. For now this
+  * is an easy way to know that the main channel has been successfully
+  * established on the destination QEMU.
+  */
+ QemuSemaphore rp_pong_acks;
} rp_state;
double mbps;
--
2.37.3
2023-02-08 20:28 [PATCH v3 0/4] migration: Fix disorder of channel creations Peter Xu
2023-02-08 20:28 ` [PATCH v3 1/4] migration: Rework multi-channel checks on URI Peter Xu
2023-02-08 20:28 ` [PATCH v3 2/4] migration: Cleanup postcopy_preempt_setup() Peter Xu
2023-02-09 19:34 ` Juan Quintela
2023-02-08 20:28 ` Peter Xu [this message]
2023-02-08 20:28 ` [PATCH v3 4/4] migration: Postpone postcopy preempt channel to be after main Peter Xu
2023-02-08 21:03 ` [PATCH v3 0/4] migration: Fix disorder of channel creations Peter Xu