From: Lidong Chen <jemmy858585@gmail.com>
To: zhang.zhanghailiang@huawei.com, quintela@redhat.com, dgilbert@redhat.com
Cc: qemu-devel@nongnu.org, Lidong Chen <jemmy858585@gmail.com>,
	Lidong Chen <lidongchen@tencent.com>
Subject: [Qemu-devel] [PATCH v6 03/12] migration: avoid concurrent invoke channel_close by different threads
Date: Fri,  3 Aug 2018 17:13:41 +0800	[thread overview]
Message-ID: <1533287630-4221-4-git-send-email-lidongchen@tencent.com> (raw)
In-Reply-To: <1533287630-4221-1-git-send-email-lidongchen@tencent.com>

From: Lidong Chen <jemmy858585@gmail.com>

channel_close() may be invoked by different threads. For example, the source
QEMU invokes qemu_fclose() from the main thread, the migration thread, and
the return path thread. The destination QEMU invokes qemu_fclose() from the
main thread, the listen thread, and the COLO incoming thread.

Signed-off-by: Lidong Chen <lidongchen@tencent.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
---
 migration/migration.c | 2 ++
 migration/migration.h | 7 +++++++
 migration/qemu-file.c | 6 ++++--
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index b7d9854..a3a0756 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3200,6 +3200,7 @@ static void migration_instance_finalize(Object *obj)
     qemu_sem_destroy(&ms->postcopy_pause_sem);
     qemu_sem_destroy(&ms->postcopy_pause_rp_sem);
     qemu_sem_destroy(&ms->rp_state.rp_sem);
+    qemu_mutex_destroy(&ms->qemu_file_close_lock);
     error_free(ms->error);
 }
 
@@ -3236,6 +3237,7 @@ static void migration_instance_init(Object *obj)
     qemu_sem_init(&ms->rp_state.rp_sem, 0);
     qemu_sem_init(&ms->rate_limit_sem, 0);
     qemu_mutex_init(&ms->qemu_file_lock);
+    qemu_mutex_init(&ms->qemu_file_close_lock);
 }
 
 /*
diff --git a/migration/migration.h b/migration/migration.h
index 64a7b33..a50c2de 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -122,6 +122,13 @@ struct MigrationState
     QemuMutex qemu_file_lock;
 
     /*
+     * to_src_file and from_dst_file point to the same QIOChannelRDMA,
+     * and qemu_fclose() may be invoked by different threads. Use this lock
+     * to avoid concurrent invocation of channel_close by different threads.
+     */
+    QemuMutex qemu_file_close_lock;
+
+    /*
      * Used to allow urgent requests to override rate limiting.
      */
     QemuSemaphore rate_limit_sem;
diff --git a/migration/qemu-file.c b/migration/qemu-file.c
index 977b9ae..74c48e0 100644
--- a/migration/qemu-file.c
+++ b/migration/qemu-file.c
@@ -323,12 +323,14 @@ void qemu_update_position(QEMUFile *f, size_t size)
  */
 int qemu_fclose(QEMUFile *f)
 {
-    int ret;
+    int ret, ret2;
     qemu_fflush(f);
     ret = qemu_file_get_error(f);
 
     if (f->ops->close) {
-        int ret2 = f->ops->close(f->opaque);
+        qemu_mutex_lock(&migrate_get_current()->qemu_file_close_lock);
+        ret2 = f->ops->close(f->opaque);
+        qemu_mutex_unlock(&migrate_get_current()->qemu_file_close_lock);
         if (ret >= 0) {
             ret = ret2;
         }
-- 
1.8.3.1
