From: Wang Xin via <qemu-devel@nongnu.org>
To: <qemu-devel@nongnu.org>
Cc: <dgilbert@redhat.com>, <quintela@redhat.com>,
	<weidong.huang@huawei.com>,
	 Wang Xin <wangxinxin.wang@huawei.com>,
	Huangyu Zhai <zhaihuanyu@huawei.com>
Subject: [PATCH] multifd: ensure multifd threads are terminated before cleanup params
Date: Sat, 12 Feb 2022 21:07:35 +0800	[thread overview]
Message-ID: <20220212130735.3236-1-wangxinxin.wang@huawei.com> (raw)

In multifd_save_cleanup(), we terminate all multifd threads and destroy
'p->mutex' while the mutex may still be held by the multifd send thread,
which causes QEMU to crash.

This happens because multifd_send_thread may be scheduled out right after
setting 'p->running' to false. To reproduce the problem, we put
multifd_send_thread to sleep for a few seconds before unlocking 'p->mutex':

static void *multifd_send_thread(void *opaque)
{
    ...
    qemu_mutex_lock(&p->mutex);
    p->running = false;
    usleep(5000000);
    ^^^^^^^^^^^^^^^^
    qemu_mutex_unlock(&p->mutex);
    ...
}
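
The same hazard can be shown outside QEMU with plain pthreads. The sketch
below is only an illustration of why destroying a mutex that another thread
still holds is unsafe; it is not QEMU code, and the worker/main structure,
names and sleep timings are made up for demonstration:

/*
 * Standalone illustration (not QEMU code): destroying a mutex that another
 * thread still holds is undefined behaviour per POSIX. An error-checking
 * implementation may return EBUSY; otherwise the later unlock operates on
 * destroyed state and can crash, which is what the multifd cleanup path
 * runs into.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    pthread_mutex_lock(&m);
    sleep(5);                   /* emulate being scheduled out while holding m */
    pthread_mutex_unlock(&m);   /* may operate on an already-destroyed mutex   */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                               /* let the worker take the lock */
    int ret = pthread_mutex_destroy(&m);    /* destroying a held mutex: UB  */
    printf("pthread_mutex_destroy() = %d\n", ret);
    pthread_join(t, NULL);
    return 0;
}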

As 'p->running' is used to indicate whether the multifd_send/recv thread
has been created, it should be set to false only after the thread has
terminated.
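
With the change, the teardown ordering in multifd_save_cleanup() roughly
becomes the sketch below (condensed from the hunks that follow; the recv
side in multifd_load_cleanup() is analogous):

    if (p->running) {
        qemu_thread_join(&p->thread);   /* wait until the thread has fully exited */
        p->running = false;             /* clear the flag only after the join     */
    }
    ...
    /* second cleanup loop: safe now, no thread can still hold p->mutex */
    qemu_mutex_destroy(&p->mutex);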

Signed-off-by: Wang Xin <wangxinxin.wang@huawei.com>
Signed-off-by: Huangyu Zhai <zhaihuanyu@huawei.com>

diff --git a/migration/multifd.c b/migration/multifd.c
index 76b57a7177..d8fc7d319e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -526,6 +526,7 @@ void multifd_save_cleanup(void)
 
         if (p->running) {
             qemu_thread_join(&p->thread);
+            p->running = false;
         }
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
@@ -707,10 +708,6 @@ out:
         qemu_sem_post(&multifd_send_state->channels_ready);
     }
 
-    qemu_mutex_lock(&p->mutex);
-    p->running = false;
-    qemu_mutex_unlock(&p->mutex);
-
     rcu_unregister_thread();
     trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
 
@@ -995,6 +992,7 @@ int multifd_load_cleanup(Error **errp)
              */
             qemu_sem_post(&p->sem_sync);
             qemu_thread_join(&p->thread);
+            p->running = false;
         }
     }
     for (i = 0; i < migrate_multifd_channels(); i++) {
@@ -1110,9 +1108,6 @@ static void *multifd_recv_thread(void *opaque)
         multifd_recv_terminate_threads(local_err);
         error_free(local_err);
     }
-    qemu_mutex_lock(&p->mutex);
-    p->running = false;
-    qemu_mutex_unlock(&p->mutex);
 
     rcu_unregister_thread();
     trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages);
-- 
2.26.0.windows.1


