From: "Denis V. Lunev" <den@openvz.org>
To: qemu-devel@nongnu.org
Cc: den@openvz.org, Maxim Nestratov <mnestratov@virtuozzo.com>,
Juan Quintela <quintela@redhat.com>,
Amit Shah <amit.shah@redhat.com>
Subject: [Qemu-devel] [PATCH 1/1] migration: fix ram decompression race deadlock
Date: Fri, 13 May 2016 10:27:01 +0300 [thread overview]
Message-ID: <1463124421-16587-1-git-send-email-den@openvz.org> (raw)
From: Maxim Nestratov <mnestratov@virtuozzo.com>
There is a race between do_data_decompress() and start_decompression().
do_data_decompress()
    while (!quit_decomp_thread) {
        qemu_mutex_lock(&param->mutex);
        while (!param->start && !quit_decomp_thread) {
            qemu_cond_wait(&param->cond, &param->mutex);
            ...
            param->start = false;
        }
        qemu_mutex_unlock(&param->mutex);
        [ preempted here, start_decompression() is executed ]
    }

start_decompression()
{
    qemu_mutex_lock(&param->mutex);
    param->start = true;
    qemu_cond_signal(&param->cond);
    qemu_mutex_unlock(&param->mutex);
}
In this case do_data_decompress() will never enter the inner loop again and
will spin, eating 100% CPU. The patch fixes this problem by making the inner
while loop do nothing but wait on the condition; all other actions are moved
out of it.
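
For illustration only, below is a minimal, self-contained sketch of the
corrected wait pattern. It uses plain pthreads instead of QEMU's
qemu_mutex/qemu_cond wrappers, and the names (Param, worker, kick, payload)
are made up for this example; they do not appear in migration/ram.c. It shows
the same point the patch makes: the inner loop only waits for the predicate,
while the work and the reset of 'start' happen after that loop, so a wakeup
delivered while the worker is busy is never lost.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    bool start;
    bool quit;
    int payload;                /* stands in for the compressed page */
} Param;

static void *worker(void *opaque)
{
    Param *p = opaque;
    bool quit = false;

    while (!quit) {
        pthread_mutex_lock(&p->mutex);
        /* the inner loop does nothing but wait for the predicate */
        while (!p->start && !p->quit) {
            pthread_cond_wait(&p->cond, &p->mutex);
        }
        /* the work and the reset of 'start' happen outside the wait loop,
         * so a 'start' set while the worker was busy is seen on the next
         * iteration instead of being lost */
        if (!p->quit) {
            printf("processed payload %d\n", p->payload);
        }
        p->start = false;
        quit = p->quit;
        pthread_mutex_unlock(&p->mutex);
    }
    return NULL;
}

static void kick(Param *p, int payload)
{
    pthread_mutex_lock(&p->mutex);
    p->payload = payload;
    p->start = true;                /* predicate is set under the mutex */
    pthread_cond_signal(&p->cond);  /* so the signal cannot be missed   */
    pthread_mutex_unlock(&p->mutex);
}

int main(void)
{
    Param p = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                false, false, 0 };
    pthread_t tid;

    pthread_create(&tid, NULL, worker, &p);
    kick(&p, 1);
    kick(&p, 2);

    pthread_mutex_lock(&p.mutex);
    p.quit = true;
    pthread_cond_signal(&p.cond);
    pthread_mutex_unlock(&p.mutex);

    pthread_join(tid, NULL);
    return 0;
}

Built with "cc -pthread sketch.c", it processes the kicks and exits; with the
pre-patch shape, where the work and "start = false" live inside the wait loop,
a kick() that lands while worker() is outside that loop leaves start stuck at
true and the worker spinning, exactly as described above.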
Signed-off-by: Maxim Nestratov <mnestratov@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Juan Quintela <quintela@redhat.com>
CC: Amit Shah <amit.shah@redhat.com>
---
migration/ram.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 3f05738..579bfc0 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2193,18 +2193,18 @@ static void *do_data_decompress(void *opaque)
         qemu_mutex_lock(&param->mutex);
         while (!param->start && !quit_decomp_thread) {
             qemu_cond_wait(&param->cond, &param->mutex);
-            pagesize = TARGET_PAGE_SIZE;
-            if (!quit_decomp_thread) {
-                /* uncompress() will return failed in some case, especially
-                 * when the page is dirted when doing the compression, it's
-                 * not a problem because the dirty page will be retransferred
-                 * and uncompress() won't break the data in other pages.
-                 */
-                uncompress((Bytef *)param->des, &pagesize,
-                           (const Bytef *)param->compbuf, param->len);
-            }
-            param->start = false;
         }
+        pagesize = TARGET_PAGE_SIZE;
+        if (!quit_decomp_thread) {
+            /* uncompress() will return failed in some case, especially
+             * when the page is dirted when doing the compression, it's
+             * not a problem because the dirty page will be retransferred
+             * and uncompress() won't break the data in other pages.
+             */
+            uncompress((Bytef *)param->des, &pagesize,
+                       (const Bytef *)param->compbuf, param->len);
+        }
+        param->start = false;
         qemu_mutex_unlock(&param->mutex);
     }
--
2.1.4