From: Paolo Bonzini <pbonzini@redhat.com>
To: "Longpeng(Mike)" <longpeng2@huawei.com>, rth@twiddle.net
Cc: Peter Maydell <peter.maydell@linaro.org>,
arei.gonglei@huawei.com, huangzhichao@huawei.com,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
"qemu-devel @ nongnu . org" <qemu-devel@nongnu.org>
Subject: Re: [RFC] cpus: avoid get stuck in pause_all_vcpus
Date: Thu, 12 Mar 2020 16:28:54 +0100 [thread overview]
Message-ID: <8ed76f64-1a24-a278-51f3-19515e65ff39@redhat.com> (raw)
In-Reply-To: <20200310091443.1326-1-longpeng2@huawei.com>
On 10/03/20 10:14, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
>
> We hit an issue when the guest is rebooted repeatedly during migration: it can
> leave the migration thread sleeping forever, never to be woken up again.
>
> <main loop>                          |<migration_thread>
>                                      |
> LOCK BQL                             |
> ...                                  |
> main_loop_should_exit                |
>   pause_all_vcpus                    |
>     1. set all cpus ->stop=true      |
>        and then kick                 |
>     2. return if all cpus is paused  |
>        (by '->stopped == true'), else|
>     3. qemu_cond_wait [BQL UNLOCK]   |
>                                      |LOCK BQL
>                                      |...
>                                      |do_vm_stop
>                                      |  pause_all_vcpus
>                                      |    (A)set all cpus ->stop=true
>                                      |       and then kick
>                                      |    (B)return if all cpus is paused
>                                      |       (by '->stopped == true'), else
>                                      |    (C)qemu_cond_wait [BQL UNLOCK]
>     4. be waken up and LOCK BQL      |    (D)be waken up BUT wait for BQL
>     5. goto 2.                       |
>        (BQL is still LOCKed)         |
>   resume_all_vcpus                   |
>     1. set all cpus ->stop=false     |
>        and ->stopped=false           |
>     ...                              |
> BQL UNLOCK                           |    (E)LOCK BQL
>                                      |    (F)goto B. [but stopped is false now!]
>                                      |Finally, sleep at step 3 forever.
>
>
> Note: This patch is only meant to start a discussion of this issue; I'm
> looking forward to your suggestions, thanks!
Thanks Mike,
the above sketch is really helpful.
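To make the interleaving above easier to follow, here is a minimal,
single-threaded toy model of it (purely illustrative, not QEMU code; the
names vcpu_state, pause_request, all_paused and so on are invented for this
sketch). It replays the numbered steps and shows that by the time the
migration thread re-checks at (F), 'stopped' has already been cleared, so
its pause_all_vcpus() would go back to sleeping in qemu_cond_wait() forever:

/* Toy model of the race -- illustrative only, not QEMU code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool stop;      /* "please stop" request, set by pause_all_vcpus() */
    bool stopped;   /* acknowledged by the vCPU thread once it halts   */
} vcpu_state;

static vcpu_state cpu;

static void pause_request(void)   { cpu.stop = true; }            /* 1 / (A) */
static bool all_paused(void)      { return cpu.stopped; }         /* 2 / (B) */
static void vcpu_acks_pause(void) { cpu.stopped = true; }         /* vCPU    */
static void resume_all(void)      { cpu.stop = cpu.stopped = false; }

int main(void)
{
    pause_request();                                  /* main loop: 1        */
    printf("main loop paused? %d\n", all_paused());   /* 2: no -> waits at 3 */

    pause_request();                                  /* migration: (A)      */
    /* (B)/(C): the migration thread also waits for 'stopped'                */

    vcpu_acks_pause();                                /* vCPU halts, waiters
                                                         are woken up        */
    printf("main loop paused? %d\n", all_paused());   /* main loop: 5 -> yes */

    resume_all();                                     /* the stale resume    */

    /* (F): the migration thread finally gets the BQL and re-checks          */
    printf("migration paused? %d\n", all_paused());   /* 0 -> waits forever  */
    return 0;
}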
I think the problem is not that pause_all_vcpus() is not pausing hard
enough; the problem is rather that resume_all_vcpus(), when used outside
vm_start(), should know about the race and do nothing if it happens.
Fortunately resume_all_vcpus() does not release the BQL, so it is enough
to test the run state once; translated to code, this would be the patch to fix it:
diff --git a/cpus.c b/cpus.c
index b4f8b84b61..1eb7533a91 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1899,6 +1899,10 @@ void resume_all_vcpus(void)
 {
     CPUState *cpu;
 
+    if (!runstate_is_running()) {
+        return;
+    }
+
     qemu_clock_enable(QEMU_CLOCK_VIRTUAL, true);
     CPU_FOREACH(cpu) {
         cpu_resume(cpu);
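Applied to the toy model above, the guard turns the stale resume into a
no-op, so 'stopped' is still set when the migration thread re-checks at (F).
Again this is only a sketch: the vm_running flag below is invented and
merely stands in for runstate_is_running(), assuming (as the patch does)
that the run state has already left "running" by the time the race can hit:

/* Toy model with the proposed guard -- illustrative only, not QEMU code. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool stop, stopped; } vcpu_state;

static vcpu_state cpu;
static bool vm_running = true;      /* stands in for runstate_is_running() */

static void resume_all(void)
{
    if (!vm_running) {              /* the proposed early return           */
        return;
    }
    cpu.stop = cpu.stopped = false;
}

int main(void)
{
    cpu.stop = true;                /* main loop: 1                        */
    vm_running = false;             /* the VM has left the "running" state */
    cpu.stop = true;                /* migration thread: (A)               */
    cpu.stopped = true;             /* the vCPU halts and acks the request */

    resume_all();                   /* the stale resume is now a no-op     */

    printf("stopped at (F): %d\n", cpu.stopped);      /* 1 -> pause returns */
    return 0;
}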
Thanks,
Paolo
Thread overview: 9+ messages
2020-03-10 9:14 [RFC] cpus: avoid get stuck in pause_all_vcpus Longpeng(Mike)
2020-03-10 10:20 ` no-reply
2020-03-10 12:09 ` Longpeng (Mike)
2020-03-12 15:28 ` Paolo Bonzini [this message]
2020-03-13 1:43 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2020-03-13 7:09 ` Paolo Bonzini
2020-03-13 8:36 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2020-03-13 9:22 ` Paolo Bonzini
2020-03-13 9:41 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)