From: Roman Kagan <rkagan@virtuozzo.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: [Qemu-devel] [RFC PATCH 1/2] cpus-common: nuke finish_safe_work
Date: Thu, 23 May 2019 10:54:48 +0000
Message-ID: <20190523105440.27045-2-rkagan@virtuozzo.com>
In-Reply-To: <20190523105440.27045-1-rkagan@virtuozzo.com>
It was introduced in commit ab129972c8b41e15b0521895a46fd9c752b68a5e,
with the following motivation:
  Because start_exclusive uses CPU_FOREACH, merge exclusive_lock with
  qemu_cpu_list_lock: together with a call to exclusive_idle (via
  cpu_exec_start/end) in cpu_list_add, this protects exclusive work
  against concurrent CPU addition and removal.
However, it appears to be redundant: the cpu-exclusive infrastructure
already provides sufficient protection against the newly added CPU
starting execution while cpu-exclusive work is running, and the
aforementioned traversal of the cpu list is protected by
qemu_cpu_list_lock.
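Concretely, the protection relied on here boils down to two things:
start_exclusive() walks the cpu list only while holding
qemu_cpu_list_lock, and a vCPU can only enter its execution loop via
cpu_exec_start(), which waits out any exclusive section that is already
in flight.  A simplified paraphrase of the latter (not a verbatim copy
of cpus-common.c, comments mine):

void cpu_exec_start(CPUState *cpu)
{
    atomic_set(&cpu->running, true);

    /* Write cpu->running before reading pending_cpus. */
    smp_mb();

    if (atomic_read(&pending_cpus)) {
        /* An exclusive section is pending or in progress. */
        qemu_mutex_lock(&qemu_cpu_list_lock);
        if (!cpu->has_waiter) {
            /* start_exclusive() did not count us: step aside and sleep
             * until the exclusive work has finished. */
            atomic_set(&cpu->running, false);
            exclusive_idle();
            atomic_set(&cpu->running, true);
        }
        qemu_mutex_unlock(&qemu_cpu_list_lock);
    }
}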
Besides, this appears to be the only place where the cpu-exclusive
section is entered with the BQL taken, which has been found to trigger
an AB-BA deadlock as follows:
     vCPU thread                          main thread
     -----------                          -----------
async_safe_run_on_cpu(self,
                      async_synic_update)
...                                       [cpu hot-add]
process_queued_cpu_work()
  qemu_mutex_unlock_iothread()
                                          [grab BQL]
  start_exclusive()                       cpu_list_add()
  async_synic_update()                      finish_safe_work()
    qemu_mutex_lock_iothread()                cpu_exec_start()
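In lock-ordering terms this is the classic AB-BA pattern.  Reduced to a
standalone illustration (plain pthreads, not QEMU code; cpu_exec_start()
waiting for the exclusive section to end is modeled as acquiring the
same lock):

#include <pthread.h>

static pthread_mutex_t bql  = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the BQL */
static pthread_mutex_t excl = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the exclusive section */

static void *vcpu_thread(void *arg)
{
    pthread_mutex_lock(&excl);  /* start_exclusive() */
    pthread_mutex_lock(&bql);   /* async_synic_update() -> qemu_mutex_lock_iothread() */
    pthread_mutex_unlock(&bql);
    pthread_mutex_unlock(&excl);
    return NULL;
}

static void *main_thread_hotadd(void *arg)
{
    pthread_mutex_lock(&bql);   /* BQL held across the CPU hot-add */
    pthread_mutex_lock(&excl);  /* finish_safe_work() -> cpu_exec_start() */
    pthread_mutex_unlock(&excl);
    pthread_mutex_unlock(&bql);
    return NULL;
}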
So remove it. This paves the way to establishing a strict nesting rule
of never entering the exclusive section with the BQL taken.
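For illustration only, one way such a rule can be made self-checking
(the actual assertion proposed in patch 2/2 may differ):

void start_exclusive(void)
{
    /* Never enter an exclusive section with the BQL taken. */
    assert(!qemu_mutex_iothread_locked());

    /* ... existing body of start_exclusive() unchanged ... */
}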
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
---
cpus-common.c | 8 --------
1 file changed, 8 deletions(-)
diff --git a/cpus-common.c b/cpus-common.c
index 3ca58c64e8..023cfebfa3 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -69,12 +69,6 @@ static int cpu_get_free_index(void)
     return cpu_index;
 }
 
-static void finish_safe_work(CPUState *cpu)
-{
-    cpu_exec_start(cpu);
-    cpu_exec_end(cpu);
-}
-
 void cpu_list_add(CPUState *cpu)
 {
     qemu_mutex_lock(&qemu_cpu_list_lock);
@@ -86,8 +80,6 @@ void cpu_list_add(CPUState *cpu)
     }
     QTAILQ_INSERT_TAIL_RCU(&cpus, cpu, node);
     qemu_mutex_unlock(&qemu_cpu_list_lock);
-
-    finish_safe_work(cpu);
 }
 
 void cpu_list_remove(CPUState *cpu)
--
2.21.0
Thread overview: 12+ messages
2019-05-23 10:54 [Qemu-devel] [RFC PATCH 0/2] establish nesting rule of BQL vs cpu-exclusive Roman Kagan
2019-05-23 10:54 ` Roman Kagan [this message]
2019-06-24 10:58 ` [Qemu-devel] [RFC PATCH 1/2] cpus-common: nuke finish_safe_work Alex Bennée
2019-06-24 11:50 ` Roman Kagan
2019-06-24 12:43 ` Alex Bennée
2019-05-23 10:54 ` [Qemu-devel] [RFC PATCH 2/2] cpus-common: assert BQL nesting within cpu-exclusive sections Roman Kagan
2019-05-23 11:31 ` [Qemu-devel] [RFC PATCH 0/2] establish nesting rule of BQL vs cpu-exclusive Alex Bennée
2019-05-27 11:05 ` Roman Kagan
2019-06-06 13:22 ` Roman Kagan
2019-06-21 12:49 ` Roman Kagan
2019-08-05 12:47 ` Roman Kagan
2019-08-05 15:56 ` Paolo Bonzini