From: "Alex Bennée" <alex.bennee@linaro.org>
To: pbonzini@redhat.com, boost.lists@gmail.com, pavel.dovgaluk@ispras.ru
Cc: cota@braap.org, qemu-devel@nongnu.org,
"Alex Bennée" <alex.bennee@linaro.org>,
"Peter Crosthwaite" <crosthwaite.peter@gmail.com>,
"Richard Henderson" <rth@twiddle.net>
Subject: [Qemu-devel] [RFC PATCH v1 3/9] cpus: only take BQL for sleeping threads
Date: Fri, 5 May 2017 11:38:16 +0100 [thread overview]
Message-ID: <20170505103822.20641-4-alex.bennee@linaro.org> (raw)
In-Reply-To: <20170505103822.20641-1-alex.bennee@linaro.org>

Now the only real need to hold the BQL is when we sleep on the
cpu->halt conditional. The lock is actually dropped while the thread
sleeps, so the window for contention is pretty small. This also
means we can remove the special case hack for exclusive work and
simply declare that work no longer has the BQL implicitly held. This
isn't a major problem: async work generally only changes things in
the context of its own vCPU. If it needs to work across vCPUs it
should use the exclusive mechanism or possibly take the lock
itself.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
cpus-common.c | 13 +++++--------
cpus.c | 10 ++++------
2 files changed, 9 insertions(+), 14 deletions(-)
diff --git a/cpus-common.c b/cpus-common.c
index 59f751ecf9..64661c3193 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -310,6 +310,11 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
queue_work_on_cpu(cpu, wi);
}
+/* Work items run outside of the BQL. This is essential for avoiding a
+ * deadlock for exclusive work but also applies to non-exclusive work.
+ * If the work requires cross-vCPU changes then it should use the
+ * exclusive mechanism.
+ */
void process_queued_cpu_work(CPUState *cpu)
{
struct qemu_work_item *wi;
@@ -327,17 +332,9 @@ void process_queued_cpu_work(CPUState *cpu)
}
qemu_mutex_unlock(&cpu->work_mutex);
if (wi->exclusive) {
- /* Running work items outside the BQL avoids the following deadlock:
- * 1) start_exclusive() is called with the BQL taken while another
- * CPU is running; 2) cpu_exec in the other CPU tries to takes the
- * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
- * neither CPU can proceed.
- */
- qemu_mutex_unlock_iothread();
start_exclusive();
wi->func(cpu, wi->data);
end_exclusive();
- qemu_mutex_lock_iothread();
} else {
wi->func(cpu, wi->data);
}
diff --git a/cpus.c b/cpus.c
index 89ae8cb30a..df279dd320 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1069,31 +1069,29 @@ static bool qemu_tcg_should_sleep(CPUState *cpu)
static void qemu_tcg_wait_io_event(CPUState *cpu)
{
- qemu_mutex_lock_iothread();
while (qemu_tcg_should_sleep(cpu)) {
+ qemu_mutex_lock_iothread();
stop_tcg_kick_timer();
qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+ qemu_mutex_unlock_iothread();
}
start_tcg_kick_timer();
qemu_wait_io_event_common(cpu);
-
- qemu_mutex_unlock_iothread();
}
static void qemu_kvm_wait_io_event(CPUState *cpu)
{
- qemu_mutex_lock_iothread();
while (cpu_thread_is_idle(cpu)) {
+ qemu_mutex_lock_iothread();
qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+ qemu_mutex_unlock_iothread();
}
qemu_wait_io_event_common(cpu);
-
- qemu_mutex_unlock_iothread();
}
static void *qemu_kvm_cpu_thread_fn(void *arg)
--
2.11.0