From: Sergey Fedorov <sergey.fedorov@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, "Sergey Fedorov" <serge.fdrv@gmail.com>,
mttcg@listserver.greensocs.com, fred.konrad@greensocs.com,
a.rigo@virtualopensystems.com, cota@braap.org,
bobby.prani@gmail.com, rth@twiddle.net,
mark.burton@greensocs.com, pbonzini@redhat.com,
jan.kiszka@siemens.com, peter.maydell@linaro.org,
claudio.fontana@huawei.com,
"Alex Bennée" <alex.bennee@linaro.org>,
"Sergey Fedorov" <sergey.fedorov@linaro.org>,
"Peter Crosthwaite" <crosthwaite.peter@gmail.com>
Subject: [Qemu-devel] [PATCH v4 09/12] tcg: cpu-exec: remove tb_lock from the hot-path
Date: Fri, 15 Jul 2016 20:58:49 +0300 [thread overview]
Message-ID: <20160715175852.30749-10-sergey.fedorov@linaro.org> (raw)
In-Reply-To: <20160715175852.30749-1-sergey.fedorov@linaro.org>
From: Alex Bennée <alex.bennee@linaro.org>
Lock contention in the hot path of moving between existing patched
TranslationBlocks is the main drag in multithreaded performance. This
patch pushes the tb_lock() usage down to the two places that really need
it:
- code generation (tb_gen_code)
- jump patching (tb_add_jump)
The rest of the code doesn't really need to hold a lock as it either
uses per-CPU structures, is updated atomically, or is designed for use
in concurrent read situations (qht_lookup).
To keep things simple I removed the #ifdef CONFIG_USER_ONLY stuff as the
locks become NOPs anyway until the MTTCG work is completed.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
v2 (hot path)
- Add r-b tags
v1 (hot path, split from base-patches series)
- revert name tweaking
- drop test jmp_list_next outside lock
- mention lock NOPs in comments
v3 (base-patches)
- fix merge conflicts with Sergey's patch
---
cpu-exec.c | 48 +++++++++++++++++++++---------------------------
1 file changed, 21 insertions(+), 27 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
index e16df762f50a..bbaed5bb1978 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -286,35 +286,29 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
TranslationBlock *tb;
tb = tb_find_physical(cpu, pc, cs_base, flags);
- if (tb) {
- goto found;
- }
+ if (!tb) {
-#ifdef CONFIG_USER_ONLY
- /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
- * taken outside tb_lock. Since we're momentarily dropping
- * tb_lock, there's a chance that our desired tb has been
- * translated.
- */
- tb_unlock();
- mmap_lock();
- tb_lock();
- tb = tb_find_physical(cpu, pc, cs_base, flags);
- if (tb) {
- mmap_unlock();
- goto found;
- }
-#endif
+ /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
+ * taken outside tb_lock. As system emulation is currently
+ * single threaded the locks are NOPs.
+ */
+ mmap_lock();
+ tb_lock();
- /* if no translated code available, then translate it now */
- tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
+ /* There's a chance that our desired tb has been translated while
+ * taking the locks so we check again inside the lock.
+ */
+ tb = tb_find_physical(cpu, pc, cs_base, flags);
+ if (!tb) {
+ /* if no translated code available, then translate it now */
+ tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
+ }
-#ifdef CONFIG_USER_ONLY
- mmap_unlock();
-#endif
+ tb_unlock();
+ mmap_unlock();
+ }
-found:
- /* we add the TB in the virtual pc hash table */
+ /* We add the TB in the virtual pc hash table for the fast lookup */
atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);
return tb;
}
@@ -332,7 +326,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
always be the same before a given translated block
is executed. */
cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
- tb_lock();
tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
if (unlikely(!tb || atomic_read(&tb->pc) != pc ||
atomic_read(&tb->cs_base) != cs_base ||
@@ -350,14 +343,15 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
#endif
/* See if we can patch the calling TB. */
if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
+ tb_lock();
/* Check if translation buffer has been flushed */
if (cpu->tb_flushed) {
cpu->tb_flushed = false;
} else if (!tb_is_invalid(tb)) {
tb_add_jump(last_tb, tb_exit, tb);
}
+ tb_unlock();
}
- tb_unlock();
return tb;
}
--
2.9.1
Thread overview: 14+ messages
2016-07-15 17:58 [Qemu-devel] [PATCH v4 00/12] Reduce lock contention on TCG hot-path Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 01/12] util/qht: Document memory ordering assumptions Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 02/12] tcg: Pass last_tb by value to tb_find_fast() Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 03/12] tcg: Prepare safe tb_jmp_cache lookup out of tb_lock Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 04/12] tcg: Prepare safe access to tb_flushed " Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 05/12] target-i386: Remove redundant HF_SOFTMMU_MASK Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 06/12] tcg: Introduce tb_mark_invalid() and tb_is_invalid() Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 07/12] tcg: Prepare TB invalidation for lockless TB lookup Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 08/12] tcg: set up tb->page_addr before insertion Sergey Fedorov
2016-07-15 17:58 ` Sergey Fedorov [this message]
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 10/12] tcg: Avoid bouncing tb_lock between tb_gen_code() and tb_add_jump() Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 11/12] tcg: Merge tb_find_slow() and tb_find_fast() Sergey Fedorov
2016-07-15 17:58 ` [Qemu-devel] [PATCH v4 12/12] tcg: rename tb_find_physical() Sergey Fedorov
2016-07-16 13:51 ` [Qemu-devel] [PATCH v4 00/12] Reduce lock contention on TCG hot-path Paolo Bonzini