From: "Emilio G. Cota" <cota@braap.org>
To: QEMU Developers <qemu-devel@nongnu.org>,
MTTCG Devel <mttcg@listserver.greensocs.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <rth@twiddle.net>,
"Sergey Fedorov" <serge.fdrv@gmail.com>
Subject: [Qemu-devel] [PATCH v2 3/3] atomics: do not emit consume barrier for atomic_rcu_read
Date: Tue, 24 May 2016 16:06:14 -0400
Message-ID: <1464120374-8950-4-git-send-email-cota@braap.org>
In-Reply-To: <1464120374-8950-1-git-send-email-cota@braap.org>

Currently we emit a consume-load in atomic_rcu_read. This is
overkill for non-Sparc hosts, and is only useful to make things
easier for Thread Sanitizer, which, as far as I understand, works
best without explicit fences.
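
To illustrate why the consume-load is overkill: current compilers
promote __ATOMIC_CONSUME to __ATOMIC_ACQUIRE, which on aarch64 emits
a load-acquire (ldar), whereas smp_read_barrier_depends() costs
nothing on non-Sparc hosts. A minimal standalone sketch of the two
alternatives (not part of the patch; names are hypothetical, and the
barrier macros from include/qemu/atomic.h are inlined here for
self-containment):

    #define barrier()  __asm__ __volatile__("" ::: "memory")
    #define smp_read_barrier_depends()  barrier() /* no-op on non-Sparc */

    struct node { int val; };
    struct node *shared;

    /* Consume-load: promoted to __ATOMIC_ACQUIRE by current
     * compilers, i.e. a load-acquire (ldar) on aarch64. */
    static struct node *read_consume(void)
    {
        struct node *p;

        __atomic_load(&shared, &p, __ATOMIC_CONSUME);
        return p;
    }

    /* Relaxed load + dependency barrier: a plain load (ldr) on
     * aarch64; the address dependency alone keeps subsequent
     * dependent loads ordered. */
    static struct node *read_relaxed(void)
    {
        struct node *p;

        __atomic_load(&shared, &p, __ATOMIC_RELAXED);
        smp_read_barrier_depends();
        return p;
    }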

The appended patch leaves the consume-load in atomic_rcu_read when
compiling with Thread Sanitizer enabled, and falls back to a relaxed
load + smp_read_barrier_depends() otherwise.

On a host architecture with a relaxed memory order (RMO), such as
aarch64, the performance improvement from this change is easily
measurable. For instance, qht-bench performs an atomic_rcu_read()
on every lookup. Performance before and after applying this patch:

$ tests/qht-bench -d 5 -n 1
Before: 9.78 MT/s
After:  10.96 MT/s
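
For context, each lookup exercises roughly the following reader-side
pattern (a sketch with hypothetical names; qht's actual lookup path
differs):

    struct entry *e;

    rcu_read_lock();
    e = atomic_rcu_read(&head);            /* the load this patch affects */
    while (e && e->hash != hash) {
        e = atomic_rcu_read(&e->next);     /* dependent loads stay ordered */
    }
    /* ... use e while still within the read-side critical section ... */
    rcu_read_unlock();
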
Signed-off-by: Emilio G. Cota <cota@braap.org>
---
include/qemu/atomic.h | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 4a4f2fb..c5b6c8d 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -63,13 +63,20 @@
         __atomic_store(ptr, &_val, __ATOMIC_RELAXED);   \
     } while(0)
 
-/* Atomic RCU operations imply weak memory barriers */
+#ifdef __SANITIZE_THREAD__
+#define atomic_rcu_read__nocheck(ptr, valptr)           \
+    __atomic_load(ptr, valptr, __ATOMIC_CONSUME);
+#else
+#define atomic_rcu_read__nocheck(ptr, valptr)           \
+    __atomic_load(ptr, valptr, __ATOMIC_RELAXED);       \
+    smp_read_barrier_depends();
+#endif
 #define atomic_rcu_read(ptr)                            \
     ({                                                  \
     QEMU_BUILD_BUG_ON(sizeof(*ptr) > sizeof(void *));   \
     typeof(*ptr) _val;                                  \
-    __atomic_load(ptr, &_val, __ATOMIC_CONSUME);        \
+    atomic_rcu_read__nocheck(ptr, &_val);               \
     _val;                                               \
     })
--
2.5.0