From: "Emilio G. Cota" <cota@braap.org>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 4/4] membarrier: add --enable-membarrier
Date: Wed, 21 Mar 2018 21:29:37 -0400
Message-ID: <20180322012937.GB16453@flamenco>
In-Reply-To: <20180309132922.24211-5-pbonzini@redhat.com>
On Fri, Mar 09, 2018 at 14:29:22 +0100, Paolo Bonzini wrote:
> Actually enable the global memory barriers if supported by the OS.
> Because only recent versions of Linux include the support, they
> are disabled by default. Note that it also has to be disabled
> for QEMU to run under Wine.
>
> Before this patch, rcutorture reports 85 ns/read for my machine,
> after the patch it reports 12.5 ns/read. On the other hand updates
> go from 50 *micro*seconds to 20 *milli*seconds.
It is indeed hard to see a large impact on performance given the
large size of our critical sections. But hey, rcu_read_unlock
goes down from 0.24% to 0.08% of execution time when booting
aarch64 linux!
As we remove bottlenecks, though, we should be able to get
more benefit from this, at least in MTTCG, where vCPU threads
exit the execution loop quite often.
I did some tests on qht-bench, moving the rcu_read_lock/unlock
pair to wrap each lookup instead of wrapping the entire test.
The results are great: without membarrier, lookup throughput
goes down by half; with it, it only goes down by 5%.
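For the record, the change is roughly the following (a sketch,
not the actual qht-bench diff; do_lookup() and test_stop stand
in for the benchmark's real lookup call and stop flag):

    /* before: the whole test loop runs inside one read-side
     * critical section */
    rcu_read_lock();
    while (!test_stop) {
        do_lookup();
    }
    rcu_read_unlock();

    /* after: one critical section per lookup, so every
     * iteration pays the rcu_read_lock/unlock cost */
    while (!test_stop) {
        rcu_read_lock();
        do_lookup();
        rcu_read_unlock();
    }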
(snip)
> +##########################################
> +# check for usable membarrier system call
> +if test "$membarrier" = "yes"; then
> + have_membarrier=no
> + if test "$mingw32" = "yes" ; then
> + have_membarrier=yes
> + elif test "$linux" = "yes" ; then
> + cat > $TMPC << EOF
> + #include <linux/membarrier.h>
> + #include <sys/syscall.h>
> + #include <unistd.h>
> + int main(void) {
> + syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);
> + syscall(__NR_membarrier, MEMBARRIER_CMD_SHARED, 0);
> + }
I think we should also check here that MEMBARRIER_CMD_SHARED is
actually supported; it is possible for a kernel to have
the system call yet not support that particular command
(e.g. when the kernel is built with nohz_full). Instead of only
failing at run-time (in smp_mb_global_init), we should perhaps
bark at configure time as well.
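MEMBARRIER_CMD_QUERY returns a bitmask of the commands the
running kernel supports, so (assuming configure can run the
test program, i.e. we are not cross-compiling) the test could
exit non-zero unless MEMBARRIER_CMD_SHARED is in the mask.
A sketch:

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdlib.h>

    int main(void)
    {
        /* QUERY returns a bitmask of the supported commands */
        long mask = syscall(__NR_membarrier, MEMBARRIER_CMD_QUERY, 0);

        if (mask < 0 || !(mask & MEMBARRIER_CMD_SHARED)) {
            exit(1);
        }
        exit(0);
    }

That would catch the nohz_full case at build time for native
builds; smp_mb_global_init would still have to handle it when
cross-compiling.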
Other than that the patches look good. Thanks for doing this!
Emilio
Thread overview: 9+ messages
2018-03-09 13:29 [Qemu-devel] [PATCH 0/4] Optionally use membarrier system call for RCU Paolo Bonzini
2018-03-09 13:29 ` [Qemu-devel] [PATCH 1/4] rcutorture: remove synchronize_rcu from readers Paolo Bonzini
2018-03-09 13:29 ` [Qemu-devel] [PATCH 2/4] rcu: make memory barriers more explicit Paolo Bonzini
2018-03-09 13:29 ` [Qemu-devel] [PATCH 3/4] membarrier: introduce qemu/sys_membarrier.h Paolo Bonzini
2018-03-09 13:29 ` [Qemu-devel] [PATCH 4/4] membarrier: add --enable-membarrier Paolo Bonzini
2018-03-22 1:29 ` Emilio G. Cota [this message]
2018-03-22 8:57 ` Paolo Bonzini
2018-03-09 13:37 ` [Qemu-devel] [PATCH 0/4] Optionally use membarrier system call for RCU no-reply
2018-03-22 1:03 ` Emilio G. Cota