From: "Emilio G. Cota" <cota@braap.org>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: mttcg@greensocs.com,
	"Peter Crosthwaite" <crosthwaite.peter@gmail.com>,
	mark.burton@greensocs.com, qemu-devel@nongnu.org,
	a.rigo@virtualopensystems.com, serge.fdrv@gmail.com,
	"Richard Henderson" <rth@twiddle.net>,
	"Alex Bennée" <alex.bennee@linaro.org>,
	"Andreas Färber" <afaerber@suse.de>,
	fred.konrad@greensocs.com
Subject: Re: [Qemu-devel] [RFC v1 03/11] tcg: comment on which functions have to be called with tb_lock held
Date: Mon, 21 Mar 2016 17:50:44 -0400
Message-ID: <20160321215044.GA5098@flamenco>
In-Reply-To: <56EC3402.1030007@redhat.com>

On Fri, Mar 18, 2016 at 17:59:46 +0100, Paolo Bonzini wrote:
> On 18/03/2016 17:18, Alex Bennée wrote:
> > +
> > +    /* Protected by tb_lock.  */
> 
> Only writes are protected by tb_lock.  Reads happen outside the lock.
> 
> Reads are not quite thread-safe yet, because of tb_flush.  To fix
> that, we could either use the async_safe_run() mechanism from Fred
> or, preferably, move the code generation buffer under RCU.

A third approach (which I prefer) is to protect tb_jmp_cache with
a seqlock. That way, invalidations (via tlb_flush from other CPUs,
or via tb_flush) are picked up even when they race with concurrent
reads.
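
Roughly, I'm thinking of something like the following (a standalone
sketch using C11 atomics rather than QEMU's seqlock helpers; the
tb_jmp_cache_lock field name is made up):

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef struct {
        atomic_uint sequence;    /* odd while a write is in progress */
    } seqlock_t;

    /* Writers must already be serialized, e.g. by tb_lock. */
    static void seqlock_write_begin(seqlock_t *sl)
    {
        atomic_fetch_add_explicit(&sl->sequence, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);
    }

    static void seqlock_write_end(seqlock_t *sl)
    {
        atomic_thread_fence(memory_order_release);
        atomic_fetch_add_explicit(&sl->sequence, 1, memory_order_relaxed);
    }

    static unsigned seqlock_read_begin(seqlock_t *sl)
    {
        unsigned seq = atomic_load_explicit(&sl->sequence,
                                            memory_order_acquire);
        return seq & ~1u;    /* an odd value forces a retry below */
    }

    static bool seqlock_read_retry(seqlock_t *sl, unsigned seq)
    {
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&sl->sequence,
                                    memory_order_relaxed) != seq;
    }

A tb_jmp_cache lookup would then retry instead of racing with an
invalidate, e.g.:

    /* reader: retry the lookup if a flush raced with us */
    unsigned seq;
    TranslationBlock *tb;
    do {
        seq = seqlock_read_begin(&cpu->tb_jmp_cache_lock);
        tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
    } while (seqlock_read_retry(&cpu->tb_jmp_cache_lock, seq));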

> Because tb_flush is really rare, my suggestion is simply to allocate two
> code generation buffers and do something like
> 
> static int which_buffer_is_in_use_bit_mask = 1;
> ...
> 
>    /* in tb_flush */
>    assert(which_buffer_is_in_use_bit_mask != 3);
>    if (which_buffer_is_in_use_bit_mask == 1) {
>        which_buffer_is_in_use_bit_mask |= 2;
>        call_rcu(function doing which_buffer_is_in_use_bit_mask &= ~1);
>        point TCG to second buffer
>    } else if (which_buffer_is_in_use_bit_mask == 2) {
>        which_buffer_is_in_use_bit_mask |= 1;
>        call_rcu(function doing which_buffer_is_in_use_bit_mask &= ~2);
>        point TCG to first buffer
>    }
> 
> Basically, we just assert that call_rcu makes at least one pass between
> two tb_flushes.
> 
> All this is also a prerequisite for patch 1.
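
For concreteness, I read the above as something like this (a sketch
only; it uses liburcu's call_rcu() in place of QEMU's RCU API, and
the names and buffer-switching details are made up):

    #include <assert.h>
    #include <urcu.h>    /* liburcu: call_rcu(), struct rcu_head */

    static void *code_gen_buffers[2];
    /* protected by tb_lock */
    static int which_buffer_is_in_use_bit_mask = 1;
    static struct rcu_head flush_rcu_head;

    static void release_first_buffer(struct rcu_head *head)
    {
        which_buffer_is_in_use_bit_mask &= ~1;
    }

    static void release_second_buffer(struct rcu_head *head)
    {
        which_buffer_is_in_use_bit_mask &= ~2;
    }

    /* called from tb_flush() with tb_lock held */
    static void flip_code_gen_buffer(void)
    {
        /* both bits set would mean call_rcu has not completed a
         * pass between two tb_flushes */
        assert(which_buffer_is_in_use_bit_mask != 3);
        if (which_buffer_is_in_use_bit_mask == 1) {
            which_buffer_is_in_use_bit_mask |= 2;
            call_rcu(&flush_rcu_head, release_first_buffer);
            /* point TCG at code_gen_buffers[1] here */
        } else {
            which_buffer_is_in_use_bit_mask |= 1;
            call_rcu(&flush_rcu_head, release_second_buffer);
            /* point TCG at code_gen_buffers[0] here */
        }
    }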

The problem with this approach is that "point TCG to second buffer"
is not just a matter of pointing code_gen_buffer at a new address;
we'd have to create a new tcg_ctx struct, since tcg_ctx has quite a
few fields that depend on code_gen_buffer (e.g. s->code_ptr,
s->code_buf). Readers could then end up seeing a partially updated
(i.e. corrupt) tcg_ctx.

I know you're not enthusiastic about it, but I think a mechanism to
"stop all CPUs and wait until they have indeed stopped" is justified
in this case.
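
Sketching the kind of mechanism I mean (standalone pthreads code;
none of these names are existing QEMU API):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t stop_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t stop_cond = PTHREAD_COND_INITIALIZER;
    static bool stop_requested;
    static int running_cpus;    /* incremented as vCPU threads start */

    /* caller (e.g. tb_flush) waits until every vCPU has stopped */
    static void stop_all_cpus(void)
    {
        pthread_mutex_lock(&stop_lock);
        stop_requested = true;
        /* kick the vCPUs out of the guest here, e.g. cpu_exit() */
        while (running_cpus > 0) {
            pthread_cond_wait(&stop_cond, &stop_lock);
        }
        pthread_mutex_unlock(&stop_lock);
    }

    static void resume_all_cpus(void)
    {
        pthread_mutex_lock(&stop_lock);
        stop_requested = false;
        pthread_cond_broadcast(&stop_cond);
        pthread_mutex_unlock(&stop_lock);
    }

    /* each vCPU thread polls this at a safe point in its exec loop */
    static void cpu_maybe_stop(void)
    {
        pthread_mutex_lock(&stop_lock);
        if (stop_requested) {
            if (--running_cpus == 0) {
                pthread_cond_broadcast(&stop_cond);    /* wake stopper */
            }
            while (stop_requested) {    /* parked until resumed */
                pthread_cond_wait(&stop_cond, &stop_lock);
            }
            running_cpus++;
        }
        pthread_mutex_unlock(&stop_lock);
    }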

I'm preparing an RFC with these two changes (the seqlock and the
stop-all-CPUs mechanism) on top of these base patches.

Thanks,

		Emilio

Thread overview: 28+ messages
2016-03-18 16:18 [Qemu-devel] [RFC v1 00/11] Base enabling patches for MTTCG Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 01/11] tcg: move tb_find_fast outside the tb_lock critical section Alex Bennée
2016-03-18 16:54   ` Paolo Bonzini
2016-03-21 21:50   ` Emilio G. Cota
2016-03-21 22:08     ` Peter Maydell
2016-03-21 23:59       ` Emilio G. Cota
2016-03-22  8:29         ` Paolo Bonzini
2016-03-22 11:59         ` Alex Bennée
2016-03-22 11:55     ` Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 02/11] cpu-exec: elide more icount code if CONFIG_USER_ONLY Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 03/11] tcg: comment on which functions have to be called with tb_lock held Alex Bennée
2016-03-18 16:59   ` Paolo Bonzini
2016-03-21 21:50     ` Emilio G. Cota [this message]
2016-03-21 22:12       ` Paolo Bonzini
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 04/11] tcg: protect TBContext with tb_lock Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 05/11] target-arm/psci.c: wake up sleeping CPUs Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 06/11] tcg: cpus rm tcg_exec_all() Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 07/11] tcg: add options for enabling MTTCG Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 08/11] tcg: add kick timer for single-threaded vCPU emulation Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 09/11] tcg: drop global lock during TCG code execution Alex Bennée
2016-03-18 16:49   ` Paolo Bonzini
2016-03-23  9:19   ` KONRAD Frederic
2016-03-23 16:27     ` Alex Bennée
2016-03-23 20:36       ` Jan Kiszka
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 10/11] tcg: grab iothread lock in cpu-exec interrupt handling Alex Bennée
2016-03-18 16:48   ` Paolo Bonzini
2016-03-22 12:03     ` Alex Bennée
2016-03-18 16:18 ` [Qemu-devel] [RFC v1 11/11] tcg: enable thread-per-vCPU Alex Bennée
