qemu-devel.nongnu.org archive mirror
From: Frederic Konrad <fred.konrad@greensocs.com>
To: Alexander Graf <agraf@suse.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	mttcg@listserver.greensocs.com,
	qemu-devel <qemu-devel@nongnu.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [Qemu-devel] global_mutex and multithread.
Date: Thu, 15 Jan 2015 14:30:55 +0100	[thread overview]
Message-ID: <54B7C10F.20504@greensocs.com> (raw)
In-Reply-To: <54B7A124.3020300@suse.de>

On 15/01/2015 12:14, Alexander Graf wrote:
>
> On 15.01.15 12:12, Paolo Bonzini wrote:
>> [now with correct listserver address]
>>
>> On 15/01/2015 11:25, Frederic Konrad wrote:
>>> Hi everybody,
>>>
>>> In the case of multithreaded TCG, what is the best way to handle
>>> qemu_global_mutex?
>>> We thought of having one mutex per vCPU and then synchronizing the vCPU
>>> threads when they exit (e.g. in tcg_exec_all).
>>>
>>> Does that make sense?
>> The basic ideas from Jan's patch in
>> http://article.gmane.org/gmane.comp.emulators.qemu/118807 still apply.
>>
>> RAM block reordering doesn't exist anymore, having been replaced with
>> mru_block.
>>
>> The patch reacquired the lock when entering MMIO or PIO emulation.
>> That's enough while there is only one VCPU thread.
>>
>> Once you have >1 VCPU thread you'll need the RCU work that I am slowly
>> polishing and sending out.  That's because one device can change the
>> memory map, and that will cause a tlb_flush for all CPUs in tcg_commit,
>> and that's not thread-safe.
> You'll have a similar problem for tb_flush() if you use a single TB
> cache. For now, just introduce a big-hammer function that IPIs all the
> other threads, waits until they have halted, performs the atomic
> operation (such as changing the memory map or flushing the TB cache),
> and then lets them continue.
>
> We can later get rid of the callers of this one by one.
>
>
> Alex
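(For illustration, the big-hammer scheme described above might look roughly like the untested sketch below. All names here are made up for the example, not actual QEMU APIs.)

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t exclusive_cond = PTHREAD_COND_INITIALIZER;
static int running_cpus;        /* vCPU threads currently executing TBs */
static bool exclusive_pending;  /* someone wants the exclusive section */

/* A vCPU thread calls this before entering the TB execution loop. */
static void cpu_begin_running(void)
{
    pthread_mutex_lock(&exclusive_lock);
    while (exclusive_pending) {
        pthread_cond_wait(&exclusive_cond, &exclusive_lock);
    }
    running_cpus++;
    pthread_mutex_unlock(&exclusive_lock);
}

/* ...and this when it leaves the loop (e.g. after an exit request). */
static void cpu_end_running(void)
{
    pthread_mutex_lock(&exclusive_lock);
    running_cpus--;
    pthread_cond_broadcast(&exclusive_cond);
    pthread_mutex_unlock(&exclusive_lock);
}

/* The big hammer: stop everyone, run fn() alone, let them continue. */
static void run_exclusive(void (*fn)(void))
{
    pthread_mutex_lock(&exclusive_lock);
    exclusive_pending = true;
    /* The real thing would IPI / set the exit flag of all other vCPU
     * threads here so they drop out of their execution loop. */
    while (running_cpus > 0) {
        pthread_cond_wait(&exclusive_cond, &exclusive_lock);
    }
    fn();   /* e.g. flush the TB cache or update the memory map */
    exclusive_pending = false;
    pthread_cond_broadcast(&exclusive_cond);
    pthread_mutex_unlock(&exclusive_lock);
}
```

A real implementation would additionally need to handle a thread that requests the exclusive section while itself still counted as running, or two threads requesting it at once.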
Maybe we could put a flag in the TB to say it is being executed, so that
tb_alloc won't try to reallocate it?

Then again, maybe that's a bad idea and it will actually be slower than
exiting and waiting for all the other CPUs.
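
(As an illustration, the flag idea might look something like this; the field and helpers are hypothetical, not existing QEMU code.)

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef struct TranslationBlock {
    /* hypothetical field: how many vCPU threads are inside this TB */
    atomic_int executing;
    /* ... pc, flags, pointer to generated code, etc. ... */
} TranslationBlock;

/* A vCPU thread marks the TB before jumping into its generated code. */
static void tb_enter_exec(TranslationBlock *tb)
{
    atomic_fetch_add(&tb->executing, 1);
}

/* ...and unmarks it when execution of the TB has finished. */
static void tb_exit_exec(TranslationBlock *tb)
{
    atomic_fetch_sub(&tb->executing, 1);
}

/* tb_alloc would only recycle a TB that nobody is currently executing. */
static bool tb_can_recycle(TranslationBlock *tb)
{
    return atomic_load(&tb->executing) == 0;
}
```

The cost worry above is real: the counter adds two atomic operations to every TB execution, which is exactly the kind of overhead a per-execution fast path wants to avoid.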

Fred


Thread overview: 22+ messages
2015-01-15 10:25 [Qemu-devel] global_mutex and multithread Frederic Konrad
2015-01-15 10:34 ` Peter Maydell
2015-01-15 10:41   ` Frederic Konrad
2015-01-15 10:44 ` Paolo Bonzini
2015-01-15 11:12 ` Paolo Bonzini
2015-01-15 11:14   ` Alexander Graf
2015-01-15 11:26     ` Paolo Bonzini
2015-01-15 13:30     ` Frederic Konrad [this message]
2015-01-15 13:34       ` Mark Burton
2015-01-15 12:51   ` Frederic Konrad
2015-01-15 12:56     ` Paolo Bonzini
2015-01-15 13:27       ` Frederic Konrad
2015-01-15 13:30         ` Peter Maydell
2015-01-15 19:07   ` Mark Burton
2015-01-15 20:27     ` Paolo Bonzini
2015-01-15 20:53       ` Mark Burton
2015-01-15 21:41         ` Paolo Bonzini
2015-01-15 21:41         ` Paolo Bonzini
2015-01-16  7:25           ` Mark Burton
2015-01-16  8:07             ` Jan Kiszka
2015-01-16  8:43               ` Frederic Konrad
2015-01-16  8:52               ` Mark Burton

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=54B7C10F.20504@greensocs.com \
    --to=fred.konrad@greensocs.com \
    --cc=agraf@suse.de \
    --cc=mttcg@listserver.greensocs.com \
    --cc=pbonzini@redhat.com \
    --cc=peter.maydell@linaro.org \
    --cc=qemu-devel@nongnu.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
