From: Peter Maydell <peter.maydell@linaro.org>
To: "Emilio G. Cota" <cota@braap.org>
Cc: Riku Voipio <riku.voipio@iki.fi>,
"qemu-ppc@nongnu.org" <qemu-ppc@nongnu.org>,
Alexander Graf <agraf@suse.de>,
QEMU Developers <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] linux-user crashes on clone(2) when run on ppc host
Date: Thu, 18 Jun 2015 15:55:54 +0100 [thread overview]
Message-ID: <CAFEAcA8uUhiac86G1qchNQAEHJs0R6_s1BD5+qfMHU0Gw2ec5Q@mail.gmail.com> (raw)
In-Reply-To: <20150618142309.GA12305@flamenco>
On 18 June 2015 at 15:23, Emilio G. Cota <cota@braap.org> wrote:
> On Thu, Jun 18, 2015 at 08:42:40 +0100, Peter Maydell wrote:
>> > What data structures are you referring to? Are they ppc-specific?
>>
>> None of the code generation data structures are locked at all --
>> if two threads try to generate code at the same time they'll
>> tend to clobber each other.
>
> AFAICT tb_gen_code is called with a mutex held (the sequence is
> mutex->tb_find_fast->tb_find_slow->tb_gen_code in cpu-exec.c)
>
> The only call to tb_gen_code in usermode that is not holding
> the lock is in cpu_breakpoint_insert->breakpoint_invalidate->
> tb_invalidate_phys_page_range->tb_gen_code. I'm not using
> gdb, so I guess I cannot trigger this.
>
> Am I missing something?

I'd forgotten we had that mutex. However, it's not
actually a sufficient fix for the problem. What needs to
happen is that:
(a) somebody actually sits down and figures out what data
structures we have and what locking/per-CPU-ness/etc. they need,
i.e. a design
(b) somebody implements that design

This is happening as part of the TCG multithreading work:
http://wiki.qemu.org/Features/tcg-multithread

This is the bug we've had kicking around for a while about
multithreading races:
https://bugs.launchpad.net/qemu/+bug/1098729

As just one example race, consider the possibility that
thread A calls tb_gen_code, which calls tb_alloc, which
calls tb_flush, which clears the whole code cache, and then
tb_gen_code starts generating code over the top of a TB
that thread B was in the middle of executing from...
>> On 17 June 2015 at 22:36, Emilio G. Cota <cota@braap.org> wrote:
>> > I don't think this is a race because it also breaks when
>> > run on a single core (with taskset -c 0).
>
> As I said, this problem doesn't seem to be a race.
The multiple threads will still all be racing with each
other on the single core.

In general I don't see much benefit in detailed investigation
into the precise reason why a guest program crashes when
the whole area is known to be fundamentally not designed
right...
thanks
-- PMM
Thread overview: 7+ messages
2015-06-17 0:52 [Qemu-devel] linux-user crashes on clone(2) when run on ppc host Emilio G. Cota
2015-06-17 8:58 ` Peter Maydell
2015-06-17 21:36 ` Emilio G. Cota
2015-06-18 7:42 ` Peter Maydell
2015-06-18 14:23 ` Emilio G. Cota
2015-06-18 14:55 ` Peter Maydell [this message]
2015-06-18 18:36 ` Emilio G. Cota