From: Paolo Bonzini <pbonzini@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: Blue Swirl <blauwirbel@gmail.com>, TeLeMan <geleman@gmail.com>,
Jan Kiszka <jan.kiszka@web.de>,
qemu-devel <qemu-devel@nongnu.org>,
David Gilbert <david.gilbert@linaro.org>
Subject: Re: [Qemu-devel] [PATCH] tcg: Reload local variables after return from longjmp
Date: Thu, 11 Aug 2011 16:10:03 +0200
Message-ID: <4E43E2BB.1020403@redhat.com>
In-Reply-To: <CAFEAcA-_xqNuXaOKV_tN-wac8RHZtKgeah=P=YJgRosjUZUcsg@mail.gmail.com>
On 08/11/2011 03:31 PM, Peter Maydell wrote:
>>>
>>> Then it's a compiler bug, not smartness. Making env volatile
>>> (or making a volatile copy if there is a performance impact)
>>> should still be enough to work around it.
> Yes. (It would have to be a volatile copy, I think; env is a function
> parameter and I don't think you can make those volatile.)
> https://bugs.launchpad.net/qemu/+bug/823902 includes some discussion
> of the effects on the test of adding the volatile copy.
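To make the workaround being discussed concrete, something like the
following standalone sketch would do. The stub names are invented (the
real code is cpu_exec(), which uses env->jmp_env and cpu_loop_exit());
the point is only that a non-volatile automatic variable modified
between setjmp() and the matching longjmp() has an indeterminate value
after the jump (C99 7.13.2.1), so a volatile local copy keeps env
well-defined when control comes back into the exception loop:

#include <setjmp.h>
#include <stdio.h>

/* Invented stand-ins for CPUState and the TCG exception path; real QEMU
 * uses env->jmp_env and cpu_loop_exit() instead of this global jmp_buf. */
typedef struct CPUStateStub {
    int exception_index;
} CPUStateStub;

static jmp_buf jmp_env;

static void cpu_loop_exit_stub(void)
{
    longjmp(jmp_env, 1);            /* unwind back into the exec loop */
}

static int cpu_exec_stub(CPUStateStub *env_param)
{
    /*
     * Non-volatile locals modified between setjmp() and longjmp() have
     * indeterminate values after the jump (C99 7.13.2.1).  The volatile
     * copy forces env to memory, so after the longjmp it is reloaded
     * instead of being assumed to still sit in a (now stale) register.
     */
    CPUStateStub *volatile env = env_param;

    if (setjmp(jmp_env) != 0) {
        /* Back from longjmp: reload anything cached from env here. */
        return env->exception_index;
    }

    env->exception_index = 42;      /* simulate an exception being raised */
    cpu_loop_exit_stub();           /* never returns */
    return 0;
}

int main(void)
{
    CPUStateStub cpu = { 0 };
    printf("exception_index = %d\n", cpu_exec_stub(&cpu));
    return 0;
}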
I'm not sure what to make of that:
> If I make cpu_single_env thread local with __thread and leave
> 0d101... in, then again it works reliably on 32bit Lucid, and is
> flaky on 64 bit Oneiric (5/10 2 hangs, 3 segs)
>
> I've also tried using a volatile local variable in cpu_exec to hold
> a copy of env and restore that rather than cpu_single_env. With this
> it's solid on 32bit lucid and flaky on 64bit Oneiric; these failures
> on 64bit OO look like it running off the end of the code buffer (all
> 0 code), jumping to non-existent code addresses and a seg in
> tb_reset_jump_recursive2.
It looks like neither a thread-local cpu_single_env nor a volatile copy
fixes the bug?!?
I cannot think off-hand of a reason why a thread-local cpu_single_env
should not work for iothread under Unix, BTW. Since cpu_single_env is
only set/used by one thread at a time (under the global lock), its
users cannot distinguish between a thread-local variable and a global.
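To illustrate that, here is a standalone sketch with invented names
(the mutex stands in for the global lock, the pointer for
cpu_single_env; this is not QEMU code). As long as the variable is only
set and read while the lock is held, and cleared again before the lock
is released, no thread can ever observe a value written by another
thread, so declaring it __thread (the GCC TLS extension; build with
gcc -pthread) is indistinguishable from leaving it a plain global:

#include <pthread.h>
#include <stdio.h>

/* Invented names: global_lock stands in for QEMU's global lock and
 * current_env for cpu_single_env.  Because it is only set and read under
 * the lock, and cleared before the lock is dropped, no thread can see a
 * value written by another thread -- so __thread vs. plain global makes
 * no observable difference here. */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread const char *current_env;

static void *cpu_thread(void *arg)
{
    const char *name = arg;

    pthread_mutex_lock(&global_lock);
    current_env = name;                  /* set only under the lock... */
    printf("%s sees current_env=%s\n", name, current_env);
    current_env = NULL;                  /* ...and cleared before unlock */
    pthread_mutex_unlock(&global_lock);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t0, NULL, cpu_thread, (void *)"cpu0");
    pthread_create(&t1, NULL, cpu_thread, (void *)"cpu1");
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}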
The only problem would be Windows, which runs cpu_signal in a
different thread from the CPU thread. But that can be fixed easily in
qemu_cpu_kick_thread.
Paolo
Thread overview: 18+ messages
2011-06-30 16:47 [Qemu-devel] "cpu-exec.c: avoid AREG0 use" breaks x86 emulation on x86-64 Jan Kiszka
2011-06-30 21:17 ` Stefan Weil
2011-07-01 1:44 ` TeLeMan
2011-07-01 20:15 ` Blue Swirl
2011-07-02 7:50 ` [Qemu-devel] [PATCH] tcg: Reload local variables after return from longjmp Jan Kiszka
2011-07-02 9:08 ` Blue Swirl
2011-07-02 9:43 ` Jan Kiszka
2011-07-03 14:09 ` Paolo Bonzini
2011-07-12 20:56 ` Blue Swirl
2011-08-11 11:30 ` Peter Maydell
2011-08-11 12:16 ` Paolo Bonzini
2011-08-11 12:40 ` Peter Maydell
2011-08-11 13:13 ` Paolo Bonzini
2011-08-11 13:31 ` Peter Maydell
2011-08-11 14:10 ` Paolo Bonzini [this message]
2011-08-11 14:12 ` David Gilbert
2011-08-11 14:24 ` Peter Maydell
2011-08-11 14:32 ` Paolo Bonzini