From: "Alex Bennée" <alex.bennee@linaro.org>
To: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>,
David Gibson <david@gibson.dropbear.id.au>,
qemu-ppc@nongnu.org, Qemu Developers <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] Holding the BQL for emulate_ppc_hypercall
Date: Tue, 25 Oct 2016 09:39:07 +0100
Message-ID: <8760ogkel0.fsf@linaro.org>
In-Reply-To: <87zilt6ql1.fsf@abhimanyu.i-did-not-set--mail-host-address--so-tickle-me>
Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> writes:
> Alex Bennée <alex.bennee@linaro.org> writes:
>
>> Hi,
>>
>> In the MTTCG patch set one of the big patches is to remove the
>> requirement to hold the BQL while running code:
>>
>> tcg: drop global lock during TCG code execution
>>
>> And this broke the PPC code because emulate_ppc_hypercall can cause
>> changes to the global state. This function just calls spapr_hypercall()
>> and puts the results into the TCG register file. Normally
>> spapr_hypercall() is called under the BQL in KVM as
>> kvm_arch_handle_exit() does things with the BQL held.
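>>
>> For context, the helper in question boils down to something like
>> this (a rough sketch of the code in hw/ppc/spapr.c at the time, with
>> any guards around the call omitted):
>>
>>     static void emulate_ppc_hypercall(PowerPCCPU *cpu)
>>     {
>>         CPUPPCState *env = &cpu->env;
>>
>>         /* The hypercall number arrives in r3, arguments from r4 on;
>>          * the result goes back into r3 in the TCG register file. */
>>         env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>>     }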
>>
>> I blithely wrapped the call in a lock/unlock pair, only to find the
>> ppc64 check builds failed because the hypercall can also be made
>> from the cc->do_interrupt() code, which already holds the BQL.
>>
>> I'm a little confused by the nature of PPC hypercalls in TCG. Are
>> they not all detectable at code generation time? What is the case
>> that causes an exception to occur rather than the helper function
>> doing the hypercall directly?
>>
>> I guess it comes down to: can I avoid doing the following?
>>
>>     /* If we come via cc->do_interrupt() the BQL may already be held */
>>     if (!qemu_mutex_iothread_locked()) {
>>         qemu_mutex_lock_iothread();
>>         env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>>         qemu_mutex_unlock_iothread();
>>     } else {
>>         env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>>     }
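>>
>> If it can't be avoided, the conditional locking could at least be
>> factored out so the call appears only once. A minimal sketch (the
>> helper name is illustrative, not something in the tree):
>>
>>     static void hypercall_with_bql(PowerPCCPU *cpu, CPUPPCState *env)
>>     {
>>         bool locked = qemu_mutex_iothread_locked();
>>
>>         /* Only take the BQL if we don't already hold it, e.g. when
>>          * we arrive here via cc->do_interrupt(). */
>>         if (!locked) {
>>             qemu_mutex_lock_iothread();
>>         }
>>         env->gpr[3] = spapr_hypercall(cpu, env->gpr[3], &env->gpr[4]);
>>         if (!locked) {
>>             qemu_mutex_unlock_iothread();
>>         }
>>     }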
>>
>> Any thoughts?
>
> Similar discussions happened on this patch:
> https://lists.gnu.org/archive/html/qemu-ppc/2016-09/msg00015.html
>
> This was only working for the TCG case and still needs fixing for
> KVM; I would need to handle the KVM case to avoid a deadlock.
Thanks for the pointer, I had missed that before.

But I think the fix there is too far down the call stack, as
spapr_hypercall() is called by both the TCG and KVM paths. As
discussed in my reply to Dave, I think the correct fix is to ensure
cpu-exec.c:cpu_handle_exception() also takes the BQL when delivering
exceptions:
    if (replay_exception()) {
        CPUClass *cc = CPU_GET_CLASS(cpu);
        qemu_mutex_lock_iothread();
        cc->do_interrupt(cpu);
        qemu_mutex_unlock_iothread();
        cpu->exception_index = -1;
    } else if (!replay_has_interrupt()) {
I got confused by the if (replay_exception()) condition, which is a
bit non-obvious; as far as I can tell it is always true in normal
execution and only gates exception delivery when record/replay is in
use.
>
> Regards
> Nikunj
--
Alex Bennée