* PPC64 TCG problem.. MSR[SF] switching.
From: Ivan Warren @ 2021-01-24 2:03 UTC
To: qemu-ppc; +Cc: qemu-devel
Hello people,
I have the following issue: I'm running an OS (not Linux) on
qemu-system-ppc64 (in my case a POWER8 guest, with x86_64 as the TCG
host).
This OS provides a set of NARROW/WIDE (MSR[SF]) agnostic code snippets
in the first 64K of the address space, so they can be called with the
PPC 'bla' instruction. This is presumably kernel-provided code, so the
kernel can supply the best strategy for the current runtime environment
depending on the CPU model or on whatever the sPAPR hypervisor reports.
One of these routines is first called in NARROW mode, and the TCG-generated
code reflects that: it seems to emit address folding in the TCG output
(judging by the out_asm log output) and/or possibly uses the 32-bit softmmu
helper (though I can't be sure of the latter).
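If it helps, here is my (possibly wrong) mental model of what the
NARROW-mode translation bakes in, as a small illustrative C sketch (not
actual QEMU code; the function name is made up):

    #include <stdint.h>

    /* Illustrative only: what a NARROW-mode (MSR[SF] == 0) effective
     * address computation ends up doing once it is baked into the
     * translated code. */
    static uint64_t effective_address(uint64_t base, uint64_t disp, int sf)
    {
        uint64_t ea = base + disp;
        if (!sf) {
            ea = (uint32_t)ea;  /* 32-bit mode: top 32 bits of the EA ignored */
        }
        return ea;
    }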
Later the vCPU is switched to WIDE mode (MSR[SF] == 1) and invokes the
same code again. No new code is generated because the translation is
already in the TCG cache, but the cached code is still the NARROW-mode
translation, so it fails miserably (addresses incorrectly truncated to
32 bits and/or the wrong MMU strategy).
If my assumptions are correct, I believe the solution is either to flush
the TCG translation cache on every MSR[SF] switch (which could kill
performance if there are many NARROW/WIDE switches), or to keep two TCG
caches, one for narrow code and one for wide code.
This may also affect other architectures that can switch addressing modes
(for example, s390x has three addressing modes that can be switched
directly from problem state, although there it doesn't affect the MMU).
Ideas? Comments?
Thanks,
--Ivan
* Re: PPC64 TCG problem.. MSR[SF] switching.
From: Richard Henderson @ 2021-01-24 3:22 UTC
To: Ivan Warren, qemu-ppc; +Cc: qemu-devel
On 1/23/21 4:03 PM, Ivan Warren wrote:
> Hello people,
>
> I have the following issue: I'm running an OS (not Linux) on
> qemu-system-ppc64 (in my case a POWER8 guest, with x86_64 as the TCG host).
>
> This OS provides a set of NARROW/WIDE (MSR[SF]) agnostic code snippets in
> the first 64K of the address space, so they can be called with the PPC
> 'bla' instruction. This is presumably kernel-provided code, so the kernel
> can supply the best strategy for the current runtime environment depending
> on the CPU model or on whatever the sPAPR hypervisor reports.
>
> One of these routines is first called in NARROW mode, and the TCG-generated
> code reflects that: it seems to emit address folding in the TCG output
> (judging by the out_asm log output) and/or possibly uses the 32-bit softmmu
> helper (though I can't be sure of the latter).
>
> Later the vCPU is switched to WIDE mode (MSR[SF] == 1) and invokes the same
> code again. No new code is generated because the translation is already in
> the TCG cache, but the cached code is still the NARROW-mode translation, so
> it fails miserably (addresses incorrectly truncated to 32 bits and/or the
> wrong MMU strategy).
You are correct; this is a bug in the ppc translator.

The bug is in ppc_tr_init_disas_context:

    ctx->sf_mode = msr_is_64bit(env, env->msr);

which is an incorrect read of env state within the translator.
It looks like ppc is attempting to do this correctly, by computing a value into
env->hflags, which includes MSR[SF].
However, this doesn't quite work out because in cpu_get_tb_cpu_state,

    *flags = env->hflags;

truncates the value from target_ulong to uint32_t.
So the setting of the MSR[SF] bit gets lost.
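Schematically, the problematic combination looks something like this (a
simplified sketch with stand-in types, not the verbatim source):

    #include <stdint.h>

    /* Simplified sketch of the problematic pattern, with made-up
     * minimal types -- not the verbatim QEMU code. */
    typedef uint64_t target_ulong;                    /* 64-bit guest build */
    typedef struct { target_ulong nip, hflags; } CPUPPCState;

    static void cpu_get_tb_cpu_state(CPUPPCState *env, target_ulong *pc,
                                     target_ulong *cs_base, uint32_t *flags)
    {
        *pc = env->nip;
        *cs_base = 0;
        *flags = env->hflags;   /* target_ulong -> uint32_t: the MSR[SF] bit
                                 * lives in the upper half of hflags and is
                                 * silently dropped here */
    }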
> The solutions (if my assumptions are correct) I believe is either to flush the
> TCG output cache upon MSR[SF] switching (but that could kill performances if
> there is a lot of NARROW/WIDE switches... or have 2 TCG caches (one for narrow
> code and one for wide code).
The values stored by cpu_get_tb_cpu_state are saved, and we will only reuse an
entry in the TCG output cache when all of the values are the same. So fixing
the truncation issue will fix this bug.
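In other words, reuse is keyed on all of the stored values, conceptually
something like this (a rough sketch, not the real lookup code):

    #include <stdint.h>

    /* Conceptual sketch only; the real lookup checks more than this. */
    typedef uint64_t target_ulong;
    typedef struct { target_ulong pc, cs_base; uint32_t flags; } TranslationBlock;

    /* A cached TB is only reused when pc, cs_base and flags all match. */
    static int tb_matches(const TranslationBlock *tb, target_ulong pc,
                          target_ulong cs_base, uint32_t flags)
    {
        return tb->pc == pc && tb->cs_base == cs_base && tb->flags == flags;
    }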
The easiest way to fix this is to (ab)use tb->cs_base to store env->hflags,
because they are both target_ulong values.
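Roughly along these lines (a sketch of the direction only, with stand-in
types and with MSR_SF assumed to be the SF bit number, 63; not an actual
patch):

    #include <stdint.h>

    typedef uint64_t target_ulong;
    typedef struct { target_ulong nip, hflags; } CPUPPCState;
    #define MSR_SF 63    /* assumed bit position of MSR[SF] */

    static void cpu_get_tb_cpu_state(CPUPPCState *env, target_ulong *pc,
                                     target_ulong *cs_base, uint32_t *flags)
    {
        *pc = env->nip;
        *cs_base = env->hflags;           /* full target_ulong preserved */
        *flags = (uint32_t)env->hflags;   /* low bits still available here */
    }

    /* ppc_tr_init_disas_context would then derive the mode from the TB
     * rather than from live env state, roughly:
     *     ctx->sf_mode = (ctx->base.tb->cs_base >> MSR_SF) & 1;
     */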
I will follow up with a partial patch for this general class of bug, which
should fix your specific case.
r~