From: Alexander Graf <agraf@suse.de>
To: malc <av1474@comtv.ru>
Cc: Blue Swirl <blauwirbel@gmail.com>,
	qemu-ppc@nongnu.org, qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 20/22] ppc: move load and store helpers, switch to AREG0 free mode
Date: Wed, 02 May 2012 15:00:21 +0200
Message-ID: <4FA12FE5.8070305@suse.de>
In-Reply-To: <alpine.LNX.2.00.1204301933220.2648@linmac>

On 04/30/2012 05:34 PM, malc wrote:
> On Mon, 30 Apr 2012, Alexander Graf wrote:
>
>> On 30.04.2012, at 12:45, Alexander Graf wrote:
>>
>>> On 22.04.2012, at 15:26, Blue Swirl wrote:
>>>
>>>> Add an explicit CPUPPCState parameter instead of relying on AREG0
>>>> and rename op_helper.c (which only contains load and store helpers)
>>>> to mem_helper.c. Remove AREG0 swapping in
>>>> tlb_fill().
>>>>
>>>> Switch to AREG0 free mode. Use cpu_ld{l,uw}_code in translation
>>>> and interrupt handling, cpu_{ld,st}{l,uw}_data in loads and stores.
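
[Editor's aside: a minimal sketch of the shape of change the commit message describes. The helper name below is made up for illustration; cpu_ldl_data is one of the accessors the message itself mentions.]

    /* Before: the helper relies on the global `env`, which TCG keeps
     * pinned in a reserved host register (AREG0). */
    uint32_t helper_example_lwz(target_ulong addr)
    {
        return cpu_ldl_data(env, addr);   /* `env` is the global CPU state */
    }

    /* After (AREG0-free): CPUPPCState is an explicit first parameter, so
     * no host register has to be reserved and TCG passes the CPU state
     * like any other call argument. */
    uint32_t helper_example_lwz(CPUPPCState *env, target_ulong addr)
    {
        return cpu_ldl_data(env, addr);
    }
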
>>> This patch breaks qemu-system-ppc64 on a ppc32 host (user space) for me. I'm trying to track it down, but worst case I'll omit this patch set from 1.1.
>> Ok, so apparently nobody ever tested TCG_AREG0 mode with the ppc tcg
>> target. It looks as if the
>> 64-bit-guest-registers-in-32-bit-host-registers code path is missing
>> completely.
>>
>> This actually makes me less confident that this is a change we want for
>> 1.1. I'll remove the patches from the queue.
>>
>>
>> Alex
>>
>>
>> TCG register swizzling code:
>>
>> #ifdef CONFIG_TCG_PASS_AREG0
>>      /* XXX/FIXME: suboptimal */
>>      tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3],
>>                  tcg_target_call_iarg_regs[2]);
>>      tcg_out_mov(s, TCG_TYPE_I64, tcg_target_call_iarg_regs[2],
>>                  tcg_target_call_iarg_regs[1]);
>>      tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
>>                  tcg_target_call_iarg_regs[0]);
>>      tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0],
>>                  TCG_AREG0);
>> #endif
>>      tcg_out_call (s, (tcg_target_long) qemu_st_helpers[opc], 1);
>>
> The above snippet is incorrect for the SysV ppc32 ABI, due to
> misalignment of the long long argument in the register file.
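
[Editor's aside: as I understand the SysV PPC32 calling convention, a long long argument must occupy an aligned GPR pair starting at r3, r5, r7 or r9, skipping a register if the next free one is even-numbered. A hypothetical 64-bit store helper, with names invented for illustration, would therefore need its arguments laid out as sketched below once env is prepended.]

    #include <stdint.h>

    /* Illustrative prototype only, not the real QEMU declaration. */
    struct CPUPPCState;
    void helper_st_example(struct CPUPPCState *env, uint64_t addr,
                           uint64_t val, int mmu_idx);

    /* SysV PPC32 register assignment for the call above:
     *
     *   env      -> r3
     *   (r4 skipped so the 64-bit addr starts an aligned pair)
     *   addr     -> r5:r6
     *   val      -> r7:r8
     *   mmu_idx  -> r9
     *
     * The generic CONFIG_TCG_PASS_AREG0 snippet quoted above only rotates
     * each argument up by one register, so the 64-bit values end up
     * straddling unaligned pairs -- the misalignment described here. */
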

Hmm - so what would be the correct version? :)


Alex
