From: Richard Henderson <rth@twiddle.net>
To: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 1/4] target-tricore: Add instructions of RR1 opcode format that have 0x93 as first opcode
Date: Wed, 21 Jan 2015 10:14:20 -0800	[thread overview]
Message-ID: <54BFEC7C.1000400@twiddle.net> (raw)
In-Reply-To: <1421863489-7716-2-git-send-email-kbastian@mail.uni-paderborn.de>

On 01/21/2015 10:04 AM, Bastian Koppelmann wrote:
> Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
> ---
>  target-tricore/translate.c | 276 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 276 insertions(+)
> 
> diff --git a/target-tricore/translate.c b/target-tricore/translate.c
> index def7f4a..da8ecbc 100644
> --- a/target-tricore/translate.c
> +++ b/target-tricore/translate.c
> @@ -4778,6 +4778,279 @@ static void decode_rr1_mul(CPUTriCoreState *env, DisasContext *ctx)
>      tcg_temp_free(n);
>  }
>  
> +static void decode_rr1_mulq(CPUTriCoreState *env, DisasContext *ctx)
> +{
> +    uint32_t op2;
> +    int r1, r2, r3;
> +    uint32_t n;
> +
> +    TCGv temp, temp2;
> +
> +    r1 = MASK_OP_RR1_S1(ctx->opcode);
> +    r2 = MASK_OP_RR1_S2(ctx->opcode);
> +    r3 = MASK_OP_RR1_D(ctx->opcode);
> +    n  = MASK_OP_RR1_N(ctx->opcode);
> +    op2 = MASK_OP_RR1_OP2(ctx->opcode);
> +
> +    temp = tcg_temp_new();
> +    temp2 = tcg_temp_new();
> +
> +    switch (op2) {
> +    case OPC2_32_RR1_MUL_Q_32:
> +        if (n == 0) {
> +            tcg_gen_muls2_tl(temp, cpu_gpr_d[r3], cpu_gpr_d[r1], cpu_gpr_d[r2]);
> +            /* reset v bit */
> +            tcg_gen_movi_tl(cpu_PSW_V, 0);
> +        } else {
> +            tcg_gen_muls2_tl(temp, temp2, cpu_gpr_d[r1], cpu_gpr_d[r2]);
> +            tcg_gen_shli_tl(temp2, temp2, n);
> +            tcg_gen_shri_tl(temp, temp, 31);

Yes, n is supposed to be either 0 or 1.
But mixing n with a constant is confusing.

Either hard-code 1 here (perhaps preferred?),
or write 32-n.
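
E.g. either (untested)

    tcg_gen_shli_tl(temp2, temp2, 1);
    tcg_gen_shri_tl(temp, temp, 31);

or

    tcg_gen_shli_tl(temp2, temp2, n);
    tcg_gen_shri_tl(temp, temp, 32 - n);

so that the two shift amounts obviously add up.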

> +    case OPC2_32_RR1_MUL_Q_64:
> +        if (n == 0) {
> +            tcg_gen_muls2_tl(cpu_gpr_d[r3], cpu_gpr_d[r3+1], cpu_gpr_d[r1],
> +                             cpu_gpr_d[r2]);
> +            /* reset v bit */
> +            tcg_gen_movi_tl(cpu_PSW_V, 0);
> +        } else {
> +            tcg_gen_muls2_tl(temp, temp2, cpu_gpr_d[r1], cpu_gpr_d[r2]);
> +            tcg_gen_shli_tl(temp2, temp2, n);
> +            tcg_gen_shli_tl(cpu_gpr_d[r3], temp, n);
> +            tcg_gen_shri_tl(temp, temp, 31);
> +            tcg_gen_or_tl(cpu_gpr_d[r3+1], temp, temp2);

I do wonder about just using 64-bit arithmetic here, instead of
emulating a 64-bit shift.
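
Something along these lines, perhaps (untested sketch; TCGv is i32
for tricore):

    TCGv_i64 t1 = tcg_temp_new_i64();
    TCGv_i64 t2 = tcg_temp_new_i64();

    tcg_gen_ext_i32_i64(t1, cpu_gpr_d[r1]);
    tcg_gen_ext_i32_i64(t2, cpu_gpr_d[r2]);
    tcg_gen_mul_i64(t1, t1, t2);
    tcg_gen_shli_i64(t1, t1, n);
    /* low word to d[r3], high word to d[r3+1] */
    tcg_gen_extr_i64_i32(cpu_gpr_d[r3], cpu_gpr_d[r3+1], t1);

    tcg_temp_free_i64(t1);
    tcg_temp_free_i64(t2);

which also handles the n == 0 case for free.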

> +            /* overflow only occurs if r1 = r2 = 0x8000 */
> +            tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_PSW_V, cpu_gpr_d[r3+1],
> +                                0x80000000);
> +            tcg_gen_shli_tl(cpu_PSW_V, cpu_PSW_V, 31);
> +        }
> +        /* calc sv overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SV, cpu_PSW_SV, cpu_PSW_V);
> +        /* calc av overflow bit */
> +        tcg_gen_add_tl(cpu_PSW_AV, cpu_gpr_d[r3+1], cpu_gpr_d[r3+1]);
> +        tcg_gen_xor_tl(cpu_PSW_AV, cpu_gpr_d[r3+1], cpu_PSW_AV);
> +        /* calc sav overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SAV, cpu_PSW_SAV, cpu_PSW_AV);
> +        break;
> +    case OPC2_32_RR1_MUL_Q_32_L:
> +        tcg_gen_ext16s_tl(temp, cpu_gpr_d[r2]);
> +        if (n == 0) {
> +            tcg_gen_muls2_tl(temp, temp2, temp, cpu_gpr_d[r1]);
> +            tcg_gen_shli_tl(cpu_gpr_d[r3], temp2, 16);
> +            tcg_gen_shri_tl(temp, temp, 16);
> +            tcg_gen_or_tl(cpu_gpr_d[r3], cpu_gpr_d[r3], temp);

Similarly.
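
I.e. the same 64-bit approach works here too, something like
(untested, with t1/t2 being i64 temps as above):

    tcg_gen_ext_i32_i64(t1, cpu_gpr_d[r1]);
    tcg_gen_ext_i32_i64(t2, temp);            /* sext16(d[r2]) */
    tcg_gen_mul_i64(t1, t1, t2);
    tcg_gen_shli_i64(t1, t1, n);
    tcg_gen_shri_i64(t1, t1, 16);
    tcg_gen_trunc_i64_i32(cpu_gpr_d[r3], t1);

for either value of n.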

> +            /* reset v bit */
> +            tcg_gen_movi_tl(cpu_PSW_V, 0);
> +        } else {
> +            tcg_gen_muls2_tl(temp, temp2, temp, cpu_gpr_d[r1]);
> +            tcg_gen_shli_tl(cpu_gpr_d[r3], temp2, 17);
> +            tcg_gen_shri_tl(temp, temp, 15);
> +            tcg_gen_or_tl(cpu_gpr_d[r3], cpu_gpr_d[r3], temp);
> +            /* overflow only occurs if r1 = r2 = 0x8000 */
> +            tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_PSW_V, cpu_gpr_d[r3],
> +                                0x80000000);
> +            tcg_gen_shli_tl(cpu_PSW_V, cpu_PSW_V, 31);
> +        }
> +        /* calc sv overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SV, cpu_PSW_SV, cpu_PSW_V);
> +        /* calc av overflow bit */
> +        tcg_gen_add_tl(cpu_PSW_AV, cpu_gpr_d[r3], cpu_gpr_d[r3]);
> +        tcg_gen_xor_tl(cpu_PSW_AV, cpu_gpr_d[r3], cpu_PSW_AV);
> +        /* calc sav overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SAV, cpu_PSW_SAV, cpu_PSW_AV);
> +        break;
> +    case OPC2_32_RR1_MUL_Q_64_L:
> +        tcg_gen_ext16s_tl(temp, cpu_gpr_d[r2]);
> +        if (n == 0) {
> +            tcg_gen_muls2_tl(cpu_gpr_d[r3], cpu_gpr_d[r3+1], cpu_gpr_d[r1],
> +                             temp);
> +            /* reset v bit */
> +            tcg_gen_movi_tl(cpu_PSW_V, 0);
> +        } else {
> +            tcg_gen_muls2_tl(temp, temp2, cpu_gpr_d[r1], temp);
> +            tcg_gen_shli_tl(temp2, temp2, n);
> +            tcg_gen_shli_tl(cpu_gpr_d[r3], temp, n);
> +            tcg_gen_shri_tl(temp, temp, 31);
> +            tcg_gen_or_tl(cpu_gpr_d[r3+1], temp, temp2);
> +            /* overflow only occurs if r1 = r2 = 0x8000 */
> +            tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_PSW_V, cpu_gpr_d[r3+1],
> +                                0x80000000);
> +            tcg_gen_shli_tl(cpu_PSW_V, cpu_PSW_V, 31);
> +        }
> +        /* calc sv overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SV, cpu_PSW_SV, cpu_PSW_V);
> +        /* calc av overflow bit */
> +        tcg_gen_add_tl(cpu_PSW_AV, cpu_gpr_d[r3+1], cpu_gpr_d[r3+1]);
> +        tcg_gen_xor_tl(cpu_PSW_AV, cpu_gpr_d[r3+1], cpu_PSW_AV);
> +        /* calc sav overflow bit */
> +        tcg_gen_or_tl(cpu_PSW_SAV, cpu_PSW_SAV, cpu_PSW_AV);
> +        break;
> +    case OPC2_32_RR1_MUL_Q_32_U:
> +        tcg_gen_shri_tl(temp, cpu_gpr_d[r2], 16);
> +        tcg_gen_ext16s_tl(temp, temp);

Use an arithmetic shift and you don't need the sign-extend.
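
I.e. just

    tcg_gen_sari_tl(temp, cpu_gpr_d[r2], 16);

instead of the shri + ext16s pair.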

There's an awful lot of replication in here.  I think a few
different subroutines are warranted.
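
E.g. the SV/AV/SAV update that is repeated after every case could be
pulled out into something like (name invented, untested):

    static inline void gen_calc_ssov_bits(TCGv res)
    {
        /* calc sv overflow bit */
        tcg_gen_or_tl(cpu_PSW_SV, cpu_PSW_SV, cpu_PSW_V);
        /* calc av overflow bit */
        tcg_gen_add_tl(cpu_PSW_AV, res, res);
        tcg_gen_xor_tl(cpu_PSW_AV, res, cpu_PSW_AV);
        /* calc sav overflow bit */
        tcg_gen_or_tl(cpu_PSW_SAV, cpu_PSW_SAV, cpu_PSW_AV);
    }

and likewise for the mul + shift + V-bit computation itself.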


r~
