qemu-devel.nongnu.org archive mirror
From: Wei Li <lw945lw945@yahoo.com>
To: "pbonzini@redhat.com" <pbonzini@redhat.com>,
	 "eduardo@habkost.net" <eduardo@habkost.net>,
	 Richard Henderson <richard.henderson@linaro.org>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [PATCH 2/2] fix lock cmpxchg instruction
Date: Mon, 21 Mar 2022 08:50:32 +0000 (UTC)	[thread overview]
Message-ID: <37170609.693719.1647852632111@mail.yahoo.com> (raw)
In-Reply-To: <413ffde7-9e24-8047-7d77-f14769808d73@linaro.org>


>This is better addressed with a movcond:
OK, a movcond is better than a branch. :) I will update this in patch v2.
Wei Li 

    On Monday, March 21, 2022, 03:21:27 AM GMT+8, Richard Henderson <richard.henderson@linaro.org> wrote:  
 
 On 3/19/22 09:06, Wei Li wrote:
> For lock cmpxchg, the situation is more complex. After the instruction
> is completed by tcg_gen_atomic_cmpxchg_tl, it needs a branch to judge
> whether oldv == cmpv. The instruction only touches the accumulator when
> oldv != cmpv.
> 
> Signed-off-by: Wei Li <lw945lw945@yahoo.com>
> ---
>  target/i386/tcg/translate.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/target/i386/tcg/translate.c b/target/i386/tcg/translate.c
> index 05be8d08e6..4fd9c03cb7 100644
> --- a/target/i386/tcg/translate.c
> +++ b/target/i386/tcg/translate.c
> @@ -5360,7 +5360,12 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>                  gen_lea_modrm(env, s, modrm);
>                  tcg_gen_atomic_cmpxchg_tl(oldv, s->A0, cmpv, newv,
>                                            s->mem_index, ot | MO_LE);
> +                label1 = gen_new_label();
> +                gen_extu(ot, oldv);
> +                gen_extu(ot, cmpv);
> +                tcg_gen_brcond_tl(TCG_COND_EQ, oldv, cmpv, label1);
>                  gen_op_mov_reg_v(s, ot, R_EAX, oldv);
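For reference, the semantics the quoted commit message argues for can be sketched as a minimal model in plain C. `cmpxchg_model` is a hypothetical helper, not QEMU code, and it assumes the accumulator is written only when the comparison fails; the real instruction performs this read-modify-write atomically:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model (not QEMU code) of the cmpxchg semantics described
 * above: the accumulator is written only on a failed comparison. */
static bool cmpxchg_model(uint64_t *mem, uint64_t *rax, uint64_t newv)
{
    uint64_t oldv = *mem;
    if (oldv == *rax) {
        *mem = newv;    /* success: memory takes newv, accumulator untouched */
        return true;    /* ZF = 1 */
    }
    *rax = oldv;        /* failure: accumulator receives the old value */
    return false;       /* ZF = 0 */
}
```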

This is better addressed with a movcond:

    TCGv temp = tcg_temp_new();
    tcg_gen_mov_tl(temp, cpu_regs[R_EAX]);
    /* Perform the merge into %al or %ax as required by ot. */
    gen_op_mov_reg_v(s, ot, R_EAX, oldv);
    /* Undo the entire modification to %rax if comparison equal. */
    tcg_gen_movcond_tl(TCG_COND_EQ, cpu_regs[R_EAX], oldv, cmpv,
                        cpu_regs[R_EAX], temp);
    tcg_temp_free(temp);
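The branch-free idea above can be modeled in plain C. `merge32` and `cmpxchg_rax_movcond` are hypothetical stand-ins for gen_op_mov_reg_v and the TCG ops, under the assumption of a 32-bit operand size where a write zero-extends into the full 64-bit register; the ternary plays the role of tcg_gen_movcond_tl:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for gen_op_mov_reg_v with a 32-bit operand:
 * a 32-bit write replaces the whole register, zero-extended. */
static uint64_t merge32(uint64_t rax, uint64_t oldv)
{
    (void)rax;
    return (uint32_t)oldv;
}

/* Sketch of the movcond approach: merge unconditionally, then
 * conditionally restore the saved accumulator. */
static uint64_t cmpxchg_rax_movcond(uint64_t rax, uint64_t oldv, uint64_t cmpv)
{
    uint64_t temp = rax;          /* save the original accumulator */
    rax = merge32(rax, oldv);     /* perform the merge unconditionally */
    /* Undo the entire modification if the comparison was equal. */
    return oldv == cmpv ? temp : rax;
}
```

In the equal case this leaves the accumulator bit-for-bit unchanged, including the upper 32 bits that an unconditional 32-bit write would otherwise have zeroed.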


r~
  



Thread overview: 6+ messages
     [not found] <20220319160658.336882-1-lw945lw945.ref@yahoo.com>
2022-03-19 16:06 ` [PATCH 0/2] cmpxchg and lock cmpxchg should not touch accumulator Wei Li
2022-03-19 16:06   ` [PATCH 1/2] fix cmpxchg instruction Wei Li
2022-03-20 19:07     ` Richard Henderson
2022-03-19 16:06   ` [PATCH 2/2] fix lock " Wei Li
2022-03-20 19:21     ` Richard Henderson
2022-03-21  8:50       ` Wei Li [this message]
