From: Peter Maydell
To: Aurelien Jarno
Cc: qemu-devel@nongnu.org
Date: Wed, 5 Jan 2011 11:15:15 +0000
Subject: Re: [Qemu-devel] [PATCH] target-arm: fix SMMLA/SMMLS instructions
In-Reply-To: <1293906328-8984-1-git-send-email-aurelien@aurel32.net>

On 1 January 2011 18:25, Aurelien Jarno wrote:
> SMMLA and SMMLS are broken in both ARM and Thumb mode; that is, the two
> (different) implementations are both wrong. They try to avoid a 64-bit
> add for the rounding, which is not trivial if you want to support both
> SMMLA and SMMLS with the same code.
>
> The code below uses the same implementation for both modes, based on
> the code from the ARM manual. It also fixes the Thumb decoding, which
> was a mix of ARM and Thumb mode.
>
> This fixes the issues reported in
> https://bugs.launchpad.net/qemu/+bug/629298

I've tested this patch with my random-sequence-generator for
SMMLA/SMMLS/SMMUL, in both ARM and Thumb mode, and it does fix the bug.
I have a few minor nitpicks about some comments, though.

> -/* Round the top 32 bits of a 64-bit value.  */
> -static void gen_roundqd(TCGv a, TCGv b)
> +/* Add a to the msw of b. Mark inputs as dead */
> +static TCGv_i64 gen_addq_msw(TCGv_i64 a, TCGv b)
>  {
> -    tcg_gen_shri_i32(a, a, 31);
> -    tcg_gen_add_i32(a, a, b);
> +    TCGv_i64 tmp64 = tcg_temp_new_i64();
> +
> +    tcg_gen_extu_i32_i64(tmp64, b);
> +    dead_tmp(b);
> +    tcg_gen_shli_i64(tmp64, tmp64, 32);
> +    tcg_gen_add_i64(a, tmp64, a);
> +
> +    tcg_temp_free_i64(tmp64);
> +    return a;
> +}

Isn't this adding b to the msw of a, rather than the other way round
as the comment claims?

> +/* Subtract a from the msw of b. Mark inputs as dead. */

Ditto.

> @@ -6953,23 +6958,25 @@ static void disas_arm_insn(CPUState * env, DisasContext *s)
>                  tmp = load_reg(s, rm);
>                  tmp2 = load_reg(s, rs);
>                  if (insn & (1 << 20)) {
> -                    /* Signed multiply most significant [accumulate].  */
> +                    /* Signed multiply most significant [accumulate].
> +                       (SMMUL, SMLA, SMMLS) */

SMMLA, not SMLA.

-- PMM
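
For reference, here is a minimal standalone C sketch of what the ARM ARM
pseudocode for SMMUL/SMMLA/SMMLS (and their R rounding variants) computes.
This is not QEMU code; the function name and parameters are illustrative
only.

    #include <stdint.h>

    /* Reference model of SMMUL/SMMLA/SMMLS from the ARM ARM pseudocode.
     * Pass ra == 0 for SMMUL; "subtract" selects SMMLS; "round" selects
     * the R variants. Unsigned 64-bit arithmetic wraps mod 2^64, so
     * bits [63:32] match the signed computation in the manual.
     */
    static uint32_t smmxx_ref(uint32_t rn, uint32_t rm, uint32_t ra,
                              int subtract, int round)
    {
        uint64_t acc = (uint64_t)ra << 32;                  /* Ra in the msw */
        int64_t prod = (int64_t)(int32_t)rn * (int32_t)rm;  /* signed 32x32->64 */
        uint64_t result = subtract ? acc - (uint64_t)prod : acc + (uint64_t)prod;

        if (round) {
            result += 0x80000000u;   /* rounding is a full 64-bit add */
        }
        return (uint32_t)(result >> 32);                    /* bits [63:32] */
    }

Because the 0x80000000 rounding constant is added to the full 64-bit
intermediate, it cannot easily be folded into 32-bit operations in a way
that works for both the add (SMMLA) and subtract (SMMLS) cases, which is
why the patch switches to the straightforward 64-bit computation.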