Subject: Re: [Qemu-devel] [PATCH] ppc32 guests: fix computation of XER.{CA, OV} in addme, subfme, mullwo
From: "J. Mayer"
To: qemu-devel@nongnu.org
Date: Wed, 18 Jun 2008 23:32:15 +0200
Message-Id: <1213824735.19143.35.camel@localhost>
In-Reply-To: <200806180123.49284.jseward@acm.org>
References: <200805110204.47184.jseward@acm.org> <200806180006.51954.jseward@acm.org> <1213743222.19143.22.camel@localhost> <200806180123.49284.jseward@acm.org>

On Wed, 2008-06-18 at 01:23 +0200, Julian Seward wrote:
> On Wednesday 18 June 2008 00:53, J. Mayer wrote:
>
> > This patch looks really ugly, does not respect the coding style and
> > addme/subme changes seem very suspicious to me...
>
> Thanks for the encouragement.

I guess I won't become a diplomat soon...

Looking at the (relatively) recent changes in this code, I can see that I
did an optimization that seems correct at first sight but... The goal was
to avoid tests in the xer_ov & xer_so computation. The previous version
was OK; I still have test results which validate the cases you describe.
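For reference, here is a minimal, self-contained sketch (not from the original
mail) of how the 32-bit addmeo case can be cross-checked: it compares the
branchless OV expression used in the hunks below against a reference overflow
computed with 64-bit arithmetic. The helper names ref_ov and fast_ov are made
up for illustration; this only sketches the validation idea, not the actual
test results mentioned above.

#include <stdint.h>
#include <stdio.h>

/* Reference: perform the addme sum (RA + CA - 1) in 64-bit signed
 * arithmetic and report overflow when it does not fit in int32_t. */
static int ref_ov(uint32_t t1, uint32_t ca)
{
    int64_t wide = (int64_t)(int32_t)t1 + (int64_t)ca - 1;
    return wide != (int64_t)(int32_t)wide;
}

/* Branchless form from the hunks below: ov = (T1 & (T1 ^ T0)) >> 31. */
static int fast_ov(uint32_t t1, uint32_t ca)
{
    uint32_t t0 = t1 + ca + (uint32_t)-1;
    return (int)((t1 & (t1 ^ t0)) >> 31);
}

int main(void)
{
    /* Probe values around the sign boundary, for both carry-in values. */
    const uint32_t probes[] = { 0, 1, 0x7fffffffu, 0x80000000u,
                                0x80000001u, 0xfffffffeu, 0xffffffffu };
    unsigned i, ca;

    for (ca = 0; ca <= 1; ca++) {
        for (i = 0; i < sizeof(probes) / sizeof(probes[0]); i++) {
            uint32_t t1 = probes[i];
            if (ref_ov(t1, ca) != fast_ov(t1, ca)) {
                printf("mismatch: T1=%08x CA=%u ref=%d fast=%d\n",
                       t1, ca, ref_ov(t1, ca), fast_ov(t1, ca));
            }
        }
    }
    printf("done\n");
    return 0;
}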
@@ -319,15 +149,12 @@ void do_addmeo (void)
 {
     T1 = T0;
     T0 += xer_ca + (-1);
-    if (likely(!((uint32_t)T1 &
-                 ((uint32_t)T1 ^ (uint32_t)T0) & (1UL << 31)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = ((uint32_t)T1 & ((uint32_t)T1 ^ (uint32_t)T0)) >> 31;
+    xer_so |= xer_ov;
     if (likely(T1 != 0))
         xer_ca = 1;
@@ -335,43 +162,40 @@ void do_addmeo_64 (void)
 {
     T1 = T0;
     T0 += xer_ca + (-1);
-    if (likely(!((uint64_t)T1 &
-                 ((uint64_t)T1 ^ (uint64_t)T0) & (1ULL << 63)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = ((uint64_t)T1 & ((uint64_t)T1 ^ (uint64_t)T0)) >> 63;
+    xer_so |= xer_ov;
     if (likely(T1 != 0))
         xer_ca = 1;
@@ -483,15 +308,12 @@ void do_subfmeo (void)
 {
     T1 = T0;
     T0 = ~T0 + xer_ca - 1;
-    if (likely(!((uint32_t)~T1 & ((uint32_t)~T1 ^ (uint32_t)T0) &
-                 (1UL << 31)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = ((uint32_t)~T1 & ((uint32_t)~T1 ^ (uint32_t)T0)) >> 31;
+    xer_so |= xer_ov;
     if (likely((uint32_t)T1 != UINT32_MAX))
         xer_ca = 1;
@@ -499,15 +321,12 @@ void do_subfmeo_64 (void)
 {
     T1 = T0;
     T0 = ~T0 + xer_ca - 1;
-    if (likely(!((uint64_t)~T1 & ((uint64_t)~T1 ^ (uint64_t)T0) &
-                 (1ULL << 63)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = ((uint64_t)~T1 & ((uint64_t)~T1 ^ (uint64_t)T0)) >> 63;
+    xer_so |= xer_ov;
     if (likely((uint64_t)T1 != UINT64_MAX))
         xer_ca = 1;
@@ -515,13 +334,9 @@ void do_subfzeo (void)
 {
     T1 = T0;
     T0 = ~T0 + xer_ca;
-    if (likely(!(((uint32_t)~T1 ^ UINT32_MAX) &
-                 ((uint32_t)(~T1) ^ (uint32_t)T0) & (1UL << 31)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = (((uint32_t)~T1 ^ UINT32_MAX) &
+              ((uint32_t)(~T1) ^ (uint32_t)T0)) >> 31;
+    xer_so |= xer_ov;
     if (likely((uint32_t)T0 >= (uint32_t)~T1)) {
         xer_ca = 0;
     } else {
@@ -534,13 +349,9 @@ void do_subfzeo_64 (void)
 {
     T1 = T0;
     T0 = ~T0 + xer_ca;
-    if (likely(!(((uint64_t)~T1 ^ UINT64_MAX) &
-                 ((uint64_t)(~T1) ^ (uint64_t)T0) & (1ULL << 63)))) {
-        xer_ov = 0;
-    } else {
-        xer_ov = 1;
-        xer_so = 1;
-    }
+    xer_ov = (((uint64_t)~T1 ^ UINT64_MAX) &
+              ((uint64_t)(~T1) ^ (uint64_t)T0)) >> 63;
+    xer_so |= xer_ov;
     if (likely((uint64_t)T0 >= (uint64_t)~T1)) {
         xer_ca = 0;
     } else {

Do you feel like this patch is buggy?

> > Could you please exactly describe the test conditions (host, ...) ?
>
> Core 2 Duo, openSUSE 10.2, 64-bit mode, gcc-3.4.6, qemu svn trunk from
> 2 days ago.

A 64-bit-host-only bug, then, for mullwo? Well, I've been running amd64 for
years, but....

-- 
J. Mayer
Never organized
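On the mullwo side of the subject line, for anyone following along, here is a
minimal sketch (not from the original thread) of the usual way to derive
XER.OV for that instruction, assuming standard Power ISA semantics: RT
receives the low 32 bits of the signed product and OV is set when the product
does not fit in 32 bits. The helper name mullwo_ov is made up and is not the
code under discussion.

#include <stdint.h>

/* Do the 32x32 signed multiply in 64 bits; the low half is the result,
 * and OV is flagged when the full product cannot be represented in
 * 32 bits (i.e. sign-extending the low half changes the value). */
static int mullwo_ov(int32_t ra, int32_t rb, int32_t *rt)
{
    int64_t prod = (int64_t)ra * (int64_t)rb;
    *rt = (int32_t)prod;
    return prod != (int64_t)(int32_t)prod;
}

Computing the product in 64 bits keeps the overflow test branch-free, in the
same spirit as the xer_ov rewrites quoted above.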