From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1LBuUg-000486-IW
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 12:11:02 -0500
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1LBuUf-00045z-RG
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 12:11:02 -0500
Received: from [199.232.76.173] (port=40304 helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43)
	id 1LBuUf-00045d-O8
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 12:11:01 -0500
Received: from mtaout03-winn.ispmail.ntl.com ([81.103.221.49]:39988)
	by monty-python.gnu.org with esmtp (Exim 4.60)
	(envelope-from ) id 1LBuUf-0005Pf-Ag
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 12:11:01 -0500
Received: from aamtaout02-winn.ispmail.ntl.com ([81.103.221.35])
	by mtaout03-winn.ispmail.ntl.com
	(InterMail vM.7.08.04.00 201-2186-134-20080326) with ESMTP
	id <20081214171100.ZIYL1691.mtaout03-winn.ispmail.ntl.com@aamtaout02-winn.ispmail.ntl.com>
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 17:11:00 +0000
Received: from miranda.arrow ([213.107.23.205])
	by aamtaout02-winn.ispmail.ntl.com
	(InterMail vG.2.02.00.01 201-2161-120-102-20060912) with ESMTP
	id <20081214171100.QSNG21638.aamtaout02-winn.ispmail.ntl.com@miranda.arrow>
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 17:11:00 +0000
Received: from sdb by miranda.arrow with local (Exim 4.63)
	(envelope-from ) id 1LBuWN-0003dz-1G
	for qemu-devel@nongnu.org; Sun, 14 Dec 2008 17:12:47 +0000
Date: Sun, 14 Dec 2008 17:12:46 +0000
From: Stuart Brady
Message-ID: <20081214171246.GA13983@miranda.arrow>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Subject: [Qemu-devel] [PATCH] Compile-time checking for shift operations
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

This patch implements compile-time checking for the immediate versions
of TCG's shift operations, ensuring that the number of places shifted
is within the correct range (i.e. greater than or equal to zero, and
less than the TCGv's width), provided that the shift count is a
constant expression.
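To illustrate the mechanism outside of TCG, here is a minimal
standalone sketch of the same technique (check_shift(), shl32() and
the file name are hypothetical names for illustration; they are not
part of the patch):

/* shift_check_demo.c -- compile-time range checking of constant shift
 * counts, using GCC's error attribute (GCC >= 4.3) together with
 * __builtin_constant_p().  Build with optimization enabled, e.g.:
 *     gcc -O2 -c shift_check_demo.c
 */
#include <stdint.h>

/* Deliberately never defined: if a call to this function survives
 * dead-code elimination, GCC fails the build with the given message. */
extern void __attribute__((error("Invalid shift"))) shift_error(void);

static inline void check_shift(int shift, int limit)
{
    /* Only constant shift counts can be checked at compile time;
     * run-time values pass through unchecked. */
    if (__builtin_constant_p(shift)) {
        if (shift < 0 || shift >= limit) {
            shift_error();   /* dead code unless the shift is invalid */
        }
    }
}

static inline uint32_t shl32(uint32_t x, int n)
{
    check_shift(n, 32);
    return x << n;
}

int main(void)
{
    uint32_t ok = shl32(1u, 5);     /* constant 5 is in [0, 32): compiles */
    /* uint32_t bad = shl32(1u, 32);   -- fails with "Invalid shift" */
    return (int)ok;
}

With optimization enabled, the call to shift_error() is eliminated
whenever the constant shift count is valid, so the error attribute
never fires; an out-of-range constant leaves the call in place and the
build fails.  Compilers without the error attribute skip the check
entirely, which is why the patch guards it with QEMU_GNUC_PREREQ(4, 3).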
Signed-off-by: Stuart Brady

Index: tcg/tcg-op.h
===================================================================
--- tcg/tcg-op.h	(revision 6028)
+++ tcg/tcg-op.h	(working copy)
@@ -25,6 +25,22 @@
 
 int gen_new_label(void);
 
+#if QEMU_GNUC_PREREQ(4, 3)
+extern void __attribute__((error("Invalid shift")))
+shift_error(void);
+#endif
+
+static inline void tcg_check_shift(int shift, int limit)
+{
+#if QEMU_GNUC_PREREQ(4, 3)
+    if (__builtin_constant_p(shift)) {
+        if (shift < 0 || shift >= limit) {
+            shift_error();
+        }
+    }
+#endif
+}
+
 static inline void tcg_gen_op1_i32(int opc, TCGv_i32 arg1)
 {
     *gen_opc_ptr++ = opc;
@@ -496,6 +512,8 @@
 
 static inline void tcg_gen_shli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
 {
+    tcg_check_shift(arg2, 32);
+
     if (arg2 == 0) {
         tcg_gen_mov_i32(ret, arg1);
     } else {
@@ -512,6 +530,8 @@
 
 static inline void tcg_gen_shri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
 {
+    tcg_check_shift(arg2, 32);
+
     if (arg2 == 0) {
         tcg_gen_mov_i32(ret, arg1);
     } else {
@@ -528,6 +548,8 @@
 
 static inline void tcg_gen_sari_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
 {
+    tcg_check_shift(arg2, 32);
+
     if (arg2 == 0) {
         tcg_gen_mov_i32(ret, arg1);
     } else {
@@ -782,6 +804,8 @@
 
 static inline void tcg_gen_shli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
 {
+    tcg_check_shift(arg2, 64);
+
     tcg_gen_shifti_i64(ret, arg1, arg2, 0, 0);
 }
 
@@ -792,6 +816,8 @@
 
 static inline void tcg_gen_shri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
 {
+    tcg_check_shift(arg2, 64);
+
     tcg_gen_shifti_i64(ret, arg1, arg2, 1, 0);
 }
 
@@ -802,6 +828,8 @@
 
 static inline void tcg_gen_sari_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
 {
+    tcg_check_shift(arg2, 64);
+
     tcg_gen_shifti_i64(ret, arg1, arg2, 1, 1);
 }
 
@@ -1601,6 +1629,8 @@
 
 static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
 {
+    tcg_check_shift(arg2, 32);
+
     /* some cases can be optimized here */
     if (arg2 == 0) {
         tcg_gen_mov_i32(ret, arg1);
@@ -1618,6 +1648,8 @@
 
 static inline void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
 {
+    tcg_check_shift(arg2, 64);
+
     /* some cases can be optimized here */
     if (arg2 == 0) {
         tcg_gen_mov_i64(ret, arg1);
@@ -1663,6 +1695,8 @@
 
 static inline void tcg_gen_rotri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2)
 {
+    tcg_check_shift(arg2, 32);
+
     /* some cases can be optimized here */
     if (arg2 == 0) {
         tcg_gen_mov_i32(ret, arg1);
@@ -1673,6 +1707,8 @@
 
 static inline void tcg_gen_rotri_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2)
 {
+    tcg_check_shift(arg2, 64);
+
     /* some cases can be optimized here */
     if (arg2 == 0) {
         tcg_gen_mov_i64(ret, arg1);
-- 
Stuart Brady