From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sun, 30 Aug 2015 11:38:09 +0300
From: "Michael S. Tsirkin"
To: linux-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", x86@kernel.org, Rusty Russell
Subject: [PATCH 1/2] x86/bitops: implement __test_bit
Message-ID: <1440776707-22016-1-git-send-email-mst@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

One little-known side effect of test_bit is that it implies a kind of
compiler barrier, since its pointer parameter is volatile.

It seems risky to change the semantics of test_bit, so let's instead add
__test_bit (matching __set_bit and __clear_bit), which does not imply
such a barrier.

It will be used by kvm on x86, where it shaves several bytes off the
binary size. A small win, but it comes at no cost, so why not.

Signed-off-by: Michael S. Tsirkin
---

x86 maintainers: please specify whether you are OK with adding this to
arch/x86/include/asm/bitops.h. An alternative is to add it to kvm/x86
only. It might also be worth adding this to all architectures, though I
haven't explored that much.
 arch/x86/include/asm/bitops.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/x86/include/asm/bitops.h b/arch/x86/include/asm/bitops.h
index cfe3b95..9229334 100644
--- a/arch/x86/include/asm/bitops.h
+++ b/arch/x86/include/asm/bitops.h
@@ -323,6 +323,24 @@ static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
 	return oldbit;
 }
 
+static __always_inline int __constant_test_bit(long nr, const unsigned long *addr)
+{
+	return ((1UL << (nr & (BITS_PER_LONG-1))) &
+		(addr[nr >> _BITOPS_LONG_SHIFT])) != 0;
+}
+
+static inline int __variable_test_bit(long nr, const unsigned long *addr)
+{
+	int oldbit;
+
+	asm volatile("bt %2,%1\n\t"
+		     "sbb %0,%0"
+		     : "=r" (oldbit)
+		     : "m" (*addr), "Ir" (nr));
+
+	return oldbit;
+}
+
 #if 0 /* Fool kernel-doc since it doesn't do macros yet */
 /**
  * test_bit - Determine whether a bit is set
@@ -330,6 +348,13 @@ static inline int variable_test_bit(long nr, volatile const unsigned long *addr)
  * @addr: Address to start counting from
  */
 static int test_bit(int nr, const volatile unsigned long *addr);
+
+/**
+ * __test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ */
+static int __test_bit(int nr, const volatile unsigned long *addr);
 #endif
 
 #define test_bit(nr, addr)			\
@@ -337,6 +362,11 @@ static int test_bit(int nr, const volatile unsigned long *addr);
 	 ? constant_test_bit((nr), (addr))	\
 	 : variable_test_bit((nr), (addr)))
 
+#define __test_bit(nr, addr)			\
+	(__builtin_constant_p((nr))		\
+	 ? __constant_test_bit((nr), (addr))	\
+	 : __variable_test_bit((nr), (addr)))
+
 /**
  * __ffs - find first set bit in word
  * @word: The word to search
-- 
MST