From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:38692) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UJ1c6-0003YA-Ma for qemu-devel@nongnu.org; Fri, 22 Mar 2013 09:02:33 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1UJ1c3-0002U3-TZ for qemu-devel@nongnu.org; Fri, 22 Mar 2013 09:02:30 -0400
Received: from [2a02:248:0:30:223:aeff:fefe:7f1c] (port=45984 helo=dns.kamp-intra.net) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UJ1N4-0005ZU-G1 for qemu-devel@nongnu.org; Fri, 22 Mar 2013 08:46:58 -0400
From: Peter Lieven
Date: Fri, 22 Mar 2013 13:46:05 +0100
Message-Id: <1363956370-23681-5-git-send-email-pl@kamp.de>
In-Reply-To: <1363956370-23681-1-git-send-email-pl@kamp.de>
References: <1363956370-23681-1-git-send-email-pl@kamp.de>
Subject: [Qemu-devel] [PATCHv4 4/9] bitops: use vector algorithm to optimize find_next_bit()
To: qemu-devel@nongnu.org
Cc: quintela@redhat.com, Stefan Hajnoczi, Peter Lieven, Orit Wasserman, Paolo Bonzini

This patch uses buffer_find_nonzero_offset() to skip large areas of
zeroes. Compared to the loop unrolling presented in an earlier patch,
this adds another 50% performance benefit when skipping large areas of
zeroes; loop unrolling alone already added close to a 100% speedup.

Signed-off-by: Peter Lieven
Reviewed-by: Eric Blake
---
 util/bitops.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/util/bitops.c b/util/bitops.c
index e72237a..9bb61ff 100644
--- a/util/bitops.c
+++ b/util/bitops.c
@@ -42,10 +42,28 @@ unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
         size -= BITS_PER_LONG;
         result += BITS_PER_LONG;
     }
-    while (size & ~(BITS_PER_LONG-1)) {
-        if ((tmp = *(p++))) {
-            goto found_middle;
+    while (size >= BITS_PER_LONG) {
+        tmp = *p;
+        if (tmp) {
+            goto found_middle;
+        }
+        if (can_use_buffer_find_nonzero_offset(p, size / BITS_PER_BYTE)) {
+            size_t tmp2 =
+                buffer_find_nonzero_offset(p, size / BITS_PER_BYTE);
+            result += tmp2 * BITS_PER_BYTE;
+            size -= tmp2 * BITS_PER_BYTE;
+            p += tmp2 / sizeof(unsigned long);
+            if (!size) {
+                return result;
+            }
+            if (tmp2) {
+                tmp = *p;
+                if (tmp) {
+                    goto found_middle;
+                }
+            }
         }
+        p++;
         result += BITS_PER_LONG;
         size -= BITS_PER_LONG;
     }
-- 
1.7.9.5
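
For context on how the byte offset returned by the helper maps back
onto bit and word positions, the following is a minimal, self-contained
C sketch of the idea behind buffer_find_nonzero_offset(). It is not the
QEMU implementation: the real helper scans with vector loads and has
alignment/length preconditions that can_use_buffer_find_nonzero_offset()
checks for the caller; the chunk size and the scalar inner loop below
are assumptions made purely for illustration.

/*
 * Illustrative sketch only, not the QEMU helper: scan a buffer in
 * coarse chunks and return the byte offset of the first chunk that
 * contains a nonzero byte.  The caller converts that byte offset into
 * bits and words, as the hunk above does.
 */
#include <stddef.h>
#include <stdio.h>

#define BITS_PER_BYTE 8
#define SKETCH_CHUNK  64   /* assumed scan granularity for this sketch */

/* Byte offset (a multiple of SKETCH_CHUNK) of the first nonzero chunk. */
static size_t find_nonzero_offset_sketch(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    size_t i, j;

    for (i = 0; i + SKETCH_CHUNK <= len; i += SKETCH_CHUNK) {
        for (j = 0; j < SKETCH_CHUNK; j++) {
            if (p[i + j]) {
                return i;   /* chunk-granular, like the patch expects */
            }
        }
    }
    return i;               /* everything scanned so far was zero */
}

int main(void)
{
    static unsigned long bits[64];   /* all-zero bitmap */
    bits[17] = 1UL << 5;             /* one bit set somewhere in the middle */

    size_t off = find_nonzero_offset_sketch(bits, sizeof(bits));

    /* Same conversions the patch performs after the helper returns. */
    printf("skipped %zu bits, resuming at word %zu\n",
           off * (size_t)BITS_PER_BYTE, off / sizeof(unsigned long));
    return 0;
}

Because the returned offset is coarse-grained, the word at the new
position may already be the nonzero one; that is why the hunk above
rechecks *p when tmp2 is nonzero before falling back to the
word-by-word loop.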