From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:56817)
    by lists.gnu.org with esmtp (Exim 4.71)
    (envelope-from ) id 1VswY6-0002kS-70
    for qemu-devel@nongnu.org; Tue, 17 Dec 2013 10:27:12 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
    (envelope-from ) id 1VswY0-000165-5f
    for qemu-devel@nongnu.org; Tue, 17 Dec 2013 10:27:06 -0500
Received: from mx1.redhat.com ([209.132.183.28]:9490)
    by eggs.gnu.org with esmtp (Exim 4.71)
    (envelope-from ) id 1VswXz-00015L-Tg
    for qemu-devel@nongnu.org; Tue, 17 Dec 2013 10:27:00 -0500
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
    (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
    by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id rBHFQwGZ028707
    (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
    for ; Tue, 17 Dec 2013 10:26:59 -0500
From: Juan Quintela <quintela@redhat.com>
Date: Tue, 17 Dec 2013 16:26:11 +0100
Message-Id: <1387293974-24718-36-git-send-email-quintela@redhat.com>
In-Reply-To: <1387293974-24718-1-git-send-email-quintela@redhat.com>
References: <1387293974-24718-1-git-send-email-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH 35/38] memory: synchronize kvm bitmap using bitmap operations
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org

If the bitmaps are properly aligned, use bitmap operations.  If they
are not, fall back to the old bit-at-a-time code.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 include/exec/ram_addr.h | 54 ++++++++++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 18 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index c6736ed..33c8acc 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -83,29 +83,47 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
                                                            ram_addr_t start,
                                                            ram_addr_t pages)
 {
-    unsigned int i, j;
+    unsigned long i, j;
     unsigned long page_number, c;
     hwaddr addr;
     ram_addr_t ram_addr;
-    unsigned int len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
+    unsigned long len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
     unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
+    unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
 
-    /*
-     * bitmap-traveling is faster than memory-traveling (for addr...)
-     * especially when most of the memory is not dirty.
-     */
-    for (i = 0; i < len; i++) {
-        if (bitmap[i] != 0) {
-            c = leul_to_cpu(bitmap[i]);
-            do {
-                j = ffsl(c) - 1;
-                c &= ~(1ul << j);
-                page_number = (i * HOST_LONG_BITS + j) * hpratio;
-                addr = page_number * TARGET_PAGE_SIZE;
-                ram_addr = start + addr;
-                cpu_physical_memory_set_dirty_range(ram_addr,
-                                                    TARGET_PAGE_SIZE * hpratio);
-            } while (c != 0);
+    /* start address is aligned at the start of a word? */
+    if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
+        long k;
+        long nr = BITS_TO_LONGS(pages);
+
+        for (k = 0; k < nr; k++) {
+            if (bitmap[k]) {
+                unsigned long temp = leul_to_cpu(bitmap[k]);
+
+                ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION][page + k] |= temp;
+                ram_list.dirty_memory[DIRTY_MEMORY_VGA][page + k] |= temp;
+                ram_list.dirty_memory[DIRTY_MEMORY_CODE][page + k] |= temp;
+            }
+        }
+        xen_modified_memory(start, pages);
+    } else {
+        /*
+         * bitmap-traveling is faster than memory-traveling (for addr...)
+         * especially when most of the memory is not dirty.
+         */
+        for (i = 0; i < len; i++) {
+            if (bitmap[i] != 0) {
+                c = leul_to_cpu(bitmap[i]);
+                do {
+                    j = ffsl(c) - 1;
+                    c &= ~(1ul << j);
+                    page_number = (i * HOST_LONG_BITS + j) * hpratio;
+                    addr = page_number * TARGET_PAGE_SIZE;
+                    ram_addr = start + addr;
+                    cpu_physical_memory_set_dirty_range(ram_addr,
+                                                        TARGET_PAGE_SIZE * hpratio);
+                } while (c != 0);
+            }
         }
     }
 }
-- 
1.8.3.1
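
To make the fast/slow split in the commit message concrete, below is a
minimal, self-contained C sketch of the same idea, not the QEMU code
itself: when the start page is word aligned, whole words of the source
bitmap are OR-ed into the destination; otherwise each set bit is visited
one page at a time, as in the old loop.  The names merge_dirty_bitmap,
set_dirty_bit and BITS_PER_WORD are made up for this example, and it
leaves out the leul_to_cpu() byte swap, the hpratio scaling and the
per-client dirty_memory bitmaps that the real function updates.

/*
 * Sketch only: illustrates word-wise OR vs. bit-at-a-time merging of a
 * dirty bitmap.  Not QEMU code; all identifiers are invented.
 */
#include <limits.h>
#include <stdio.h>

#define BITS_PER_WORD    (sizeof(unsigned long) * CHAR_BIT)
#define BITS_TO_WORDS(n) (((n) + BITS_PER_WORD - 1) / BITS_PER_WORD)

/* Stand-in for cpu_physical_memory_set_dirty_range(): mark one page. */
static void set_dirty_bit(unsigned long *dest, unsigned long page)
{
    dest[page / BITS_PER_WORD] |= 1ul << (page % BITS_PER_WORD);
}

/* Merge 'pages' dirty bits from 'src' into 'dest', starting at page
 * number 'start' of the destination bitmap. */
static void merge_dirty_bitmap(unsigned long *dest, const unsigned long *src,
                               unsigned long start, unsigned long pages)
{
    unsigned long nr = BITS_TO_WORDS(pages);

    if (start % BITS_PER_WORD == 0) {
        /* Fast path: 'start' is word aligned, so OR whole words. */
        unsigned long word = start / BITS_PER_WORD;

        for (unsigned long k = 0; k < nr; k++) {
            dest[word + k] |= src[k];
        }
    } else {
        /* Slow path: visit every set bit, one page at a time. */
        for (unsigned long i = 0; i < nr; i++) {
            unsigned long c = src[i];

            while (c != 0) {
                int j = __builtin_ctzl(c);      /* like ffsl(c) - 1 */
                c &= ~(1ul << j);
                set_dirty_bit(dest, start + i * BITS_PER_WORD + j);
            }
        }
    }
}

int main(void)
{
    unsigned long dest[4] = { 0 };
    unsigned long src[1] = { 0x5 };           /* pages 0 and 2 dirty */

    merge_dirty_bitmap(dest, src, 64, 8);     /* aligned: word-wise OR */
    merge_dirty_bitmap(dest, src, 3, 8);      /* unaligned: bit walk */
    printf("dest[0]=%#lx dest[1]=%#lx\n", dest[0], dest[1]);
    return 0;
}

With aligned input the cost is one OR per long of the bitmap, while the
unaligned fallback pays one update per dirty page, which is why the
patch keeps the old loop only as the else branch.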