From mboxrd@z Thu Jan  1 00:00:00 1970
From: Glauber Costa
Date: Tue, 28 Jul 2009 16:02:55 -0400
Message-Id: <1248811375-6504-1-git-send-email-glommer@redhat.com>
Subject: [Qemu-devel] [PATCH] use logging count for individual regions
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: Jan Kiszka, aliguori@us.ibm.com

qemu-kvm uses this scheme of a logging count for individual regions,
which is, IMHO, more flexible than the one we have right now. I'm
proposing we use it.

Thanks!
Signed-off-by: Glauber Costa
CC: Jan Kiszka
---
 kvm-all.c |   21 ++++++++++++---------
 1 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index f669c3a..1f1da4c 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -46,6 +46,7 @@ typedef struct KVMSlot
     ram_addr_t phys_offset;
     int slot;
     int flags;
+    uint32_t logging_count;
 } KVMSlot;
 
 typedef struct kvm_dirty_log KVMDirtyLog;
@@ -59,7 +60,6 @@ struct KVMState
     int vmfd;
     int coalesced_mmio;
     int broken_set_mem_region;
-    int migration_log;
 #ifdef KVM_CAP_SET_GUEST_DEBUG
     struct kvm_sw_breakpoint_head kvm_sw_breakpoints;
 #endif
@@ -139,9 +139,7 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
     mem.memory_size = slot->memory_size;
     mem.userspace_addr = (unsigned long)qemu_get_ram_ptr(slot->phys_offset);
     mem.flags = slot->flags;
-    if (s->migration_log) {
-        mem.flags |= KVM_MEM_LOG_DIRTY_PAGES;
-    }
+
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }
 
@@ -243,15 +241,22 @@ static int kvm_dirty_pages_log_change(target_phys_addr_t phys_addr,
         return -EINVAL;
     }
 
+    if (flags & KVM_MEM_LOG_DIRTY_PAGES) {
+        if (mem->logging_count++) {
+            return 0;
+        }
+    } else {
+        if (--mem->logging_count) {
+            return 0;
+        }
+    }
+
     old_flags = mem->flags;
 
     flags = (mem->flags & ~mask) | flags;
     mem->flags = flags;
 
     /* If nothing changed effectively, no need to issue ioctl */
-    if (s->migration_log) {
-        flags |= KVM_MEM_LOG_DIRTY_PAGES;
-    }
 
     if (flags == old_flags) {
         return 0;
     }
@@ -279,8 +284,6 @@ int kvm_set_migration_log(int enable)
     KVMSlot *mem;
     int i, err;
 
-    s->migration_log = enable;
-
     for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
         mem = &s->slots[i];
-- 
1.6.2.2