Message-ID: <55250714.1050400@redhat.com>
Date: Wed, 08 Apr 2015 12:46:44 +0200
From: Paolo Bonzini
In-Reply-To: <5524CC0E.8020208@linux.intel.com>
Subject: Re: [Qemu-devel] [PATCH] kvm: fix slot flags sync between Qemu and KVM
To: Xiao Guangrong
Cc: "Li, Wanpeng", Marcelo Tosatti, "qemu-devel@nongnu.org", kvm@vger.kernel.org

On 08/04/2015 08:34, Xiao Guangrong wrote:
> We noticed that KVM keeps dirty tracking enabled for the memslots when
> live migration fails, which causes bad performance because huge page
> mappings are disallowed for this kind of memslot.
>
> It is caused by the slot flags not being properly synced between QEMU
> and KVM. The current slot-update code depends on slot->flags in the
> hope of omitting unnecessary ioctls. However, slot->flags only reflects
> the status of the corresponding memory region, while vmsave and live
> migration do dirty tracking which additionally sets
> KVM_MEM_LOG_DIRTY_PAGES for the slot. That causes the slot status
> recorded in the flags to not exactly match the status in the kernel.
>
> We fix it by introducing slot->is_dirty_logging, which indicates the
> dirty-logging status in the kernel and helps us sync the status between
> userspace and kernel.
>
> Wanpeng Li
> Signed-off-by: Xiao Guangrong

Hi Xiao,

the patch looks good.  However, I am planning to remove s->migration_log
completely from QEMU 2.4 and have slot->flags also track the migration
state.  This has the side effect of fixing this bug.

I'll Cc you on the patches when I post them (next week probably).

Thanks!

Paolo

> ---
>  kvm-all.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/kvm-all.c b/kvm-all.c
> index dd44f8c..69fa233 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -60,6 +60,15 @@
>
>  #define KVM_MSI_HASHTAB_SIZE    256
>
> +/*
> + * @flags only reflects the status of the corresponding memory region;
> + * however, vmsave and live migration do dirty tracking which additionally
> + * sets KVM_MEM_LOG_DIRTY_PAGES for the slot. That causes the slot status
> + * recorded in @flags to not exactly match the status in the kernel.
> + *
> + * @is_dirty_logging, which indicates the dirty-logging status in the kernel,
> + * helps us sync the status between userspace and kernel.
> + */
>  typedef struct KVMSlot
>  {
>      hwaddr start_addr;
> @@ -67,6 +76,7 @@ typedef struct KVMSlot
>      void *ram;
>      int slot;
>      int flags;
> +    bool is_dirty_logging;
>  } KVMSlot;
>
>  typedef struct kvm_dirty_log KVMDirtyLog;
> @@ -245,6 +255,7 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
>          kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
>      }
>      mem.memory_size = slot->memory_size;
> +    slot->is_dirty_logging = !!(mem.flags & KVM_MEM_LOG_DIRTY_PAGES);
>      return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
>  }
>
> @@ -312,6 +323,7 @@ static int kvm_slot_dirty_pages_log_change(KVMSlot *mem, bool log_dirty)
>      int old_flags;
>
>      old_flags = mem->flags;
> +    old_flags |= mem->is_dirty_logging ? KVM_MEM_LOG_DIRTY_PAGES : 0;
>
>      flags = (mem->flags & ~mask) | kvm_mem_flags(s, log_dirty, false);
>      mem->flags = flags;
> @@ -376,12 +388,17 @@ static int kvm_set_migration_log(bool enable)
>      s->migration_log = enable;
>
>      for (i = 0; i < s->nr_slots; i++) {
> +        int dirty_enable;
> +
>          mem = &s->slots[i];
>
>          if (!mem->memory_size) {
>              continue;
>          }
> -        if (!!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES) == enable) {
> +
> +        /* Keep the dirty bit if it is tracked by the memory region. */
> +        dirty_enable = enable | (mem->flags & KVM_MEM_LOG_DIRTY_PAGES);
> +        if (mem->is_dirty_logging == dirty_enable) {
>              continue;
>          }
>          err = kvm_set_user_memory_region(s, mem);
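
For readers not following kvm-all.c closely: the divergence described in the
commit message originates in kvm_set_user_memory_region(), where the global
migration-log state is OR-ed into the flags handed to the kernel without ever
being written back into slot->flags. A rough sketch of the pre-patch function
(abridged and reproduced from memory, not quoted verbatim from the tree) looks
like this:

static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
{
    struct kvm_userspace_memory_region mem;

    mem.slot = slot->slot;
    mem.guest_phys_addr = slot->start_addr;
    mem.userspace_addr = (unsigned long)slot->ram;
    /* slot->flags carries only the memory region's own settings ... */
    mem.flags = slot->flags;
    if (s->migration_log) {
        /* ... while migration/vmsave force dirty logging on here, without
         * updating slot->flags. This is the state the kernel actually sees,
         * and what slot->is_dirty_logging records in the patch above. */
        mem.flags |= KVM_MEM_LOG_DIRTY_PAGES;
    }

    if (slot->memory_size && mem.flags & KVM_MEM_READONLY) {
        /* A read-only slot cannot be changed in place, so clear it first
         * and then re-register it with the final size. */
        mem.memory_size = 0;
        kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
    }
    mem.memory_size = slot->memory_size;
    return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
}

Paolo's planned QEMU 2.4 cleanup would presumably move that migration-driven
bit into slot->flags itself, so that slot->flags always matches what was
passed to the kernel and neither s->migration_log nor the extra
is_dirty_logging field is needed.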