From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mario Smarduch
Subject: Re: [PATCH 3/3] migration dirtybitmap support ARMv7
Date: Tue, 15 Apr 2014 18:24:40 -0700
Message-ID: <534DDBD8.30502@samsung.com>
References: <534C8A4C.5040008@samsung.com> <534CF6B2.6020606@arm.com>
To: Marc Zyngier, eric.auger@linaro.org
Cc: "kvmarm@lists.cs.columbia.edu", "christoffer.dall@linaro.org", 이정석, 정성진, "kvm@vger.kernel.org"
In-reply-to: <534CF6B2.6020606@arm.com>

Hi Eric,

Marc - which repository should I use to pick up Eric's patches? As for kvm_vm_ioctl_get_dirty_log(), I'm not sure what there is to make generic; it appears generic enough already and does what it needs to do.
Thanks,
Mario

On 04/15/2014 02:06 AM, Marc Zyngier wrote:
> On 15/04/14 02:24, Mario Smarduch wrote:
>>
>> - support QEMU interface for initial VM write protect
>> - QEMU dirty bitmap log retrieval
>>
>>
>> Signed-off-by: Mario Smarduch
>> ---
>>  arch/arm/kvm/arm.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 61 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index bd18bb8..9076e3d 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -241,6 +241,8 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
>>  				   const struct kvm_memory_slot *old,
>>  				   enum kvm_mr_change change)
>>  {
>> +	if ((change != KVM_MR_DELETE) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
>> +		kvm_mmu_slot_remove_write_access(kvm, mem->slot);
>>  }
>
> There is a patch by Eric Auger doing the same thing. Please use it as a
> dependency.
>
>>  void kvm_arch_flush_shadow_all(struct kvm *kvm)
>> @@ -773,9 +775,67 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>  	}
>>  }
>>
>> +/*
>> + * Walks the memslot dirty bitmap, write protects dirty pages for next round,
>> + * and stores the dirty bitmap for QEMU retrieval.
>> + *
>> + */
>>  int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
>>  {
>> -	return -EINVAL;
>> +	int r;
>> +	struct kvm_memory_slot *memslot;
>> +	unsigned long n, i;
>> +	unsigned long *dirty_bitmap;
>> +	unsigned long *dirty_bitmap_buffer;
>> +	bool is_dirty = false;
>> +	gfn_t offset;
>> +
>> +	mutex_lock(&kvm->slots_lock);
>> +	r = -EINVAL;
>> +
>> +	if (log->slot >= KVM_USER_MEM_SLOTS)
>> +		goto out;
>> +
>> +	memslot = id_to_memslot(kvm->memslots, log->slot);
>> +	dirty_bitmap = memslot->dirty_bitmap;
>> +
>> +	r = -ENOENT;
>> +	if (!dirty_bitmap)
>> +		goto out;
>> +
>> +	n = kvm_dirty_bitmap_bytes(memslot);
>> +	dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
>> +	memset(dirty_bitmap_buffer, 0, n);
>> +
>> +	spin_lock(&kvm->mmu_lock);
>> +	for (i = 0; i < n / sizeof(long); i++) {
>> +		unsigned long mask;
>> +
>> +		if (!dirty_bitmap[i])
>> +			continue;
>> +
>> +		is_dirty = true;
>> +		offset = i * BITS_PER_LONG;
>> +		kvm_mmu_write_protect_pt_masked(kvm, memslot, offset,
>> +				dirty_bitmap[i]);
>> +		mask = dirty_bitmap[i];
>> +		dirty_bitmap_buffer[i] = mask;
>> +		dirty_bitmap[i] = 0;
>> +	}
>> +
>> +	if (is_dirty)
>> +		kvm_tlb_flush_vmid(kvm);
>> +
>> +	spin_unlock(&kvm->mmu_lock);
>> +	r = -EFAULT;
>> +
>> +	if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
>> +		goto out;
>> +
>> +	r = 0;
>> +out:
>> +	mutex_unlock(&kvm->slots_lock);
>> +	return r;
>>  }
>
> This is a direct copy of the x86 code. Please make it generic.
>
>>  static int kvm_vm_ioctl_set_device_addr(struct kvm *kvm,
>>
>
> Thanks,
>
> M.