From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mario Smarduch
Subject: Re: [PATCH 2/3] migration dirtybitmap support ARMv7
Date: Tue, 15 Apr 2014 18:18:14 -0700
Message-ID: <534DDA56.1020803@samsung.com>
References: <534C8A47.2050401@samsung.com> <534CF4CE.2060600@arm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: "kvmarm@lists.cs.columbia.edu", "christoffer.dall@linaro.org",
	=?UTF-8?B?7J207KCV7ISd?=, =?UTF-8?B?7KCV7ISx7KeE?=, "kvm@vger.kernel.org"
To: Marc Zyngier
Return-path:
Received: from mailout3.w2.samsung.com ([211.189.100.13]:51669 "EHLO
	usmailout3.samsung.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750913AbaDPBSR (ORCPT );
	Tue, 15 Apr 2014 21:18:17 -0400
Received: from uscpsbgex4.samsung.com (u125.gpu85.samsung.co.kr [203.254.195.125])
	by usmailout3.samsung.com (Oracle Communications Messaging Server
	7u4-24.01(7.0.4.24.0) 64bit (built Nov 17 2011)) with ESMTP id
	<0N430018ENMGYL30@usmailout3.samsung.com> for kvm@vger.kernel.org;
	Tue, 15 Apr 2014 21:18:16 -0400 (EDT)
In-reply-to: <534CF4CE.2060600@arm.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 04/15/2014 01:58 AM, Marc Zyngier wrote:
>
> Why do you nuke the whole TLBs for this VM? I assume you're going to
> repeatedly call this for all the huge pages, aren't you? Can you delay
> this flush to do it only once?
>
>> +	get_page(virt_to_page(pte));
>> +	return true;
>> +}
>> +
>> +/*
>> + * Called from QEMU when migration dirty logging is started. Write the protect
>> + * current set. Future faults writes are tracked through WP of when dirty log
>> + * log.
>
> Same as above.
>
>> + */
>> +
>> +void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
>> +{
>> +	pgd_t *pgd;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +	pte_t *pte, new_pte;
>> +	pgd_t *pgdp = kvm->arch.pgd;
>> +	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
>> +	u64 start = memslot->base_gfn << PAGE_SHIFT;
>> +	u64 end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
>> +	u64 addr = start, addr1;
>> +
>> +	spin_lock(&kvm->mmu_lock);
>> +	kvm->arch.migration_in_progress = true;
>> +	while (addr < end) {
>> +		if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
>> +			kvm_tlb_flush_vmid(kvm);
>
> Looks like you're extremely flush happy. If you're holding the lock, why
> do you need all the extra flushes in the previous function?

Reduced it to a single flush, issued once the write-protect loop terminates.

>> +
>> +		if (kvm_pmd_huge(*pmd)) {
>> +			if (!split_pmd(kvm, pmd, addr)) {
>> +				kvm->arch.migration_in_progress = false;
>> +				return;
>
> Bang, you're dead.

Yes, I've added the unlock, and also added a return code in the
get-dirty-log function so the migration can be aborted.

>
>
>> 		pte_t new_pte = pfn_pte(pfn, PAGE_S2);
>> 		if (writable) {
>> +			if (migration_active && hugetlb) {
>> +				/* get back pfn from fault_ipa */
>> +				pfn += (fault_ipa >> PAGE_SHIFT) &
>> +					((1 << (PMD_SHIFT - PAGE_SHIFT))-1);
>> +				new_pte = pfn_pte(pfn, PAGE_S2);
>
> Please explain this.

The next patch series will update this; there was another problem with
handling pmd huge pages and directing them to the pte handling path.
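
For the first two points, here is roughly the shape I have in mind, as a
sketch only and not the actual v2 patch: one TLB flush after the whole
slot has been write-protected, and an unlock plus error return instead of
bailing out with mmu_lock held. stage2_get_pmd() is assumed here as the
stage-2 walker, the read-only pte rebuild just mirrors the PAGE_S2 idiom
in the hunk quoted above, and the int return / -ENOMEM plumbing is
illustrative of how the get-dirty-log path could see the failure:

/*
 * Sketch only: write protect an entire memslot under mmu_lock, flush
 * the stage-2 TLB once after the loop, and unwind (unlock + error code)
 * if a huge page cannot be split.  split_pmd() and kvm_tlb_flush_vmid()
 * are the helpers quoted above; stage2_get_pmd() is an assumed walker.
 */
int kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
{
	struct kvm_memory_slot *memslot = id_to_memslot(kvm->memslots, slot);
	phys_addr_t addr = memslot->base_gfn << PAGE_SHIFT;
	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
	int ret = 0;

	spin_lock(&kvm->mmu_lock);
	kvm->arch.migration_in_progress = true;

	for (; addr < end; addr += PAGE_SIZE) {
		pmd_t *pmd = stage2_get_pmd(kvm, NULL, addr);
		pte_t *pte;

		if (!pmd || pmd_none(*pmd))
			continue;

		if (kvm_pmd_huge(*pmd) && !split_pmd(kvm, pmd, addr)) {
			/* Don't return with mmu_lock held; report the
			 * failure so the get-dirty-log path can abort
			 * the migration. */
			kvm->arch.migration_in_progress = false;
			ret = -ENOMEM;
			break;
		}

		pte = pte_offset_kernel(pmd, addr);
		if (!pte_none(*pte))
			/* Rebuild as read-only, same idiom as the fault path. */
			kvm_set_pte(pte, pfn_pte(pte_pfn(*pte), PAGE_S2));
	}

	/* One flush for the whole slot instead of one per huge page. */
	kvm_tlb_flush_vmid(kvm);
	spin_unlock(&kvm->mmu_lock);

	return ret;
}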