From: Avi Kivity
Subject: Re: [patch 1/3] KVM: MMU: collapse remote TLB flushes on root sync
Date: Thu, 30 Oct 2008 12:04:33 +0200
Message-ID: <490986B1.2050509@redhat.com>
In-Reply-To: <20081029232658.GA27937@dmt.cnet>
References: <20081025223111.498934405@localhost.localdomain>
 <20081025223243.782692567@localhost.localdomain>
 <490451BA.8040800@redhat.com>
 <20081029232658.GA27937@dmt.cnet>
To: Marcelo Tosatti
Cc: kvm@vger.kernel.org, Chris Wright

Marcelo Tosatti wrote:
> There is significant overhead now in comparison to the early indexing
> scheme with a list per root. It must be optimized.

What's the typical number of (1) unsynced pages and (2) unsynced pages
belonging to the next cr3 when switching cr3? I'm guessing (1) and (2)
are almost equal, and both fairly large?

(One nice thing is that pages which are no longer used as pagetables
will not be resynced, so our forky workload performance should be good.)

> A problem with your suggestion is how to clean the unsync bitmap bit
> in the upper pagetables.

True.

> The advantage however is that the bitmaps and spte entries can be
> cached in L1, while currently the cache is blown at every page resync.
>
> What I'm testing now is:
>
> #define KVM_PAGE_ARRAY_NR 16
>
> struct kvm_mmu_pages {
> 	struct kvm_mmu_page *pages[KVM_PAGE_ARRAY_NR];
> 	struct kvm_mmu_page *parent_pages[KVM_PAGE_ARRAY_NR];
> 	unsigned int offset[KVM_PAGE_ARRAY_NR];
> 	struct { ... } ...[KVM_PAGE_ARRAY_NR]; // SCNR
> 	unsigned int nr;
> };
>
> static void mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> {
> 	int i;
> 	struct kvm_mmu_pages pages;
>
> 	kvm_mmu_pages_init(&pages);
> 	while (mmu_unsync_walk(sp, &pages)) {

So mmu_unsync_walk() collects the unsynced pages, write-protects them,
flushes the tlb, and returns 0 if none are found?

Perhaps it can simply collect the pages, then do a write-protect pass,
a tlb flush, and a resync pass (rough sketch below). The two passes are
then over L1-cached data, so they're not too expensive.

> 		for_each_sp(pages, i) {
> 			struct kvm_mmu_page *parent = pages.parent_pages[i];
>
> 			kvm_sync_page(vcpu, pages.pages[i]);
> 			__clear_bit(pages.offset[i],
> 				    parent->unsync_child_bitmap);
> 		}
> 		kvm_mmu_pages_init(&pages);
> 		cond_resched_lock(&vcpu->kvm->mmu_lock);
> 	}
> }
>
> But a second pass is still needed for levels 3 and 4 (the second pass
> could be postponed for the next CR3 switch, but I'm not sure it's
> worthwhile).

We could:
- keep all three parents in the array
- for the bitmap, keep a count of how many bits are set
- when we clear a bit, dec the count, and if it reaches zero, clear the
  bit in the parent's parent.
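In code, the count-based clearing might look something like this.
Untested sketch: the unsync_children count and the single
parent/parent_offset back-pointers are invented here for illustration
(a real shadow page can have multiple parents, so this would really
have to walk the parent chain):

/*
 * Sketch: each page keeps a count of the set bits in its
 * unsync_child_bitmap; clearing the last bit propagates upward.
 */
static void clear_unsync_child(struct kvm_mmu_page *parent,
			       unsigned int offset)
{
	while (parent) {
		__clear_bit(offset, parent->unsync_child_bitmap);
		if (--parent->unsync_children)
			break;	/* other unsync children remain */
		/* count hit zero: clear our bit in the parent's parent */
		offset = parent->parent_offset;
		parent = parent->parent;
	}
}

In the common case the count stays nonzero, so this is just a bit
clear and a decrement per sync.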
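And the collect / write-protect / flush / resync flow suggested above
could look roughly like this. Also untested: mmu_unsync_collect() is
an invented variant of mmu_unsync_walk() that only gathers pages
without write-protecting them one by one, it leans on the existing
rmap_write_protect() and kvm_flush_remote_tlbs() helpers, and the
parent bitmap maintenance from your loop is omitted:

static void mmu_sync_children(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
{
	int i;
	struct kvm_mmu_pages pages;

	kvm_mmu_pages_init(&pages);
	while (mmu_unsync_collect(sp, &pages)) {
		/* pass 1: write-protect the whole batch */
		for_each_sp(pages, i)
			rmap_write_protect(vcpu->kvm, pages.pages[i]->gfn);

		/* one remote tlb flush for the batch */
		kvm_flush_remote_tlbs(vcpu->kvm);

		/* pass 2: resync; both passes walk L1-cached data */
		for_each_sp(pages, i)
			kvm_sync_page(vcpu, pages.pages[i]);

		kvm_mmu_pages_init(&pages);
		cond_resched_lock(&vcpu->kvm->mmu_lock);
	}
}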
> Also, the array method poorly handles cases with a large number of
> unsync pages, which are common with 2.4/kscand for example.

Well, depends how large the array is. If it's large enough, the tlb
flush cost is overwhelmed by the cost of the actual resync.

> Hum, Chris suggests a list_head per level instead of the bitmap.

Doesn't work: a page can have multiple parents, so it would need to be
linked to multiple lists.

We can have union { bitmap; array; } and use the array for sparse
bitmaps (or use the bitmap for overflowed arrays).

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.