From: Marcelo Tosatti
Subject: Re: [patch 09/10] KVM: MMU: out of sync shadow core v2
Date: Sat, 20 Sep 2008 21:45:15 -0300
Message-ID: <20080921004515.GC10120@dmt.cnet>
References: <20080918212749.800177179@localhost.localdomain> <20080918213337.148804603@localhost.localdomain> <48D4506C.5070804@redhat.com>
In-Reply-To: <48D4506C.5070804@redhat.com>
To: Avi Kivity
Cc: Avi Kivity, kvm@vger.kernel.org, "David S. Ahern"

On Fri, Sep 19, 2008 at 06:22:52PM -0700, Avi Kivity wrote:

> Instead of private, have an object contain both callback and private
> data, and use container_of(). Reduces the chance of type errors.

OK.

>> +	while (parent->unsync_children) {
>> +		for (i = 0; i < PT64_ENT_PER_PAGE; ++i) {
>> +			u64 ent = sp->spt[i];
>> +
>> +			if (is_shadow_present_pte(ent)) {
>> +				struct kvm_mmu_page *child;
>> +				child = page_header(ent & PT64_BASE_ADDR_MASK);
>
> What does this do?

It walks all children of the given page, with no attempt at efficiency.
It is replaced later by the bitmap version.

>> +static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
>> +{
>> +	if (sp->role.glevels != vcpu->arch.mmu.root_level) {
>> +		kvm_mmu_zap_page(vcpu->kvm, sp);
>> +		return 1;
>> +	}
>
> Suppose we switch to real mode, touch a pte, switch back. Is this handled?

The shadow page will go unsync on the pte touch and will be resynced as
soon as it becomes visible again (after the return to paging). Or, while
still in real mode, it might be zapped by kvm_mmu_get_page->kvm_sync_page.
Am I missing something?
>> @@ -991,8 +1066,18 @@ static struct kvm_mmu_page *kvm_mmu_get_
>>  					     gfn, role.word);
>>  	index = kvm_page_table_hashfn(gfn);
>>  	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
>> -	hlist_for_each_entry(sp, node, bucket, hash_link)
>> -		if (sp->gfn == gfn && sp->role.word == role.word) {
>> +	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
>> +		if (sp->gfn == gfn) {
>> +			if (sp->unsync)
>> +				if (kvm_sync_page(vcpu, sp))
>> +					continue;
>> +
>> +			if (sp->role.word != role.word)
>> +				continue;
>> +
>> +			if (sp->unsync_children)
>> +				vcpu->arch.mmu.need_root_sync = 1;
>
> mmu_reload() maybe?

Hmm, will think about it.

>> static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>> -	return 0;
>> +	return ret;
>> }
>
> Why does the caller care if zap also zapped some other random pages? To
> restart walking the list?

Yes. The next element that hlist_for_each_entry_safe saved could itself
have been zapped.

>> +	/* don't unsync if pagetable is shadowed with multiple roles */
>> +	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
>> +		if (s->gfn != sp->gfn || s->role.metaphysical)
>> +			continue;
>> +		if (s->role.word != sp->role.word)
>> +			return 1;
>> +	}
>
> This will happen for nonpae paging. But why not allow it? Zap all
> unsynced pages on mode switch.
>
> Oh, if a page is both a page directory and page table, yes.

Yes.

> So to allow nonpae oos, check the level instead.

64-bit Windows 2008 also does all sorts of sharing of a pagetable at
multiple levels.