From mboxrd@z Thu Jan 1 00:00:00 1970
From: Izik Eidus
Subject: fast fix for mmu aliasing troubles
Date: Fri, 03 Oct 2008 17:45:46 +0300
Message-ID: <48E6301A.2010308@redhat.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="------------060808000008000205050301"
To: kvm@vger.kernel.org
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:40825 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751754AbYJCOpt
	(ORCPT ); Fri, 3 Oct 2008 10:45:49 -0400
Received: from int-mx2.corp.redhat.com (int-mx2.corp.redhat.com [172.16.27.26])
	by mx2.redhat.com (8.13.8/8.13.8) with ESMTP id m93EjnUY018949
	for ; Fri, 3 Oct 2008 10:45:49 -0400
Received: from ns3.rdu.redhat.com (ns3.rdu.redhat.com [10.11.255.199])
	by int-mx2.corp.redhat.com (8.13.1/8.13.1) with ESMTP id m93EjmrA008386
	for ; Fri, 3 Oct 2008 10:45:49 -0400
Received: from [10.32.4.86] (vpn-4-86.str.redhat.com [10.32.4.86])
	by ns3.rdu.redhat.com (8.13.8/8.13.8) with ESMTP id m93EjlIG008537
	for ; Fri, 3 Oct 2008 10:45:47 -0400
Sender: kvm-owner@vger.kernel.org
List-ID:

This is a multi-part message in MIME format.
--------------060808000008000205050301
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

I have sent a patch that removes the aliasing code and I will resend it,
but until then this patch should be applied, as it fixes a kernel panic.

--------------060808000008000205050301
Content-Type: text/x-patch;
 name="0001-KVM-mmu-fix-aliased-gfns-treated-as-unaliased-ones.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename*0="0001-KVM-mmu-fix-aliased-gfns-treated-as-unaliased-ones.patc";
 filename*1="h"

>From 61a13744e2367572f3e27ab5c0cce6e080e94d67 Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Fri, 3 Oct 2008 17:40:32 +0300
Subject: [PATCH] KVM: mmu: fix aliased gfns treated as unaliased ones.

Some areas of the KVM x86 MMU were calculating a gfn's offset inside a
memslot without unaliasing the gfn first. This patch makes sure the gfn
is unaliased where needed, and adds gfn_to_memslot_unaliased() so that
callers which already hold an unaliased gfn can skip the redundant
unaliasing step.
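To make the failure mode easier to see, here is a minimal standalone
sketch (plain userspace C, not kernel code; the slot layout, alias range
and simplified helpers below are invented example values, only the names
mirror the real KVM functions): the old code looked up the memslot via
gfn_to_memslot(), which unaliases internally, but then computed the
offset into that slot with the still-aliased gfn, so the index can land
far outside the slot's arrays and dereference a bad pointer.

/*
 * Illustrative userspace sketch of the bug fixed below -- NOT kernel code.
 * The alias range, slot layout and the simplified helpers are invented
 * example values; only the names mirror the real KVM functions.
 */
#include <stdio.h>

typedef unsigned long gfn_t;

struct kvm_memory_slot {
	gfn_t base_gfn;
	unsigned long npages;
};

/* one alias: guest frames [base_gfn, base_gfn + npages) map onto target_gfn */
struct kvm_mem_alias {
	gfn_t base_gfn;
	unsigned long npages;
	gfn_t target_gfn;
};

static struct kvm_memory_slot slot = { .base_gfn = 0x100, .npages = 512 };
static struct kvm_mem_alias alias = { .base_gfn = 0xa0, .npages = 16,
				      .target_gfn = 0x120 };

static gfn_t unalias_gfn(gfn_t gfn)
{
	if (gfn >= alias.base_gfn && gfn < alias.base_gfn + alias.npages)
		return alias.target_gfn + (gfn - alias.base_gfn);
	return gfn;
}

static struct kvm_memory_slot *gfn_to_memslot_unaliased(gfn_t gfn)
{
	if (gfn >= slot.base_gfn && gfn < slot.base_gfn + slot.npages)
		return &slot;
	return NULL;
}

static struct kvm_memory_slot *gfn_to_memslot(gfn_t gfn)
{
	/* like the real gfn_to_memslot(): unalias internally, then look up */
	return gfn_to_memslot_unaliased(unalias_gfn(gfn));
}

int main(void)
{
	gfn_t gfn = 0xa4;	/* an aliased guest frame number */

	/* buggy pattern: slot found for the unaliased gfn, but the offset
	 * is computed with the still-aliased gfn -> index outside the slot */
	struct kvm_memory_slot *s = gfn_to_memslot(gfn);
	long bad_idx = (long)(gfn - s->base_gfn);

	/* fixed pattern: unalias once, then use the same unaliased gfn for
	 * both the _unaliased lookup and the offset */
	gfn_t ugfn = unalias_gfn(gfn);
	s = gfn_to_memslot_unaliased(ugfn);
	long good_idx = (long)(ugfn - s->base_gfn);

	printf("buggy index: %ld (valid range is [0, %lu))\n", bad_idx, s->npages);
	printf("fixed index: %ld\n", good_idx);
	return 0;
}

With the example numbers above the buggy index comes out negative while
the fixed one lands inside the slot; the hunks below apply the same
unalias-first pattern to the real MMU code.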
Signed-off-by: Izik Eidus
---
 arch/x86/kvm/mmu.c         |   14 ++++++++++----
 include/asm-x86/kvm_host.h |    2 ++
 virt/kvm/kvm_main.c        |    9 +++++----
 3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 99c239c..a84b2ed 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -384,7 +384,9 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
 {
 	int *write_count;
 
-	write_count = slot_largepage_idx(gfn, gfn_to_memslot(kvm, gfn));
+	gfn = unalias_gfn(kvm, gfn);
+	write_count = slot_largepage_idx(gfn,
+					 gfn_to_memslot_unaliased(kvm, gfn));
 	*write_count += 1;
 }
 
@@ -392,16 +394,20 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
 {
 	int *write_count;
 
-	write_count = slot_largepage_idx(gfn, gfn_to_memslot(kvm, gfn));
+	gfn = unalias_gfn(kvm, gfn);
+	write_count = slot_largepage_idx(gfn,
+					 gfn_to_memslot_unaliased(kvm, gfn));
 	*write_count -= 1;
 	WARN_ON(*write_count < 0);
 }
 
 static int has_wrprotected_page(struct kvm *kvm, gfn_t gfn)
 {
-	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+	struct kvm_memory_slot *slot;
 	int *largepage_idx;
 
+	gfn = unalias_gfn(kvm, gfn);
+	slot = gfn_to_memslot_unaliased(kvm, gfn);
 	if (slot) {
 		largepage_idx = slot_largepage_idx(gfn, slot);
 		return *largepage_idx;
@@ -2859,8 +2865,8 @@ static void audit_write_protection(struct kvm_vcpu *vcpu)
 		if (sp->role.metaphysical)
 			continue;
 
-		slot = gfn_to_memslot(vcpu->kvm, sp->gfn);
 		gfn = unalias_gfn(vcpu->kvm, sp->gfn);
+		slot = gfn_to_memslot_unaliased(vcpu->kvm, sp->gfn);
 		rmapp = &slot->rmap[gfn - slot->base_gfn];
 		if (*rmapp)
 			printk(KERN_ERR "%s: (%s) shadow page has writable"
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 635f50e..037f1b3 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -607,6 +607,8 @@ void kvm_disable_tdp(void);
 int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3);
 int complete_pio(struct kvm_vcpu *vcpu);
 
+struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn);
+
 static inline struct kvm_mmu_page *page_header(hpa_t shadow_page)
 {
 	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6cf0427..b99f196 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -674,7 +674,7 @@ int kvm_is_error_hva(unsigned long addr)
 }
 EXPORT_SYMBOL_GPL(kvm_is_error_hva);
 
-static struct kvm_memory_slot *__gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
+struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn)
 {
 	int i;
 
@@ -687,11 +687,12 @@ static struct kvm_memory_slot *__gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 	}
 	return NULL;
 }
+EXPORT_SYMBOL_GPL(gfn_to_memslot_unaliased);
 
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 {
 	gfn = unalias_gfn(kvm, gfn);
-	return __gfn_to_memslot(kvm, gfn);
+	return gfn_to_memslot_unaliased(kvm, gfn);
 }
 
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
@@ -715,7 +716,7 @@ unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 	struct kvm_memory_slot *slot;
 
 	gfn = unalias_gfn(kvm, gfn);
-	slot = __gfn_to_memslot(kvm, gfn);
+	slot = gfn_to_memslot_unaliased(kvm, gfn);
 	if (!slot)
 		return bad_hva();
 	return (slot->userspace_addr + (gfn - slot->base_gfn) * PAGE_SIZE);
@@ -966,7 +967,7 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
 	struct kvm_memory_slot *memslot;
 
 	gfn = unalias_gfn(kvm, gfn);
-	memslot = __gfn_to_memslot(kvm, gfn);
+	memslot = gfn_to_memslot_unaliased(kvm, gfn);
 	if (memslot && memslot->dirty_bitmap) {
 		unsigned long rel_gfn = gfn - memslot->base_gfn;
 
-- 
1.5.6.3

--------------060808000008000205050301--