* [PATCH 0/3] KVM SoftMMU fixes
@ 2009-02-18 13:08 Joerg Roedel
2009-02-18 13:08 ` [PATCH 1/3] kvm mmu: handle compound pages in kvm_is_mmio_pfn Joerg Roedel
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: Joerg Roedel @ 2009-02-18 13:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel
Hi Avi, Marcelo,
this small patch series fixes two issues and includes one cleanup I ran
into while hacking on the KVM SoftMMU code. Please consider applying.
Joerg
diffstat:
arch/x86/kvm/mmu.c | 10 +++-------
virt/kvm/kvm_main.c | 6 ++++--
2 files changed, 7 insertions(+), 9 deletions(-)
^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] kvm mmu: handle compound pages in kvm_is_mmio_pfn
From: Joerg Roedel @ 2009-02-18 13:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel, Joerg Roedel

The function kvm_is_mmio_pfn is called before put_page is called on a
page by KVM. This is a problem when this function is called on some
struct page which is part of a compound page. It does not test the
reserved flag of the compound page but of the struct page within the
compound page. This is a problem when KVM works with hugepages allocated
at boot time. These pages have the reserved bit set in all tail pages;
only the flag in the compound head is cleared. KVM would not put such a
page, which results in a memory leak.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 virt/kvm/kvm_main.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 266bdaf..0ed662d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -535,8 +535,10 @@ static inline int valid_vcpu(int n)
 
 inline int kvm_is_mmio_pfn(pfn_t pfn)
 {
-	if (pfn_valid(pfn))
-		return PageReserved(pfn_to_page(pfn));
+	if (pfn_valid(pfn)) {
+		struct page *page = compound_head(pfn_to_page(pfn));
+		return PageReserved(page);
+	}
 
 	return true;
 }
-- 
1.5.6.4
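To illustrate the bug this patch fixes, here is a toy user-space model. It is not the kernel's real `struct page` or page-flag API — the structures and flag handling below are simplified stand-ins invented for illustration. For a boot-time hugepage, only the compound head has the reserved bit cleared; testing the tail page's own flag therefore misclassifies the page as MMIO, and KVM never puts it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct page: just a reserved flag and a
 * pointer to the compound head (a non-tail page is its own head). */
struct toy_page {
    bool reserved;            /* models PG_reserved */
    struct toy_page *head;    /* models what compound_head() returns */
};

/* Pre-fix behaviour: test the flag of the page itself, even when it
 * is a tail page of a compound page. */
bool is_mmio_pfn_old(const struct toy_page *pg)
{
    return pg->reserved;
}

/* Post-fix behaviour: resolve the compound head first, then test. */
bool is_mmio_pfn_new(const struct toy_page *pg)
{
    return pg->head->reserved;
}

/* Build a two-page boot-time hugepage: the head has the reserved bit
 * cleared, the tail keeps it set (as boot-time hugetlb pages do). */
void make_boot_hugepage(struct toy_page *head, struct toy_page *tail)
{
    head->reserved = false;
    head->head = head;
    tail->reserved = true;
    tail->head = head;
}
```

With the old check the tail page looks reserved (i.e. MMIO), so it would never be put; the fixed check consults the head and returns false.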
* Re: [PATCH 1/3] kvm mmu: handle compound pages in kvm_is_mmio_pfn
From: Marcelo Tosatti @ 2009-02-18 18:10 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

BTW, some page bits are erroneously transferred to the struct pages
within the compound page. We've got away with that so far because these
bits (such as dirty and accessed) are not used by the limited
hugetlb/hugetlbfs implementation ATM.

Acked-by: Marcelo Tosatti <mtosatti@redhat.com>

On Wed, Feb 18, 2009 at 02:08:58PM +0100, Joerg Roedel wrote:
> The function kvm_is_mmio_pfn is called before put_page is called on a
> page by KVM. This is a problem when this function is called on some
> struct page which is part of a compound page. It does not test the
> reserved flag of the compound page but of the struct page within the
> compound page. This is a problem when KVM works with hugepages allocated
> at boot time. These pages have the reserved bit set in all tail pages;
> only the flag in the compound head is cleared. KVM would not put such a
> page, which results in a memory leak.
> 
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  virt/kvm/kvm_main.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 266bdaf..0ed662d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -535,8 +535,10 @@ static inline int valid_vcpu(int n)
>  
>  inline int kvm_is_mmio_pfn(pfn_t pfn)
>  {
> -	if (pfn_valid(pfn))
> -		return PageReserved(pfn_to_page(pfn));
> +	if (pfn_valid(pfn)) {
> +		struct page *page = compound_head(pfn_to_page(pfn));
> +		return PageReserved(page);
> +	}
> 
>  	return true;
>  }
* [PATCH 2/3] kvm mmu: remove redundant check in mmu_set_spte
From: Joerg Roedel @ 2009-02-18 13:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel, Joerg Roedel

The following code flow is unnecessary:

	if (largepage)
		was_rmapped = is_large_pte(*shadow_pte);
	else
		was_rmapped = 1;

The is_large_pte() function will always evaluate to one here because
the (largepage && !is_large_pte) case is already handled in the first
if-clause. So we can remove this check and always set was_rmapped to
one here.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    8 ++------
 1 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ef060ec..c90b4b2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1791,12 +1791,8 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 			pgprintk("hfn old %lx new %lx\n",
 				 spte_to_pfn(*shadow_pte), pfn);
 			rmap_remove(vcpu->kvm, shadow_pte);
-		} else {
-			if (largepage)
-				was_rmapped = is_large_pte(*shadow_pte);
-			else
-				was_rmapped = 1;
-		}
+		} else
+			was_rmapped = 1;
 	}
 	if (set_spte(vcpu, shadow_pte, pte_access, user_fault, write_fault,
		      dirty, largepage, global, gfn, pfn, speculative, true)) {
-- 
1.5.6.4
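The equivalence the patch argues can be checked with a tiny stand-alone model — plain booleans in place of the real largepage flag and large-PTE bit, not the kernel code itself. Because the earlier if-clause already handled the (largepage && !is_large_pte) combination, that is the only input on which the old and new code could disagree, and it is unreachable here.

```c
#include <assert.h>
#include <stdbool.h>

/* Old branch: conditionally consult the large-PTE bit. */
int was_rmapped_old(bool largepage, bool pte_is_large)
{
    if (largepage)
        return pte_is_large ? 1 : 0;
    return 1;
}

/* New branch after the patch: unconditionally one. */
int was_rmapped_new(void)
{
    return 1;
}

/* The first if-clause in mmu_set_spte() already removed the
 * (largepage && !pte_is_large) combination, so only these inputs
 * can reach the branch being simplified. */
bool reachable(bool largepage, bool pte_is_large)
{
    return !(largepage && !pte_is_large);
}
```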
* Re: [PATCH 2/3] kvm mmu: remove redundant check in mmu_set_spte
From: Marcelo Tosatti @ 2009-02-18 18:38 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

The following code flow is unnecessary:

	if (largepage)
		was_rmapped = is_large_pte(*shadow_pte);
	else
		was_rmapped = 1;

The is_large_pte() function will always evaluate to one here because
the (largepage && !is_large_pte) case is already handled in the first
if-clause. So we can remove this check and always set was_rmapped to
one here.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
---
 arch/x86/kvm/mmu.c |    8 ++------
 1 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ef060ec..c90b4b2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1791,12 +1791,8 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
 			pgprintk("hfn old %lx new %lx\n",
 				 spte_to_pfn(*shadow_pte), pfn);
 			rmap_remove(vcpu->kvm, shadow_pte);
-		} else {
-			if (largepage)
-				was_rmapped = is_large_pte(*shadow_pte);
-			else
-				was_rmapped = 1;
-		}
+		} else
+			was_rmapped = 1;
 	}
 	if (set_spte(vcpu, shadow_pte, pte_access, user_fault, write_fault,
		      dirty, largepage, global, gfn, pfn, speculative, true)) {
* [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Joerg Roedel @ 2009-02-18 13:09 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel, Joerg Roedel

Not using __GFP_ZERO when allocating shadow pages triggers the
assertion in kvm_mmu_alloc_page() when MMU debugging is enabled.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c90b4b2..d93ecec 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
 	if (cache->nobjs >= min)
 		return 0;
 	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
-		page = alloc_page(GFP_KERNEL);
+		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
 		if (!page)
 			return -ENOMEM;
 		set_page_private(page, 0);
-- 
1.5.6.4
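A user-space analogue of the change — `calloc` standing in for alloc_page with __GFP_ZERO, and a byte scan standing in for the is_empty_shadow_page() test behind the ASSERT; the names and page size below are illustrative, not the kernel's: a page that enters the cache pre-zeroed will always pass an emptiness assertion when it is later handed out.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TOY_PAGE_SIZE 4096

/* Analogue of alloc_page(GFP_KERNEL): contents unspecified. */
unsigned char *toy_alloc_page(void)
{
    return malloc(TOY_PAGE_SIZE);
}

/* Analogue of alloc_page(GFP_KERNEL | __GFP_ZERO): zero-filled. */
unsigned char *toy_alloc_page_zeroed(void)
{
    return calloc(1, TOY_PAGE_SIZE);
}

/* Analogue of the is_empty_shadow_page() check behind the ASSERT. */
int toy_is_empty_shadow_page(const unsigned char *p)
{
    for (size_t i = 0; i < TOY_PAGE_SIZE; i++)
        if (p[i] != 0)
            return 0;
    return 1;
}
```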
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Avi Kivity @ 2009-02-18 13:47 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel

Joerg Roedel wrote:
> Not using __GFP_ZERO when allocating shadow pages triggers the
> assertion in kvm_mmu_alloc_page() when MMU debugging is enabled.
>
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/mmu.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index c90b4b2..d93ecec 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
>  	if (cache->nobjs >= min)
>  		return 0;
>  	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> -		page = alloc_page(GFP_KERNEL);
> +		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
>  		if (!page)
>  			return -ENOMEM;
>  		set_page_private(page, 0);

What is the warning?

Adding __GFP_ZERO here will cause us to clear the page twice, which is
wasteful.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Joerg Roedel @ 2009-02-18 13:54 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel

On Wed, Feb 18, 2009 at 01:47:04PM +0000, Avi Kivity wrote:
> Joerg Roedel wrote:
> > Not using __GFP_ZERO when allocating shadow pages triggers the
> > assertion in kvm_mmu_alloc_page() when MMU debugging is enabled.
> >
> > Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> > ---
> >  arch/x86/kvm/mmu.c |    2 +-
> >  1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index c90b4b2..d93ecec 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -301,7 +301,7 @@ static int mmu_topup_memory_cache_page(struct kvm_mmu_memory_cache *cache,
> >  	if (cache->nobjs >= min)
> >  		return 0;
> >  	while (cache->nobjs < ARRAY_SIZE(cache->objects)) {
> > -		page = alloc_page(GFP_KERNEL);
> > +		page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> >  		if (!page)
> >  			return -ENOMEM;
> >  		set_page_private(page, 0);
>
> What is the warning?
>
> Adding __GFP_ZERO here will cause us to clear the page twice, which is
> wasteful.

The assertion which the attached patch removes fails sometimes.
Removing this assertion is the alternative solution to this problem ;-)

From ca45f3a2e45cd7e76ca624bb1098329db8ff83ab Mon Sep 17 00:00:00 2001
From: Joerg Roedel <joerg.roedel@amd.com>
Date: Wed, 18 Feb 2009 14:51:13 +0100
Subject: [PATCH] kvm mmu: remove assertion in kvm_mmu_alloc_page

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d93ecec..b226973 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -802,7 +802,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&sp->oos_link);
-	ASSERT(is_empty_shadow_page(sp->spt));
 	bitmap_zero(sp->slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
 	sp->multimapped = 0;
 	sp->parent_pte = parent_pte;
-- 
1.5.6.4

-- 
           | Advanced Micro Devices GmbH
 Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München
 System    |
 Research  | Geschäftsführer: Jochen Polster, Thomas M. McCoy, Giuliano Meroni
 Center    | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
           | Registergericht München, HRB Nr. 43632
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Avi Kivity @ 2009-02-18 14:03 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel

Joerg Roedel wrote:
> The assertion which the attached patch removes fails sometimes. Removing
> this assertion is the alternative solution to this problem ;-)
>
> From ca45f3a2e45cd7e76ca624bb1098329db8ff83ab Mon Sep 17 00:00:00 2001
> From: Joerg Roedel <joerg.roedel@amd.com>
> Date: Wed, 18 Feb 2009 14:51:13 +0100
> Subject: [PATCH] kvm mmu: remove assertion in kvm_mmu_alloc_page
>
> Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> ---
>  arch/x86/kvm/mmu.c |    1 -
>  1 files changed, 0 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index d93ecec..b226973 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -802,7 +802,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
>  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
>  	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
>  	INIT_LIST_HEAD(&sp->oos_link);
> -	ASSERT(is_empty_shadow_page(sp->spt));
>  	bitmap_zero(sp->slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
>  	sp->multimapped = 0;
>  	sp->parent_pte = parent_pte;

sp->spt is allocated using mmu_memory_cache_alloc(), which zeros the
page. How can the assertion fail?

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Joerg Roedel @ 2009-02-18 14:10 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, linux-kernel

On Wed, Feb 18, 2009 at 02:03:34PM +0000, Avi Kivity wrote:
> Joerg Roedel wrote:
> > The assertion which the attached patch removes fails sometimes. Removing
> > this assertion is the alternative solution to this problem ;-)
> >
> > From ca45f3a2e45cd7e76ca624bb1098329db8ff83ab Mon Sep 17 00:00:00 2001
> > From: Joerg Roedel <joerg.roedel@amd.com>
> > Date: Wed, 18 Feb 2009 14:51:13 +0100
> > Subject: [PATCH] kvm mmu: remove assertion in kvm_mmu_alloc_page
> >
> > Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
> > ---
> >  arch/x86/kvm/mmu.c |    1 -
> >  1 files changed, 0 insertions(+), 1 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index d93ecec..b226973 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -802,7 +802,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
> >  	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
> >  	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
> >  	INIT_LIST_HEAD(&sp->oos_link);
> > -	ASSERT(is_empty_shadow_page(sp->spt));
> >  	bitmap_zero(sp->slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
> >  	sp->multimapped = 0;
> >  	sp->parent_pte = parent_pte;
>
> sp->spt is allocated using mmu_memory_cache_alloc(), which zeros the
> page. How can the assertion fail?

In the code I see (current kvm-git), mmu_memory_cache_alloc() does not
zero anything. It takes the page from the preallocated pool and returns
it. The pool itself is filled with mmu_topup_memory_caches(), which
calls mmu_topup_memory_cache_page() to fill the mmu_page_cache (from
which the sp->spt page is allocated later).
And the mmu_topup_memory_cache_page() function calls alloc_page() and
does not zero the result. This lets the assertion trigger.

	Joerg

-- 
           | Advanced Micro Devices GmbH
 Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München
 System    |
 Research  | Geschäftsführer: Jochen Polster, Thomas M. McCoy, Giuliano Meroni
 Center    | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
           | Registergericht München, HRB Nr. 43632
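The path Joerg describes can be sketched as a small model — an array pool and memset in place of the real kvm_mmu_memory_cache and page allocator; everything here is an illustrative stand-in, not the kernel API. Topup fills the pool without zeroing, the alloc side pops entries unchanged, so whatever garbage the allocator returned reaches the shadow-page emptiness check.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define TOY_POOL_SIZE 4
#define TOY_PAGE_SIZE 4096

struct toy_mmu_cache {
    int nobjs;
    unsigned char *objects[TOY_POOL_SIZE];
};

/* Analogue of mmu_topup_memory_cache_page(): fill the pool.  With
 * zero_pages == 0 this mirrors the pre-fix alloc_page(GFP_KERNEL);
 * writing 0xAA simulates dirty memory handed back by the allocator. */
int toy_topup(struct toy_mmu_cache *c, int zero_pages)
{
    while (c->nobjs < TOY_POOL_SIZE) {
        unsigned char *page = malloc(TOY_PAGE_SIZE);
        if (!page)
            return -1;
        memset(page, zero_pages ? 0x00 : 0xAA, TOY_PAGE_SIZE);
        c->objects[c->nobjs++] = page;
    }
    return 0;
}

/* Analogue of mmu_memory_cache_alloc(): pop an entry; note there is
 * no zeroing anywhere on this path. */
unsigned char *toy_cache_alloc(struct toy_mmu_cache *c)
{
    return c->objects[--c->nobjs];
}
```

In this model the dirty pool delivers non-zero pages to the consumer, which is exactly the situation where the kernel's ASSERT(is_empty_shadow_page(...)) fires; zeroing at topup time (the __GFP_ZERO analogue) makes the check pass.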
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Avi Kivity @ 2009-02-18 14:14 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Marcelo Tosatti, kvm, linux-kernel

Joerg Roedel wrote:
> > sp->spt is allocated using mmu_memory_cache_alloc(), which zeros the
> > page. How can the assertion fail?
>
> In the code I see (current kvm-git), mmu_memory_cache_alloc() does not
> zero anything. It takes the page from the preallocated pool and returns
> it. The pool itself is filled with mmu_topup_memory_caches(), which
> calls mmu_topup_memory_cache_page() to fill the mmu_page_cache (from
> which the sp->spt page is allocated later). And the
> mmu_topup_memory_cache_page() function calls alloc_page() and does not
> zero the result. This lets the assertion trigger.

Right, I was looking at the 2.6.29 tree. The patch is correct (and the
others look good as well). As usual, I'd like Marcelo to take a look as
well.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH 3/3] kvm mmu: alloc shadow pages with __GFP_ZERO
From: Marcelo Tosatti @ 2009-02-18 18:42 UTC (permalink / raw)
To: Joerg Roedel; +Cc: Avi Kivity, kvm, linux-kernel

On Wed, Feb 18, 2009 at 02:54:37PM +0100, Joerg Roedel wrote:
> > Adding __GFP_ZERO here will cause us to clear the page twice, which is
> > wasteful.
>
> The assertion which the attached patch removes fails sometimes. Removing
> this assertion is the alternative solution to this problem ;-)

From: Joerg Roedel <joerg.roedel@amd.com>
Date: Wed, 18 Feb 2009 14:51:13 +0100
Subject: [PATCH] kvm mmu: remove assertion in kvm_mmu_alloc_page

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
---
 arch/x86/kvm/mmu.c |    1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d93ecec..b226973 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -802,7 +802,6 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
 	set_page_private(virt_to_page(sp->spt), (unsigned long)sp);
 	list_add(&sp->link, &vcpu->kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&sp->oos_link);
-	ASSERT(is_empty_shadow_page(sp->spt));
 	bitmap_zero(sp->slot_bitmap, KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
 	sp->multimapped = 0;
 	sp->parent_pte = parent_pte;