* [PATCH] do not free active mmu pages in free_mmu_pages()
@ 2009-03-11 10:07 Gleb Natapov
2009-03-15 12:59 ` Avi Kivity
2009-03-16 20:15 ` Marcelo Tosatti
0 siblings, 2 replies; 8+ messages in thread
From: Gleb Natapov @ 2009-03-11 10:07 UTC (permalink / raw)
To: avi, marcelo; +Cc: kvm
free_mmu_pages() should only undo what alloc_mmu_pages() does.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2a36f7f..b625ed4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2638,14 +2638,6 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
static void free_mmu_pages(struct kvm_vcpu *vcpu)
{
- struct kvm_mmu_page *sp;
-
- while (!list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
- sp = container_of(vcpu->kvm->arch.active_mmu_pages.next,
- struct kvm_mmu_page, link);
- kvm_mmu_zap_page(vcpu->kvm, sp);
- cond_resched();
- }
free_page((unsigned long)vcpu->arch.mmu.pae_root);
}
--
Gleb.
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Avi Kivity @ 2009-03-15 12:59 UTC (permalink / raw)
To: Gleb Natapov; +Cc: marcelo, kvm
Gleb Natapov wrote:
> free_mmu_pages() should only undo what alloc_mmu_pages() does.
>
> Signed-off-by: Gleb Natapov <gleb@redhat.com>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a36f7f..b625ed4 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2638,14 +2638,6 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
>
> static void free_mmu_pages(struct kvm_vcpu *vcpu)
> {
> - struct kvm_mmu_page *sp;
> -
> - while (!list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
> - sp = container_of(vcpu->kvm->arch.active_mmu_pages.next,
> - struct kvm_mmu_page, link);
> - kvm_mmu_zap_page(vcpu->kvm, sp);
> - cond_resched();
> - }
> free_page((unsigned long)vcpu->arch.mmu.pae_root);
> }
>
>
I think this is correct, but the patch leaves the function name misleading.
Rename it, or perhaps just open-code it into the callers?
--
error compiling committee.c: too many arguments to function
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Marcelo Tosatti @ 2009-03-16 20:15 UTC (permalink / raw)
To: Gleb Natapov; +Cc: avi, marcelo, kvm
On Wed, Mar 11, 2009 at 12:07:55PM +0200, Gleb Natapov wrote:
> free_mmu_pages() should only undo what alloc_mmu_pages() does.
>
> Signed-off-by: Gleb Natapov <gleb@redhat.com>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 2a36f7f..b625ed4 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2638,14 +2638,6 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
>
> static void free_mmu_pages(struct kvm_vcpu *vcpu)
> {
> - struct kvm_mmu_page *sp;
> -
> - while (!list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
> - sp = container_of(vcpu->kvm->arch.active_mmu_pages.next,
> - struct kvm_mmu_page, link);
> - kvm_mmu_zap_page(vcpu->kvm, sp);
> - cond_resched();
> - }
> free_page((unsigned long)vcpu->arch.mmu.pae_root);
> }
Doesn't the VM shutdown path rely on the while loop you removed to free
all shadow pages before freeing the mmu kmem caches, if mmu notifiers
are disabled?
And how harmful is that loop? Zaps the entire cache on cpu hotunplug?
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Gleb Natapov @ 2009-03-16 20:34 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: avi, marcelo, kvm
On Mon, Mar 16, 2009 at 05:15:33PM -0300, Marcelo Tosatti wrote:
> On Wed, Mar 11, 2009 at 12:07:55PM +0200, Gleb Natapov wrote:
> > free_mmu_pages() should only undo what alloc_mmu_pages() does.
> >
> > Signed-off-by: Gleb Natapov <gleb@redhat.com>
> > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > index 2a36f7f..b625ed4 100644
> > --- a/arch/x86/kvm/mmu.c
> > +++ b/arch/x86/kvm/mmu.c
> > @@ -2638,14 +2638,6 @@ EXPORT_SYMBOL_GPL(kvm_disable_tdp);
> >
> > static void free_mmu_pages(struct kvm_vcpu *vcpu)
> > {
> > - struct kvm_mmu_page *sp;
> > -
> > - while (!list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
> > - sp = container_of(vcpu->kvm->arch.active_mmu_pages.next,
> > - struct kvm_mmu_page, link);
> > - kvm_mmu_zap_page(vcpu->kvm, sp);
> > - cond_resched();
> > - }
> > free_page((unsigned long)vcpu->arch.mmu.pae_root);
> > }
>
> Doesn't the VM shutdown path rely on the while loop you removed to free
> all shadow pages before freeing the mmu kmem caches, if mmu notifiers
> are disabled?
>
Shouldn't mmu_free_roots() on all vcpus clear all mmu pages?
> And how harmful is that loop? Zaps the entire cache on cpu hotunplug?
>
KVM doesn't support vcpu destruction, but the destruction path is called
anyway on various error conditions. The one that is easy to trigger is
creating a vcpu with the same id simultaneously from two threads. The
result is an oops in random places.
--
Gleb.
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Marcelo Tosatti @ 2009-03-16 21:01 UTC (permalink / raw)
To: Gleb Natapov; +Cc: avi, marcelo, kvm
On Mon, Mar 16, 2009 at 10:34:01PM +0200, Gleb Natapov wrote:
> > Doesn't the VM shutdown path rely on the while loop you removed to free
> > all shadow pages before freeing the mmu kmem caches, if mmu notifiers
> > are disabled?
> >
> Shouldn't mmu_free_roots() on all vcpus clear all mmu pages?
No. It only zaps the present root on every vcpu, but not
the children.
> > And how harmful is that loop? Zaps the entire cache on cpu hotunplug?
> >
> > KVM doesn't support vcpu destruction, but the destruction path is called
> > anyway on various error conditions. The one that is easy to trigger is
> > creating a vcpu with the same id simultaneously from two threads. The
> > result is an oops in random places.
mmu_lock should be held there, and apparently it is not.
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Gleb Natapov @ 2009-03-16 21:20 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: avi, marcelo, kvm
On Mon, Mar 16, 2009 at 06:01:52PM -0300, Marcelo Tosatti wrote:
> On Mon, Mar 16, 2009 at 10:34:01PM +0200, Gleb Natapov wrote:
> > > Doesn't the VM shutdown path rely on the while loop you removed to free
> > > all shadow pages before freeing the mmu kmem caches, if mmu notifiers
> > > are disabled?
> > >
> > Shouldn't mmu_free_roots() on all vcpus clear all mmu pages?
>
> No. It only zaps the present root on every vcpu, but not
> the children.
>
> > > And how harmful is that loop? Zaps the entire cache on cpu hotunplug?
> > >
> > KVM doesn't support vcpu destruction, but the destruction path is called
> > anyway on various error conditions. The one that is easy to trigger is
> > creating a vcpu with the same id simultaneously from two threads. The
> > result is an oops in random places.
>
> mmu_lock should be held there, and apparently it is not.
>
Yeah, my first solution was to take mmu_lock, but why should a function
that gets a vcpu as input destroy a data structure that is global to the
VM? There is kvm_mmu_zap_all(), which does (almost) the same thing and
also does proper locking. Shouldn't it be called during VM destruction
instead?
--
Gleb.
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Marcelo Tosatti @ 2009-03-16 21:33 UTC (permalink / raw)
To: Gleb Natapov; +Cc: avi, marcelo, kvm
On Mon, Mar 16, 2009 at 11:20:10PM +0200, Gleb Natapov wrote:
> > mmu_lock should be held there, and apparently it is not.
> >
> Yeah, my first solution was to take mmu_lock, but why should a function
> that gets a vcpu as input destroy a data structure that is global to the VM?
Point.
> There is kvm_mmu_zap_all(), which does (almost) the same thing and also
> does proper locking. Shouldn't it be called during VM destruction instead?
Yes, that would be better (it happens implicitly with the mmu notifiers
->release callback).
* Re: [PATCH] do not free active mmu pages in free_mmu_pages()
From: Gleb Natapov @ 2009-03-16 21:32 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: avi, marcelo, kvm
On Mon, Mar 16, 2009 at 06:33:22PM -0300, Marcelo Tosatti wrote:
> On Mon, Mar 16, 2009 at 11:20:10PM +0200, Gleb Natapov wrote:
> > > mmu_lock should be held there, and apparently it is not.
> > >
> > Yeah, my first solution was to take mmu_lock, but why should a function
> > that gets a vcpu as input destroy a data structure that is global to the VM?
>
> Point.
>
> > There is kvm_mmu_zap_all(), which does (almost) the same thing and also
> > does proper locking. Shouldn't it be called during VM destruction instead?
>
> Yes, that would be better (it happens implicitly with the mmu notifiers
> ->release callback).
OK. I'll send new patch.
--
Gleb.