* [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Jan Kiszka @ 2011-12-02 17:35 UTC
To: Avi Kivity, Marcelo Tosatti; +Cc: kvm

freed_pages is never evaluated, so remove it as well as the return code
kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only user.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
 arch/x86/kvm/mmu.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b1178d1..efb8576 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3898,14 +3898,14 @@ restart:
 	spin_unlock(&kvm->mmu_lock);
 }
 
-static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
-					       struct list_head *invalid_list)
+static void kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
+						struct list_head *invalid_list)
 {
 	struct kvm_mmu_page *page;
 
 	page = container_of(kvm->arch.active_mmu_pages.prev,
 			    struct kvm_mmu_page, link);
-	return kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
+	kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
 }
 
 static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
@@ -3920,15 +3920,15 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 	raw_spin_lock(&kvm_lock);
 
 	list_for_each_entry(kvm, &vm_list, vm_list) {
-		int idx, freed_pages;
+		int idx;
 		LIST_HEAD(invalid_list);
 
 		idx = srcu_read_lock(&kvm->srcu);
 		spin_lock(&kvm->mmu_lock);
 		if (!kvm_freed && nr_to_scan > 0 &&
 		    kvm->arch.n_used_mmu_pages > 0) {
-			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm,
-							  &invalid_list);
+			kvm_mmu_remove_some_alloc_mmu_pages(kvm,
+							    &invalid_list);
 			kvm_freed = kvm;
 		}
 		nr_to_scan--;
-- 
1.7.3.4
* Re: [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Avi Kivity @ 2011-12-04 16:48 UTC
To: Jan Kiszka; +Cc: Marcelo Tosatti, kvm

On 12/02/2011 07:35 PM, Jan Kiszka wrote:
> freed_pages is never evaluated, so remove it as well as the return code
> kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only user.

Applied, thanks.

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Takuya Yoshikawa @ 2011-12-06 9:56 UTC
To: Jan Kiszka; +Cc: Avi Kivity, Marcelo Tosatti, kvm, Takuya Yoshikawa

Hi, I was looking at the same place from a different angle.

(2011/12/03 2:35), Jan Kiszka wrote:
> freed_pages is never evaluated, so remove it as well as the return code
> kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only user.
>
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> ---
>  arch/x86/kvm/mmu.c |   12 ++++++------
>  1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index b1178d1..efb8576 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3898,14 +3898,14 @@ restart:
>  	spin_unlock(&kvm->mmu_lock);
>  }
>
> -static int kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
> -					       struct list_head *invalid_list)
> +static void kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
> +						struct list_head *invalid_list)
>  {
>  	struct kvm_mmu_page *page;
>
>  	page = container_of(kvm->arch.active_mmu_pages.prev,
>  			    struct kvm_mmu_page, link);
> -	return kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
> +	kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
>  }
>
>  static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
> @@ -3920,15 +3920,15 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
>  	raw_spin_lock(&kvm_lock);
>
>  	list_for_each_entry(kvm, &vm_list, vm_list) {
> -		int idx, freed_pages;
> +		int idx;
>  		LIST_HEAD(invalid_list);
>
>  		idx = srcu_read_lock(&kvm->srcu);
>  		spin_lock(&kvm->mmu_lock);
>  		if (!kvm_freed && nr_to_scan > 0 &&
>  		    kvm->arch.n_used_mmu_pages > 0) {
> -			freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm,
> -							  &invalid_list);
> +			kvm_mmu_remove_some_alloc_mmu_pages(kvm,
> +							    &invalid_list);
>  			kvm_freed = kvm;
>  		}
>  		nr_to_scan--;

I think mmu_shrink() is doing meaningless work.

nr_to_scan should be treated as the number of objects to scan. Here, the
objects we are trying to free are not kvm instances but shadow pages and
their related objects.

So decrementing it once per VM iterated over is not at all what the
caller expects.

Furthermore, this code frees pages from only one VM and then breaks out
of the loop, so nr_to_scan does not function as intended.

I was thinking about how to improve this shrinker, but now I also feel
that registering mmu_shrink() might not be worth it: it may free some
memory in the case of shadow paging, but otherwise there is little we
can free this way.

Is there any need for mmu_shrink()?

Takuya
* Re: [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Avi Kivity @ 2011-12-06 10:40 UTC
To: Takuya Yoshikawa; +Cc: Jan Kiszka, Marcelo Tosatti, kvm, Takuya Yoshikawa

On 12/06/2011 11:56 AM, Takuya Yoshikawa wrote:
> Hi, I was looking at the same place from a different angle.
>
> (2011/12/03 2:35), Jan Kiszka wrote:
>> freed_pages is never evaluated, so remove it as well as the return code
>> kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only user.
>>
>> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>>
>> [patch hunks snipped; see the original posting above]
>
> I think mmu_shrink() is doing meaningless work.
>
> nr_to_scan should be treated as the number of objects to scan. Here, the
> objects we are trying to free are not kvm instances but shadow pages and
> their related objects.
>
> So decrementing it once per VM iterated over is not at all what the
> caller expects.
>
> Furthermore, this code frees pages from only one VM and then breaks out
> of the loop, so nr_to_scan does not function as intended.

True, we should make kvm_mmu_remove_some_alloc_mmu_pages() manage
nr_to_scan. We should also have better LRU management. In practice the
shrinker is called rarely, so the problems don't bite.

> I was thinking about how to improve this shrinker, but now I also feel
> that registering mmu_shrink() might not be worth it: it may free some
> memory in the case of shadow paging, but otherwise there is little we
> can free this way.
>
> Is there any need for mmu_shrink()?

Without it a user can easily pin large amounts of kernel memory by
filling guest memory with page tables and shadowing them all.

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Avi Kivity @ 2011-12-06 10:41 UTC
To: Takuya Yoshikawa; +Cc: Jan Kiszka, Marcelo Tosatti, kvm, Takuya Yoshikawa

On 12/06/2011 12:40 PM, Avi Kivity wrote:
>> Is there any need for mmu_shrink()?
>
> Without it a user can easily pin large amounts of kernel memory by
> filling guest memory with page tables and shadowing them all.

It would make an interesting test case, btw, to try to crash the kernel
this way, and exercise the shrinker.

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH] KVM: x86: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages
From: Takuya Yoshikawa @ 2011-12-06 10:52 UTC
To: Avi Kivity; +Cc: Jan Kiszka, Marcelo Tosatti, kvm, Takuya Yoshikawa

(2011/12/06 19:41), Avi Kivity wrote:
> On 12/06/2011 12:40 PM, Avi Kivity wrote:
>>> Is there any need for mmu_shrink()?
>>
>> Without it a user can easily pin large amounts of kernel memory by
>> filling guest memory with page tables and shadowing them all.
>
> It would make an interesting test case, btw, to try to crash the kernel
> this way, and exercise the shrinker.

If nobody is working on these mmu_shrink() issues, I will do something.

But what I saw in the code is that the current zapper is mainly
effective in cases where there are some unsync children. So we may need
to zap more forcibly for TDP.

Takuya