From: Dave Hansen <dave@linux.vnet.ibm.com>
To: Avi Kivity <avi@redhat.com>
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [RFC][PATCH 0/9] rework KVM mmu_shrink() code
Date: Wed, 16 Jun 2010 08:03:16 -0700
Message-ID: <1276700596.6437.16867.camel@nimitz>
In-Reply-To: <4C188D8B.40508@redhat.com>
On Wed, 2010-06-16 at 11:38 +0300, Avi Kivity wrote:
> On 06/15/2010 04:55 PM, Dave Hansen wrote:
> > These seem to boot and run fine. I'm running about 40 VMs at
> > once, while doing "echo 3 > /proc/sys/vm/drop_caches", and
> > killing/restarting VMs constantly.
> >
>
> Will drop_caches actually shrink the kvm caches too? If so we probably
> need to add that to autotest since it's a really good stress test for
> the mmu.
I'm quite sure it does: I crashed my machines several times this way
during testing.
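For reference, the stress loop amounts to something like the sketch below. The VM start/kill commands are omitted and the iteration count is illustrative; the write to /proc is guarded so the script is safe to run unprivileged.

```shell
#!/bin/sh
# Sketch of the drop_caches stress cycle described above.  Only the
# cache-drop rounds are shown; the 40-VM churn is not reproduced here.

drop_caches() {
    # "3" frees pagecache plus dentries and inodes; writing needs root,
    # so skip silently when the file is not writable.
    [ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
    return 0
}

stress_iterations() {
    # Run $1 rounds of the cache-drop cycle, reporting each round.
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        drop_caches
        i=$((i + 1))
        echo "round $i"
    done
}

stress_iterations 3
```

Note the space in "echo 3 >": without it, "3>" is parsed as a redirection of file descriptor 3 and nothing is written to the sysctl.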
> > Seems to be relatively stable, and seems to keep the numbers
> > of kvm_mmu_page_header objects down.
> >
>
> That's not necessarily a good thing; those pages are expensive to
> recreate. Of course, when we do need to reclaim them, that should be
> efficient.
Oh, I meant that I didn't break the shrinker completely.
> We also do a very bad job of selecting which page to reclaim. We need
> to start using the accessed bit on sptes that point to shadow page
> tables, and then look those up and reclaim unreferenced pages sooner.
> With shadow paging there can be tons of unsync pages that are basically
> unused and can be reclaimed at no cost to future runtime.
Sounds like a good next step.
-- Dave
Thread overview: 31+ messages
2010-06-15 13:55 [RFC][PATCH 0/9] rework KVM mmu_shrink() code Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 1/9] abstract kvm x86 mmu->n_free_mmu_pages Dave Hansen
2010-06-16 8:40 ` Avi Kivity
2010-06-15 13:55 ` [RFC][PATCH 2/9] rename x86 kvm->arch.n_alloc_mmu_pages Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 3/9] replace x86 kvm n_free_mmu_pages with n_used_mmu_pages Dave Hansen
2010-06-16 14:25 ` Marcelo Tosatti
2010-06-16 15:42 ` Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 4/9] create aggregate kvm_total_used_mmu_pages value Dave Hansen
2010-06-16 8:48 ` Avi Kivity
2010-06-16 15:06 ` Dave Hansen
2010-06-17 8:43 ` Avi Kivity
2010-06-16 16:55 ` Dave Hansen
2010-06-17 8:23 ` Avi Kivity
2010-06-15 13:55 ` [RFC][PATCH 5/9] break out some mmu_shrink() code Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 6/9] remove kvm_freed variable Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 7/9] make kvm_get_kvm() more robust Dave Hansen
2010-06-15 13:55 ` [RFC][PATCH 8/9] reduce kvm_lock hold times in mmu_shrink() Dave Hansen
2010-06-16 8:54 ` Avi Kivity
2010-06-15 13:55 ` [RFC][PATCH 9/9] make kvm mmu shrinker more aggressive Dave Hansen
2010-06-16 9:24 ` Avi Kivity
2010-06-16 15:25 ` Dave Hansen
2010-06-17 8:37 ` Avi Kivity
2010-06-18 15:49 ` Dave Hansen
2010-06-20 8:11 ` Avi Kivity
2010-06-22 16:32 ` Dave Hansen
2010-07-22 4:36 ` Avi Kivity
2010-07-22 5:36 ` Dave Hansen
2010-07-22 5:42 ` Avi Kivity
2010-06-16 8:38 ` [RFC][PATCH 0/9] rework KVM mmu_shrink() code Avi Kivity
2010-06-16 15:03 ` Dave Hansen [this message]
2010-06-17 8:40 ` Avi Kivity