From: Sean Christopherson <seanjc@google.com>
To: David Matlack <dmatlack@google.com>
Cc: Vipin Sharma <vipinsh@google.com>,
pbonzini@redhat.com, zhi.wang.linux@gmail.com,
weijiang.yang@intel.com, mizhang@google.com,
liangchen.linux@gmail.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: x86/mmu: Use MMU shrinker to shrink KVM MMU memory caches
Date: Mon, 28 Oct 2024 13:49:40 -0700
Message-ID: <Zx_45FUW1QddzqOU@google.com>
In-Reply-To: <CALzav=e7utP8wT_0t2bnVjyezyde7q86F3BHTsSpR1=qVbexQg@mail.gmail.com>

On Mon, Oct 28, 2024, David Matlack wrote:
> On Fri, Oct 25, 2024 at 10:37 AM Vipin Sharma <vipinsh@google.com> wrote:
> >
> > On Thu, Oct 24, 2024 at 4:25 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > On Fri, Oct 04, 2024, Vipin Sharma wrote:
> > > > +out_mmu_memory_cache_unlock:
> > > > + mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
> > >
> > > I've been thinking about this patch on and off for the past few weeks, and every
> > > time I come back to it I can't shake the feeling that we came up with a clever
> > > solution for a problem that doesn't exist. I can't recall a single complaint
> > > about KVM consuming an unreasonable amount of memory for page tables. In fact,
> > > the only time I can think of where the code in question caused problems was when
> > > I unintentionally inverted the iterator and zapped the newest SPs instead of the
> > > oldest SPs.
> > >
> > > So, I'm leaning more and more toward simply removing the shrinker integration.
> >
> > One thing we can agree on is that we don't need the MMU shrinker in its
> > current form, because it removes pages that may very well be in active
> > use by the VM instead of shrinking KVM's caches.
> >
> > Regarding the current series: the biggest VM we can have in GCE has 416
> > vCPUs, and each vCPU thread can hold 40 pages in its cache, so the total
> > cost comes to around 65 MiB. That doesn't seem like much to me,
> > considering these VMs have memory in the TiB range. Since the caches are
> > bounded, I think it is fine to not have an MMU shrinker, as its impact
> > on KVM is minuscule.
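
(Spelling out that estimate, assuming each cached object is a 4 KiB page:
416 vCPUs * 40 pages/vCPU * 4 KiB/page = 66,560 KiB, i.e. roughly 65 MiB
per VM, against guest memory measured in TiB.)
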
>
> I have no objection to removing the shrinker entirely.

Let's do that. In the unlikely scenario someone comes along with a strong use
case for purging the vCPU caches, we can always resurrect this approach.

Vipin, can you send a v3?
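
To make the distinction above concrete, here is a rough sketch of the two
behaviors (helper and field names are taken from existing KVM code and the
quoted patch; treat this as illustrative pseudocode, not the exact
implementation):

	/*
	 * Old shrinker: reclaims by zapping live shadow pages, i.e. page
	 * tables the guest may be actively using, which must then be
	 * rebuilt on the next fault.
	 */
	list_for_each_entry(kvm, &vm_list, vm_list)
		freed += kvm_mmu_zap_oldest_mmu_pages(kvm, sc->nr_to_scan);

	/*
	 * This series: reclaims only pre-allocated pages sitting unused
	 * in the per-vCPU caches, leaving live page tables untouched.
	 */
	kvm_for_each_vcpu(i, vcpu, kvm) {
		mutex_lock(&vcpu->arch.mmu_memory_cache_lock);
		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_shadow_page_cache);
		mutex_unlock(&vcpu->arch.mmu_memory_cache_lock);
	}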