From: Sean Christopherson <seanjc@google.com>
To: James Houghton <jthoughton@google.com>
Cc: chenhuacai@kernel.org, gshan@redhat.com, jhogan@kernel.org,
joey.gouly@arm.com, kvm@vger.kernel.org, kvmarm@lists.linux.dev,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org,
loongarch@lists.linux.dev, maobibo@loongson.cn, maz@kernel.org,
oupton@kernel.org, pbonzini@redhat.com, ricarkol@google.com,
shahuang@redhat.com, stable@vger.kernel.org,
suzuki.poulose@arm.com, yuzenghui@huawei.com,
zhaotianrui@loongson.cn
Subject: Re: [PATCH 1/5] KVM: arm64: Grab KVM MMU write lock in kvm_arch_flush_shadow_all()
Date: Tue, 5 May 2026 13:14:39 -0700
Message-ID: <afpPr-a00an1v6mZ@google.com>
In-Reply-To: <afo0Er7R4MnMP0MB@google.com>
On Tue, May 05, 2026, Sean Christopherson wrote:
> On Tue, May 05, 2026, James Houghton wrote:
> > On Tue, May 5, 2026 at 10:05 AM Sean Christopherson <seanjc@google.com> wrote:
> > > There are more issues. kvm->arch.mmu.split_page_cache can be freed by
> > > kvm_arch_commit_memory_region(), which holds slots_lock and slots_arch_lock,
> > > but not mmu_lock.
> >
> > Thanks. I also noticed that kvm->arch.mmu.split_page_cache is
> > documented as being protected by kvm->slots_lock; we should be holding
> > it here. But we cannot take it here because we are already holding the
> > KVM srcu lock.
> >
> > > IMO, the handling of kvm->arch.mmu.split_page_cache should be reworked. I don't
> > > entirely get the motivation for aggressively freeing the cache. The cache will
> > > only be filled if KVM actually does eager page splitting, so it's not like KVM is
> > > burning pages for setups that will never use the cache.
> > >
> > > Maybe I'm underestimating how many pages arm64 needs in the worst case scenario?
> > > (I can't follow the math, too many macros). But if KVM is configuring the cache
> > > with a capacity that's _so_ high that the "wasted" memory is problematic, then we
> > > should probably revisit the capacity and algorithm. E.g. if KVM is splitting
> > > from 1GiB => 4KiB in a single pass (I can't tell if KVM does this on arm64), then
> > > we could break that into a 1GiB => 2MiB => 4KiB sequence.
> >
> > I'm not sure I've fully understood the point you're making, but I
> > *think* we can just drop the
> > kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
> > line from kvm_uninit_stage2_mmu(). It will get freed when the VM is
> > destroyed anyway.
>
> It's not that simple. KVM arm64 allows userspace to reconfigure the capacity of
> the cache via KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE. kvm_vm_ioctl_enable_cap()
> currently allows userspace to do that so long as there are no memslots.
> __kvm_mmu_topup_memory_cache() will (rightly) yell and fail if it's called with
> the "wrong" capacity, so we'd need to sort that out.
>
> The other issue is that it's not clear to me what happens for large "chunk" sizes.
> If KVM is splitting from 1GiB (or whatever huge-hugepage sizes are supported on
> arm64) all the way to 4KiB, e.g. to optimize against break-before-make, then the
> capacity of the cache could be significant, e.g. MiB of memory or worse. My read
> of things is that purging the cache when dirty logging is disabled is a guard
> against consuming too much memory when the chunk size is large.
If we go down the "let's simplify things" path, one way to simplify the handling
of KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE would be to disallow changing the chunk size
once the cache has been used. That would allow deferring teardown of the cache
until VM destruction.
diff --git arch/arm64/kvm/arm.c arch/arm64/kvm/arm.c
index 176cbe8baad3..671f7ac5e31d 100644
--- arch/arm64/kvm/arm.c
+++ arch/arm64/kvm/arm.c
@@ -165,7 +165,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
 		 * To keep things simple, allow changing the chunk
 		 * size only when no memory slots have been created.
 		 */
-		if (kvm_are_all_memslots_empty(kvm)) {
+		if (kvm_are_all_memslots_empty(kvm) &&
+		    !kvm->arch.mmu.split_page_cache.capacity) {
 			u64 new_cap = cap->args[0];
 
 			if (!new_cap || kvm_is_block_size_supported(new_cap)) {
Thread overview: 12+ messages
2026-05-04 22:42 [PATCH 0/5] KVM: Fix race conditions in kvm_arch_flush_shadow_all() James Houghton
2026-05-04 22:42 ` [PATCH 1/5] KVM: arm64: Grab KVM MMU write lock " James Houghton
2026-05-04 23:10 ` James Houghton
2026-05-05 17:05 ` Sean Christopherson
2026-05-05 18:01 ` James Houghton
2026-05-05 18:16 ` Sean Christopherson
2026-05-05 20:14 ` Sean Christopherson [this message]
2026-05-04 22:42 ` [PATCH 2/5] KVM: loongarch: Grab MMU " James Houghton
2026-05-04 22:42 ` [PATCH 3/5] KVM: mips: " James Houghton
2026-05-04 22:42 ` [PATCH 4/5] KVM: Hold MMU lock exclusively when calling kvm_arch_flush_shadow_all() James Houghton
2026-05-04 22:42 ` [PATCH 5/5] DO NOT MERGE: KVM: selftests: Reproducer for arm64 double-free James Houghton
2026-05-04 22:44 ` [PATCH 0/5] KVM: Fix race conditions in kvm_arch_flush_shadow_all() James Houghton