From: Wei-Lin Chang <weilin.chang@arm.com>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
linux-kernel@vger.kernel.org
Cc: Marc Zyngier <maz@kernel.org>, Oliver Upton <oupton@kernel.org>,
Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Wei-Lin Chang <weilin.chang@arm.com>
Subject: [PATCH 3/4] KVM: arm64: nv: Remove reverse map entries during TLBI handling
Date: Mon, 30 Mar 2026 11:06:32 +0100
Message-ID: <20260330100633.2817076-4-weilin.chang@arm.com>
In-Reply-To: <20260330100633.2817076-1-weilin.chang@arm.com>

When a guest hypervisor issues a TLBI for a specific IPA range, KVM
unmaps that range from all the affected shadow stage-2s. This is an
opportunity to also remove the corresponding reverse map entries, which
lowers the probability of creating polluted reverse map ranges at
subsequent stage-2 faults.

However, TLBI ranges are specified in nested IPA, while the reverse map
maple tree is indexed by canonical IPA (mapping canonical IPA to nested
IPA). To locate the affected entries, we therefore have no choice but to
iterate through the entire tree and check each entry.
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Wei-Lin Chang <weilin.chang@arm.com>
---
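
A toy model (Python, not kernel code) of the lookup-direction mismatch
described above: the reverse map is keyed by canonical IPA, but the TLBI
names a nested IPA range, so removal has to scan every entry. The names
here are illustrative, not taken from the kernel sources.

```python
def remove_revmap_entries(revmap, addr, size):
    """revmap: dict mapping (canonical_start, canonical_last) -> nested_ipa.

    Remove entries whose nested IPA range lies strictly within
    [addr, addr + size), mirroring the check in the patch below.
    Returns the canonical ranges that were removed.
    """
    addr_end = addr + size
    removed = []
    # The keys are canonical IPAs, so a nested-IPA query cannot be
    # answered by a direct lookup -- every entry must be inspected.
    for (c_start, c_last), nested_ipa in list(revmap.items()):
        span = c_last - c_start + 1
        nested_end = nested_ipa + span
        if nested_ipa >= addr and nested_end <= addr_end:
            del revmap[(c_start, c_last)]
            removed.append((c_start, c_last))
    return removed
```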
arch/arm64/include/asm/kvm_nested.h | 1 +
arch/arm64/kvm/nested.c | 29 +++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 3 +++
3 files changed, 33 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4d09d567d7f9..376619cdc9d5 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -76,6 +76,7 @@ extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
const union tlbi_info *info,
void (*)(struct kvm_s2_mmu *,
const union tlbi_info *));
+extern void kvm_remove_nested_revmap(struct kvm_s2_mmu *mmu, u64 addr, u64 size);
extern int kvm_record_nested_revmap(gpa_t gpa, struct kvm_s2_mmu *mmu,
gpa_t fault_gpa, size_t map_size);
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c7d00cb40ba5..125fa21ca2e7 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -912,6 +912,35 @@ static int record_accel(struct kvm_s2_mmu *mmu, gpa_t gpa,
return mas_store_gfp(&mas, (void *)new_entry, GFP_KERNEL_ACCOUNT);
}
+void kvm_remove_nested_revmap(struct kvm_s2_mmu *mmu, u64 addr, u64 size)
+{
+ /*
+ * Iterate through the maple tree of this mmu and remove all
+ * unpolluted canonical IPA ranges that map to nested IPA ranges
+ * lying strictly within [addr, addr + size).
+ */
+ struct maple_tree *mt = &mmu->nested_revmap_mt;
+ void *entry;
+ u64 nested_ipa, nested_ipa_end, addr_end = addr + size;
+ size_t revmap_size;
+
+ MA_STATE(mas, mt, 0, ULONG_MAX);
+
+ mas_for_each(&mas, entry, ULONG_MAX) {
+ if ((u64)entry & UNKNOWN_IPA)
+ continue;
+
+ revmap_size = mas.last - mas.index + 1;
+ nested_ipa = (u64)entry & NESTED_IPA_MASK;
+ nested_ipa_end = nested_ipa + revmap_size;
+
+ if (nested_ipa >= addr && nested_ipa_end <= addr_end) {
+ accel_clear_mmu_range(mmu, mas.index, revmap_size);
+ mas_erase(&mas);
+ }
+ }
+}
+
int kvm_record_nested_revmap(gpa_t ipa, struct kvm_s2_mmu *mmu,
gpa_t fault_ipa, size_t map_size)
{
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e1001544d4f4..c7af0eac9ee4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4006,6 +4006,7 @@ union tlbi_info {
static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
+ kvm_remove_nested_revmap(mmu, info->range.start, info->range.size);
/*
* The unmap operation is allowed to drop the MMU lock and block, which
* means that @mmu could be used for a different context than the one
@@ -4104,6 +4105,8 @@ static void s2_mmu_unmap_ipa(struct kvm_s2_mmu *mmu,
max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
base_addr &= ~(max_size - 1);
+ kvm_remove_nested_revmap(mmu, base_addr, max_size);
+
/*
* See comment in s2_mmu_unmap_range() for why this is allowed to
* reschedule.
--
2.43.0
2026-03-30 10:06 [PATCH 0/4] KVM: arm64: nv: Implement nested stage-2 reverse map Wei-Lin Chang
2026-03-30 10:06 ` [PATCH 1/4] KVM: arm64: nv: Avoid full shadow s2 unmap Wei-Lin Chang
2026-03-31 15:16 ` kernel test robot
2026-03-30 10:06 ` [PATCH 2/4] KVM: arm64: nv: Accelerate canonical IPA unmapping with canonical s2 mmu maple tree Wei-Lin Chang
2026-03-30 10:06 ` Wei-Lin Chang [this message]
2026-03-30 10:06 ` [PATCH 4/4] KVM: arm64: nv: Create nested IPA direct map to speed up reverse map removal Wei-Lin Chang