From: Wei-Lin Chang <weilin.chang@arm.com>
To: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
linux-kernel@vger.kernel.org
Cc: Marc Zyngier <maz@kernel.org>, Oliver Upton <oupton@kernel.org>,
Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Wei-Lin Chang <weilin.chang@arm.com>
Subject: [PATCH v3 4/5] KVM: arm64: nv: Remove reverse map entries during TLBI handling
Date: Sun, 10 May 2026 15:53:37 +0100 [thread overview]
Message-ID: <20260510145338.322962-5-weilin.chang@arm.com> (raw)
In-Reply-To: <20260510145338.322962-1-weilin.chang@arm.com>
When a guest hypervisor issues a TLBI for a specific IPA range, KVM
unmaps that range from all the affected shadow stage-2s. This is an
opportunity to also remove the corresponding reverse map entries, which
lowers the probability of creating UNKNOWN_IPA reverse map ranges at
subsequent stage-2 faults.

However, the TLBI ranges are specified in nested IPA, while the reverse
map maple tree is indexed by canonical IPA and maps to nested IPA. The
only way to locate the affected entries is therefore to iterate over the
entire tree and check each one.
Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Wei-Lin Chang <weilin.chang@arm.com>
---
arch/arm64/include/asm/kvm_nested.h | 2 ++
arch/arm64/kvm/nested.c | 38 +++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 3 +++
3 files changed, 43 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 5cbf78dfc685..b11925826b25 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -76,6 +76,8 @@ extern void kvm_s2_mmu_iterate_by_vmid(struct kvm *kvm, u16 vmid,
const union tlbi_info *info,
void (*)(struct kvm_s2_mmu *,
const union tlbi_info *));
+extern void kvm_remove_nested_revmap(struct kvm_s2_mmu *mmu, u64 nested_ipa,
+ size_t size);
extern void kvm_record_nested_revmap(gpa_t gpa, struct kvm_s2_mmu *mmu,
gpa_t fault_ipa, size_t map_size);
extern void kvm_vcpu_load_hw_mmu(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 35b5d5f21a23..96b88d9c0c2a 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -784,6 +784,44 @@ static struct kvm_s2_mmu *get_s2_mmu_nested(struct kvm_vcpu *vcpu)
return s2_mmu;
}
+void kvm_remove_nested_revmap(struct kvm_s2_mmu *mmu, u64 nested_ipa, size_t size)
+{
+ /*
+ * Iterate through the maple tree of this MMU and remove all
+ * canonical IPA ranges without UNKNOWN_IPA that map to ranges
+ * strictly within [nested_ipa, nested_ipa + size).
+ */
+ struct maple_tree *revmap_mt = &mmu->nested_revmap_mt;
+ void *entry;
+ u64 entry_val, nested_ipa_end = nested_ipa + size;
+ u64 this_nested_ipa, this_nested_ipa_end;
+ size_t revmap_size;
+
+ MA_STATE(mas_rev, revmap_mt, 0, ULONG_MAX);
+
+ mtree_lock(revmap_mt);
+ mas_for_each(&mas_rev, entry, ULONG_MAX) {
+ entry_val = xa_to_value(entry);
+ if (entry_val & UNKNOWN_IPA)
+ continue;
+
+ revmap_size = mas_rev.last - mas_rev.index + 1;
+ this_nested_ipa = entry_val & ADDR_MASK;
+ this_nested_ipa_end = this_nested_ipa + revmap_size;
+
+ if (this_nested_ipa >= nested_ipa &&
+ this_nested_ipa_end <= nested_ipa_end) {
+ /*
+ * As the shadow stage-2 is about to be unmapped
+ * after this function, it doesn't matter whether the
+ * removal of the reverse map failed or not.
+ */
+ mas_store_gfp(&mas_rev, NULL, GFP_NOWAIT | __GFP_ACCOUNT);
+ }
+ }
+ mtree_unlock(revmap_mt);
+}
+
void kvm_record_nested_revmap(gpa_t ipa, struct kvm_s2_mmu *mmu,
gpa_t fault_ipa, size_t map_size)
{
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6a96cb7ba9a3..a97304680cee 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -4006,6 +4006,7 @@ union tlbi_info {
static void s2_mmu_unmap_range(struct kvm_s2_mmu *mmu,
const union tlbi_info *info)
{
+ kvm_remove_nested_revmap(mmu, info->range.start, info->range.size);
/*
* The unmap operation is allowed to drop the MMU lock and block, which
* means that @mmu could be used for a different context than the one
@@ -4104,6 +4105,8 @@ static void s2_mmu_unmap_ipa(struct kvm_s2_mmu *mmu,
max_size = compute_tlb_inval_range(mmu, info->ipa.addr);
base_addr &= ~(max_size - 1);
+ kvm_remove_nested_revmap(mmu, base_addr, max_size);
+
/*
* See comment in s2_mmu_unmap_range() for why this is allowed to
* reschedule.
--
2.43.0
Thread overview: 6+ messages
2026-05-10 14:53 [PATCH v3 0/5] KVM: arm64: nv: Implement nested stage-2 reverse map Wei-Lin Chang
2026-05-10 14:53 ` [PATCH v3 1/5] KVM: arm64: Use a variable for the canonical GPA in kvm_s2_fault_map() Wei-Lin Chang
2026-05-10 14:53 ` [PATCH v3 2/5] KVM: arm64: Move shadow_pt_debugfs_dentry to reduce holes in kvm_s2_mmu Wei-Lin Chang
2026-05-10 14:53 ` [PATCH v3 3/5] KVM: arm64: nv: Avoid full shadow s2 unmap Wei-Lin Chang
2026-05-10 14:53 ` Wei-Lin Chang [this message]
2026-05-10 14:53 ` [PATCH v3 5/5] KVM: arm64: nv: Create nested IPA direct map to speed up reverse map removal Wei-Lin Chang