* [RFC PATCH 0/2] Optimize S2 page splitting
@ 2026-05-15 19:59 Leonardo Bras
2026-05-15 19:59 ` [RFC PATCH 1/2] KVM: arm64: Introduce S2 walker SKIP return options Leonardo Bras
2026-05-15 19:59 ` [RFC PATCH 2/2] KVM: arm64: Improve splitting performance by using SKIP return values Leonardo Bras
From: Leonardo Bras @ 2026-05-15 19:59 UTC
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Fuad Tabba,
Leonardo Bras, Raghavendra Rao Ananta
Cc: linux-arm-kernel, kvmarm, linux-kernel
While playing with dirty-bit tracking, I decided to take a look at how page
splitting works. I found out that all entries are walked, even though we can
infer, for instance, that:
- If a level-3 entry is walked, the parent level-2 entry is already split
- If a split just succeeded on a table entry, all of its child nodes are
already split
So I tried to optimize it in a way that does not break other users.
My main idea is to introduce positive return values that hint to the
pagetable walking mechanism that either siblings or children can be
skipped. That should be contained to the visitor function, which returns
zero if no error was detected.
Numbers for the above optimization are promising:
A 1GB VM, running on the model, splitting all at the beginning
(no manual protect):
- Memory was already split (4k pages): -97.33% runtime (-172ms) - 20 runs
- THP backed memory: -19.82% runtime (-153ms) - 10 runs
- 1x1GB hugetlb memory: -20.65% runtime (-150ms) - 10 runs
This was measured with the snippet below [1].
I ran it at least 10 times on different 1GB VMs to make sure the numbers
are consistent.
Ideas I considered:
- Using a negative return value and kvm_pgtable_walk_continue() to
filter it out as a non-error, but I decided that is counter-intuitive
- Using the introduced return values to hint the split walker not to
split level-2 (or level-1) blocks, by adding a new parameter to
kvm_pgtable_stage2_split() and carrying it over to the walker via
ctx->arg (splitting only up to a given hugepage size)
- Looking at other walkers and trying to think of scenarios where the
new return values could optimize them
Do you think it is worth doing this?
Please provide feedback!
Thanks!
Leo
[1]:
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d089c107d9b7..6424e833b7be 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1272,22 +1273,26 @@ static void kvm_mmu_split_memory_region(struct kvm *kvm, int slot)
phys_addr_t start, end;
lockdep_assert_held(&kvm->slots_lock);
slots = kvm_memslots(kvm);
memslot = id_to_memslot(slots, slot);
start = memslot->base_gfn << PAGE_SHIFT;
end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+
write_lock(&kvm->mmu_lock);
+ u64 sw = ktime_get_real_ns();
kvm_mmu_split_huge_pages(kvm, start, end);
+ sw = ktime_get_real_ns() - sw;
+ printk("split from %llx to %llx took %llu ns\n", start, end, sw);
write_unlock(&kvm->mmu_lock);
}
Leonardo Bras (2):
KVM: arm64: Introduce S2 walker SKIP return options
KVM: arm64: Improve splitting performance by using SKIP return values
arch/arm64/kvm/hyp/pgtable.c | 32 +++++++++++++++++++++++++-------
1 file changed, 25 insertions(+), 7 deletions(-)
base-commit: 5d6919055dec134de3c40167a490f33c74c12581
--
2.54.0
* [RFC PATCH 1/2] KVM: arm64: Introduce S2 walker SKIP return options
From: Leonardo Bras @ 2026-05-15 19:59 UTC
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Fuad Tabba,
Leonardo Bras, Raghavendra Rao Ananta
Cc: linux-arm-kernel, kvmarm, linux-kernel
Introduce S2 walker return values:
- SKIP_CHILDREN: skip walking the children of the current node
- SKIP_SIBLINGS: skip walking the siblings of the current node
Also, modify __kvm_pgtable_visit() to honour the hint carried by the above
return values. Current walkers should not be impacted.
Signed-off-by: Leonardo Bras <leo.bras@arm.com>
---
arch/arm64/kvm/hyp/pgtable.c | 20 ++++++++++++++++----
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0c1defa5fb0f..4e43339522bb 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -12,20 +12,26 @@
#include <asm/stage2_pgtable.h>
struct kvm_pgtable_walk_data {
struct kvm_pgtable_walker *walker;
const u64 start;
u64 addr;
const u64 end;
};
+/* Positive walker return values indicate the levels to skip */
+enum walker_return {
+ SKIP_CHILDREN = 1,
+ SKIP_SIBLINGS,
+};
+
static bool kvm_pgtable_walk_skip_bbm_tlbi(const struct kvm_pgtable_visit_ctx *ctx)
{
return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_BBM_TLBI);
}
static bool kvm_pgtable_walk_skip_cmo(const struct kvm_pgtable_visit_ctx *ctx)
{
return unlikely(ctx->flags & KVM_PGTABLE_WALK_SKIP_CMO);
}
@@ -134,21 +140,21 @@ static bool kvm_pgtable_walk_continue(const struct kvm_pgtable_walker *walker,
* update a PTE. In the context of a fault handler this is interpreted
* as a signal to retry guest execution.
*
* Ignore the return code altogether for walkers outside a fault handler
* (e.g. write protecting a range of memory) and chug along with the
* page table walk.
*/
if (r == -EAGAIN)
return walker->flags & KVM_PGTABLE_WALK_IGNORE_EAGAIN;
- return !r;
+ return r >= 0;
}
static int __kvm_pgtable_walk(struct kvm_pgtable_walk_data *data,
struct kvm_pgtable_mm_ops *mm_ops, kvm_pteref_t pgtable, s8 level);
static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
struct kvm_pgtable_mm_ops *mm_ops,
kvm_pteref_t pteref, s8 level)
{
enum kvm_pgtable_walk_flags flags = data->walker->flags;
@@ -185,23 +191,29 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
* into a newly installed or replaced table.
*/
if (reload) {
ctx.old = READ_ONCE(*ptep);
table = kvm_pte_table(ctx.old, level);
}
if (!kvm_pgtable_walk_continue(data->walker, ret))
goto out;
- if (!table) {
- data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
- data->addr += kvm_granule_size(level);
+ if (!table || ret >= SKIP_CHILDREN) {
+ u64 size;
+
+ if (ret == SKIP_SIBLINGS) /* Skip siblings */
+ size = kvm_granule_size(level - 1);
+ else /* Skip children */
+ size = kvm_granule_size(level);
+
+ data->addr = ALIGN_DOWN(data->addr, size) + size;
goto out;
}
childp = (kvm_pteref_t)kvm_pte_follow(ctx.old, mm_ops);
ret = __kvm_pgtable_walk(data, mm_ops, childp, level + 1);
if (!kvm_pgtable_walk_continue(data->walker, ret))
goto out;
if (ctx.flags & KVM_PGTABLE_WALK_TABLE_POST)
ret = kvm_pgtable_visitor_cb(data, &ctx, KVM_PGTABLE_WALK_TABLE_POST);
--
2.54.0
* [RFC PATCH 2/2] KVM: arm64: Improve splitting performance by using SKIP return values
From: Leonardo Bras @ 2026-05-15 19:59 UTC
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Fuad Tabba,
Leonardo Bras, Raghavendra Rao Ananta
Cc: linux-arm-kernel, kvmarm, linux-kernel
Splitting an S2 pagetable is needed when using dirty-bit tracking.
Currently, when splitting, all the child and sibling nodes are walked,
with the walker just returning early if there is nothing to do. This
means every pagetable entry in the splitting range gets a callback from
the walker function, even if it was just split or is a level-3 entry.
Optimize splitting in two cases:
- If a level-3 entry is walked, it means the parent level-2 entry is split,
so avoid walking all level-3 siblings.
- If a split just succeeded on a table entry, all of its child nodes are
already split, so skip walking this entry's children.
Optimization measured on a 1GB VM, running on the model, splitting all at
the beginning (no manual protect):
- Memory was already split (4k pages): -97.33% runtime (-172ms) - 20 runs
- THP backed memory: -19.82% runtime (-153ms) - 10 runs
- 1x1GB hugetlb memory: -20.65% runtime (-150ms) - 10 runs
(Above runtime is measured on kvm_mmu_split_huge_pages(), using
ktime_get_real_ns() before and after function call)
Signed-off-by: Leonardo Bras <leo.bras@arm.com>
---
arch/arm64/kvm/hyp/pgtable.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4e43339522bb..164c5bcd6026 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1502,23 +1502,27 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
struct kvm_mmu_memory_cache *mc = ctx->arg;
struct kvm_s2_mmu *mmu;
kvm_pte_t pte = ctx->old, new, *childp;
enum kvm_pgtable_prot prot;
s8 level = ctx->level;
bool force_pte;
int nr_pages;
u64 phys;
- /* No huge-pages exist at the last level */
+ /*
+ * No huge-pages exist at the last level.
+ * Also, if one PTE exists at the last level, the whole block is
+ * already split, so skip walking its siblings.
+ */
if (level == KVM_PGTABLE_LAST_LEVEL)
- return 0;
+ return SKIP_SIBLINGS;
/* We only split valid block mappings */
if (!kvm_pte_valid(pte))
return 0;
nr_pages = stage2_block_get_nr_page_tables(level);
if (nr_pages < 0)
return nr_pages;
if (mc->nobjs >= nr_pages) {
@@ -1554,21 +1558,23 @@ static int stage2_split_walker(const struct kvm_pgtable_visit_ctx *ctx,
return -EAGAIN;
}
/*
* Note, the contents of the page table are guaranteed to be made
* visible before the new PTE is assigned because stage2_make_pte()
* writes the PTE using smp_store_release().
*/
new = kvm_init_table_pte(childp, mm_ops);
stage2_make_pte(ctx, new);
- return 0;
+
+ /* All child entries are already split, so skip walking them */
+ return SKIP_CHILDREN;
}
int kvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size,
struct kvm_mmu_memory_cache *mc)
{
struct kvm_pgtable_walker walker = {
.cb = stage2_split_walker,
.flags = KVM_PGTABLE_WALK_LEAF,
.arg = mc,
};
--
2.54.0