From: Will Deacon <will@kernel.org>
To: kvmarm@lists.linux.dev
Cc: linux-arm-kernel@lists.infradead.org,
Will Deacon <will@kernel.org>, Marc Zyngier <maz@kernel.org>,
Oliver Upton <oupton@kernel.org>, Joey Gouly <joey.gouly@arm.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Zenghui Yu <yuzenghui@huawei.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Quentin Perret <qperret@google.com>,
Fuad Tabba <tabba@google.com>,
Vincent Donnefort <vdonnefort@google.com>,
Mostafa Saleh <smostafa@google.com>,
Alexandru Elisei <alexandru.elisei@arm.com>
Subject: [PATCH v3 20/36] KVM: arm64: Generalise kvm_pgtable_stage2_set_owner()
Date: Thu, 5 Mar 2026 14:43:33 +0000
Message-ID: <20260305144351.17071-21-will@kernel.org>
In-Reply-To: <20260305144351.17071-1-will@kernel.org>

kvm_pgtable_stage2_set_owner() can be generalised into a way to store
up to 59 bits of payload in the page tables alongside a 4-bit 'type'
identifier that determines the format of the 59-bit payload.

Introduce kvm_pgtable_stage2_annotate() and convert the existing
invalid pte encodings (for locked ptes and donated pages) to the new
scheme.
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/kvm_pgtable.h | 39 +++++++++++++++++++--------
arch/arm64/kvm/hyp/nvhe/mem_protect.c | 16 +++++++++--
arch/arm64/kvm/hyp/pgtable.c | 33 ++++++++++++++---------
3 files changed, 62 insertions(+), 26 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index b6f7595c4979..e36c2908bdb2 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -100,13 +100,25 @@ typedef u64 kvm_pte_t;
KVM_PTE_LEAF_ATTR_HI_S2_XN)
#define KVM_INVALID_PTE_OWNER_MASK GENMASK(9, 2)
-#define KVM_MAX_OWNER_ID 3
-/*
- * Used to indicate a pte for which a 'break-before-make' sequence is in
- * progress.
- */
-#define KVM_INVALID_PTE_LOCKED BIT(10)
+/* pKVM invalid pte encodings */
+#define KVM_INVALID_PTE_TYPE_MASK GENMASK(63, 60)
+#define KVM_INVALID_PTE_ANNOT_MASK ~(KVM_PTE_VALID | \
+ KVM_INVALID_PTE_TYPE_MASK)
+
+enum kvm_invalid_pte_type {
+ /*
+ * Used to indicate a pte for which a 'break-before-make'
+ * sequence is in progress.
+ */
+ KVM_INVALID_PTE_TYPE_LOCKED = 1,
+
+ /*
+ * pKVM has unmapped the page from the host due to a change of
+ * ownership.
+ */
+ KVM_HOST_INVALID_PTE_TYPE_DONATION,
+};
static inline bool kvm_pte_valid(kvm_pte_t pte)
{
@@ -658,14 +670,18 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
void *mc, enum kvm_pgtable_walk_flags flags);
/**
- * kvm_pgtable_stage2_set_owner() - Unmap and annotate pages in the IPA space to
- * track ownership.
+ * kvm_pgtable_stage2_annotate() - Unmap and annotate pages in the IPA space
+ * to track ownership (and more).
* @pgt: Page-table structure initialised by kvm_pgtable_stage2_init*().
* @addr: Base intermediate physical address to annotate.
* @size: Size of the annotated range.
* @mc: Cache of pre-allocated and zeroed memory from which to allocate
* page-table pages.
- * @owner_id: Unique identifier for the owner of the page.
+ * @type: The type of the annotation, determining its meaning and format.
+ * @annotation: A 59-bit value that will be stored in the page tables.
+ * @annotation[0] and @annotation[63:60] must be 0.
+ * @annotation[59:1] is stored in the page tables, along
+ * with @type.
*
* By default, all page-tables are owned by identifier 0. This function can be
* used to mark portions of the IPA space as owned by other entities. When a
@@ -674,8 +690,9 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
*
* Return: 0 on success, negative error code on failure.
*/
-int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
- void *mc, u8 owner_id);
+int kvm_pgtable_stage2_annotate(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ void *mc, enum kvm_invalid_pte_type type,
+ kvm_pte_t annotation);
/**
* kvm_pgtable_stage2_unmap() - Remove a mapping from a guest stage-2 page-table.
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f673ebe7c888..b4c2ca86e5b3 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -549,10 +549,19 @@ static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_
set_host_state(page, state);
}
+static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
+{
+ return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
+}
+
int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
{
+ kvm_pte_t annotation;
int ret = -EINVAL;
+ if (!FIELD_FIT(KVM_INVALID_PTE_OWNER_MASK, owner_id))
+ return -EINVAL;
+
if (!range_is_memory(addr, addr + size))
return -EPERM;
@@ -564,8 +573,11 @@ int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
break;
case PKVM_ID_GUEST:
case PKVM_ID_HYP:
- ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
- addr, size, &host_s2_pool, owner_id);
+ annotation = kvm_init_invalid_leaf_owner(owner_id);
+ ret = host_stage2_try(kvm_pgtable_stage2_annotate, &host_mmu.pgt,
+ addr, size, &host_s2_pool,
+ KVM_HOST_INVALID_PTE_TYPE_DONATION,
+ annotation);
if (!ret)
__host_update_page_state(addr, size, PKVM_NOPAGE);
break;
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 9b480f947da2..84c7a1df845d 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -114,11 +114,6 @@ static kvm_pte_t kvm_init_valid_leaf_pte(u64 pa, kvm_pte_t attr, s8 level)
return pte;
}
-static kvm_pte_t kvm_init_invalid_leaf_owner(u8 owner_id)
-{
- return FIELD_PREP(KVM_INVALID_PTE_OWNER_MASK, owner_id);
-}
-
static int kvm_pgtable_visitor_cb(struct kvm_pgtable_walk_data *data,
const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
@@ -581,7 +576,7 @@ void kvm_pgtable_hyp_destroy(struct kvm_pgtable *pgt)
struct stage2_map_data {
const u64 phys;
kvm_pte_t attr;
- u8 owner_id;
+ kvm_pte_t pte_annot;
kvm_pte_t *anchor;
kvm_pte_t *childp;
@@ -798,7 +793,11 @@ static bool stage2_pte_is_counted(kvm_pte_t pte)
static bool stage2_pte_is_locked(kvm_pte_t pte)
{
- return !kvm_pte_valid(pte) && (pte & KVM_INVALID_PTE_LOCKED);
+ if (kvm_pte_valid(pte))
+ return false;
+
+ return FIELD_GET(KVM_INVALID_PTE_TYPE_MASK, pte) ==
+ KVM_INVALID_PTE_TYPE_LOCKED;
}
static bool stage2_try_set_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t new)
@@ -829,6 +828,7 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
struct kvm_s2_mmu *mmu)
{
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
+ kvm_pte_t locked_pte;
if (stage2_pte_is_locked(ctx->old)) {
/*
@@ -839,7 +839,9 @@ static bool stage2_try_break_pte(const struct kvm_pgtable_visit_ctx *ctx,
return false;
}
- if (!stage2_try_set_pte(ctx, KVM_INVALID_PTE_LOCKED))
+ locked_pte = FIELD_PREP(KVM_INVALID_PTE_TYPE_MASK,
+ KVM_INVALID_PTE_TYPE_LOCKED);
+ if (!stage2_try_set_pte(ctx, locked_pte))
return false;
if (!kvm_pgtable_walk_skip_bbm_tlbi(ctx)) {
@@ -964,7 +966,7 @@ static int stage2_map_walker_try_leaf(const struct kvm_pgtable_visit_ctx *ctx,
if (!data->annotation)
new = kvm_init_valid_leaf_pte(phys, data->attr, ctx->level);
else
- new = kvm_init_invalid_leaf_owner(data->owner_id);
+ new = data->pte_annot;
/*
* Skip updating the PTE if we are trying to recreate the exact
@@ -1118,16 +1120,18 @@ int kvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
return ret;
}
-int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
- void *mc, u8 owner_id)
+int kvm_pgtable_stage2_annotate(struct kvm_pgtable *pgt, u64 addr, u64 size,
+ void *mc, enum kvm_invalid_pte_type type,
+ kvm_pte_t pte_annot)
{
int ret;
struct stage2_map_data map_data = {
.mmu = pgt->mmu,
.memcache = mc,
- .owner_id = owner_id,
.force_pte = true,
.annotation = true,
+ .pte_annot = pte_annot |
+ FIELD_PREP(KVM_INVALID_PTE_TYPE_MASK, type),
};
struct kvm_pgtable_walker walker = {
.cb = stage2_map_walker,
@@ -1136,7 +1140,10 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
.arg = &map_data,
};
- if (owner_id > KVM_MAX_OWNER_ID)
+ if (pte_annot & ~KVM_INVALID_PTE_ANNOT_MASK)
+ return -EINVAL;
+
+ if (!type || type == KVM_INVALID_PTE_TYPE_LOCKED)
return -EINVAL;
ret = kvm_pgtable_walk(pgt, addr, size, &walker);
--
2.53.0.473.g4a7958ca14-goog