From: Steven Price
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon, James Morse,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
	linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Gavin Shan,
	Shanker Donthineni, Alper Gun, "Aneesh Kumar K.V",
V" , Emi Kisanuki , Vishal Annapurve , WeiLin.Chang@arm.com, Lorenzo.Pieralisi2@arm.com Subject: [PATCH v14 17/44] arm64: RMI: RTT tear down Date: Wed, 13 May 2026 14:17:25 +0100 Message-ID: <20260513131757.116630-18-steven.price@arm.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260513131757.116630-1-steven.price@arm.com> References: <20260513131757.116630-1-steven.price@arm.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.9.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20260513_061955_819544_D7D17B63 X-CRM114-Status: GOOD ( 25.60 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The RMM owns the stage 2 page tables for a realm, and KVM must request that the RMM creates/destroys entries as necessary. The physical pages to store the page tables are delegated to the realm as required, and can be undelegated when no longer used. Creating new RTTs is the easy part, tearing down is a little more tricky. The result of realm_rtt_destroy() can be used to effectively walk the tree and destroy the entries (undelegating pages that were given to the realm). Signed-off-by: Steven Price --- Changes since v13: * Avoid the double call of kvm_free_stage2_pgd() by splitting the work across that and a new function kvm_realm_uninit_stage2() which is only called for realm guests. Changes since v12: * Simplify some functions now we know RMM page size is the same as the host's. Changes since v11: * Moved some code from earlier in the series to this one so that it's added when it's first used. Changes since v10: * RME->RMI rename. * Some code to handle freeing stage 2 PGD moved into this patch where it belongs. Changes since v9: * Add a comment clarifying that root level RTTs are not destroyed until after the RD is destroyed. Changes since v8: * Introduce free_rtt() wrapper which calls free_delegated_granule() followed by kvm_account_pgtable_pages(). This makes it clear where an RTT is being freed rather than just a delegated granule. Changes since v6: * Move rme_rtt_level_mapsize() and supporting defines from kvm_rme.h into rme.c as they are only used in that file. Changes since v5: * Rename some RME_xxx defines to do with page sizes as RMM_xxx - they are a property of the RMM specification not the RME architecture. Changes since v2: * Moved {alloc,free}_delegated_page() and ensure_spare_page() to a later patch when they are actually used. * Some simplifications now rmi_xxx() functions allow NULL as an output parameter. * Improved comments and code layout. 
---
 arch/arm64/include/asm/kvm_rmi.h |   7 ++
 arch/arm64/kvm/mmu.c             |  21 ++++-
 arch/arm64/kvm/rmi.c             | 148 +++++++++++++++++++++++++++++++
 3 files changed, 174 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_rmi.h b/arch/arm64/include/asm/kvm_rmi.h
index 9de34983ee52..06ba0d4745c6 100644
--- a/arch/arm64/include/asm/kvm_rmi.h
+++ b/arch/arm64/include/asm/kvm_rmi.h
@@ -64,5 +64,12 @@ u32 kvm_realm_ipa_limit(void);
 
 int kvm_init_realm(struct kvm *kvm);
 void kvm_destroy_realm(struct kvm *kvm);
+void kvm_realm_destroy_rtts(struct kvm *kvm);
+
+static inline bool kvm_realm_is_private_address(struct realm *realm,
+						unsigned long addr)
+{
+	return !(addr & BIT(realm->ia_bits - 1));
+}
 
 #endif /* __ASM_KVM_RMI_H */
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ba8286472286..eb56d4e7f21a 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1024,9 +1024,26 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
 	return err;
 }
 
+static void kvm_realm_uninit_stage2(struct kvm_s2_mmu *mmu)
+{
+	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
+	struct realm *realm = &kvm->arch.realm;
+
+	if (kvm_realm_state(kvm) != REALM_STATE_ACTIVE)
+		return;
+
+	write_lock(&kvm->mmu_lock);
+	kvm_stage2_unmap_range(mmu, 0, BIT(realm->ia_bits - 1), true);
+	write_unlock(&kvm->mmu_lock);
+	kvm_realm_destroy_rtts(kvm);
+}
+
 void kvm_uninit_stage2_mmu(struct kvm *kvm)
 {
-	kvm_free_stage2_pgd(&kvm->arch.mmu);
+	if (kvm_is_realm(kvm))
+		kvm_realm_uninit_stage2(&kvm->arch.mmu);
+	else
+		kvm_free_stage2_pgd(&kvm->arch.mmu);
 
 	kvm_mmu_free_memory_cache(&kvm->arch.mmu.split_page_cache);
 }
@@ -1103,7 +1120,7 @@ void stage2_unmap_vm(struct kvm *kvm)
 void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(mmu);
-	struct kvm_pgtable *pgt = NULL;
+	struct kvm_pgtable *pgt;
 
 	write_lock(&kvm->mmu_lock);
 	pgt = mmu->pgt;
diff --git a/arch/arm64/kvm/rmi.c b/arch/arm64/kvm/rmi.c
index f51ec667445e..5b00ccca4af3 100644
--- a/arch/arm64/kvm/rmi.c
+++ b/arch/arm64/kvm/rmi.c
@@ -11,6 +11,14 @@
 #include 
 #include 
 
+static inline unsigned long rmi_rtt_level_mapsize(int level)
+{
+	if (WARN_ON(level > KVM_PGTABLE_LAST_LEVEL))
+		return PAGE_SIZE;
+
+	return (1UL << ARM64_HW_PGTABLE_LEVEL_SHIFT(level));
+}
+
 static bool rmi_has_feature(unsigned long feature)
 {
 	return !!u64_get_bits(rmm_feat_reg0, feature);
@@ -21,6 +29,144 @@ u32 kvm_realm_ipa_limit(void)
 	return u64_get_bits(rmm_feat_reg0, RMI_FEATURE_REGISTER_0_S2SZ);
 }
 
+static int get_start_level(struct realm *realm)
+{
+	return 4 - stage2_pgtable_levels(realm->ia_bits);
+}
+
+static void free_rtt(phys_addr_t phys)
+{
+	if (free_delegated_page(phys))
+		return;
+
+	kvm_account_pgtable_pages(phys_to_virt(phys), -1);
+}
+
+/*
+ * realm_rtt_destroy - Destroy an RTT at @level for @addr.
+ *
+ * Returns - Result of the RMI_RTT_DESTROY call, and:
+ * @rtt_granule: RTT granule, if the RTT was destroyed.
+ * @next_addr:	IPA corresponding to the next possible valid entry we
+ *		can target
+ */
+static int realm_rtt_destroy(struct realm *realm, unsigned long addr,
+			     int level, phys_addr_t *rtt_granule,
+			     unsigned long *next_addr)
+{
+	unsigned long out_rtt;
+	int ret;
+
+	ret = rmi_rtt_destroy(virt_to_phys(realm->rd), addr, level,
+			      &out_rtt, next_addr);
+
+	*rtt_granule = out_rtt;
+
+	return ret;
+}
+
+static int realm_tear_down_rtt_level(struct realm *realm, int level,
+				     unsigned long start, unsigned long end)
+{
+	ssize_t map_size;
+	unsigned long addr, next_addr;
+
+	if (WARN_ON(level > KVM_PGTABLE_LAST_LEVEL))
+		return -EINVAL;
+
+	map_size = rmi_rtt_level_mapsize(level - 1);
+
+	for (addr = start; addr < end; addr = next_addr) {
+		phys_addr_t rtt_granule;
+		int ret;
+		unsigned long align_addr = ALIGN(addr, map_size);
+
+		next_addr = ALIGN(addr + 1, map_size);
+
+		if (next_addr > end || align_addr != addr) {
+			/*
+			 * The target range is smaller than what this level
+			 * covers, recurse deeper.
+			 */
+			ret = realm_tear_down_rtt_level(realm,
+							level + 1,
+							addr,
+							min(next_addr, end));
+			if (ret)
+				return ret;
+			continue;
+		}
+
+		ret = realm_rtt_destroy(realm, addr, level,
+					&rtt_granule, &next_addr);
+
+		switch (RMI_RETURN_STATUS(ret)) {
+		case RMI_SUCCESS:
+			free_rtt(rtt_granule);
+			break;
+		case RMI_ERROR_RTT:
+			if (next_addr > addr) {
+				/* Missing RTT, skip */
+				break;
+			}
+			/*
+			 * We tear down the RTT range for the full IPA
+			 * space, after everything is unmapped. Also we
+			 * descend down only if we cannot tear down a
+			 * top level RTT. Thus RMM must be able to walk
+			 * to the requested level. e.g., a block mapping
+			 * exists at L1 or L2.
+			 */
+			if (WARN_ON(RMI_RETURN_INDEX(ret) != level))
+				return -EBUSY;
+			if (WARN_ON(level == KVM_PGTABLE_LAST_LEVEL))
+				return -EBUSY;
+
+			/*
+			 * The table has active entries in it, recurse deeper
+			 * and tear down the RTTs.
+			 */
+			next_addr = ALIGN(addr + 1, map_size);
+			ret = realm_tear_down_rtt_level(realm,
+							level + 1,
+							addr,
+							next_addr);
+			if (ret)
+				return ret;
+			/*
+			 * Now that the child RTTs are destroyed,
+			 * retry at this level.
+			 */
+			next_addr = addr;
+			break;
+		default:
+			WARN_ON(1);
+			return -ENXIO;
+		}
+	}
+
+	return 0;
+}
+
+static int realm_tear_down_rtt_range(struct realm *realm,
+				     unsigned long start, unsigned long end)
+{
+	/*
+	 * Root level RTTs can only be destroyed after the RD is destroyed.
+	 * So tear down everything below the root level.
+	 */
+	return realm_tear_down_rtt_level(realm, get_start_level(realm) + 1,
+					 start, end);
+}
+
+void kvm_realm_destroy_rtts(struct kvm *kvm)
+{
+	struct realm *realm = &kvm->arch.realm;
+	unsigned int ia_bits = realm->ia_bits;
+
+	realm_tear_down_rtt_range(realm, 0, (1UL << ia_bits));
+}
+
 void kvm_destroy_realm(struct kvm *kvm)
 {
 	struct realm *realm = &kvm->arch.realm;
@@ -47,6 +193,8 @@ void kvm_destroy_realm(struct kvm *kvm)
 	if (WARN_ON(rmi_realm_terminate(rd_phys)))
 		return;
 
+	kvm_realm_destroy_rtts(kvm);
+
 	if (WARN_ON(rmi_realm_destroy(rd_phys)))
 		return;
 	free_delegated_page(rd_phys);
-- 
2.43.0