From mboxrd@z Thu Jan 1 00:00:00 1970
From: Will Deacon <will@kernel.org>
To: kvmarm@lists.linux.dev
Cc: linux-arm-kernel@lists.infradead.org, Will Deacon, Marc Zyngier,
	Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Quentin Perret, Fuad Tabba, Vincent Donnefort,
	Mostafa Saleh
Subject: [PATCH 25/30] KVM: arm64: Implement the MEM_UNSHARE hypercall for protected VMs
Date: Mon, 5 Jan 2026 15:49:33 +0000
Message-ID: <20260105154939.11041-26-will@kernel.org>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260105154939.11041-1-will@kernel.org>
References: <20260105154939.11041-1-will@kernel.org>
Implement the ARM_SMCCC_KVM_FUNC_MEM_UNSHARE hypercall to allow
protected VMs to unshare memory that was previously shared with the
host using the ARM_SMCCC_KVM_FUNC_MEM_SHARE hypercall.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 32 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                | 22 +++++++++++++
 3 files changed, 55 insertions(+)
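Not part of the patch: for context, a protected guest that previously
shared a page with the host via MEM_SHARE would revoke that sharing
with a call roughly along the lines of the sketch below. The helper
name pkvm_guest_unshare_page() is made up for illustration; the sketch
assumes the ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID definition and
the arm_smccc_1_1_invoke() conduit helper from <linux/arm-smccc.h>.

#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/mm.h>

static int pkvm_guest_unshare_page(phys_addr_t ipa)
{
        struct arm_smccc_res res;

        /* The hypervisor only accepts page-aligned IPAs. */
        if (!PAGE_ALIGNED(ipa))
                return -EINVAL;

        /*
         * The IPA goes in argument 1; arguments 2 and 3 must be zero
         * or the hypervisor leaves the call unhandled.
         */
        arm_smccc_1_1_invoke(ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID,
                             ipa, 0, 0, &res);

        return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EPERM;
}

A guest would typically issue this only after discovering the
MEM_UNSHARE feature bit via the vendor hypervisor features query,
mirroring the bit advertised in kvm_handle_pvm_hvc64() below.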
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 42fd60c5cfc9..e41a128b0854 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -36,6 +36,7 @@ extern unsigned long hyp_nr_cpus;
 int __pkvm_prot_finalize(void);
 int __pkvm_host_share_hyp(u64 pfn);
 int __pkvm_guest_share_host(struct pkvm_hyp_vcpu *vcpu, u64 gfn);
+int __pkvm_guest_unshare_host(struct pkvm_hyp_vcpu *vcpu, u64 gfn);
 int __pkvm_host_unshare_hyp(u64 pfn);
 int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 365c769c82a4..c1600b88c316 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -920,6 +920,38 @@ int __pkvm_guest_share_host(struct pkvm_hyp_vcpu *vcpu, u64 gfn)
 	return ret;
 }
 
+int __pkvm_guest_unshare_host(struct pkvm_hyp_vcpu *vcpu, u64 gfn)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 phys, ipa = hyp_pfn_to_phys(gfn);
+	kvm_pte_t pte;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = get_valid_guest_pte(vm, ipa, &pte, &phys);
+	if (ret)
+		goto unlock;
+
+	ret = -EPERM;
+	if (pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)) != PKVM_PAGE_SHARED_OWNED)
+		goto unlock;
+	if (__host_check_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_BORROWED))
+		goto unlock;
+
+	ret = 0;
+	WARN_ON(host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_GUEST));
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+				       pkvm_mkstate(KVM_PGTABLE_PROT_RWX, PKVM_PAGE_OWNED),
+				       &vcpu->vcpu.arch.pkvm_memcache, 0));
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
+
 int __pkvm_host_unshare_hyp(u64 pfn)
 {
 	u64 phys = hyp_pfn_to_phys(pfn);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d8afa2b98542..2890328f4a78 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -988,6 +988,19 @@ static bool pkvm_memshare_call(u64 *ret, struct kvm_vcpu *vcpu, u64 *exit_code)
 	return false;
 }
 
+static void pkvm_memunshare_call(u64 *ret, struct kvm_vcpu *vcpu)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	u64 ipa = smccc_get_arg1(vcpu);
+
+	if (!PAGE_ALIGNED(ipa))
+		return;
+
+	hyp_vcpu = container_of(vcpu, struct pkvm_hyp_vcpu, vcpu);
+	if (!__pkvm_guest_unshare_host(hyp_vcpu, hyp_phys_to_pfn(ipa)))
+		ret[0] = SMCCC_RET_SUCCESS;
+}
+
 /*
  * Handler for protected VM HVC calls.
  *
@@ -1005,6 +1018,7 @@ bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
 		val[0] = BIT(ARM_SMCCC_KVM_FUNC_FEATURES);
 		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_HYP_MEMINFO);
 		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MEM_SHARE);
+		val[0] |= BIT(ARM_SMCCC_KVM_FUNC_MEM_UNSHARE);
 		break;
 	case ARM_SMCCC_VENDOR_HYP_KVM_HYP_MEMINFO_FUNC_ID:
 		if (smccc_get_arg1(vcpu) ||
@@ -1023,6 +1037,14 @@ bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
 		handled = pkvm_memshare_call(val, vcpu, exit_code);
 		break;
 
+	case ARM_SMCCC_VENDOR_HYP_KVM_MEM_UNSHARE_FUNC_ID:
+		if (smccc_get_arg2(vcpu) ||
+		    smccc_get_arg3(vcpu)) {
+			break;
+		}
+
+		pkvm_memunshare_call(val, vcpu);
+		break;
 	default:
 		/* Punt everything else back to the host, for now. */
 		handled = false;
-- 
2.52.0.351.gbe84eed79e-goog