From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 16 May 2025 14:28:27 +0100
Message-ID: <865xi0fzwk.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Vincent Donnefort <vdonnefort@google.com>
Cc: oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com
Subject: Re: [PATCH v4 09/10] KVM: arm64: Stage-2 huge mappings for np-guests
In-Reply-To: <20250509131706.2336138-10-vdonnefort@google.com>
References: <20250509131706.2336138-1-vdonnefort@google.com>
	<20250509131706.2336138-10-vdonnefort@google.com>
On Fri, 09 May 2025 14:17:05 +0100,
Vincent Donnefort <vdonnefort@google.com> wrote:
> 
> Now that np-guest hypercalls with ranges are supported, we can let the
> hypervisor install block mappings whenever the Stage-1 allows it, that
> is, when backed by either Hugetlbfs or THPs. The size of those block
> mappings is limited to PMD_SIZE.
> 
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 78fb9cea2034..97e0fea9db4e 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
>  static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
>  				      enum kvm_pgtable_prot prot)
>  {
> -	return true;
> +	return false;
>  }

Can we get rid of this callback now? And of the .force_pte_cb field in
the kvm_pgtable struct?

> 
>  static void *guest_s2_zalloc_pages_exact(size_t size)
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 754f2fe0cc67..7c8be22e81f9 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1537,7 +1537,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	 * logging_active is guaranteed to never be true for VM_PFNMAP
>  	 * memslots.
>  	 */
> -	if (logging_active || is_protected_kvm_enabled()) {
> +	if (logging_active) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
>  	} else {
> @@ -1547,7 +1547,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	switch (vma_shift) {
>  #ifndef __PAGETABLE_PMD_FOLDED
>  	case PUD_SHIFT:
> -		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
> +		if (!is_protected_kvm_enabled() &&
> +		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))

Can you move this new condition into the fault_supports...() helper
instead?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
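
[A minimal sketch of one way the suggestion above could look, assuming the
helper keeps its current signature in arch/arm64/kvm/mmu.c; the existing
alignment and memslot-boundary checks are elided, and this is only an
illustration, not the patch author's implementation.]

static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
					       unsigned long hva,
					       unsigned long map_size)
{
	/*
	 * pKVM np-guests only support block mappings up to PMD_SIZE, so
	 * reject anything larger here rather than at every call site.
	 */
	if (is_protected_kvm_enabled() && map_size > PMD_SIZE)
		return false;

	/* ... existing alignment and memslot boundary checks ... */
	return true;
}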