Date: Mon, 3 Jun 2024 15:22:20 +0100
From: Will Deacon
To: Pierre-Clément Tosi
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, Marc Zyngier, Oliver Upton, Suzuki K Poulose,
	Vincent Donnefort
Subject: Re: [PATCH v4 02/13] KVM: arm64: Fix __pkvm_init_switch_pgd call ABI
Message-ID: <20240603142220.GC19151@willie-the-truck>
References: <20240529121251.1993135-1-ptosi@google.com>
 <20240529121251.1993135-3-ptosi@google.com>
In-Reply-To: <20240529121251.1993135-3-ptosi@google.com>

On Wed, May 29, 2024 at 01:12:08PM +0100, Pierre-Clément Tosi wrote:
> Fix the mismatch between the (incorrect) C signature, C call site, and
> asm implementation by aligning all three on an API passing the
> parameters (pgd and SP) separately, instead of as a bundled struct.
>
> Remove the now unnecessary memory accesses while the MMU is off from the
> asm, which simplifies the C caller (as it does not need to convert a VA
> struct pointer to PA) and makes the code slightly more robust by
> offsetting the struct fields from C and properly expressing the call to
> the C compiler (e.g. type checker and kCFI).
>
> Fixes: f320bc742bc2 ("KVM: arm64: Prepare the creation of s1 mappings at EL2")
> Signed-off-by: Pierre-Clément Tosi
> ---
>  arch/arm64/include/asm/kvm_hyp.h   |  3 +--
>  arch/arm64/kvm/hyp/nvhe/hyp-init.S | 17 +++++++++--------
>  arch/arm64/kvm/hyp/nvhe/setup.c    |  4 ++--
>  3 files changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 3e80464f8953..58b5a2b14d88 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -123,8 +123,7 @@ void __noreturn __hyp_do_panic(struct kvm_cpu_context *host_ctxt, u64 spsr,
>  #endif
>
>  #ifdef __KVM_NVHE_HYPERVISOR__
> -void __pkvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
> -			    phys_addr_t pgd, void *sp, void *cont_fn);
> +void __pkvm_init_switch_pgd(phys_addr_t pgd, void *sp, void (*fn)(void));
>  int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
>  		unsigned long *per_cpu_base, u32 hyp_va_bits);
>  void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-init.S b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> index 2994878d68ea..d859c4de06b6 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-init.S
> @@ -265,33 +265,34 @@ alternative_else_nop_endif
>
>  SYM_CODE_END(__kvm_handle_stub_hvc)
>
> +/*
> + * void __pkvm_init_switch_pgd(phys_addr_t pgd, void *sp, void (*fn)(void));
> + */
>  SYM_FUNC_START(__pkvm_init_switch_pgd)
>  	/* Turn the MMU off */
>  	pre_disable_mmu_workaround
> -	mrs	x2, sctlr_el2
> -	bic	x3, x2, #SCTLR_ELx_M
> +	mrs	x9, sctlr_el2
> +	bic	x3, x9, #SCTLR_ELx_M

This is fine, but there's no need to jump all the way to x9 for the
register allocation. I think it would be neatest to re-jig the function
so it uses x4 here for the sctlr and then uses x5 later for the ttbr.

> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 859f22f754d3..1cbd2c78f7a1 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -316,7 +316,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
>  {
>  	struct kvm_nvhe_init_params *params;
>  	void *virt = hyp_phys_to_virt(phys);
> -	void (*fn)(phys_addr_t params_pa, void *finalize_fn_va);
> +	typeof(__pkvm_init_switch_pgd) *fn;
>  	int ret;
>
>  	BUG_ON(kvm_check_pvm_sysreg_table());
> @@ -340,7 +340,7 @@ int __pkvm_init(phys_addr_t phys, unsigned long size, unsigned long nr_cpus,
>  	/* Jump in the idmap page to switch to the new page-tables */
>  	params = this_cpu_ptr(&kvm_init_params);
>  	fn = (typeof(fn))__hyp_pa(__pkvm_init_switch_pgd);
> -	fn(__hyp_pa(params), __pkvm_init_finalise);
> +	fn(params->pgd_pa, (void *)params->stack_hyp_va, __pkvm_init_finalise);

Why not have the prototype of __pkvm_init_switch_pgd() take the SP as an
'unsigned long' so that you can avoid this cast altogether?

Will

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel