Date: Mon, 23 Nov 2020 18:00:50 +0000
Message-ID: <87a6v854x9.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: David Brazdil <dbrazdil@google.com>
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, James Morse <james.morse@arm.com>,
	Julien Thierry <julien.thierry.kdev@gmail.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>, Will Deacon <will@kernel.org>,
	Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>,
	Christoph Lameter <cl@linux.com>, Mark Rutland <mark.rutland@arm.com>,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
	Quentin Perret <qperret@google.com>, Andrew Scull <ascull@google.com>,
	Andrew Walbran <qwandor@google.com>, kernel-team@android.com
Subject: Re: [PATCH v2 08/24] kvm: arm64: Add SMC handler in nVHE EL2
In-Reply-To: <20201116204318.63987-9-dbrazdil@google.com>
References: <20201116204318.63987-1-dbrazdil@google.com>
	<20201116204318.63987-9-dbrazdil@google.com>

On Mon, 16 Nov 2020 20:43:02 +0000,
David Brazdil <dbrazdil@google.com> wrote:
> 
> Add handler of host SMCs in KVM nVHE trap handler. Forward all SMCs to
> EL3 and propagate the result back to EL1.
> This is done in preparation
> for validating host SMCs in KVM nVHE protected mode.
> 
> The implementation assumes that firmware uses SMCCC v1.2 or older. That
> means x0-x17 can be used both for arguments and results, other GPRs are
> preserved.
> 
> Signed-off-by: David Brazdil <dbrazdil@google.com>
> ---
>  arch/arm64/kvm/hyp/nvhe/host.S     | 38 ++++++++++++++++++++++++++++++
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c | 26 ++++++++++++++++++++
>  2 files changed, 64 insertions(+)
> 
> diff --git a/arch/arm64/kvm/hyp/nvhe/host.S b/arch/arm64/kvm/hyp/nvhe/host.S
> index ed27f06a31ba..52dae5cd5a28 100644
> --- a/arch/arm64/kvm/hyp/nvhe/host.S
> +++ b/arch/arm64/kvm/hyp/nvhe/host.S
> @@ -183,3 +183,41 @@ SYM_CODE_START(__kvm_hyp_host_vector)
>  	invalid_host_el1_vect			// FIQ 32-bit EL1
>  	invalid_host_el1_vect			// Error 32-bit EL1
>  SYM_CODE_END(__kvm_hyp_host_vector)
> +
> +/*
> + * Forward SMC with arguments in struct kvm_cpu_context, and
> + * store the result into the same struct. Assumes SMCCC 1.2 or older.
> + *
> + * x0: struct kvm_cpu_context*
> + */
> +SYM_CODE_START(__kvm_hyp_host_forward_smc)
> +	/*
> +	 * Use x18 to keep a pointer to the host context because x18
> +	 * is callee-saved in SMCCC but not in AAPCS64.
> +	 */
> +	mov	x18, x0
> +
> +	ldp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
> +	ldp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> +	ldp	x4, x5,   [x18, #CPU_XREG_OFFSET(4)]
> +	ldp	x6, x7,   [x18, #CPU_XREG_OFFSET(6)]
> +	ldp	x8, x9,   [x18, #CPU_XREG_OFFSET(8)]
> +	ldp	x10, x11, [x18, #CPU_XREG_OFFSET(10)]
> +	ldp	x12, x13, [x18, #CPU_XREG_OFFSET(12)]
> +	ldp	x14, x15, [x18, #CPU_XREG_OFFSET(14)]
> +	ldp	x16, x17, [x18, #CPU_XREG_OFFSET(16)]
> +
> +	smc	#0
> +
> +	stp	x0, x1,   [x18, #CPU_XREG_OFFSET(0)]
> +	stp	x2, x3,   [x18, #CPU_XREG_OFFSET(2)]
> +	stp	x4, x5,   [x18, #CPU_XREG_OFFSET(4)]
> +	stp	x6, x7,   [x18, #CPU_XREG_OFFSET(6)]
> +	stp	x8, x9,   [x18, #CPU_XREG_OFFSET(8)]
> +	stp	x10, x11, [x18, #CPU_XREG_OFFSET(10)]
> +	stp	x12, x13, [x18, #CPU_XREG_OFFSET(12)]
> +	stp	x14, x15, [x18, #CPU_XREG_OFFSET(14)]
> +	stp	x16, x17, [x18, #CPU_XREG_OFFSET(16)]

This is going to be really good for CPUs that need to use ARCH_WA1 for
their Spectre-v2 mitigation... :-(

If that's too expensive, we may have to reduce the number of
saved/restored registers, but I'm worried the battle is already lost by
the time we reach this (the host trap path is already a huge hammer).

Eventually, we'll have to insert the mitigation in the vectors anyway,
just like we have on the guest exit path. Boo.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.