From: Ard Biesheuvel <ardb@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Cc: Ard Biesheuvel, Russell King, Nicolas Pitre, Arnd Bergmann, Kees Cook,
	Keith Packard, Linus Walleij, Nick Desaulniers, Tony Lindgren
Subject: [PATCH v4 6/7] ARM: entry: rework stack realignment code in svc_entry
Date: Mon, 22 Nov 2021 10:28:15 +0100
Message-Id: <20211122092816.2865873-7-ardb@kernel.org>
In-Reply-To: <20211122092816.2865873-1-ardb@kernel.org>
References: <20211122092816.2865873-1-ardb@kernel.org>

The original Thumb-2 enablement patches updated the stack realignment
code in svc_entry to work around the lack of a STMIB instruction in
Thumb-2, by subtracting 4 from the frame size, inverting the sense of
the misalignment check, and changing to a STMIA instruction and a final
stack push of a 4 byte quantity that results in the stack becoming
aligned at the end of the sequence. It also pushes and pops R0 to the
stack in order to have a temp register that Thumb-2 allows in general
purpose ALU instructions, as TST using SP is not permitted.

Both are a bit problematic for vmap'ed stacks, as using the stack is
only permitted after we decide that we did not overflow the stack, or
have already switched to the overflow stack.

As for the alignment check: the current approach creates a corner case
where, if the initial SUB of SP ends up right at the start of the
stack, we will end up subtracting another 8 bytes and overflowing it.
This means we would need to add the overflow check *after* the SUB
that deliberately misaligns the stack. However, this would require us
to keep local state (i.e., whether we performed the subtract or not)
across the overflow check, but without any GPRs or stack available.

So let's switch to an approach where we don't use the stack, and where
the alignment check of the stack pointer occurs in the usual way, as
this is guaranteed not to result in overflow. This means we will be
able to do the overflow check first.

While at it, switch from R0 to R1, so that the mode stack pointer held
in R0 remains accessible.
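To make the "get SP into a GPR without a temp register" trick easier to
follow, the new Thumb-2 sequence can be modelled in plain user-space C
(a minimal sketch, not kernel code; the input values and function name
are illustrative) to convince yourself that the TST observes the
original SP and that both SP and R1 come out unmodified, without
touching memory:

/*
 * Illustrative model of the new Thumb-2 alignment-check sequence.
 * All arithmetic is modulo 2^32, matching the hardware.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static void model_thumb2_spcheck(uint32_t sp, uint32_t r1)
{
	const uint32_t orig_sp = sp, orig_r1 = r1;
	uint32_t tested;

	sp += r1;		/* add sp, r1                              */
	r1 = sp - r1;		/* sub r1, sp, r1  -- r1 now holds orig SP */
	tested = r1 & 4;	/* tst r1, #4      -- test orig SP's bit 2 */
	r1 = sp - r1;		/* sub r1, sp, r1  -- r1 restored          */
	sp -= r1;		/* sub sp, r1      -- sp restored          */

	assert(tested == (orig_sp & 4));
	assert(sp == orig_sp);
	assert(r1 == orig_r1);
}

int main(void)
{
	model_thumb2_spcheck(0xdfc01f20, 0x12345678);	/* 8-byte aligned SP */
	model_thumb2_spcheck(0xdfc01f24, 0xffffffff);	/* misaligned SP     */
	puts("Thumb-2 SP check model: OK");
	return 0;
}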
Acked-by: Nicolas Pitre
Signed-off-by: Ard Biesheuvel
---
 arch/arm/kernel/entry-armv.S | 25 +++++++++++---------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index ce8ca29461de..b447f7d0708c 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -191,24 +191,27 @@ ENDPROC(__und_invalid)
 	.macro	svc_entry, stack_hole=0, trace=1, uaccess=1
 UNWIND(.fnstart		)
 UNWIND(.save {r0 - pc}		)
-	sub	sp, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
+	sub	sp, sp, #(SVC_REGS_SIZE + \stack_hole)
 #ifdef CONFIG_THUMB2_KERNEL
- SPFIX(	str	r0, [sp]	)	@ temporarily saved
- SPFIX(	mov	r0, sp		)
- SPFIX(	tst	r0, #4		)	@ test original stack alignment
- SPFIX(	ldr	r0, [sp]	)	@ restored
+	add	sp, r1			@ get SP in a GPR without
+	sub	r1, sp, r1		@ using a temp register
+	tst	r1, #4			@ test stack pointer alignment
+	sub	r1, sp, r1		@ restore original R1
+	sub	sp, r1			@ restore original SP
 #else
 SPFIX(	tst	sp, #4	)
 #endif
-SPFIX(	subeq	sp, sp, #4	)
-	stmia	sp, {r1 - r12}
+SPFIX(	subne	sp, sp, #4	)
+
+ ARM(	stmib	sp, {r1 - r12}	)
+ THUMB(	stmia	sp, {r0 - r12}	)	@ No STMIB in Thumb-2

 	ldmia	r0, {r3 - r5}
-	add	r7, sp, #S_SP - 4	@ here for interlock avoidance
+	add	r7, sp, #S_SP		@ here for interlock avoidance
 	mov	r6, #-1			@  ""  ""      ""       ""
-	add	r2, sp, #(SVC_REGS_SIZE + \stack_hole - 4)
-SPFIX(	addeq	r2, r2, #4	)
-	str	r3, [sp, #-4]!		@ save the "real" r0 copied
+	add	r2, sp, #(SVC_REGS_SIZE + \stack_hole)
+SPFIX(	addne	r2, r2, #4	)
+	str	r3, [sp]		@ save the "real" r0 copied
 					@ from the exception stack

 	mov	r3, lr
-- 
2.30.2
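To double-check that dropping the "- 4" from the frame size does not
move the resulting pt_regs frame, the old (subeq/stmia/push) and new
(subne/stmib) ARM-mode sequences can be compared with a small
user-space C model. This is a sketch under assumptions of mine
(stack_hole == 0, SPFIX realignment enabled, 18 saved registers); it is
not kernel code:

/*
 * Model both frame-setup schemes and assert that the final SP (where
 * pt_regs lives, with r0 at offset 0 and r1-r12 at offsets 4..48) and
 * the recorded "old SP" value are identical, and that the final SP is
 * 8-byte aligned.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define SVC_REGS_SIZE	(18 * 4)

struct svc_frame {
	uint32_t sp;		/* final SP after the sequence          */
	uint32_t old_sp;	/* value saved as the pre-exception SP  */
};

static struct svc_frame old_scheme(uint32_t sp0)
{
	uint32_t sp = sp0 - (SVC_REGS_SIZE - 4);  /* sub sp, sp, #(SVC_REGS_SIZE - 4) */
	int eq = (sp & 4) == 0;                   /* tst sp, #4                       */
	if (eq)
		sp -= 4;                          /* subeq sp, sp, #4                 */
	uint32_t old_sp = sp + (SVC_REGS_SIZE - 4) + (eq ? 4 : 0);
	sp -= 4;                                  /* str r3, [sp, #-4]!               */
	return (struct svc_frame){ sp, old_sp };
}

static struct svc_frame new_scheme(uint32_t sp0)
{
	uint32_t sp = sp0 - SVC_REGS_SIZE;        /* sub sp, sp, #SVC_REGS_SIZE       */
	int ne = (sp & 4) != 0;                   /* tst sp, #4                       */
	if (ne)
		sp -= 4;                          /* subne sp, sp, #4                 */
	uint32_t old_sp = sp + SVC_REGS_SIZE + (ne ? 4 : 0);
	return (struct svc_frame){ sp, old_sp };  /* str r3, [sp] leaves SP unchanged */
}

int main(void)
{
	for (uint32_t sp0 = 0x1000; sp0 < 0x1010; sp0 += 4) {
		struct svc_frame o = old_scheme(sp0), n = new_scheme(sp0);

		assert(o.sp == n.sp);          /* pt_regs lands at the same address */
		assert((n.sp & 7) == 0);       /* and is 8-byte aligned             */
		assert(o.old_sp == sp0 && n.old_sp == sp0);
	}
	puts("old and new frame layouts agree");
	return 0;
}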