From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mark Rutland, Robin Murphy, Will Deacon, Catalin Marinas
Subject: [PATCH 4.15 050/202] [Variant 1/Spectre-v1] arm64: Implement array_index_mask_nospec()
Date: Thu, 15 Feb 2018 16:15:50 +0100
Message-Id: <20180215151715.832275307@linuxfoundation.org>
In-Reply-To: <20180215151712.768794354@linuxfoundation.org>
References: <20180215151712.768794354@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Robin Murphy

Commit 022620eed3d0 upstream.

Provide an optimised, assembly implementation of array_index_mask_nospec()
for arm64 so that the compiler is not in a position to transform the code
in ways which affect its ability to inhibit speculation (e.g. by introducing
conditional branches).

This is similar to the sequence used by x86, modulo architectural
differences in the carry/borrow flags.

Reviewed-by: Mark Rutland
Signed-off-by: Robin Murphy
Signed-off-by: Will Deacon
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm64/include/asm/barrier.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

--- a/arch/arm64/include/asm/barrier.h
+++ b/arch/arm64/include/asm/barrier.h
@@ -41,6 +41,27 @@
 #define dma_rmb()	dmb(oshld)
 #define dma_wmb()	dmb(oshst)
 
+/*
+ * Generate a mask for array_index__nospec() that is ~0UL when 0 <= idx < sz
+ * and 0 otherwise.
+ */
+#define array_index_mask_nospec array_index_mask_nospec
+static inline unsigned long array_index_mask_nospec(unsigned long idx,
+						    unsigned long sz)
+{
+	unsigned long mask;
+
+	asm volatile(
+	"	cmp	%1, %2\n"
+	"	sbc	%0, xzr, xzr\n"
+	: "=r" (mask)
+	: "r" (idx), "Ir" (sz)
+	: "cc");
+
+	csdb();
+	return mask;
+}
+
 #define __smp_mb()	dmb(ish)
 #define __smp_rmb()	dmb(ishld)
 #define __smp_wmb()	dmb(ishst)
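
As a rough illustration of what the patch computes (not part of the patch itself): the
cmp sets the carry flag when idx >= sz, and sbc with xzr then turns that flag into a
mask that is ~0UL when 0 <= idx < sz and 0 otherwise, with csdb() keeping the result
from being undermined by speculation. The userspace sketch below models the same mask
semantics in portable C; the mask_nospec_model() helper, BITS_PER_LONG definition and
demo program are illustrative only and are not the kernel's code.

/* mask_demo.c: userspace model of the array_index_mask_nospec() semantics.
 *
 * The arm64 sequence in the patch is roughly:
 *     cmp  idx, sz        // C = 1 iff idx >= sz (no borrow on AArch64)
 *     sbc  mask, xzr, xzr // mask = C - 1 -> ~0UL if idx < sz, else 0
 * followed by csdb(). The expression below reproduces the same mask without
 * inline assembly (and, of course, without any speculation barrier).
 */
#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)

static inline unsigned long mask_nospec_model(unsigned long idx,
					      unsigned long sz)
{
	/* ~0UL when 0 <= idx < sz, 0 otherwise. */
	return ~(long)(idx | (sz - 1UL - idx)) >> (BITS_PER_LONG - 1);
}

int main(void)
{
	static const int table[4] = { 10, 20, 30, 40 };
	unsigned long sz = 4;

	for (unsigned long idx = 0; idx < 6; idx++) {
		unsigned long mask = mask_nospec_model(idx, sz);

		/* Callers AND the index with the mask, so an out-of-bounds
		 * index can only ever reach element 0, even on a
		 * mispredicted path. */
		unsigned long safe = idx & mask;

		printf("idx=%lu mask=%016lx -> table[%lu]=%d\n",
		       idx, mask, safe, table[safe]);
	}
	return 0;
}

In the kernel, callers do not use the mask directly; they go through the
array_index_nospec() wrapper, which ANDs the index with the mask returned by
array_index_mask_nospec(), so a bounds check that is bypassed speculatively can
only load the first element rather than attacker-controlled memory.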