From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, "Borislav Petkov (AMD)", Ingo Molnar, stable@kernel.org, Linus Torvalds
Subject: [PATCH 5.10 238/294] x86/bugs: Fix the SRSO mitigation on Zen3/4
Date: Thu, 11 Apr 2024 11:56:41 +0200
Message-ID: <20240411095442.740587176@linuxfoundation.org>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240411095435.633465671@linuxfoundation.org>
References: <20240411095435.633465671@linuxfoundation.org>
User-Agent: quilt/0.67
X-stable: review
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: "Borislav Petkov (AMD)"

Commit 4535e1a4174c4111d92c5a9a21e542d232e0fcaa upstream.

The original version of the mitigation would patch in the calls to the
untraining routines directly.  That is, the alternative() in UNTRAIN_RET
will patch in the CALL to srso_alias_untrain_ret() directly.

However, even if commit e7c25c441e9e ("x86/cpu: Cleanup the untrain
mess") meant well in trying to clean up the situation, due to
micro-architectural reasons, the untraining routine
srso_alias_untrain_ret() must be the target of a CALL instruction and
not of a JMP instruction as it is done now.

Reshuffle the alternative macros to accomplish that.
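To make the change concrete, here is a minimal, purely illustrative C
sketch of which instruction the reshuffled CALL_UNTRAIN_RET alternative
leaves patched in at the entry site for the CPU feature flags named in
the diff below.  This is an editor's illustration, not kernel code: the
cpu_flags struct, the untrain_insn() helper and the precedence between
the two replacements are assumptions made for the sketch; only the
feature-flag and routine names come from the patch.

#include <stdio.h>
#include <stdbool.h>

/*
 * Editor's sketch, not kernel code.  The real selection happens at boot,
 * when the alternatives machinery patches the entry site based on the
 * CPU feature flags modelled below.
 */
struct cpu_flags {
	bool unret;		/* X86_FEATURE_UNRET      (Zen1/2, retbleed) */
	bool srso_alias;	/* X86_FEATURE_SRSO_ALIAS (Zen3/4, SRSO)     */
};

static const char *untrain_insn(struct cpu_flags f)
{
	/*
	 * Mirrors: ALTERNATIVE_2 "", "call entry_untrain_ret", X86_FEATURE_UNRET,
	 *                            "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
	 * Assumption: the replacement listed last wins if both flags were set.
	 */
	if (f.srso_alias)
		return "call srso_alias_untrain_ret";	/* direct CALL: the fix */
	if (f.unret)
		return "call entry_untrain_ret";
	return "(nops)";				/* unaffected CPUs */
}

int main(void)
{
	printf("Zen3/4:     %s\n", untrain_insn((struct cpu_flags){ .srso_alias = true }));
	printf("Zen1/2:     %s\n", untrain_insn((struct cpu_flags){ .unret = true }));
	printf("unaffected: %s\n", untrain_insn((struct cpu_flags){ 0 }));
	return 0;
}

The point of the fix is the first case: on Zen3/4 the patched-in
instruction is now a direct CALL to srso_alias_untrain_ret(), instead of
a CALL to the entry_untrain_ret() wrapper that then reaches it via JMP.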
Fixes: e7c25c441e9e ("x86/cpu: Cleanup the untrain mess")
Signed-off-by: Borislav Petkov (AMD)
Reviewed-by: Ingo Molnar
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/asm-prototypes.h |  1 +
 arch/x86/include/asm/nospec-branch.h  | 20 ++++++++++++++------
 arch/x86/lib/retpoline.S              |  4 +---
 3 files changed, 16 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/asm-prototypes.h
+++ b/arch/x86/include/asm/asm-prototypes.h
@@ -12,6 +12,7 @@
 #include <asm/special_insns.h>
 #include <asm/preempt.h>
 #include <asm/asm.h>
+#include <asm/nospec-branch.h>
 
 #ifndef CONFIG_X86_CMPXCHG64
 extern void cmpxchg8b_emu(void);
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -155,11 +155,20 @@
 .Lskip_rsb_\@:
 .endm
 
+/*
+ * The CALL to srso_alias_untrain_ret() must be patched in directly at
+ * the spot where untraining must be done, ie., srso_alias_untrain_ret()
+ * must be the target of a CALL instruction instead of indirectly
+ * jumping to a wrapper which then calls it. Therefore, this macro is
+ * called outside of __UNTRAIN_RET below, for the time being, before the
+ * kernel can support nested alternatives with arbitrary nesting.
+ */
+.macro CALL_UNTRAIN_RET
 #ifdef CONFIG_CPU_UNRET_ENTRY
-#define CALL_UNTRAIN_RET	"call entry_untrain_ret"
-#else
-#define CALL_UNTRAIN_RET	""
+	ALTERNATIVE_2 "", "call entry_untrain_ret", X86_FEATURE_UNRET, \
+		      "call srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
 #endif
+.endm
 
 /*
  * Mitigate RETBleed for AMD/Hygon Zen uarch. Requires KERNEL CR3 because the
@@ -176,9 +185,8 @@
 #if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
 	defined(CONFIG_CPU_SRSO)
 	ANNOTATE_UNRET_END
-	ALTERNATIVE_2 "", \
-		      CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
-		      "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
+	CALL_UNTRAIN_RET
+	ALTERNATIVE "", "call entry_ibpb", X86_FEATURE_ENTRY_IBPB
 #endif
 .endm
 
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -249,9 +249,7 @@ SYM_CODE_START(srso_return_thunk)
 SYM_CODE_END(srso_return_thunk)
 
 SYM_FUNC_START(entry_untrain_ret)
-	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+	ALTERNATIVE "jmp retbleed_untrain_ret", "jmp srso_untrain_ret", X86_FEATURE_SRSO
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)
 
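As a follow-up illustration of the retpoline.S hunk (again an editor's
sketch, not kernel code; entry_untrain_ret_dispatch() is an invented
name), entry_untrain_ret() is left dispatching only between the retbleed
and the non-alias SRSO untraining routines, because the SRSO_ALIAS case
is now patched in directly at the call site by CALL_UNTRAIN_RET:

#include <stdio.h>
#include <stdbool.h>

/* Mirrors: ALTERNATIVE "jmp retbleed_untrain_ret",
 *                      "jmp srso_untrain_ret", X86_FEATURE_SRSO */
static const char *entry_untrain_ret_dispatch(bool srso)
{
	return srso ? "jmp srso_untrain_ret"		/* X86_FEATURE_SRSO set */
		    : "jmp retbleed_untrain_ret";	/* default (retbleed)   */
}

int main(void)
{
	printf("SRSO:      %s\n", entry_untrain_ret_dispatch(true));
	printf("otherwise: %s\n", entry_untrain_ret_dispatch(false));
	return 0;
}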