From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
	"H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
	Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu, Haitao Huang
Cc: Yu-cheng Yu, Jarkko Sakkinen
Subject: [PATCH v27 10/10] x86/vdso: Add ENDBR to __vdso_sgx_enter_enclave
Date: Fri, 21 May 2021 15:15:31 -0700
Message-Id: <20210521221531.30168-11-yu-cheng.yu@intel.com>
In-Reply-To: <20210521221531.30168-1-yu-cheng.yu@intel.com>
References: <20210521221531.30168-1-yu-cheng.yu@intel.com>

ENDBR is a new instruction used by the Indirect Branch Tracking (IBT)
component of CET.  IBT prevents attacks by ensuring that (most) indirect
branches and function calls may only land at ENDBR instructions.  Branches
that break the rules raise a control-protection (#CP) exception.  ENDBR is
a no-op when IBT is unsupported or disabled.

Most ENDBR instructions are inserted automatically by the compiler, but
branch targets written in assembly must have ENDBR added manually.

Add ENDBR to __vdso_sgx_enter_enclave() branch targets.
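
[Illustration, not part of the patch: with IBT enforced, any code reached
through an indirect CALL or JMP must begin with an ENDBR landing pad, or the
branch tracker raises #CP.  The minimal GNU assembler sketch below is
hypothetical -- the symbol name is invented, and the bare endbr64 mnemonic
stands in for the kernel's ENDBR64 macro used by this patch.]

	.text
	.globl	example_callback
	.type	example_callback, @function
example_callback:			/* reached via an indirect call, e.g. "call *%rax" */
	endbr64				/* valid IBT landing pad; executes as a NOP without IBT */
	xor	%eax, %eax		/* return 0 */
	ret
	.size	example_callback, .-example_callback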
Signed-off-by: Yu-cheng Yu
Reviewed-by: Kees Cook
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Jarkko Sakkinen
Cc: Peter Zijlstra
---
 arch/x86/entry/vdso/vsgx.S | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/entry/vdso/vsgx.S b/arch/x86/entry/vdso/vsgx.S
index 99dafac992e2..c7cb85d57b3f 100644
--- a/arch/x86/entry/vdso/vsgx.S
+++ b/arch/x86/entry/vdso/vsgx.S
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include
 #include "extable.h"
@@ -27,6 +28,7 @@ SYM_FUNC_START(__vdso_sgx_enter_enclave)
 	/* Prolog */
 	.cfi_startproc
+	ENDBR64
 	push	%rbp
 	.cfi_adjust_cfa_offset	8
 	.cfi_rel_offset		%rbp, 0
@@ -62,6 +64,7 @@ SYM_FUNC_START(__vdso_sgx_enter_enclave)
 .Lasync_exit_pointer:
 .Lenclu_eenter_eresume:
 	enclu
+	ENDBR64
 	/* EEXIT jumps here unless the enclave is doing something fancy. */
 	mov	SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
@@ -91,6 +94,7 @@ SYM_FUNC_START(__vdso_sgx_enter_enclave)
 	jmp	.Lout
 .Lhandle_exception:
+	ENDBR64
 	mov	SGX_ENCLAVE_OFFSET_OF_RUN(%rbp), %rbx
 	/* Set the exception info. */
-- 
2.21.0
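
[Illustration, not part of the patch: the entry-point ENDBR64 matters because
__vdso_sgx_enter_enclave() is not called by name -- user space locates it in
the vDSO at run time and reaches it through a function pointer, i.e. an
indirect CALL.  The two ENDBR64s in the body mark the other landing points
that are not reached by ordinary direct branches: the instruction after ENCLU,
where the enclave's EEXIT lands, and the .Lhandle_exception fixup target.  A
hypothetical caller sketch follows; the pointer slot name is invented and
argument setup is omitted.]

	/* Address obtained earlier by a vDSO symbol lookup (hypothetical slot). */
	mov	vdso_sgx_enter_enclave_ptr(%rip), %rax
	/* Arguments per the __vdso_sgx_enter_enclave() prototype are set up here. */
	call	*%rax			/* indirect CALL: with IBT it must land on ENDBR64 */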