From mboxrd@z Thu Jan 1 00:00:00 1970
From: Chao Gao
To: linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev, kvm@vger.kernel.org
Cc: binbin.wu@linux.intel.com, dan.j.williams@intel.com, dave.hansen@linux.intel.com,
    ira.weiny@intel.com, kai.huang@intel.com, kas@kernel.org, nik.borisov@suse.com,
    paulmck@kernel.org, pbonzini@redhat.com, reinette.chatre@intel.com,
    rick.p.edgecombe@intel.com, sagis@google.com, seanjc@google.com,
    tony.lindgren@linux.intel.com, vannapurve@google.com, vishal.l.verma@intel.com,
    yilun.xu@linux.intel.com, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
    Chao Gao, Zhenzhong Duan, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
    x86@kernel.org, "H.
Peter Anvin"
Subject: [PATCH v7 01/22] x86/virt/tdx: Move low level SEAMCALL helpers out of <asm/tdx.h>
Date: Tue, 31 Mar 2026 05:41:14 -0700
Message-ID: <20260331124214.117808-2-chao.gao@intel.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260331124214.117808-1-chao.gao@intel.com>
References: <20260331124214.117808-1-chao.gao@intel.com>
Precedence: bulk
X-Mailing-List: linux-coco@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kai Huang

TDX host core code implements three seamcall*() helpers to make SEAMCALLs
to the TDX module. Currently, they are implemented in <asm/tdx.h> and are
therefore exposed to all other kernel code which includes <asm/tdx.h>.
However, other than the TDX host core, seamcall*() are not expected to be
used by other kernel code directly. For instance, for all SEAMCALLs that
are used by KVM, the TDX host core exports a wrapper function for each of
them.

Move seamcall*() and related code out of <asm/tdx.h> and make them visible
only to the TDX host core. Since the TDX host core tdx.c is already very
heavy, don't put the low level seamcall*() code there but in a new
dedicated "seamcall_internal.h". Also, tdx.c currently has
seamcall_prerr*() helpers which additionally print an error message when
calling seamcall*() fails. Move them to "seamcall_internal.h" as well.
This way, all low level SEAMCALL helpers live in one dedicated place,
which is much more readable.

Copy the copyright notice from the original files and consolidate the
date ranges to:

    Copyright (C) 2021-2023 Intel Corporation

Signed-off-by: Kai Huang
Signed-off-by: Chao Gao
Reviewed-by: Zhenzhong Duan
Reviewed-by: Binbin Wu
Reviewed-by: Tony Lindgren
Reviewed-by: Kiryl Shutsemau (Meta)
Reviewed-by: Xiaoyao Li
Acked-by: Dave Hansen
---
v5:
- s/seamcall.h/seamcall_internal.h [Binbin]
- Fix an unintentional change to sc_retry() during code movement.
v4:
- Collect reviews
- add "internal" to the new header file [Dave]
- document the scope of the new header file [Dave]
- correct the copyright notice [Dave]
v2:
- new
---
 arch/x86/include/asm/tdx.h                |  47 ----------
 arch/x86/virt/vmx/tdx/seamcall_internal.h | 109 ++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/tdx.c               |  47 +---------
 3 files changed, 111 insertions(+), 92 deletions(-)
 create mode 100644 arch/x86/virt/vmx/tdx/seamcall_internal.h

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6b338d7f01b7..cb2219302dfc 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -97,54 +97,7 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
 #endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
 
 #ifdef CONFIG_INTEL_TDX_HOST
-u64 __seamcall(u64 fn, struct tdx_module_args *args);
-u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
-u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
 void tdx_init(void);
-
-#include
-#include
-#include
-
-typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
-
-static __always_inline u64 __seamcall_dirty_cache(sc_func_t func, u64 fn,
-						  struct tdx_module_args *args)
-{
-	lockdep_assert_preemption_disabled();
-
-	/*
-	 * SEAMCALLs are made to the TDX module and can generate dirty
-	 * cachelines of TDX private memory. Mark cache state incoherent
-	 * so that the cache can be flushed during kexec.
-	 *
-	 * This needs to be done before actually making the SEAMCALL,
-	 * because kexec-ing CPU could send NMI to stop remote CPUs,
-	 * in which case even disabling IRQ won't help here.
-	 */
-	this_cpu_write(cache_state_incoherent, true);
-
-	return func(fn, args);
-}
-
-static __always_inline u64 sc_retry(sc_func_t func, u64 fn,
-				    struct tdx_module_args *args)
-{
-	int retry = RDRAND_RETRY_LOOPS;
-	u64 ret;
-
-	do {
-		preempt_disable();
-		ret = __seamcall_dirty_cache(func, fn, args);
-		preempt_enable();
-	} while (ret == TDX_RND_NO_ENTROPY && --retry);
-
-	return ret;
-}
-
-#define seamcall(_fn, _args)		sc_retry(__seamcall, (_fn), (_args))
-#define seamcall_ret(_fn, _args)	sc_retry(__seamcall_ret, (_fn), (_args))
-#define seamcall_saved_ret(_fn, _args)	sc_retry(__seamcall_saved_ret, (_fn), (_args))
 int tdx_cpu_enable(void);
 int tdx_enable(void);
 const char *tdx_dump_mce_info(struct mce *m);
diff --git a/arch/x86/virt/vmx/tdx/seamcall_internal.h b/arch/x86/virt/vmx/tdx/seamcall_internal.h
new file mode 100644
index 000000000000..be5f446467df
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/seamcall_internal.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * SEAMCALL utilities for TDX host-side operations.
+ *
+ * Provides convenient wrappers around SEAMCALL assembly with retry logic,
+ * error reporting and cache coherency tracking.
+ *
+ * Copyright (C) 2021-2023 Intel Corporation
+ */
+
+#ifndef _X86_VIRT_SEAMCALL_INTERNAL_H
+#define _X86_VIRT_SEAMCALL_INTERNAL_H
+
+#include
+#include
+#include
+#include
+#include
+
+u64 __seamcall(u64 fn, struct tdx_module_args *args);
+u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
+u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
+
+typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
+
+static __always_inline u64 __seamcall_dirty_cache(sc_func_t func, u64 fn,
+						  struct tdx_module_args *args)
+{
+	lockdep_assert_preemption_disabled();
+
+	/*
+	 * SEAMCALLs are made to the TDX module and can generate dirty
+	 * cachelines of TDX private memory. Mark cache state incoherent
+	 * so that the cache can be flushed during kexec.
+	 *
+	 * This needs to be done before actually making the SEAMCALL,
+	 * because kexec-ing CPU could send NMI to stop remote CPUs,
+	 * in which case even disabling IRQ won't help here.
+	 */
+	this_cpu_write(cache_state_incoherent, true);
+
+	return func(fn, args);
+}
+
+static __always_inline u64 sc_retry(sc_func_t func, u64 fn,
+				    struct tdx_module_args *args)
+{
+	int retry = RDRAND_RETRY_LOOPS;
+	u64 ret;
+
+	do {
+		preempt_disable();
+		ret = __seamcall_dirty_cache(func, fn, args);
+		preempt_enable();
+	} while (ret == TDX_RND_NO_ENTROPY && --retry);
+
+	return ret;
+}
+
+#define seamcall(_fn, _args)		sc_retry(__seamcall, (_fn), (_args))
+#define seamcall_ret(_fn, _args)	sc_retry(__seamcall_ret, (_fn), (_args))
+#define seamcall_saved_ret(_fn, _args)	sc_retry(__seamcall_saved_ret, (_fn), (_args))
+
+typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
+
+static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
+{
+	pr_err("SEAMCALL (0x%016llx) failed: 0x%016llx\n", fn, err);
+}
+
+static inline void seamcall_err_ret(u64 fn, u64 err,
+				    struct tdx_module_args *args)
+{
+	seamcall_err(fn, err, args);
+	pr_err("RCX 0x%016llx RDX 0x%016llx R08 0x%016llx\n",
+	       args->rcx, args->rdx, args->r8);
+	pr_err("R09 0x%016llx R10 0x%016llx R11 0x%016llx\n",
+	       args->r9, args->r10, args->r11);
+}
+
+static __always_inline int sc_retry_prerr(sc_func_t func,
+					  sc_err_func_t err_func,
+					  u64 fn, struct tdx_module_args *args)
+{
+	u64 sret = sc_retry(func, fn, args);
+
+	if (sret == TDX_SUCCESS)
+		return 0;
+
+	if (sret == TDX_SEAMCALL_VMFAILINVALID)
+		return -ENODEV;
+
+	if (sret == TDX_SEAMCALL_GP)
+		return -EOPNOTSUPP;
+
+	if (sret == TDX_SEAMCALL_UD)
+		return -EACCES;
+
+	err_func(fn, sret, args);
+	return -EIO;
+}
+
+#define seamcall_prerr(__fn, __args) \
+	sc_retry_prerr(__seamcall, seamcall_err, (__fn), (__args))
+
+#define seamcall_prerr_ret(__fn, __args) \
+	sc_retry_prerr(__seamcall_ret, seamcall_err_ret, (__fn), (__args))
+
+#endif
/* _X86_VIRT_SEAMCALL_INTERNAL_H */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 8b8e165a2001..06d9709ade85 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -39,6 +39,8 @@
 #include
 #include
 #include
+
+#include "seamcall_internal.h"
 #include "tdx.h"
 
 static u32 tdx_global_keyid __ro_after_init;
@@ -59,51 +61,6 @@ static LIST_HEAD(tdx_memlist);
 
 static struct tdx_sys_info tdx_sysinfo;
 
-typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
-
-static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
-{
-	pr_err("SEAMCALL (0x%016llx) failed: 0x%016llx\n", fn, err);
-}
-
-static inline void seamcall_err_ret(u64 fn, u64 err,
-				    struct tdx_module_args *args)
-{
-	seamcall_err(fn, err, args);
-	pr_err("RCX 0x%016llx RDX 0x%016llx R08 0x%016llx\n",
-	       args->rcx, args->rdx, args->r8);
-	pr_err("R09 0x%016llx R10 0x%016llx R11 0x%016llx\n",
-	       args->r9, args->r10, args->r11);
-}
-
-static __always_inline int sc_retry_prerr(sc_func_t func,
-					  sc_err_func_t err_func,
-					  u64 fn, struct tdx_module_args *args)
-{
-	u64 sret = sc_retry(func, fn, args);
-
-	if (sret == TDX_SUCCESS)
-		return 0;
-
-	if (sret == TDX_SEAMCALL_VMFAILINVALID)
-		return -ENODEV;
-
-	if (sret == TDX_SEAMCALL_GP)
-		return -EOPNOTSUPP;
-
-	if (sret == TDX_SEAMCALL_UD)
-		return -EACCES;
-
-	err_func(fn, sret, args);
-	return -EIO;
-}
-
-#define seamcall_prerr(__fn, __args) \
-	sc_retry_prerr(__seamcall, seamcall_err, (__fn), (__args))
-
-#define seamcall_prerr_ret(__fn, __args) \
-	sc_retry_prerr(__seamcall_ret, seamcall_err_ret, (__fn), (__args))
-
 /*
  * Do the module global initialization once and return its result.
  * It can be done on any cpu. It's always called with interrupts
-- 
2.47.3