* [PATCH v8 0/7] TDX host: kexec/kdump support
@ 2025-09-01 16:09 Paolo Bonzini
2025-09-01 16:09 ` [PATCH 1/7] x86/kexec: Consolidate relocate_kernel() function parameters Paolo Bonzini
` (6 more replies)
0 siblings, 7 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
Currently kexec() support and TDX host are mutually exclusive in
Kconfig. This series adds TDX host kexec support so that both can be
enabled at the same time.
With this series, the user can kexec (including crash kdump) to the new
kernel at any time regardless of whether TDX has been enabled in the
first kernel. One limitation remains: if the first kernel has ever
enabled TDX, the second kernel cannot use TDX for now. Lifting that
restriction is future work on my TODO list.
This series should go in through the tip tree.
Thanks,
Paolo
v7->v8: stub out the new code when kexec is not enabled in the kernel.
Of course even the smallest code change is subject to bikeshedding,
and I chose my preferred color for the bikeshed. But it's pastel
green and I'm sure you'll agree that it's beautiful.
Kai Huang (7):
x86/kexec: Consolidate relocate_kernel() function parameters
x86/sme: Use percpu boolean to control WBINVD during kexec
x86/virt/tdx: Mark memory cache state incoherent when making SEAMCALL
x86/kexec: Disable kexec/kdump on platforms with TDX partial write
erratum
x86/virt/tdx: Remove the !KEXEC_CORE dependency
x86/virt/tdx: Update the kexec section in the TDX documentation
KVM: TDX: Explicitly do WBINVD when no more TDX SEAMCALLs
Documentation/arch/x86/tdx.rst | 14 ++++-----
arch/x86/Kconfig | 1 -
arch/x86/include/asm/kexec.h | 12 ++++++--
arch/x86/include/asm/processor.h | 2 ++
arch/x86/include/asm/tdx.h | 31 +++++++++++++++++++-
arch/x86/kernel/cpu/amd.c | 17 +++++++++++
arch/x86/kernel/machine_kexec_64.c | 44 ++++++++++++++++++++++------
arch/x86/kernel/process.c | 24 +++++++--------
arch/x86/kernel/relocate_kernel_64.S | 36 +++++++++++++++--------
arch/x86/kvm/vmx/tdx.c | 10 +++++++
arch/x86/virt/vmx/tdx/tdx.c | 23 +++++++++++++--
11 files changed, 167 insertions(+), 47 deletions(-)
--
2.51.0
* [PATCH 1/7] x86/kexec: Consolidate relocate_kernel() function parameters
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 2/7] x86/sme: Use percpu boolean to control WBINVD during kexec Paolo Bonzini
` (5 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
During kexec, the kernel jumps to the new kernel in relocate_kernel(),
which is implemented in assembly; 32-bit and 64-bit each have their own
version.
Currently, for both 32-bit and 64-bit, the last two parameters of
relocate_kernel() are both 'unsigned int', but each actually conveys
only a boolean, i.e., one bit of information. A single 'unsigned int'
has enough room to carry both bits, so there is no need to pass the two
booleans as two separate 'unsigned int' arguments.
Consolidate the last two function parameters of relocate_kernel() into a
single 'unsigned int' and pass flags instead.
Only consolidate the 64-bit version, although a similar optimization
could be done for the 32-bit version too. Don't bother changing the
32-bit version while it is working, since an assembly code change would
be required.
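As a minimal C sketch of the pattern (illustration only, not part of
the patch; the names match the new RELOC_KERNEL_* defines in the diff
below, and the two booleans stand in for the former arguments):

    #include <linux/bits.h>

    #define RELOC_KERNEL_PRESERVE_CONTEXT       BIT(0)
    #define RELOC_KERNEL_HOST_MEM_ENC_ACTIVE    BIT(1)

    unsigned int flags = 0;

    /* Caller: pack the two former boolean arguments into one word. */
    if (preserve_context)
            flags |= RELOC_KERNEL_PRESERVE_CONTEXT;
    if (host_mem_enc_active)
            flags |= RELOC_KERNEL_HOST_MEM_ENC_ACTIVE;

    /* Callee: each condition becomes a single bit test. */
    if (flags & RELOC_KERNEL_HOST_MEM_ENC_ACTIVE)
            wbinvd();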
Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kexec.h | 12 ++++++++++--
arch/x86/kernel/machine_kexec_64.c | 22 +++++++++++++---------
arch/x86/kernel/relocate_kernel_64.S | 25 +++++++++++++++----------
3 files changed, 38 insertions(+), 21 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index f2ad77929d6e..12cebbcdb6c8 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -13,6 +13,15 @@
# define KEXEC_DEBUG_EXC_HANDLER_SIZE 6 /* PUSHI, PUSHI, 2-byte JMP */
#endif
+#ifdef CONFIG_X86_64
+
+#include <linux/bits.h>
+
+#define RELOC_KERNEL_PRESERVE_CONTEXT BIT(0)
+#define RELOC_KERNEL_HOST_MEM_ENC_ACTIVE BIT(1)
+
+#endif
+
# define KEXEC_CONTROL_PAGE_SIZE 4096
# define KEXEC_CONTROL_CODE_MAX_SIZE 2048
@@ -121,8 +130,7 @@ typedef unsigned long
relocate_kernel_fn(unsigned long indirection_page,
unsigned long pa_control_page,
unsigned long start_address,
- unsigned int preserve_context,
- unsigned int host_mem_enc_active);
+ unsigned int flags);
#endif
extern relocate_kernel_fn relocate_kernel;
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 8593760c255a..fdd04b5bb70e 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -384,16 +384,10 @@ void __nocfi machine_kexec(struct kimage *image)
{
unsigned long reloc_start = (unsigned long)__relocate_kernel_start;
relocate_kernel_fn *relocate_kernel_ptr;
- unsigned int host_mem_enc_active;
+ unsigned int relocate_kernel_flags;
int save_ftrace_enabled;
void *control_page;
- /*
- * This must be done before load_segments() since if call depth tracking
- * is used then GS must be valid to make any function calls.
- */
- host_mem_enc_active = cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT);
-
#ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
save_processor_state();
@@ -427,6 +421,17 @@ void __nocfi machine_kexec(struct kimage *image)
*/
relocate_kernel_ptr = control_page + (unsigned long)relocate_kernel - reloc_start;
+ relocate_kernel_flags = 0;
+ if (image->preserve_context)
+ relocate_kernel_flags |= RELOC_KERNEL_PRESERVE_CONTEXT;
+
+ /*
+ * This must be done before load_segments() since if call depth tracking
+ * is used then GS must be valid to make any function calls.
+ */
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
+ relocate_kernel_flags |= RELOC_KERNEL_HOST_MEM_ENC_ACTIVE;
+
/*
* The segment registers are funny things, they have both a
* visible and an invisible part. Whenever the visible part is
@@ -443,8 +448,7 @@ void __nocfi machine_kexec(struct kimage *image)
image->start = relocate_kernel_ptr((unsigned long)image->head,
virt_to_phys(control_page),
image->start,
- image->preserve_context,
- host_mem_enc_active);
+ relocate_kernel_flags);
#ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index ea604f4d0b52..26e945f85d19 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -66,8 +66,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
* %rdi indirection_page
* %rsi pa_control_page
* %rdx start address
- * %rcx preserve_context
- * %r8 host_mem_enc_active
+ * %rcx flags: RELOC_KERNEL_*
*/
/* Save the CPU context, used for jumping back */
@@ -111,7 +110,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
/* save indirection list for jumping back */
movq %rdi, pa_backup_pages_map(%rip)
- /* Save the preserve_context to %r11 as swap_pages clobbers %rcx. */
+ /* Save the flags to %r11 as swap_pages clobbers %rcx. */
movq %rcx, %r11
/* setup a new stack at the end of the physical control page */
@@ -129,9 +128,8 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
/*
* %rdi indirection page
* %rdx start address
- * %r8 host_mem_enc_active
* %r9 page table page
- * %r11 preserve_context
+ * %r11 flags: RELOC_KERNEL_*
* %r13 original CR4 when relocate_kernel() was invoked
*/
@@ -204,7 +202,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
* entries that will conflict with the now unencrypted memory
* used by kexec. Flush the caches before copying the kernel.
*/
- testq %r8, %r8
+ testb $RELOC_KERNEL_HOST_MEM_ENC_ACTIVE, %r11b
jz .Lsme_off
wbinvd
.Lsme_off:
@@ -220,7 +218,7 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
movq %cr3, %rax
movq %rax, %cr3
- testq %r11, %r11 /* preserve_context */
+ testb $RELOC_KERNEL_PRESERVE_CONTEXT, %r11b
jnz .Lrelocate
/*
@@ -273,7 +271,13 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
ANNOTATE_NOENDBR
andq $PAGE_MASK, %r8
lea PAGE_SIZE(%r8), %rsp
- movl $1, %r11d /* Ensure preserve_context flag is set */
+ /*
+ * Ensure RELOC_KERNEL_PRESERVE_CONTEXT flag is set so that
+ * swap_pages() can swap pages correctly. Note all other
+ * RELOC_KERNEL_* flags passed to relocate_kernel() are not
+ * restored.
+ */
+ movl $RELOC_KERNEL_PRESERVE_CONTEXT, %r11d
call swap_pages
movq kexec_va_control_page(%rip), %rax
0: addq $virtual_mapped - 0b, %rax
@@ -321,7 +325,7 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
UNWIND_HINT_END_OF_STACK
/*
* %rdi indirection page
- * %r11 preserve_context
+ * %r11 flags: RELOC_KERNEL_*
*/
movq %rdi, %rcx /* Put the indirection_page in %rcx */
xorl %edi, %edi
@@ -357,7 +361,8 @@ SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
movq %rdi, %rdx /* Save destination page to %rdx */
movq %rsi, %rax /* Save source page to %rax */
- testq %r11, %r11 /* Only actually swap for ::preserve_context */
+ /* Only actually swap for ::preserve_context */
+ testb $RELOC_KERNEL_PRESERVE_CONTEXT, %r11b
jz .Lnoswap
/* copy source page to swap page */
--
2.51.0
* [PATCH 2/7] x86/sme: Use percpu boolean to control WBINVD during kexec
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
2025-09-01 16:09 ` [PATCH 1/7] x86/kexec: Consolidate relocate_kernel() function parameters Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 3/7] x86/virt/tdx: Mark memory cache state incoherent when making SEAMCALL Paolo Bonzini
` (4 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
TL;DR:
Prepare to unify how TDX and SME do cache flushing during kexec by
making a percpu boolean control whether to do the WBINVD.
-- Background --
On SME platforms, dirty cacheline aliases with and without the
encryption bit can coexist, and the CPU can flush them back to memory
in random order. During kexec, the caches must be flushed before
jumping to the new kernel, otherwise the dirty cachelines could
silently corrupt the memory used by the new kernel due to their
different encryption properties.
TDX also needs a cache flush during kexec for the same reason. It would
be good to have a generic way to flush the cache instead of scattering
checks for each feature all around.
When SME is enabled, the kernel basically encrypts all memory,
including the kernel itself, and a simple memory write from the kernel
can dirty cachelines. Currently, the kernel uses WBINVD to flush the
cache for SME during kexec in two places:
1) the one in stop_this_cpu() for all remote CPUs when the kexec-ing CPU
stops them;
2) the one in the relocate_kernel() where the kexec-ing CPU jumps to the
new kernel.
-- Solution --
Unlike SME, TDX can only dirty cachelines when it is used (i.e., when
SEAMCALLs are performed). Since there are no more SEAMCALLs after the
aforementioned WBINVDs, leverage this for TDX.
To unify the approach for SME and TDX, use a percpu boolean to indicate
that the cache may be in an incoherent state and needs flushing during
kexec, and set the boolean for SME. TDX can then leverage it.
While SME could use a global flag (since it's enabled at early boot and
enabled on all CPUs), the percpu flag fits TDX better:
The percpu flag can be set when a CPU makes a SEAMCALL, and cleared when
a later WBINVD on that CPU obviates the need for a kexec-time WBINVD.
Avoiding the kexec-time WBINVD is valuable, because there is an existing
race[*] where kexec could proceed while another CPU is still active.
WBINVD could make this race worse, so it's worth skipping when possible.
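A minimal sketch of the percpu-flag pattern described above
(illustration only; the variable name matches the one added by this
patch):

    /* arch/x86/kernel/process.c */
    DEFINE_PER_CPU(bool, cache_state_incoherent);

    /*
     * Producer: any feature that may create incoherent cachelines
     * (SME at CPU bringup, TDX when making a SEAMCALL) sets it:
     */
    this_cpu_write(cache_state_incoherent, true);

    /* Consumer: the kexec-time stop path flushes only when needed: */
    if (this_cpu_read(cache_state_incoherent))
            wbinvd();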
-- Side effect to SME --
Today the first WBINVD, in stop_this_cpu(), is performed when SME is
*supported* by the platform, while the second WBINVD, in
relocate_kernel(), is done only when SME is *activated* by the kernel.
Make things simple by also doing the second WBINVD whenever the
platform supports SME. This allows the kernel to simply turn on this
percpu boolean when bringing up a CPU by checking whether the platform
supports SME.
No other functional change intended.
[*] The aforementioned race:
During kexec, native_stop_other_cpus() is called to stop all remote
CPUs before jumping to the new kernel. native_stop_other_cpus() first
sends normal REBOOT vector IPIs to stop the remote CPUs and waits for
them to stop. If that times out, it sends NMIs to the CPUs that are
still alive. The race happens when native_stop_other_cpus() has to
send NMIs, and it can potentially result in a system hang (for more
information please see [1]).
Link: https://lore.kernel.org/kvm/b963fcd60abe26c7ec5dc20b42f1a2ebbcc72397.1750934177.git.kai.huang@intel.com/ [1]
Signed-off-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/kexec.h | 4 ++--
arch/x86/include/asm/processor.h | 2 ++
arch/x86/kernel/cpu/amd.c | 17 +++++++++++++++++
arch/x86/kernel/machine_kexec_64.c | 14 ++++++++++----
arch/x86/kernel/process.c | 24 +++++++++++-------------
arch/x86/kernel/relocate_kernel_64.S | 13 ++++++++++---
6 files changed, 52 insertions(+), 22 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 12cebbcdb6c8..5cfb27f26583 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -17,8 +17,8 @@
#include <linux/bits.h>
-#define RELOC_KERNEL_PRESERVE_CONTEXT BIT(0)
-#define RELOC_KERNEL_HOST_MEM_ENC_ACTIVE BIT(1)
+#define RELOC_KERNEL_PRESERVE_CONTEXT BIT(0)
+#define RELOC_KERNEL_CACHE_INCOHERENT BIT(1)
#endif
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index bde58f6510ac..a24c7805acdb 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -731,6 +731,8 @@ void __noreturn stop_this_cpu(void *dummy);
void microcode_check(struct cpuinfo_x86 *prev_info);
void store_cpu_caps(struct cpuinfo_x86 *info);
+DECLARE_PER_CPU(bool, cache_state_incoherent);
+
enum l1tf_mitigations {
L1TF_MITIGATION_OFF,
L1TF_MITIGATION_AUTO,
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index a6f88ca1a6b4..5398db4dedb4 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -545,6 +545,23 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
{
u64 msr;
+ /*
+ * Mark that WBINVD is needed during kexec on processors that
+ * support SME. This provides support for performing a successful
+ * kexec when going from SME inactive to SME active (or vice-versa).
+ *
+ * The cache must be cleared so that if there are entries with the
+ * same physical address, both with and without the encryption bit,
+ * they don't race each other when flushed and potentially end up
+ * with the wrong entry being committed to memory.
+ *
+ * Test the CPUID bit directly because with mem_encrypt=off the
+ * BSP will clear the X86_FEATURE_SME bit and the APs will not
+ * see it set after that.
+ */
+ if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+ __this_cpu_write(cache_state_incoherent, true);
+
/*
* BIOS support is required for SME and SEV.
* For SME: If BIOS has enabled SME then adjust x86_phys_bits by
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index fdd04b5bb70e..34c303a92eaf 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -29,6 +29,7 @@
#include <asm/set_memory.h>
#include <asm/cpu.h>
#include <asm/efi.h>
+#include <asm/processor.h>
#ifdef CONFIG_ACPI
/*
@@ -426,11 +427,11 @@ void __nocfi machine_kexec(struct kimage *image)
relocate_kernel_flags |= RELOC_KERNEL_PRESERVE_CONTEXT;
/*
- * This must be done before load_segments() since if call depth tracking
- * is used then GS must be valid to make any function calls.
+ * This must be done before load_segments() since it resets
+ * GS to 0 and percpu data needs the correct GS to work.
*/
- if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
- relocate_kernel_flags |= RELOC_KERNEL_HOST_MEM_ENC_ACTIVE;
+ if (this_cpu_read(cache_state_incoherent))
+ relocate_kernel_flags |= RELOC_KERNEL_CACHE_INCOHERENT;
/*
* The segment registers are funny things, they have both a
@@ -441,6 +442,11 @@ void __nocfi machine_kexec(struct kimage *image)
*
* Take advantage of this here by force loading the segments,
* before the GDT is zapped with an invalid value.
+ *
+ * load_segments() resets GS to 0. Don't make any function call
+ * after here since call depth tracking uses percpu variables to
+ * operate (relocate_kernel() is explicitly ignored by call depth
+ * tracking).
*/
load_segments();
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 1b7960cf6eb0..f2bbbeef5477 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -88,6 +88,16 @@ EXPORT_PER_CPU_SYMBOL(cpu_tss_rw);
DEFINE_PER_CPU(bool, __tss_limit_invalid);
EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);
+/*
+ * The cache may be in an incoherent state and needs flushing during kexec.
+ * E.g., on SME/TDX platforms, dirty cacheline aliases with and without
+ * encryption bit(s) can coexist and the cache needs to be flushed before
+ * booting to the new kernel to avoid the silent memory corruption due to
+ * dirty cachelines with different encryption property being written back
+ * to the memory.
+ */
+DEFINE_PER_CPU(bool, cache_state_incoherent);
+
/*
* this gets called so that we can store lazy state into memory and copy the
* current task into the new thread.
@@ -827,19 +837,7 @@ void __noreturn stop_this_cpu(void *dummy)
disable_local_APIC();
mcheck_cpu_clear(c);
- /*
- * Use wbinvd on processors that support SME. This provides support
- * for performing a successful kexec when going from SME inactive
- * to SME active (or vice-versa). The cache must be cleared so that
- * if there are entries with the same physical address, both with and
- * without the encryption bit, they don't race each other when flushed
- * and potentially end up with the wrong entry being committed to
- * memory.
- *
- * Test the CPUID bit directly because the machine might've cleared
- * X86_FEATURE_SME due to cmdline options.
- */
- if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+ if (this_cpu_read(cache_state_incoherent))
wbinvd();
/*
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 26e945f85d19..11e20bb13aca 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -198,14 +198,21 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
movq %r9, %cr3
/*
+ * If the memory cache is in incoherent state, e.g., due to
+ * memory encryption, do WBINVD to flush cache.
+ *
* If SME is active, there could be old encrypted cache line
* entries that will conflict with the now unencrypted memory
* used by kexec. Flush the caches before copying the kernel.
+ *
+ * Note SME sets this flag to true when the platform supports
+ * SME, so the WBINVD is performed even if SME is not activated
+ * by the kernel. But this does no harm.
*/
- testb $RELOC_KERNEL_HOST_MEM_ENC_ACTIVE, %r11b
- jz .Lsme_off
+ testb $RELOC_KERNEL_CACHE_INCOHERENT, %r11b
+ jz .Lnowbinvd
wbinvd
-.Lsme_off:
+.Lnowbinvd:
call swap_pages
--
2.51.0
* [PATCH 3/7] x86/virt/tdx: Mark memory cache state incoherent when making SEAMCALL
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
2025-09-01 16:09 ` [PATCH 1/7] x86/kexec: Consolidate relocate_kernel() function parameters Paolo Bonzini
2025-09-01 16:09 ` [PATCH 2/7] x86/sme: Use percpu boolean to control WBINVD during kexec Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 4/7] x86/kexec: Disable kexec/kdump on platforms with TDX partial write erratum Paolo Bonzini
` (3 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
On TDX platforms, dirty cacheline aliases with and without the
encryption bits can coexist, and the CPU can flush them back to memory
in random order. During kexec, the caches must be flushed before
jumping to the new kernel, otherwise the dirty cachelines could
silently corrupt the memory used by the new kernel due to their
different encryption properties.
A percpu boolean is used to mark whether the cache of a given CPU may
be in an incoherent state, and kexec performs WBINVD on the CPUs that
have the boolean set.
For TDX, only the TDX module or TDX guests can generate dirty
cachelines of TDX private memory, i.e., they are only generated when
the kernel does a SEAMCALL.
Set that boolean when the kernel does a SEAMCALL so that kexec can
flush the cache correctly.
The kernel provides both the __seamcall*() assembly functions and the
seamcall*() wrappers, which additionally handle the out-of-entropy
error in a retry loop. Most SEAMCALLs are made via seamcall*(), except
TDH.VP.ENTER and TDH.PHYMEM.PAGE.RDMD, which use the __seamcall*()
variants directly.
To cover the two special cases, add a new __seamcall_dirty_cache()
helper that sets the percpu boolean and then calls the given
__seamcall*() variant, and change the special cases to use it. To
cover all other SEAMCALLs, change seamcall*() to call the new helper
as well.
The SEAMCALLs invoked via seamcall*() can be made from both task
context and IRQ-disabled context. Given that a SEAMCALL is just a
lengthy instruction (e.g., thousands of cycles) from the kernel's point
of view, and preempt_{disable|enable}() is cheap compared to it, just
unconditionally disable preemption while setting the boolean and making
the SEAMCALL.
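As a sketch, the resulting ordering looks like this (mirroring the new
__seamcall_dirty_cache() in the diff below); the key point is that the
flag becomes visible before the SEAMCALL can create any dirty
cacheline, so an NMI-driven kexec can never observe a dirty cache with
the flag still clear:

    preempt_disable();
    /*
     * Set the flag first: the kexec-ing CPU may stop this CPU with
     * an NMI at any point after here, and it must already see the
     * flag set when it decides whether to WBINVD.
     */
    this_cpu_write(cache_state_incoherent, true);
    ret = func(fn, args);   /* the actual SEAMCALL */
    preempt_enable();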
Signed-off-by: Kai Huang <kai.huang@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Reviewed-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/tdx.h | 25 ++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.c | 4 ++--
2 files changed, 26 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 57b46f05ff97..c178360c1fb1 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -102,10 +102,31 @@ u64 __seamcall_ret(u64 fn, struct tdx_module_args *args);
u64 __seamcall_saved_ret(u64 fn, struct tdx_module_args *args);
void tdx_init(void);
+#include <linux/preempt.h>
#include <asm/archrandom.h>
+#include <asm/processor.h>
typedef u64 (*sc_func_t)(u64 fn, struct tdx_module_args *args);
+static __always_inline u64 __seamcall_dirty_cache(sc_func_t func, u64 fn,
+ struct tdx_module_args *args)
+{
+ lockdep_assert_preemption_disabled();
+
+ /*
+ * SEAMCALLs are made to the TDX module and can generate dirty
+ * cachelines of TDX private memory. Mark cache state incoherent
+ * so that the cache can be flushed during kexec.
+ *
+ * This needs to be done before actually making the SEAMCALL,
+ * because kexec-ing CPU could send NMI to stop remote CPUs,
+ * in which case even disabling IRQ won't help here.
+ */
+ this_cpu_write(cache_state_incoherent, true);
+
+ return func(fn, args);
+}
+
static __always_inline u64 sc_retry(sc_func_t func, u64 fn,
struct tdx_module_args *args)
{
@@ -113,7 +134,9 @@ static __always_inline u64 sc_retry(sc_func_t func, u64 fn,
u64 ret;
do {
- ret = func(fn, args);
+ preempt_disable();
+ ret = __seamcall_dirty_cache(func, fn, args);
+ preempt_enable();
} while (ret == TDX_RND_NO_ENTROPY && --retry);
return ret;
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 823850399bb7..2abf53ed59c8 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1268,7 +1268,7 @@ static bool paddr_is_tdx_private(unsigned long phys)
return false;
/* Get page type from the TDX module */
- sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
+ sret = __seamcall_dirty_cache(__seamcall_ret, TDH_PHYMEM_PAGE_RDMD, &args);
/*
* The SEAMCALL will not return success unless there is a
@@ -1524,7 +1524,7 @@ noinstr __flatten u64 tdh_vp_enter(struct tdx_vp *td, struct tdx_module_args *ar
{
args->rcx = tdx_tdvpr_pa(td);
- return __seamcall_saved_ret(TDH_VP_ENTER, args);
+ return __seamcall_dirty_cache(__seamcall_saved_ret, TDH_VP_ENTER, args);
}
EXPORT_SYMBOL_GPL(tdh_vp_enter);
--
2.51.0
* [PATCH 4/7] x86/kexec: Disable kexec/kdump on platforms with TDX partial write erratum
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
` (2 preceding siblings ...)
2025-09-01 16:09 ` [PATCH 3/7] x86/virt/tdx: Mark memory cache state incoherent when making SEAMCALL Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 5/7] x86/virt/tdx: Remove the !KEXEC_CORE dependency Paolo Bonzini
` (2 subsequent siblings)
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen, Binbin Wu
From: Kai Huang <kai.huang@intel.com>
Some early TDX-capable platforms have an erratum: a kernel partial
write (a write transaction of less than a cacheline arriving at the
memory controller) to TDX private memory poisons that memory, and a
subsequent read triggers a machine check.
On those platforms, the old kernel must reset TDX private memory before
jumping to the new kernel, otherwise the new kernel may see unexpected
machine checks. Currently the kernel doesn't track which pages are TDX
private pages, so for simplicity just fail kexec/kdump on those
platforms.
Leverage the existing machine_kexec_prepare() to fail kexec/kdump by
adding a check for the presence of the TDX erratum (only performed when
the kernel is built with TDX host support). This rejects kexec/kdump
when the kernel is loading the kexec/kdump kernel image.
The alternative is to reject kexec/kdump when the kernel is jumping to
the new kernel. But for kexec this would require adding a new check
(e.g., arch_kexec_allowed()) in the common code to fail kernel_kexec()
at an early stage. Kdump (crash_kexec()) would need a similar check,
but that is hard to justify because crash_kexec() is not supposed to
abort.
It's feasible to further relax this limitation, i.e., only fail kexec
when TDX is actually enabled by the kernel. But this is still a half
measure compared to resetting TDX private memory, so just do the simplest
thing for now.
The impact to userspace is that users will get an error when loading
the kexec/kdump kernel image:
kexec_load failed: Operation not supported
This might be confusing to users, so also print the reason in dmesg:
[..] kexec: Not allowed on platform with tdx_pw_mce bug.
Signed-off-by: Kai Huang <kai.huang@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/kernel/machine_kexec_64.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 34c303a92eaf..201137b98fb8 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -347,6 +347,22 @@ int machine_kexec_prepare(struct kimage *image)
unsigned long reloc_end = (unsigned long)__relocate_kernel_end;
int result;
+ /*
+ * Some early TDX-capable platforms have an erratum. A kernel
+ * partial write (a write transaction of less than cacheline
+ * lands at memory controller) to TDX private memory poisons that
+ * memory, and a subsequent read triggers a machine check.
+ *
+ * On those platforms the old kernel must reset TDX private
+ * memory before jumping to the new kernel otherwise the new
+ * kernel may see unexpected machine check. For simplicity
+ * just fail kexec/kdump on those platforms.
+ */
+ if (boot_cpu_has_bug(X86_BUG_TDX_PW_MCE)) {
+ pr_info_once("Not allowed on platform with tdx_pw_mce bug\n");
+ return -EOPNOTSUPP;
+ }
+
/* Setup the identity mapped 64bit page table */
result = init_pgtable(image, __pa(control_page));
if (result)
--
2.51.0
* [PATCH 5/7] x86/virt/tdx: Remove the !KEXEC_CORE dependency
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
` (3 preceding siblings ...)
2025-09-01 16:09 ` [PATCH 4/7] x86/kexec: Disable kexec/kdump on platforms with TDX partial write erratum Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 6/7] x86/virt/tdx: Update the kexec section in the TDX documentation Paolo Bonzini
2025-09-01 16:09 ` [PATCH 7/7] KVM: TDX: Explicitly do WBINVD when no more TDX SEAMCALLs Paolo Bonzini
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
During kexec it is now guaranteed that all dirty cachelines of TDX
private memory are flushed before jumping to the new kernel. The TDX
private memory from the old kernel will remain TDX private memory in
the new kernel, but that is OK because kernel reads/writes to TDX
private memory never cause a machine check, except on platforms with
the TDX partial write erratum, which have already been handled.
It is safe to allow kexec to work together with TDX now. Remove the
!KEXEC_CORE dependency.
Signed-off-by: Kai Huang <kai.huang@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 348a193a3ede..217982814bd7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1895,7 +1895,6 @@ config INTEL_TDX_HOST
depends on X86_X2APIC
select ARCH_KEEP_MEMBLOCK
depends on CONTIG_ALLOC
- depends on !KEXEC_CORE
depends on X86_MCE
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
--
2.51.0
* [PATCH 6/7] x86/virt/tdx: Update the kexec section in the TDX documentation
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
` (4 preceding siblings ...)
2025-09-01 16:09 ` [PATCH 5/7] x86/virt/tdx: Remove the !KEXEC_CORE dependency Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
2025-09-01 16:09 ` [PATCH 7/7] KVM: TDX: Explicitly do WBINVD when no more TDX SEAMCALLs Paolo Bonzini
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
TDX host kernel now supports kexec/kdump. Update the documentation to
reflect that.
Opportunistically, remove the parentheses in "Kexec()" and move this
section under the "Erratum" section because the updated "Kexec" section
now refers to that erratum.
Signed-off-by: Kai Huang <kai.huang@intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Documentation/arch/x86/tdx.rst | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/Documentation/arch/x86/tdx.rst b/Documentation/arch/x86/tdx.rst
index 719043cd8b46..61670e7df2f7 100644
--- a/Documentation/arch/x86/tdx.rst
+++ b/Documentation/arch/x86/tdx.rst
@@ -142,13 +142,6 @@ but depends on the BIOS to behave correctly.
Note TDX works with CPU logical online/offline, thus the kernel still
allows to offline logical CPU and online it again.
-Kexec()
-~~~~~~~
-
-TDX host support currently lacks the ability to handle kexec. For
-simplicity only one of them can be enabled in the Kconfig. This will be
-fixed in the future.
-
Erratum
~~~~~~~
@@ -171,6 +164,13 @@ If the platform has such erratum, the kernel prints additional message in
machine check handler to tell user the machine check may be caused by
kernel bug on TDX private memory.
+Kexec
+~~~~~~~
+
+Currently kexec doesn't work on the TDX platforms with the aforementioned
+erratum. It fails when loading the kexec kernel image. Otherwise it
+works normally.
+
Interaction vs S3 and deeper states
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
2.51.0
* [PATCH 7/7] KVM: TDX: Explicitly do WBINVD when no more TDX SEAMCALLs
2025-09-01 16:09 [PATCH v8 0/7] TDX host: kexec/kdump support Paolo Bonzini
` (5 preceding siblings ...)
2025-09-01 16:09 ` [PATCH 6/7] x86/virt/tdx: Update the kexec section in the TDX documentation Paolo Bonzini
@ 2025-09-01 16:09 ` Paolo Bonzini
6 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2025-09-01 16:09 UTC (permalink / raw)
To: linux-kernel, kvm
Cc: dave.hansen, bp, tglx, peterz, mingo, hpa, thomas.lendacky, x86,
kas, rick.p.edgecombe, dwmw, kai.huang, seanjc, reinette.chatre,
isaku.yamahata, dan.j.williams, ashish.kalra, nik.borisov,
chao.gao, sagis, farrah.chen
From: Kai Huang <kai.huang@intel.com>
On TDX platforms, during kexec, the kernel needs to make sure there
are no dirty cachelines of TDX private memory before booting to the
new kernel, to avoid silent memory corruption in the new kernel.
To do this, the kernel has a percpu boolean to indicate whether the
cache of a CPU may be in an incoherent state. During kexec, namely in
stop_this_cpu(), the kernel does WBINVD if that percpu boolean is true.
TDX sets that percpu boolean on a CPU when the kernel does a SEAMCALL,
thus making sure the cache will be flushed during kexec.
However, kexec has a race condition that, while extremely rare, becomes
more likely in the presence of a relatively long operation such as
WBINVD.
In particular, the kexec-ing CPU invokes native_stop_other_cpus()
to stop all remote CPUs before booting to the new kernel.
native_stop_other_cpus() then sends a REBOOT vector IPI to remote CPUs
and waits for them to stop; if that times out, it also sends NMIs to the
still-alive CPUs and waits again for them to stop. If the race happens,
kexec proceeds before all CPUs have processed the NMI and stopped[1],
and the system hangs.
But after tdx_disable_virtualization_cpu(), no more TDX activity can
happen on this CPU. When kexec is enabled, flush the cache explicitly
at that point; this moves the WBINVD to an earlier stage than
stop_this_cpu(), avoiding a possibly lengthy operation at a time when
it could cause this race.
[1] https://lore.kernel.org/kvm/b963fcd60abe26c7ec5dc20b42f1a2ebbcc72397.1750934177.git.kai.huang@intel.com/
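As a rough sketch, the resulting per-CPU flow is (simplified; the
stop_this_cpu() behavior comes from the earlier patches in this
series):

    /* KVM tears down TDX on this CPU ahead of kexec ... */
    tdx_disable_virtualization_cpu()
        -> tdx_flush_vp(...)                  /* last SEAMCALLs    */
        -> tdx_cpu_flush_cache_for_kexec()    /* WBINVD now, clear */
                                              /* the percpu bool   */

    /* ... so later, in stop_this_cpu(), the boolean is already false
     * and the potentially slow WBINVD is skipped, shrinking the
     * window for the native_stop_other_cpus() race.
     */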
Signed-off-by: Kai Huang <kai.huang@intel.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
[Make the new function a stub for !CONFIG_KEXEC_CORE. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
arch/x86/include/asm/tdx.h | 6 ++++++
arch/x86/kvm/vmx/tdx.c | 10 ++++++++++
arch/x86/virt/vmx/tdx/tdx.c | 19 +++++++++++++++++++
3 files changed, 35 insertions(+)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index c178360c1fb1..6120461bd5ff 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -228,5 +228,11 @@ static inline const char *tdx_dump_mce_info(struct mce *m) { return NULL; }
static inline const struct tdx_sys_info *tdx_get_sysinfo(void) { return NULL; }
#endif /* CONFIG_INTEL_TDX_HOST */
+#ifdef CONFIG_KEXEC_CORE
+void tdx_cpu_flush_cache_for_kexec(void);
+#else
+static inline void tdx_cpu_flush_cache_for_kexec(void) { }
+#endif
+
#endif /* !__ASSEMBLER__ */
#endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f457b2e578b2..04b6d332c1af 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -423,6 +423,16 @@ void tdx_disable_virtualization_cpu(void)
tdx_flush_vp(&arg);
}
local_irq_restore(flags);
+
+ /*
+ * Flush cache now if kexec is possible: this is necessary to avoid
+ * having dirty private memory cachelines when the new kernel boots,
+ * but WBINVD is a relatively expensive operation and doing it during
+ * kexec can exacerbate races in native_stop_other_cpus(). Do it
+ * now, since this is a safe moment and there is going to be no more
+ * TDX activity on this CPU from this point on.
+ */
+ tdx_cpu_flush_cache_for_kexec();
}
#define TDX_SEAMCALL_RETRIES 10000
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 2abf53ed59c8..330b560313af 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1872,3 +1872,22 @@ u64 tdh_phymem_page_wbinvd_hkid(u64 hkid, struct page *page)
return seamcall(TDH_PHYMEM_PAGE_WBINVD, &args);
}
EXPORT_SYMBOL_GPL(tdh_phymem_page_wbinvd_hkid);
+
+#ifdef CONFIG_KEXEC_CORE
+void tdx_cpu_flush_cache_for_kexec(void)
+{
+ lockdep_assert_preemption_disabled();
+
+ if (!this_cpu_read(cache_state_incoherent))
+ return;
+
+ /*
+ * Private memory cachelines need to be clean at the time of
+ * kexec. Write them back now, as the caller promises that
+ * there should be no more SEAMCALLs on this CPU.
+ */
+ wbinvd();
+ this_cpu_write(cache_state_incoherent, false);
+}
+EXPORT_SYMBOL_GPL(tdx_cpu_flush_cache_for_kexec);
+#endif
--
2.51.0