public inbox for stable@vger.kernel.org
* [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-02 10:22 [PATCH] " Kai Huang
@ 2026-03-02 10:22 ` Kai Huang
  2026-03-02 10:26   ` Huang, Kai
                     ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Kai Huang @ 2026-03-02 10:22 UTC (permalink / raw)
  To: dave.hansen, pbonzini, seanjc, kas
  Cc: rick.p.edgecombe, tglx, bp, mingo, x86, hpa, linux-kernel,
	Kai Huang, stable, Vishal Verma

TDX can leave the cache in an incoherent state for the memory it uses.
During kexec the kernel does a WBINVD for each CPU before memory gets
reused in the second kernel.

There were two considerations for where this WBINVD should happen.  In
order to handle cases where the cache might become incoherent during the
early stages of kexec, it needs to be done late in the kexec path, when
the kexecing CPU stops all remote CPUs.  However, the late kexec process
is sensitive to existing races.  So to avoid perturbing that operation,
it is better to do it earlier.

The existing solution is to track the need for the kexec-time WBINVD
generically (i.e., not just for TDX) in a per-cpu var.  The late
invocation only happens if the earlier TDX-specific logic in
tdx_cpu_flush_cache_for_kexec() didn't take care of the work.  This
earlier WBINVD logic was built into KVM's existing syscore ops
shutdown() handler, which is called earlier in the kexec path.

However, this accidentally added it to KVM's unload path as well (and
to the "error path" when bringing up TDX during KVM module load), which
uses the same internal functions.  This makes some sense too: if KVM is
being unloaded, TDX cache-affecting operations will likely cease, so it
is a reasonable point to do the work before KVM is unloaded and no
longer has a chance to handle the shutdown operation.

Unfortunately this KVM unload invocation triggers a lockdep warning in
tdx_cpu_flush_cache_for_kexec().  Since tdx_cpu_flush_cache_for_kexec()
is doing WBINVD on a specific CPU, it has an assert for preemption being
disabled.  This works fine for the kexec time invocation, but the KVM
unload path calls this as part of a CPUHP callback for which, despite
always executing on the target CPU, preemption is not disabled.

It might be better to add the earlier invocation logic to a dedicated
arch/x86 TDX syscore shutdown() handler, but to make the fix more
backport-friendly just adjust the lockdep assert in
tdx_cpu_flush_cache_for_kexec().

The real requirement is that tdx_cpu_flush_cache_for_kexec() must run
on the same CPU.  It is OK for it to be preempted in the middle as long
as it won't be migrated to another CPU.

Remove the too-strong lockdep_assert_preemption_disabled(), and change
this_cpu_{read|write}() to __this_cpu_{read|write}(), which provide a
more appropriate check (when CONFIG_DEBUG_PREEMPT is enabled): they
verify that the context cannot be migrated to another CPU in the middle
of the operation.

Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
Cc: stable@vger.kernel.org
Reported-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
Tested-by: Vishal Verma <vishal.l.verma@intel.com>
---

v1 -> v2:
  - Improve changelog as discussed in v1.
  - Also mention this can be triggered in the "error path" in changelog.

Hi Rick,

Are you OK with sending this patch out to public, or do you have more
comments?

-- below is for public --

Hi Dave, Paolo, Sean,

/facepalm.

This was recently reported by Vishal.  Sorry that I forgot to test
module unloading (I was too focused on the kexec path, which doesn't
have this issue).  This wasn't caught by our CI because there's no such
test case in CI.  We are adding one now so it will be covered in the
future.

---
 arch/x86/virt/vmx/tdx/tdx.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 8b8e165a2001..6f6be1df4b78 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1872,9 +1872,7 @@ EXPORT_SYMBOL_FOR_KVM(tdh_phymem_page_wbinvd_hkid);
 #ifdef CONFIG_KEXEC_CORE
 void tdx_cpu_flush_cache_for_kexec(void)
 {
-	lockdep_assert_preemption_disabled();
-
-	if (!this_cpu_read(cache_state_incoherent))
+	if (!__this_cpu_read(cache_state_incoherent))
 		return;
 
 	/*
@@ -1883,7 +1881,7 @@ void tdx_cpu_flush_cache_for_kexec(void)
 	 * there should be no more SEAMCALLs on this CPU.
 	 */
 	wbinvd();
-	this_cpu_write(cache_state_incoherent, false);
+	__this_cpu_write(cache_state_incoherent, false);
 }
 EXPORT_SYMBOL_FOR_KVM(tdx_cpu_flush_cache_for_kexec);
 #endif

base-commit: 7dff99b354601dd01829e1511711846e04340a69
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-02 10:22 ` [PATCH v2] " Kai Huang
@ 2026-03-02 10:26   ` Huang, Kai
  2026-03-05 18:33   ` Nikolay Borisov
  2026-03-10 13:43   ` Sean Christopherson
  2 siblings, 0 replies; 10+ messages in thread
From: Huang, Kai @ 2026-03-02 10:26 UTC (permalink / raw)
  To: pbonzini@redhat.com, kas@kernel.org, seanjc@google.com,
	dave.hansen@linux.intel.com
  Cc: Edgecombe, Rick P, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	mingo@redhat.com, linux-kernel@vger.kernel.org, Verma, Vishal L,
	tglx@kernel.org, stable@vger.kernel.org

> 
> v1 -> v2:
>   - Improve changelog as discussed in v1.
> 

Apologies, this is the internal version that I forgot to delete from my
tree, and it was sent out by accident.

Please ignore this.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-02 10:22 ` [PATCH v2] " Kai Huang
  2026-03-02 10:26   ` Huang, Kai
@ 2026-03-05 18:33   ` Nikolay Borisov
  2026-03-05 21:35     ` Huang, Kai
  2026-03-10 13:43   ` Sean Christopherson
  2 siblings, 1 reply; 10+ messages in thread
From: Nikolay Borisov @ 2026-03-05 18:33 UTC (permalink / raw)
  To: Kai Huang, dave.hansen, pbonzini, seanjc, kas
  Cc: rick.p.edgecombe, tglx, bp, mingo, x86, hpa, linux-kernel, stable,
	Vishal Verma



On 2.03.26 г. 12:22 ч., Kai Huang wrote:
> TDX can leave the cache in an incoherent state for the memory it uses.
> During kexec the kernel does a WBINVD for each CPU before memory gets
> reused in the second kernel.
> 
> There were two considerations for where this WBINVD should happen.  In
> order to handle cases where the cache might get into an incoherent state
> while the kexec is in the initial stages, it is needed to do this later
> in the kexec path, when the kexecing CPU stops all remote CPUs.  However,
> the later kexec process is sensitive to existing races.  So to avoid
> perturbing that operation, it is better to do it earlier.
> 
> The existing solution is to track the need for the kexec time WBINVD
> generically (i.e., not just for TDX) in a per-cpu var.  The late
> invocation only happens if the earlier TDX specific logic in
> tdx_cpu_flush_cache_for_kexec() didn’t take care of the work.  This
> earlier WBINVD logic was built into KVM’s existing syscore ops shutdown()
> handler, which is called earlier in the kexec path.
> 
> However, this accidentally added it to KVM’s unload path as well (also
> the "error path" when bringing up TDX during KVM module load), which
> uses the same internal functions.  This makes some sense too, though,
> because if KVM is getting unloaded, TDX cache affecting operations will
> likely cease.  So it is a good point to do the work before KVM is
> unloaded and won't have a chance to handle the shutdown operation in the
> future.
> 
> Unfortunately this KVM unload invocation triggers a lockdep warning in
> tdx_cpu_flush_cache_for_kexec().  Since tdx_cpu_flush_cache_for_kexec()
> is doing WBINVD on a specific CPU, it has an assert for preemption being
> disabled.  This works fine for the kexec time invocation, but the KVM
> unload path calls this as part of a CPUHP callback for which, despite
> always executing on the target CPU, preemption is not disabled.
> 
> It might be better to add the earlier invocation logic to a dedicated
> arch/x86 TDX syscore shutdown() handler, but to make the fix more
> backport friendly just adjust the lockdep assert in the
> tdx_cpu_flush_cache_for_kexec().
> 
> The real requirement is tdx_cpu_flush_cache_for_kexec() must be done on
> the same CPU.  It's OK that it can be preempted in the middle as long as
> it won't be rescheduled to another CPU.

TLDR: It wants migration disabled.

> 
> Remove the too strong lockdep_assert_preemption_disabled(), and change
> this_cpu_{read|write}() to __this_cpu_{read|write}() which provide the more
> proper check (when CONFIG_DEBUG_PREEMPT is true), which checks all
> conditions that the context cannot be moved to another CPU to run in the
> middle.
> 
> Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> Cc: stable@vger.kernel.org
> Reported-by: Vishal Verma <vishal.l.verma@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Tested-by: Vishal Verma <vishal.l.verma@intel.com>


So how exactly does this patch prevent the BUG: printk in 
check_preemption_disabled from triggering, if the lockdep assert was 
triggering?

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-05 18:33   ` Nikolay Borisov
@ 2026-03-05 21:35     ` Huang, Kai
  2026-03-06  9:58       ` Nikolay Borisov
  0 siblings, 1 reply; 10+ messages in thread
From: Huang, Kai @ 2026-03-05 21:35 UTC (permalink / raw)
  To: kas@kernel.org, pbonzini@redhat.com, nik.borisov@suse.com,
	seanjc@google.com, dave.hansen@linux.intel.com
  Cc: Edgecombe, Rick P, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	mingo@redhat.com, linux-kernel@vger.kernel.org, Verma, Vishal L,
	tglx@kernel.org, stable@vger.kernel.org


> > 
> > The real requirement is tdx_cpu_flush_cache_for_kexec() must be done on
> > the same CPU.  It's OK that it can be preempted in the middle as long as
> > it won't be rescheduled to another CPU.
> 
> TLDR: It wants migration disabled.

Basically yes.

> 
> > 
> > Remove the too strong lockdep_assert_preemption_disabled(), and change
> > this_cpu_{read|write}() to __this_cpu_{read|write}() which provide the more
> > proper check (when CONFIG_DEBUG_PREEMPT is true), which checks all
> > conditions that the context cannot be moved to another CPU to run in the
> > middle.
> > 
> > Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> > Cc: stable@vger.kernel.org
> > Reported-by: Vishal Verma <vishal.l.verma@intel.com>
> > Signed-off-by: Kai Huang <kai.huang@intel.com>
> > Tested-by: Vishal Verma <vishal.l.verma@intel.com>
> 
> 
> So how exactly does this patch prevent the BUG: printk in 
> check_preemption_disabled from triggering, if the lockdep assert was 
> triggering?

There's no real BUG here.  It's just the
lockdep_assert_preemption_disabled() is misused.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-05 21:35     ` Huang, Kai
@ 2026-03-06  9:58       ` Nikolay Borisov
  2026-03-08 10:12         ` Huang, Kai
  0 siblings, 1 reply; 10+ messages in thread
From: Nikolay Borisov @ 2026-03-06  9:58 UTC (permalink / raw)
  To: Huang, Kai, kas@kernel.org, pbonzini@redhat.com,
	nik.borisov@suse.com, seanjc@google.com,
	dave.hansen@linux.intel.com
  Cc: Edgecombe, Rick P, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	mingo@redhat.com, linux-kernel@vger.kernel.org, Verma, Vishal L,
	tglx@kernel.org, stable@vger.kernel.org



On 5.03.26 г. 23:35 ч., Huang, Kai wrote:
> 
>>>
>>> The real requirement is tdx_cpu_flush_cache_for_kexec() must be done on
>>> the same CPU.  It's OK that it can be preempted in the middle as long as
>>> it won't be rescheduled to another CPU.
>>
>> TLDR: It wants migration disabled.
> 
> Basically yes.
> 
>>
>>>
>>> Remove the too strong lockdep_assert_preemption_disabled(), and change
>>> this_cpu_{read|write}() to __this_cpu_{read|write}() which provide the more
>>> proper check (when CONFIG_DEBUG_PREEMPT is true), which checks all
>>> conditions that the context cannot be moved to another CPU to run in the
>>> middle.
>>>
>>> Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
>>> Cc: stable@vger.kernel.org
>>> Reported-by: Vishal Verma <vishal.l.verma@intel.com>
>>> Signed-off-by: Kai Huang <kai.huang@intel.com>
>>> Tested-by: Vishal Verma <vishal.l.verma@intel.com>
>>
>>
>> So how exactly does this patch prevent the BUG: printk in
>> check_preemption_disabled from triggering, if the lockdep assert was
>> triggering?
> 
> There's no real BUG here.  It's just the
> lockdep_assert_preemption_disabled() is misused.

Essentially, in check_preemption_disabled() the check is considered
passed IF ANY of the preempt-disable conditions is met, i.e. it's more
lax. So yeah, makes sense!

Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-06  9:58       ` Nikolay Borisov
@ 2026-03-08 10:12         ` Huang, Kai
  0 siblings, 0 replies; 10+ messages in thread
From: Huang, Kai @ 2026-03-08 10:12 UTC (permalink / raw)
  To: pbonzini@redhat.com, nik.borisov@suse.com, kas@kernel.org,
	seanjc@google.com, dave.hansen@linux.intel.com
  Cc: Edgecombe, Rick P, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	mingo@redhat.com, Verma, Vishal L, tglx@kernel.org,
	stable@vger.kernel.org, linux-kernel@vger.kernel.org

> > > 
> > > So how exactly does this patch prevent the BUG: printk in
> > > check_preemption_disabled from triggering, if the lockdep assert was
> > > triggering?
> > 
> > There's no real BUG here.  It's just the
> > lockdep_assert_preemption_disabled() is misused.
> 
> Essentially in check_preemption_disabled() the check is considered 
> passed IF ANY of the preempt disable conditions is met, i.e it's more 
> laxed. So yeah, makes sense!
> 
> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>

Thanks!

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-02 10:22 ` [PATCH v2] " Kai Huang
  2026-03-02 10:26   ` Huang, Kai
  2026-03-05 18:33   ` Nikolay Borisov
@ 2026-03-10 13:43   ` Sean Christopherson
  2 siblings, 0 replies; 10+ messages in thread
From: Sean Christopherson @ 2026-03-10 13:43 UTC (permalink / raw)
  To: Kai Huang
  Cc: dave.hansen, pbonzini, kas, rick.p.edgecombe, tglx, bp, mingo,
	x86, hpa, linux-kernel, stable, Vishal Verma

On Mon, Mar 02, 2026, Kai Huang wrote:
> TDX can leave the cache in an incoherent state for the memory it uses.
> During kexec the kernel does a WBINVD for each CPU before memory gets
> reused in the second kernel.
> 
> There were two considerations for where this WBINVD should happen.  In
> order to handle cases where the cache might get into an incoherent state
> while the kexec is in the initial stages, it is needed to do this later
> in the kexec path, when the kexecing CPU stops all remote CPUs.  However,
> the later kexec process is sensitive to existing races.  So to avoid
> perturbing that operation, it is better to do it earlier.
> 
> The existing solution is to track the need for the kexec time WBINVD
> generically (i.e., not just for TDX) in a per-cpu var.  The late
> invocation only happens if the earlier TDX specific logic in
> tdx_cpu_flush_cache_for_kexec() didn’t take care of the work.  This
> earlier WBINVD logic was built into KVM’s existing syscore ops shutdown()
> handler, which is called earlier in the kexec path.
> 
> However, this accidentally added it to KVM’s unload path as well (also
> the "error path" when bringing up TDX during KVM module load), which
> uses the same internal functions.  This makes some sense too, though,
> because if KVM is getting unloaded, TDX cache affecting operations will
> likely cease.  So it is a good point to do the work before KVM is
> unloaded and won't have a chance to handle the shutdown operation in the
> future.
> 
> Unfortunately this KVM unload invocation triggers a lockdep warning in
> tdx_cpu_flush_cache_for_kexec().  Since tdx_cpu_flush_cache_for_kexec()
> is doing WBINVD on a specific CPU, it has an assert for preemption being
> disabled.  This works fine for the kexec time invocation, but the KVM
> unload path calls this as part of a CPUHP callback for which, despite
> always executing on the target CPU, preemption is not disabled.
> 
> It might be better to add the earlier invocation logic to a dedicated
> arch/x86 TDX syscore shutdown() handler, but to make the fix more
> backport friendly just adjust the lockdep assert in the
> tdx_cpu_flush_cache_for_kexec().
> 
> The real requirement is tdx_cpu_flush_cache_for_kexec() must be done on
> the same CPU.  It's OK that it can be preempted in the middle as long as
> it won't be rescheduled to another CPU.
> 
> Remove the too strong lockdep_assert_preemption_disabled(), and change
> this_cpu_{read|write}() to __this_cpu_{read|write}() which provide the more
> proper check (when CONFIG_DEBUG_PREEMPT is true), which checks all
> conditions that the context cannot be moved to another CPU to run in the
> middle.
> 
> Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> Cc: stable@vger.kernel.org
> Reported-by: Vishal Verma <vishal.l.verma@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>
> Tested-by: Vishal Verma <vishal.l.verma@intel.com>
> ---

Acked-by: Sean Christopherson <seanjc@google.com>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
@ 2026-03-12 10:00 Kai Huang
  2026-03-16 12:14 ` Kiryl Shutsemau
  0 siblings, 1 reply; 10+ messages in thread
From: Kai Huang @ 2026-03-12 10:00 UTC (permalink / raw)
  To: dave.hansen, pbonzini, seanjc, kas
  Cc: rick.p.edgecombe, tglx, bp, mingo, x86, hpa, linux-kernel,
	Kai Huang, stable, Vishal Verma, Nikolay Borisov

TDX can leave the cache in an incoherent state for the memory it uses.
During kexec the kernel does a WBINVD for each CPU before memory gets
reused in the second kernel.

There were two considerations for where this WBINVD should happen.  In
order to handle cases where the cache might become incoherent during the
early stages of kexec, it needs to be done late in the kexec path, when
the kexecing CPU stops all remote CPUs.  However, the late kexec process
is sensitive to existing races.  So to avoid perturbing that operation,
it is better to do it earlier.

The existing solution is to track the need for the kexec-time WBINVD
generically (i.e., not just for TDX) in a per-cpu var.  The late
invocation only happens if the earlier TDX-specific logic in
tdx_cpu_flush_cache_for_kexec() didn't take care of the work.  This
earlier WBINVD logic was built into KVM's existing syscore ops
shutdown() handler, which is called earlier in the kexec path.

However, this accidentally added it to KVM's unload path as well (and
to the "error path" when bringing up TDX during KVM module load), which
uses the same internal functions.  This makes some sense too: if KVM is
being unloaded, TDX cache-affecting operations will likely cease, so it
is a reasonable point to do the work before KVM is unloaded and no
longer has a chance to handle the shutdown operation.

Unfortunately this KVM unload invocation triggers a lockdep warning in
tdx_cpu_flush_cache_for_kexec():

  IS_ENABLED(CONFIG_PREEMPT_COUNT) && __lockdep_enabled && (preempt_count() == 0 && this_cpu_read(hardirqs_enabled))
  WARNING: arch/x86/virt/vmx/tdx/tdx.c:1875 at tdx_cpu_flush_cache_for_kexec+0x36/0x60, CPU#0: cpuhp/0/22
  ...
  Call Trace:
   <TASK>
   vt_disable_virtualization_cpu+0x1c/0x30 [kvm_intel]
   kvm_arch_disable_virtualization_cpu+0x12/0x80 [kvm]
   kvm_offline_cpu+0x24/0x40 [kvm]
   cpuhp_invoke_callback+0x1b0/0x740
   ...

Since tdx_cpu_flush_cache_for_kexec() is doing WBINVD on a specific CPU,
it has an assert for preemption being disabled.  This works fine for the
kexec time invocation, but the KVM unload path calls this as part of a
CPUHP callback for which, despite always executing on the target CPU,
preemption is not disabled.

It might be better to add the earlier invocation logic to a dedicated
arch/x86 TDX syscore shutdown() handler, but to make the fix more
backport-friendly just adjust the lockdep assert in
tdx_cpu_flush_cache_for_kexec().

The real requirement is that tdx_cpu_flush_cache_for_kexec() must run
on the same CPU.  It is OK for it to be preempted in the middle as long
as it won't be migrated to another CPU.

Remove the too-strong lockdep_assert_preemption_disabled(), and change
this_cpu_{read|write}() to __this_cpu_{read|write}(), which provide a
more appropriate check (when CONFIG_DEBUG_PREEMPT is enabled): they
verify that the context cannot be migrated to another CPU in the middle
of the operation.

Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
Cc: stable@vger.kernel.org
Reported-by: Vishal Verma <vishal.l.verma@intel.com>
Tested-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Kai Huang <kai.huang@intel.com>
---

v1 -> v2:
 - Collect tags - Thanks Nikolay, Rick and Sean!
 - Add the actual lockdep warn splat - Rick, Sean

v1: https://lore.kernel.org/lkml/20260302102226.7459-1-kai.huang@intel.com/

---
 arch/x86/virt/vmx/tdx/tdx.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 8b8e165a2001..6f6be1df4b78 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1872,9 +1872,7 @@ EXPORT_SYMBOL_FOR_KVM(tdh_phymem_page_wbinvd_hkid);
 #ifdef CONFIG_KEXEC_CORE
 void tdx_cpu_flush_cache_for_kexec(void)
 {
-	lockdep_assert_preemption_disabled();
-
-	if (!this_cpu_read(cache_state_incoherent))
+	if (!__this_cpu_read(cache_state_incoherent))
 		return;
 
 	/*
@@ -1883,7 +1881,7 @@ void tdx_cpu_flush_cache_for_kexec(void)
 	 * there should be no more SEAMCALLs on this CPU.
 	 */
 	wbinvd();
-	this_cpu_write(cache_state_incoherent, false);
+	__this_cpu_write(cache_state_incoherent, false);
 }
 EXPORT_SYMBOL_FOR_KVM(tdx_cpu_flush_cache_for_kexec);
 #endif

base-commit: 0f409eaea53e49932cf92a761de66345c9a4b4be
-- 
2.53.0


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-12 10:00 [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec Kai Huang
@ 2026-03-16 12:14 ` Kiryl Shutsemau
  2026-03-16 21:07   ` Huang, Kai
  0 siblings, 1 reply; 10+ messages in thread
From: Kiryl Shutsemau @ 2026-03-16 12:14 UTC (permalink / raw)
  To: Kai Huang
  Cc: dave.hansen, pbonzini, seanjc, rick.p.edgecombe, tglx, bp, mingo,
	x86, hpa, linux-kernel, stable, Vishal Verma, Nikolay Borisov

On Thu, Mar 12, 2026 at 11:00:09PM +1300, Kai Huang wrote:
> TDX can leave the cache in an incoherent state for the memory it uses.
> During kexec the kernel does a WBINVD for each CPU before memory gets
> reused in the second kernel.
> 
> There were two considerations for where this WBINVD should happen.  In
> order to handle cases where the cache might get into an incoherent state
> while the kexec is in the initial stages, it is needed to do this later
> in the kexec path, when the kexecing CPU stops all remote CPUs.  However,
> the later kexec process is sensitive to existing races.  So to avoid
> perturbing that operation, it is better to do it earlier.
> 
> The existing solution is to track the need for the kexec time WBINVD
> generically (i.e., not just for TDX) in a per-cpu var.  The late
> invocation only happens if the earlier TDX specific logic in
> tdx_cpu_flush_cache_for_kexec() didn’t take care of the work.  This
> earlier WBINVD logic was built into KVM’s existing syscore ops shutdown()
> handler, which is called earlier in the kexec path.
> 
> However, this accidentally added it to KVM’s unload path as well (also
> the "error path" when bringing up TDX during KVM module load), which
> uses the same internal functions.  This makes some sense too, though,
> because if KVM is getting unloaded, TDX cache affecting operations will
> likely cease.  So it is a good point to do the work before KVM is
> unloaded and won't have a chance to handle the shutdown operation in the
> future.
> 
> Unfortunately this KVM unload invocation triggers a lockdep warning in
> tdx_cpu_flush_cache_for_kexec():
> 
>   IS_ENABLED(CONFIG_PREEMPT_COUNT) && __lockdep_enabled && (preempt_count() == 0 && this_cpu_read(hardirqs_enabled))
>   WARNING: arch/x86/virt/vmx/tdx/tdx.c:1875 at tdx_cpu_flush_cache_for_kexec+0x36/0x60, CPU#0: cpuhp/0/22
>   ...
>   Call Trace:
>    <TASK>
>    vt_disable_virtualization_cpu+0x1c/0x30 [kvm_intel]
>    kvm_arch_disable_virtualization_cpu+0x12/0x80 [kvm]
>    kvm_offline_cpu+0x24/0x40 [kvm]
>    cpuhp_invoke_callback+0x1b0/0x740
>    ...
> 
> Since tdx_cpu_flush_cache_for_kexec() is doing WBINVD on a specific CPU,
> it has an assert for preemption being disabled.  This works fine for the
> kexec time invocation, but the KVM unload path calls this as part of a
> CPUHP callback for which, despite always executing on the target CPU,
> preemption is not disabled.
> 
> It might be better to add the earlier invocation logic to a dedicated
> arch/x86 TDX syscore shutdown() handler, but to make the fix more
> backport friendly just adjust the lockdep assert in the
> tdx_cpu_flush_cache_for_kexec().
> 
> The real requirement is tdx_cpu_flush_cache_for_kexec() must be done on
> the same CPU.  It's OK that it can be preempted in the middle as long as
> it won't be rescheduled to another CPU.
> 
> Remove the too strong lockdep_assert_preemption_disabled(), and change
> this_cpu_{read|write}() to __this_cpu_{read|write}() which provide the
> more proper check (when CONFIG_DEBUG_PREEMPT is true), which checks all
> conditions that the context cannot be moved to another CPU to run in the
> middle.
> 
> Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> Cc: stable@vger.kernel.org
> Reported-by: Vishal Verma <vishal.l.verma@intel.com>
> Tested-by: Vishal Verma <vishal.l.verma@intel.com>
> Acked-by: Sean Christopherson <seanjc@google.com>
> Reviewed-by: Nikolay Borisov <nik.borisov@suse.com>
> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Signed-off-by: Kai Huang <kai.huang@intel.com>

Acked-by: Kiryl Shutsemau (Meta) <kas@kernel.org>

-- 
  Kiryl Shutsemau / Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
  2026-03-16 12:14 ` Kiryl Shutsemau
@ 2026-03-16 21:07   ` Huang, Kai
  0 siblings, 0 replies; 10+ messages in thread
From: Huang, Kai @ 2026-03-16 21:07 UTC (permalink / raw)
  To: kas@kernel.org
  Cc: Edgecombe, Rick P, seanjc@google.com, bp@alien8.de,
	x86@kernel.org, dave.hansen@linux.intel.com, mingo@redhat.com,
	linux-kernel@vger.kernel.org, hpa@zytor.com, tglx@kernel.org,
	pbonzini@redhat.com, stable@vger.kernel.org, nik.borisov@suse.com,
	Verma, Vishal L

> 
> Acked-by: Kiryl Shutsemau (Meta) <kas@kernel.org>

Thanks!

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2026-03-16 21:07 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-12 10:00 [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec Kai Huang
2026-03-16 12:14 ` Kiryl Shutsemau
2026-03-16 21:07   ` Huang, Kai
  -- strict thread matches above, loose matches on Subject: below --
2026-03-02 10:22 [PATCH] " Kai Huang
2026-03-02 10:22 ` [PATCH v2] " Kai Huang
2026-03-02 10:26   ` Huang, Kai
2026-03-05 18:33   ` Nikolay Borisov
2026-03-05 21:35     ` Huang, Kai
2026-03-06  9:58       ` Nikolay Borisov
2026-03-08 10:12         ` Huang, Kai
2026-03-10 13:43   ` Sean Christopherson
