* [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
From: Juergen Gross @ 2026-05-08 14:39 UTC (permalink / raw)
To: linux-kernel, x86
Cc: kevin.brodsky, marmarek, Juergen Gross, Boris Ostrovsky,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, xen-devel
With support for nested lazy mmu sections it can happen that
arch_enter_lazy_mmu_mode() is called twice without a call to
arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
don't disable preemption when checking for nested lazy mmu
sections.
This is a problem when running as a Xen PV guest, as
xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
case.
Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
not to hurt all other lazy mmu mode users.
Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
arch/x86/xen/mmu_pv.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index c80d0058efd1..3eee5f84f8a7 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
static void xen_enter_lazy_mmu(void)
{
- enter_lazy(XEN_LAZY_MMU);
+ preempt_disable();
+ if (xen_get_lazy_mode() != XEN_LAZY_MMU)
+ enter_lazy(XEN_LAZY_MMU);
+ preempt_enable();
}
static void xen_flush_lazy_mmu(void)
@@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
{
preempt_disable();
xen_mc_flush();
- leave_lazy(XEN_LAZY_MMU);
+ if (xen_get_lazy_mode() != XEN_LAZY_NONE)
+ leave_lazy(XEN_LAZY_MMU);
preempt_enable();
}
--
2.54.0
* Re: [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
From: Kevin Brodsky @ 2026-05-08 20:54 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, x86
Cc: marmarek, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, xen-devel
On 08/05/2026 16:39, Juergen Gross wrote:
> With support for nested lazy mmu sections it can happen that
> arch_enter_lazy_mmu_mode() is called twice without a call to
> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
> don't disable preemption when checking for nested lazy mmu
> sections.
I think this is a correct description of the issue, i.e. potentially we
have arch_enter_lazy_mmu_mode() called twice *sequentially*. Therefore I
don't think that disabling preemption inside arch_enter_lazy_mmu_mode()
is enough - we have a problem with preemption occurring inside
lazy_mmu_mode_enable() generally, not necessarily inside
arch_enter_lazy_mmu_mode().
Preemption shouldn't matter if commit 291b3abed657 is reverted. AFAICT
this is the only easy fix.
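Roughly, the window I have in mind looks like this (the generic helper's
internals are paraphrased from memory here, so treat the ordering as a
sketch rather than the exact code):

  lazy_mmu_mode_enable()
      update of the nesting state (lazy_mmu_state)  <- preemption still
                                                       enabled at this point
      <preemption / context switch possible here>
      arch_enter_lazy_mmu_mode()

Disabling preemption only once we are inside arch_enter_lazy_mmu_mode()
doesn't cover that window.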
- Kevin
> This is a problem when running as a Xen PV guest, as
> xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
> case.
>
> Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
> not to hurt all other lazy mmu mode users.
>
> Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> arch/x86/xen/mmu_pv.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index c80d0058efd1..3eee5f84f8a7 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>
> static void xen_enter_lazy_mmu(void)
> {
> - enter_lazy(XEN_LAZY_MMU);
> + preempt_disable();
> + if (xen_get_lazy_mode() != XEN_LAZY_MMU)
> + enter_lazy(XEN_LAZY_MMU);
> + preempt_enable();
> }
>
> static void xen_flush_lazy_mmu(void)
> @@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
> {
> preempt_disable();
> xen_mc_flush();
> - leave_lazy(XEN_LAZY_MMU);
> + if (xen_get_lazy_mode() != XEN_LAZY_NONE)
> + leave_lazy(XEN_LAZY_MMU);
> preempt_enable();
> }
>
* Re: [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
From: Jürgen Groß @ 2026-05-09 6:32 UTC (permalink / raw)
To: Kevin Brodsky, linux-kernel, x86
Cc: marmarek, Boris Ostrovsky, Thomas Gleixner, Ingo Molnar,
Borislav Petkov, Dave Hansen, H. Peter Anvin, xen-devel
On 08.05.26 22:54, Kevin Brodsky wrote:
> On 08/05/2026 16:39, Juergen Gross wrote:
>> With support for nested lazy mmu sections it can happen that
>> arch_enter_lazy_mmu_mode() is called twice without a call to
>> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
>> don't disable preemption when checking for nested lazy mmu
>> sections.
>
> I think this is a correct description of the issue, i.e. potentially we
> have arch_enter_lazy_mmu_mode() called twice *sequentially*. Therefore I
> don't think that disabling preemption inside arch_enter_lazy_mmu_mode()
> is enough - we have a problem with preemption occurring inside
> lazy_mmu_mode_enable() generally, not necessarily inside
> arch_enter_lazy_mmu_mode().
>
> Preemption shouldn't matter if commit 291b3abed657 is reverted. AFAICT
> this is the only easy fix.
The description wasn't really complete, I think.
The double call is only possible if arch_end_context_switch() calls
arch_enter_lazy_mmu_mode(), and that happens for Xen PV only;
arch_end_context_switch() is a nop in all other cases.
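Spelled out (the exact point where the generic code updates
lazy_mmu_state is paraphrased, so take the ordering as a sketch):

  lazy_mmu_mode_enable()
      lazy_mmu_state updated, arch_enter_lazy_mmu_mode() not called yet
      <task is preempted and later scheduled back in>
          arch_end_context_switch()
              arch_enter_lazy_mmu_mode()   <- first enter
      arch_enter_lazy_mmu_mode()           <- second enter, no leave in
                                              between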
So this can be handled completely inside Xen (otherwise a revert of
291b3abed657 wouldn't help), and it is easy to do so, as my patch
shows.
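With the patch applied the two hooks end up reading as below (the
comments are added here just to illustrate the intent):

  static void xen_enter_lazy_mmu(void)
  {
          preempt_disable();
          /* Tolerate a second enter: only switch to lazy MMU mode if we
           * aren't in it already. */
          if (xen_get_lazy_mode() != XEN_LAZY_MMU)
                  enter_lazy(XEN_LAZY_MMU);
          preempt_enable();
  }

  static void xen_leave_lazy_mmu(void)
  {
          preempt_disable();
          xen_mc_flush();
          /* Tolerate an unbalanced leave: only drop out of lazy mode if
           * we are actually in one. */
          if (xen_get_lazy_mode() != XEN_LAZY_NONE)
                  leave_lazy(XEN_LAZY_MMU);
          preempt_enable();
  }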
As said, I'd like to get rid of the extra lazy mode tracking done by Xen.
Juergen
* Re: [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
From: Marek Marczykowski-Górecki @ 2026-05-12 16:05 UTC (permalink / raw)
To: Juergen Gross
Cc: linux-kernel, x86, kevin.brodsky, Boris Ostrovsky,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, xen-devel
On Fri, May 08, 2026 at 04:39:33PM +0200, Juergen Gross wrote:
> With support for nested lazy mmu sections it can happen that
> arch_enter_lazy_mmu_mode() is called twice without a call to
> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
> don't disable preemption when checking for nested lazy mmu
> sections.
>
> This is a problem when running as a Xen PV guest, as
> xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
> case.
>
> Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
> not to hurt all other lazy mmu mode users.
>
> Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
> Signed-off-by: Juergen Gross <jgross@suse.com>
I have run several test iterations with this patch (on top of 7.0.4) and
it seems to fix the issue. So,
Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
I did run some tests also with 291b3abed657 reverted (instead of this
patch), and that seems to work too, but I didn't run enough iterations
to be 100% sure. Would it be helpful to test that further too?
> ---
> arch/x86/xen/mmu_pv.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index c80d0058efd1..3eee5f84f8a7 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>
> static void xen_enter_lazy_mmu(void)
> {
> - enter_lazy(XEN_LAZY_MMU);
> + preempt_disable();
> + if (xen_get_lazy_mode() != XEN_LAZY_MMU)
> + enter_lazy(XEN_LAZY_MMU);
> + preempt_enable();
> }
>
> static void xen_flush_lazy_mmu(void)
> @@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
> {
> preempt_disable();
> xen_mc_flush();
> - leave_lazy(XEN_LAZY_MMU);
> + if (xen_get_lazy_mode() != XEN_LAZY_NONE)
> + leave_lazy(XEN_LAZY_MMU);
> preempt_enable();
> }
>
> --
> 2.54.0
>
--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
* Re: [PATCH] x86/xen: Tolerate nested XEN_LAZY_MMU entering/leaving
From: Jürgen Groß @ 2026-05-12 16:10 UTC (permalink / raw)
To: Marek Marczykowski-Górecki
Cc: linux-kernel, x86, kevin.brodsky, Boris Ostrovsky,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
H. Peter Anvin, xen-devel
On 12.05.26 18:05, Marek Marczykowski-Górecki wrote:
> On Fri, May 08, 2026 at 04:39:33PM +0200, Juergen Gross wrote:
>> With support for nested lazy mmu sections it can happen that
>> arch_enter_lazy_mmu_mode() is called twice without a call to
>> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
>> don't disable preemption when checking for nested lazy mmu
>> sections.
>>
>> This is a problem when running as a Xen PV guest, as
>> xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
>> case.
>>
>> Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
>> not to hurt all other lazy mmu mode users.
>>
>> Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> I have run several test iterations with this patch (on top of 7.0.4) and
> it seems to fix the issue. So,
>
> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Thanks for testing.
>
> I did run some tests also with 291b3abed657 reverted (instead of this
> patch), and that seems to work too, but I didn't run enough iterations
> to be 100% sure. Would it be helpful to test that further too?
I do prefer my variant, as it is on my preferred path to get rid of the
Xen-private lazy mode tracking.
So in my personal opinion you don't need to continue that testing.
Juergen