On Fri, May 08, 2026 at 04:39:33PM +0200, Juergen Gross wrote:
> With the support of nested lazy mmu sections it can happen that
> arch_enter_lazy_mmu_mode() is being called twice without a call of
> arch_leave_lazy_mmu_mode() in between, as the lazy_mmu_*() helpers
> are not disabling preemption when checking for nested lazy mmu
> sections.
>
> This is a problem when running as a Xen PV guest, as
> xen_enter_lazy_mmu() and xen_leave_lazy_mmu() don't tolerate this
> case.
>
> Fix that in xen_enter_lazy_mmu() and xen_leave_lazy_mmu() in order
> not to hurt all other lazy mmu mode users.
>
> Fixes: 291b3abed657 ("x86/xen: use lazy_mmu_state when context-switching")
> Signed-off-by: Juergen Gross

I have run several test iterations with this patch (on top of 7.0.4) and
it seems to fix the issue. So,

Tested-by: Marek Marczykowski-Górecki

I also ran some tests with 291b3abed657 reverted (instead of this
patch), and that seems to work too, but I didn't run enough iterations
to be 100% sure. Would it be helpful to test that further too?

> ---
>  arch/x86/xen/mmu_pv.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
> index c80d0058efd1..3eee5f84f8a7 100644
> --- a/arch/x86/xen/mmu_pv.c
> +++ b/arch/x86/xen/mmu_pv.c
> @@ -2145,7 +2145,10 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
>
>  static void xen_enter_lazy_mmu(void)
>  {
> -	enter_lazy(XEN_LAZY_MMU);
> +	preempt_disable();
> +	if (xen_get_lazy_mode() != XEN_LAZY_MMU)
> +		enter_lazy(XEN_LAZY_MMU);
> +	preempt_enable();
>  }
>
>  static void xen_flush_lazy_mmu(void)
> @@ -2182,7 +2185,8 @@ static void xen_leave_lazy_mmu(void)
>  {
>  	preempt_disable();
>  	xen_mc_flush();
> -	leave_lazy(XEN_LAZY_MMU);
> +	if (xen_get_lazy_mode() != XEN_LAZY_NONE)
> +		leave_lazy(XEN_LAZY_MMU);
>  	preempt_enable();
>  }
>
> --
> 2.54.0

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab