[merged mm-hotfixes-stable] mm-kasan-avoid-lazy-mmu-mode-hazards.patch removed from -mm tree
From: Andrew Morton @ 2025-08-28 5:46 UTC
To: mm-commits, ryan.roberts, ryabinin.a.a, mark.rutland, dja,
agordeev, akpm
The quilt patch titled
     Subject: mm/kasan: avoid lazy MMU mode hazards
has been removed from the -mm tree. Its filename was
     mm-kasan-avoid-lazy-mmu-mode-hazards.patch
This patch was dropped because it was merged into the mm-hotfixes-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Alexander Gordeev <agordeev@linux.ibm.com>
Subject: mm/kasan: avoid lazy MMU mode hazards
Date: Mon, 18 Aug 2025 18:39:13 +0200
Functions __kasan_populate_vmalloc() and __kasan_depopulate_vmalloc() use
apply_to_pte_range(), which enters lazy MMU mode. In that mode, PTE updates
may not become observable until the mode is left.
That may lead to a situation in which otherwise correct reads and writes
to a PTE using ptep_get(), set_pte(), pte_clear() and other access
primitives yield wrong results while the vmalloc shadow memory is being
(de-)populated.
To avoid these hazards, leave lazy MMU mode before each PTE manipulation
and re-enter it afterwards.
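For illustration, a minimal sketch of that bracketing pattern as it would
look in a pte_fn_t callback invoked via apply_to_page_range(); the callback
name example_pte_cb and its body are hypothetical, not taken from this
patch (the actual hunks follow below):

	/* Hypothetical callback, illustration only; mirrors the leave/re-enter
	 * bracketing described above, not the exact code changed by this patch. */
	static int example_pte_cb(pte_t *ptep, unsigned long addr, void *data)
	{
		pte_t pte;

		/* apply_to_pte_range() has entered lazy MMU mode; leave it so
		 * the accesses below observe and publish up-to-date PTE values. */
		arch_leave_lazy_mmu_mode();

		spin_lock(&init_mm.page_table_lock);
		pte = ptep_get(ptep);
		/* ... read or update the PTE under the lock ... */
		spin_unlock(&init_mm.page_table_lock);

		/* Restore lazy MMU mode for the remainder of the walk. */
		arch_enter_lazy_mmu_mode();

		return 0;
	}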
Link: https://lkml.kernel.org/r/0d2efb7ddddbff6b288fbffeeb10166e90771718.1755528662.git.agordeev@linux.ibm.com
Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Daniel Axtens <dja@axtens.net>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/kasan/shadow.c | 8 ++++++++
1 file changed, 8 insertions(+)
--- a/mm/kasan/shadow.c~mm-kasan-avoid-lazy-mmu-mode-hazards
+++ a/mm/kasan/shadow.c
@@ -305,6 +305,8 @@ static int kasan_populate_vmalloc_pte(pt
pte_t pte;
int index;
+ arch_leave_lazy_mmu_mode();
+
index = PFN_DOWN(addr - data->start);
page = data->pages[index];
__memset(page_to_virt(page), KASAN_VMALLOC_INVALID, PAGE_SIZE);
@@ -317,6 +319,8 @@ static int kasan_populate_vmalloc_pte(pt
}
spin_unlock(&init_mm.page_table_lock);
+ arch_enter_lazy_mmu_mode();
+
return 0;
}
@@ -461,6 +465,8 @@ static int kasan_depopulate_vmalloc_pte(
pte_t pte;
int none;
+ arch_leave_lazy_mmu_mode();
+
spin_lock(&init_mm.page_table_lock);
pte = ptep_get(ptep);
none = pte_none(pte);
@@ -471,6 +477,8 @@ static int kasan_depopulate_vmalloc_pte(
if (likely(!none))
__free_page(pfn_to_page(pte_pfn(pte)));
+ arch_enter_lazy_mmu_mode();
+
return 0;
}
_
Patches currently in -mm which might be from agordeev@linux.ibm.com are