* [PATCH v1] mm: Close theoretical race where stale TLB entries could linger
@ 2025-06-06  9:28 Ryan Roberts
  2025-06-06 12:37 ` Jann Horn
  2025-06-06 12:43 ` David Hildenbrand
From: Ryan Roberts @ 2025-06-06  9:28 UTC
  To: Andrew Morton, Liam R. Howlett, Lorenzo Stoakes,
	David Hildenbrand, Vlastimil Babka, Jann Horn, Mel Gorman
  Cc: Ryan Roberts, linux-mm, linux-kernel, stable

Commit 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with
a parallel reclaim leaving stale TLB entries") described a theoretical
race as follows:

"""
Nadav Amit identified a theoretical race between page reclaim and
mprotect due to TLB flushes being batched outside of the PTL being held.

He described the race as follows:

	CPU0                            CPU1
	----                            ----
					user accesses memory using RW PTE
					[PTE now cached in TLB]
	try_to_unmap_one()
	==> ptep_get_and_clear()
	==> set_tlb_ubc_flush_pending()
					mprotect(addr, PROT_READ)
					==> change_pte_range()
					==> [ PTE non-present - no flush ]

					user writes using cached RW PTE
	...

	try_to_unmap_flush()

The same type of race exists for reads when protecting for PROT_NONE and
also exists for operations that can leave an old TLB entry behind such
as munmap, mremap and madvise.
"""

The solution was to introduce flush_tlb_batched_pending() and call it
under the PTL from mprotect/madvise/munmap/mremap to complete any
pending TLB flushes.
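
The two halves of that handshake look roughly like this (a simplified
sketch; arguments and error handling are elided, see try_to_unmap_one()
in mm/rmap.c and change_pte_range() in mm/mprotect.c for the real code):

        /* reclaim side: PTL held, the TLB flush is deferred into a batch */
        ptep_get_and_clear(...);
        set_tlb_ubc_flush_pending(...);   /* mark a flush pending for mm */

        /* mprotect/madvise/munmap/mremap side: right after taking the PTL */
        flush_tlb_batched_pending(mm);    /* complete any pending flush */
        /* ... safe to modify PTEs; no stale entries from the batch remain */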

However, while madvise_free_pte_range() and
madvise_cold_or_pageout_pte_range() were both retro-fitted to call
flush_tlb_batched_pending() immediately after initially acquiring the
PTL, they both temporarily release the PTL to split a large folio if
they stumble upon one. In this case, when the PTL is re-acquired,
flush_tlb_batched_pending() must be called again, but previously it was
not. Let's fix that.
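
Schematically, the control flow in both functions is (a heavily abridged
sketch; folio handling and error paths are simplified):

        start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
        flush_tlb_batched_pending(mm);  /* done on the initial acquire */
        arch_enter_lazy_mmu_mode();
        for (; addr < end; ...) {
                if (folio_test_large(folio)) {
                        /* the PTL must be dropped to split the folio */
                        arch_leave_lazy_mmu_mode();
                        pte_unmap_unlock(start_pte, ptl);
                        err = split_folio(folio);
                        start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
                        if (!start_pte)
                                break;
                        flush_tlb_batched_pending(mm); /* added by this patch */
                        arch_enter_lazy_mmu_mode();
                }
                ...
        }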

There are 2 Fixes: tags here: the first is the commit that fixed
madvise_free_pte_range(). The second is the commit that added
madvise_cold_or_pageout_pte_range(), which looks like it copy/pasted the
faulty pattern from madvise_free_pte_range().

This is a theoretical bug discovered during code review.

Cc: stable@vger.kernel.org
Fixes: 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries")
Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---

Applies on today's mm-unstable (3f676fe5c7a0). All mm selftests continue to
pass.

Thanks,
Ryan

 mm/madvise.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/madvise.c b/mm/madvise.c
index 5f7a66a1617e..1d44a35ae85c 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -508,6 +508,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 					pte_offset_map_lock(mm, pmd, addr, &ptl);
 				if (!start_pte)
 					break;
+				flush_tlb_batched_pending(mm);
 				arch_enter_lazy_mmu_mode();
 				if (!err)
 					nr = 0;
@@ -741,6 +742,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 				start_pte = pte;
 				if (!start_pte)
 					break;
+				flush_tlb_batched_pending(mm);
 				arch_enter_lazy_mmu_mode();
 				if (!err)
 					nr = 0;
--
2.43.0




* Re: [PATCH v1] mm: Close theoretical race where stale TLB entries could linger
  2025-06-06  9:28 [PATCH v1] mm: Close theoretical race where stale TLB entries could linger Ryan Roberts
@ 2025-06-06 12:37 ` Jann Horn
  2025-06-06 12:43 ` David Hildenbrand
From: Jann Horn @ 2025-06-06 12:37 UTC
  To: Ryan Roberts
  Cc: Andrew Morton, Liam R. Howlett, Lorenzo Stoakes,
	David Hildenbrand, Vlastimil Babka, Mel Gorman, linux-mm,
	linux-kernel, stable

On Fri, Jun 6, 2025 at 11:28 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
> Commit 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with
> a parallel reclaim leaving stale TLB entries") described a theoretical
> race as follows:
>
> """
> Nadav Amit identified a theoretical race between page reclaim and
> mprotect due to TLB flushes being batched outside of the PTL being held.
>
> He described the race as follows:
>
>         CPU0                            CPU1
>         ----                            ----
>                                         user accesses memory using RW PTE
>                                         [PTE now cached in TLB]
>         try_to_unmap_one()
>         ==> ptep_get_and_clear()
>         ==> set_tlb_ubc_flush_pending()
>                                         mprotect(addr, PROT_READ)
>                                         ==> change_pte_range()
>                                         ==> [ PTE non-present - no flush ]
>
>                                         user writes using cached RW PTE
>         ...
>
>         try_to_unmap_flush()
>
> The same type of race exists for reads when protecting for PROT_NONE and
> also exists for operations that can leave an old TLB entry behind such
> as munmap, mremap and madvise.
> """
>
> The solution was to introduce flush_tlb_batched_pending() and call it
> under the PTL from mprotect/madvise/munmap/mremap to complete any
> pending TLB flushes.
>
> However, while madvise_free_pte_range() and
> madvise_cold_or_pageout_pte_range() were both retro-fitted to call
> flush_tlb_batched_pending() immediately after initially acquiring the
> PTL, they both temporarily release the PTL to split a large folio if
> they stumble upon one. In this case, when the PTL is re-acquired,
> flush_tlb_batched_pending() must be called again, but previously it was
> not. Let's fix that.
>
> There are 2 Fixes: tags here: the first is the commit that fixed
> madvise_free_pte_range(). The second is the commit that added
> madvise_cold_or_pageout_pte_range(), which looks like it copy/pasted the
> faulty pattern from madvise_free_pte_range().
>
> This is a theoretical bug discovered during code review.

Yeah, good point. So we could race like this:

CPU 0                         CPU 1
madvise_free_pte_range
  pte_offset_map_lock
  flush_tlb_batched_pending
  pte_unmap_unlock
                              try_to_unmap_one
                                get_and_clear_full_ptes
                                set_tlb_ubc_flush_pending
  pte_offset_map_lock
[old PTE still cached in TLB]

which is not a security problem for the kernel (a TLB flush will
happen before the page is actually freed) but affects userspace
correctness.

(Maybe we could at some point refactor this into tlb_finish_mmu(), and
give tlb_finish_mmu() a boolean parameter for "did we maybe try to
unmap/protect some range of memory"; just like how tlb_finish_mmu()
already does the safety flush against concurrent mmu_gather
operations. Maybe that would make it harder to mess this up?)
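
A rough illustration of that idea (purely hypothetical; the parameter
name and its placement are invented here):

        void tlb_finish_mmu(struct mmu_gather *tlb, bool modified_range)
        {
                ...
                /* hypothetical: callers that may have unmapped/protected
                 * some range of user memory pass true */
                if (modified_range)
                        flush_tlb_batched_pending(tlb->mm);
                tlb_flush_mmu(tlb);
                ...
        }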

> Cc: stable@vger.kernel.org
> Fixes: 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries")
> Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Reviewed-by: Jann Horn <jannh@google.com>



* Re: [PATCH v1] mm: Close theoretical race where stale TLB entries could linger
  2025-06-06  9:28 [PATCH v1] mm: Close theoretical race where stale TLB entries could linger Ryan Roberts
  2025-06-06 12:37 ` Jann Horn
@ 2025-06-06 12:43 ` David Hildenbrand
From: David Hildenbrand @ 2025-06-06 12:43 UTC
  To: Ryan Roberts, Andrew Morton, Liam R. Howlett, Lorenzo Stoakes,
	Vlastimil Babka, Jann Horn, Mel Gorman
  Cc: linux-mm, linux-kernel, stable

On 06.06.25 11:28, Ryan Roberts wrote:
> Commit 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with
> a parallel reclaim leaving stale TLB entries") described a theoretical
> race as follows:
> 
> """
> Nadav Amit identified a theoretical race between page reclaim and
> mprotect due to TLB flushes being batched outside of the PTL being held.
> 
> He described the race as follows:
> 
> 	CPU0                            CPU1
> 	----                            ----
> 					user accesses memory using RW PTE
> 					[PTE now cached in TLB]
> 	try_to_unmap_one()
> 	==> ptep_get_and_clear()
> 	==> set_tlb_ubc_flush_pending()
> 					mprotect(addr, PROT_READ)
> 					==> change_pte_range()
> 					==> [ PTE non-present - no flush ]
> 
> 					user writes using cached RW PTE
> 	...
> 
> 	try_to_unmap_flush()
> 
> The same type of race exists for reads when protecting for PROT_NONE and
> also exists for operations that can leave an old TLB entry behind such
> as munmap, mremap and madvise.
> """
> 
> The solution was to introduce flush_tlb_batched_pending() and call it
> under the PTL from mprotect/madvise/munmap/mremap to complete any
> pending TLB flushes.
> 
> However, while madvise_free_pte_range() and
> madvise_cold_or_pageout_pte_range() were both retro-fitted to call
> flush_tlb_batched_pending() immediately after initially acquiring the
> PTL, they both temporarily release the PTL to split a large folio if
> they stumble upon one. In this case, when the PTL is re-acquired,
> flush_tlb_batched_pending() must be called again, but previously it was
> not. Let's fix that.
> 
> There are 2 Fixes: tags here: the first is the commit that fixed
> madvise_free_pte_range(). The second is the commit that added
> madvise_cold_or_pageout_pte_range(), which looks like it copy/pasted the
> faulty pattern from madvise_free_pte_range().
> 
> This is a theoretical bug discovered during code review.
> 
> Cc: stable@vger.kernel.org
> Fixes: 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with a parallel reclaim leaving stale TLB entries")
> Fixes: 9c276cc65a58 ("mm: introduce MADV_COLD")
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> 
> Applies on today's mm-unstable (3f676fe5c7a0). All mm selftests continue to
> pass.
> 
> Thanks,
> Ryan

LGTM

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



