* [PATCH 1/3] sh: use folio_mapped() instead of page_mapped() in sh4_flush_cache_page()
2026-04-27 11:43 [PATCH 0/3] mm: remove page_mapped() David Hildenbrand (Arm)
@ 2026-04-27 11:43 ` David Hildenbrand (Arm)
2026-04-27 12:43 ` Matthew Wilcox
2026-04-27 11:43 ` [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages() David Hildenbrand (Arm)
` (2 subsequent siblings)
3 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-27 11:43 UTC (permalink / raw)
To: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett
Cc: linux-sh, linux-kernel, bpf, linux-mm, David Hildenbrand (Arm)
We already have the folio in our hands, so let's just use
folio_mapped().
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
arch/sh/mm/cache-sh4.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sh/mm/cache-sh4.c b/arch/sh/mm/cache-sh4.c
index 83fb34b39ca7..8bc9ce541c14 100644
--- a/arch/sh/mm/cache-sh4.c
+++ b/arch/sh/mm/cache-sh4.c
@@ -248,7 +248,7 @@ static void sh4_flush_cache_page(void *args)
*/
map_coherent = (current_cpu_data.dcache.n_aliases &&
test_bit(PG_dcache_clean, folio_flags(folio, 0)) &&
- page_mapped(page));
+ folio_mapped(folio));
if (map_coherent)
vaddr = kmap_coherent(page, address);
else
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread
* Re: [PATCH 1/3] sh: use folio_mapped() instead of page_mapped() in sh4_flush_cache_page()
2026-04-27 11:43 ` [PATCH 1/3] sh: use folio_mapped() instead of page_mapped() in sh4_flush_cache_page() David Hildenbrand (Arm)
@ 2026-04-27 12:43 ` Matthew Wilcox
0 siblings, 0 replies; 19+ messages in thread
From: Matthew Wilcox @ 2026-04-27 12:43 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm
On Mon, Apr 27, 2026 at 01:43:14PM +0200, David Hildenbrand (Arm) wrote:
> We already have the folio in our hands, so let's just use
> folio_mapped().
>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
Thanks; that was an oversight on my part.
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 11:43 [PATCH 0/3] mm: remove page_mapped() David Hildenbrand (Arm)
2026-04-27 11:43 ` [PATCH 1/3] sh: use folio_mapped() instead of page_mapped() in sh4_flush_cache_page() David Hildenbrand (Arm)
@ 2026-04-27 11:43 ` David Hildenbrand (Arm)
2026-04-27 12:17 ` Andrew Morton
` (2 more replies)
2026-04-27 11:43 ` [PATCH 3/3] mm: remove page_mapped() David Hildenbrand (Arm)
2026-04-27 20:59 ` [PATCH 0/3] " David Hildenbrand (Arm)
3 siblings, 3 replies; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-27 11:43 UTC (permalink / raw)
To: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett
Cc: linux-sh, linux-kernel, bpf, linux-mm, David Hildenbrand (Arm)
Pages that BPF arena code maps are allocated through
bpf_map_alloc_pages(), which does not allocate folios but pages.
In the future, pages will not have a mapcount, only folios will.
Converting the code to use folios and rely on folio_mapped() sounds like
the wrong approach.
Should BPF arena code allocate folios and use folio_mapped() here? But
likely we would not want to use folios here longterm, as we don't really
need folio information.
Hard to tell. But in the meantime, we can simply use the page refcount
instead, as a heuristic whether the page might be mapped to user space
and we would want to try zapping it, so we can get rid of page_mapped().
Page allocation will give us a page with a refcount of 1. Any user space
mapping adds a page reference. While there can be references from other
subsystems (e.g., GUP), in the common case for this test here relying on
the page count is good enough.
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
kernel/bpf/arena.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
index 802656c6fd3c..608c55c260bc 100644
--- a/kernel/bpf/arena.c
+++ b/kernel/bpf/arena.c
@@ -729,7 +729,7 @@ static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt,
llist_for_each_safe(pos, t, __llist_del_all(&free_pages)) {
page = llist_entry(pos, struct page, pcp_llist);
- if (page_cnt == 1 && page_mapped(page)) /* mapped by some user process */
+ if (page_cnt == 1 && page_ref_count(page) > 1) /* maybe mapped by user space */
/* Optimization for the common case of page_cnt==1:
* If page wasn't mapped into some user vma there
* is no need to call zap_pages which is slow. When
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 11:43 ` [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages() David Hildenbrand (Arm)
@ 2026-04-27 12:17 ` Andrew Morton
2026-04-27 15:00 ` Alexei Starovoitov
2026-04-27 13:00 ` Matthew Wilcox
2026-04-27 20:14 ` sashiko-bot
2 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-04-27 12:17 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Lorenzo Stoakes,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Rik van Riel, Harry Yoo, Jann Horn, Matthew Wilcox,
Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm
On Mon, 27 Apr 2026 13:43:15 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> Pages that BPF arena code maps are allocated through
> bpf_map_alloc_pages(), which does not allocate folios but pages.
>
> In the future, pages will not have a mapcount, only folios will.
> Converting the code to use folios and rely on folio_mapped() sounds like
> the wrong approach.
>
> Should BPF arena code allocate folios and use folio_mapped() here? But
> likely we would not want to use folios here longterm, as we don't really
> need folio information.
>
> Hard to tell. But in the meantime, we can simply use the page refcount
> instead, as a heuristic whether the page might be mapped to user space
> and we would want to try zapping it, so we can get rid of page_mapped().
>
> Page allocation will give us a page with a refcount of 1. Any user space
> mapping adds a page reference. While there can be references from other
> subsystems (e.g., GUP), in the common case for this test here relying on
> the page count is good enough.
>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
> kernel/bpf/arena.c | 2 +-
BPF maintainers will probably want to carry this in the BPF tree.
That's fine - please go ahead and add it. I'll carry a duplicate in
mm.git so it compiles.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 12:17 ` Andrew Morton
@ 2026-04-27 15:00 ` Alexei Starovoitov
2026-04-27 15:15 ` Andrew Morton
0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2026-04-27 15:00 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand (Arm), Yoshinori Sato, Rich Felker,
John Paul Adrian Glaubitz, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh, LKML, bpf,
linux-mm
On Mon, Apr 27, 2026 at 1:18 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 27 Apr 2026 13:43:15 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
>
> > Pages that BPF arena code maps are allocated through
> > bpf_map_alloc_pages(), which does not allocate folios but pages.
> >
> > In the future, pages will not have a mapcount, only folios will.
> > Converting the code to use folios and rely on folio_mapped() sounds like
> > the wrong approach.
> >
> > Should BPF arena code allocate folios and use folio_mapped() here?
no
> But
> > likely we would not want to use folios here longterm, as we don't really
> > need folio information.
exactly
> > Hard to tell. But in the meantime, we can simply use the page refcount
> > instead, as a heuristic whether the page might be mapped to user space
> > and we would want to try zapping it, so we can get rid of page_mapped().
> >
> > Page allocation will give us a page with a refcount of 1. Any user space
> > mapping adds a page reference. While there can be references from other
> > subsystems (e.g., GUP), in the common case for this test here relying on
> > the page count is good enough.
makes sense to me.
> >
> > Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> > ---
> > kernel/bpf/arena.c | 2 +-
>
> BPF maintainers will probably want to carry this in the BPF tree.
> That's fine - please go ahead and add it. I'll carry a duplicate in
> mm.git so it compiles.
We cannot carry the same patch in 2 trees.
Sooner or later it will create problems for linux-next
and issues during merge window if more changes
are done in the same area.
The only way to share a patch between trees is to
create a stable branch and pull it into 2 trees
then sha will be the same,
but the mm tree has its own way of doing things,
so this patch needs to stay in mm only, and no one
should be touching adjacent lines :(
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 15:00 ` Alexei Starovoitov
@ 2026-04-27 15:15 ` Andrew Morton
2026-04-27 15:27 ` Alexei Starovoitov
0 siblings, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-04-27 15:15 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: David Hildenbrand (Arm), Yoshinori Sato, Rich Felker,
John Paul Adrian Glaubitz, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh, LKML, bpf,
linux-mm
On Mon, 27 Apr 2026 16:00:59 +0100 Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
> > >
> > > Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> > > ---
> > > kernel/bpf/arena.c | 2 +-
> >
> > BPF maintainers will probably want to carry this in the BPF tree.
> > That's fine - please go ahead and add it. I'll carry a duplicate in
> > mm.git so it compiles.
>
> We cannot carry the same patch in 2 trees.
Git is fine with that.
> Sooner or later it will create problems for linux-next
> and issues during merge window if more changes
> are done in the same area.
> The only way to share a patch between trees is to
> create a stable branch and pull it into 2 trees
> then sha will be the same,
For a single one-line patch?
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 15:15 ` Andrew Morton
@ 2026-04-27 15:27 ` Alexei Starovoitov
0 siblings, 0 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2026-04-27 15:27 UTC (permalink / raw)
To: Andrew Morton
Cc: David Hildenbrand (Arm), Yoshinori Sato, Rich Felker,
John Paul Adrian Glaubitz, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh, LKML, bpf,
linux-mm
On Mon, Apr 27, 2026 at 4:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 27 Apr 2026 16:00:59 +0100 Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> > > >
> > > > Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> > > > ---
> > > > kernel/bpf/arena.c | 2 +-
> > >
> > > BPF maintainers will probably want to carry this in the BPF tree.
> > > That's fine - please go ahead and add it. I'll carry a duplicate in
> > > mm.git so it compiles.
> >
> > We cannot carry the same patch in 2 trees.
>
> Git is fine with that.
>
> > Sooner or later it will create problems for linux-next
> > and issues during merge window if more changes
> > are done in the same area.
> > The only way to share a patch between trees is to
> > create a stable branch and pull it into 2 trees
> > then sha will be the same,
>
> For a single one-line patch?
The number of changed lines is irrelevant.
If a later commit moves this line, or other lines
within the hunk, git will get confused depending
on the order of pulls.
We learned this lesson the hard way.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 11:43 ` [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages() David Hildenbrand (Arm)
2026-04-27 12:17 ` Andrew Morton
@ 2026-04-27 13:00 ` Matthew Wilcox
2026-04-27 20:14 ` sashiko-bot
2 siblings, 0 replies; 19+ messages in thread
From: Matthew Wilcox @ 2026-04-27 13:00 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm
On Mon, Apr 27, 2026 at 01:43:15PM +0200, David Hildenbrand (Arm) wrote:
> Pages that BPF arena code maps are allocated through
> bpf_map_alloc_pages(), which does not allocate folios but pages.
>
> In the future, pages will not have a mapcount, only folios will.
> Converting the code to use folios and rely on folio_mapped() sounds like
> the wrong approach.
>
> Should BPF arena code allocate folios and use folio_mapped() here? But
> likely we would not want to use folios here longterm, as we don't really
> need folio information.
>
> Hard to tell. But in the meantime, we can simply use the page refcount
> instead, as a heuristic whether the page might be mapped to user space
> and we would want to try zapping it, so we can get rid of page_mapped().
>
> Page allocation will give us a page with a refcount of 1. Any user space
> mapping adds a page reference. While there can be references from other
> subsystems (e.g., GUP), in the common case for this test here relying on
> the page count is good enough.
Well, if we GUPped the page, it must have been mapped at some point ;-)
Anyway, I approve of this patch; it lets us defer the hard decisions and
work until we know what we really need.
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
2026-04-27 11:43 ` [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages() David Hildenbrand (Arm)
2026-04-27 12:17 ` Andrew Morton
2026-04-27 13:00 ` Matthew Wilcox
@ 2026-04-27 20:14 ` sashiko-bot
2 siblings, 0 replies; 19+ messages in thread
From: sashiko-bot @ 2026-04-27 20:14 UTC (permalink / raw)
Cc: bpf
Sashiko AI review found 2 potential issue(s):
- [Critical] Replacing `page_mapped()` with `page_ref_count() > 1` removes the synchronous mapcount check, allowing the lockless `zap_pages()` page table walk to execute concurrently with `free_pgtables()` during `munmap()`, leading to a page table use-after-free.
- [Medium] The patch incorrectly uses a lifetime counter (`page_ref_count`) as a semantic indicator to determine if a page is mapped in user space, violating MM guidelines.
--
commit 7d590c0af75e613441c8863162be310388ecc992
Author: David Hildenbrand (Arm) <david@kernel.org>
bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages()
This commit changes the heuristic in arena_free_pages() to use
page_ref_count(page) > 1 instead of page_mapped() to determine whether
to call zap_pages(). This avoids mapcount usage as pages will not have
a mapcount in the future.
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index 802656c6fd3c..608c55c260bc 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
[ ... ]
> @@ -729,7 +729,7 @@ static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt,
>
> llist_for_each_safe(pos, t, __llist_del_all(&free_pages)) {
> page = llist_entry(pos, struct page, pcp_llist);
> - if (page_cnt == 1 && page_mapped(page)) /* mapped by some user process */
> + if (page_cnt == 1 && page_ref_count(page) > 1) /* maybe mapped by user space */
Can this lead to a page table use-after-free during concurrent munmap()?
zap_pages() iterates over arena->vma_list and walks user page tables locklessly.
During a concurrent munmap(), unmap_region() executes unmap_vmas() (which drops
mapcounts), free_pgtables() (which unlinks and frees page tables), and finally
tlb_finish_mmu() (which drops the batched page refcounts via mmu_gather).
The VMA is only removed from arena->vma_list later during arena_vm_close(),
meaning it is still visible to zap_pages() during free_pgtables().
Since mmu_gather retains the page reference until tlb_finish_mmu(),
page_ref_count(page) > 1 will remain true during free_pgtables().
With the old page_mapped() check, it would correctly evaluate to false because
unmap_vmas() had already dropped the mapcount to 0. It appears this change
could allow zap_pages() to walk page tables while they are actively being freed.
This could also be forced by artificially inflating the refcount using GUP pins.
Also, does this use of page_ref_count() conflict with the MM guideline that
refcount should be treated as a lifetime counter rather than a semantic
indicator?
Speculative references (like GUP pins or page_idle) can transiently inflate
the refcount independently of user space mappings.
> /* Optimization for the common case of page_cnt==1:
> * If page wasn't mapped into some user vma there
> * is no need to call zap_pages which is slow. When
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260427-page_mapped-v1-0-e89c3592c74c@kernel.org?part=2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 3/3] mm: remove page_mapped()
2026-04-27 11:43 [PATCH 0/3] mm: remove page_mapped() David Hildenbrand (Arm)
2026-04-27 11:43 ` [PATCH 1/3] sh: use folio_mapped() instead of page_mapped() in sh4_flush_cache_page() David Hildenbrand (Arm)
2026-04-27 11:43 ` [PATCH 2/3] bpf: arena: use page_ref_count() instead of page_mapped() in arena_free_pages() David Hildenbrand (Arm)
@ 2026-04-27 11:43 ` David Hildenbrand (Arm)
2026-04-27 13:12 ` Matthew Wilcox
2026-04-27 13:21 ` Andrew Morton
2026-04-27 20:59 ` [PATCH 0/3] " David Hildenbrand (Arm)
3 siblings, 2 replies; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-27 11:43 UTC (permalink / raw)
To: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett
Cc: linux-sh, linux-kernel, bpf, linux-mm, David Hildenbrand (Arm)
Let's replace the last user of page_mapped() by folio_mapped() so we
can get rid of page_mapped().
Replace the remaining occurrences of page_mapped() in rmap documentation
by folio_mapped().
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
include/linux/mm.h | 10 ----------
mm/memory.c | 2 +-
mm/rmap.c | 8 ++++----
3 files changed, 5 insertions(+), 15 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index af23453e9dbd..87fcd068303a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1888,16 +1888,6 @@ static inline bool folio_mapped(const struct folio *folio)
return folio_mapcount(folio) >= 1;
}
-/*
- * Return true if this page is mapped into pagetables.
- * For compound page it returns true if any sub-page of compound page is mapped,
- * even if this particular sub-page is not itself mapped by any PTE or PMD.
- */
-static inline bool page_mapped(const struct page *page)
-{
- return folio_mapped(page_folio(page));
-}
-
static inline struct page *virt_to_head_page(const void *x)
{
struct page *page = virt_to_page(x);
diff --git a/mm/memory.c b/mm/memory.c
index ea6568571131..99854e6a2793 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5464,7 +5464,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
if (unlikely(PageHWPoison(vmf->page))) {
vm_fault_t poisonret = VM_FAULT_HWPOISON;
if (ret & VM_FAULT_LOCKED) {
- if (page_mapped(vmf->page))
+ if (folio_mapped(folio))
unmap_mapping_folio(folio);
/* Retry if a clean folio was removed from the cache. */
if (mapping_evict_folio(folio->mapping, folio))
diff --git a/mm/rmap.c b/mm/rmap.c
index 78b7fb5f367c..fb3c351f8c45 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -571,7 +571,7 @@ void __init anon_vma_init(void)
* In case it was remapped to a different anon_vma, the new anon_vma will be a
* child of the old anon_vma, and the anon_vma lifetime rules will therefore
* ensure that any anon_vma obtained from the page will still be valid for as
- * long as we observe page_mapped() [ hence all those page_mapped() tests ].
+ * long as we observe folio_mapped() [ hence all those folio_mapped() tests ].
*
* All users of this function must be very careful when walking the anon_vma
* chain and verify that the page in question is indeed mapped in it
@@ -1999,7 +1999,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
/*
* When racing against e.g. zap_pte_range() on another cpu,
* in between its ptep_get_and_clear_full() and folio_remove_rmap_*(),
- * try_to_unmap() may return before page_mapped() has become false,
+ * try_to_unmap() may return before folio_mapped() has become false,
* if page table locking is skipped: use TTU_SYNC to wait for that.
*/
if (flags & TTU_SYNC)
@@ -2426,7 +2426,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
/*
* When racing against e.g. zap_pte_range() on another cpu,
* in between its ptep_get_and_clear_full() and folio_remove_rmap_*(),
- * try_to_migrate() may return before page_mapped() has become false,
+ * try_to_migrate() may return before folio_mapped() has become false,
* if page table locking is skipped: use TTU_SYNC to wait for that.
*/
if (flags & TTU_SYNC)
@@ -2927,7 +2927,7 @@ static struct anon_vma *rmap_walk_anon_lock(const struct folio *folio,
/*
* Note: remove_migration_ptes() cannot use folio_lock_anon_vma_read()
- * because that depends on page_mapped(); but not all its usages
+ * because that depends on folio_mapped(); but not all its usages
* are holding mmap_lock. Users without mmap_lock are required to
* take a reference count to prevent the anon_vma disappearing
*/
--
2.43.0
^ permalink raw reply related	[flat|nested] 19+ messages in thread
* Re: [PATCH 3/3] mm: remove page_mapped()
2026-04-27 11:43 ` [PATCH 3/3] mm: remove page_mapped() David Hildenbrand (Arm)
@ 2026-04-27 13:12 ` Matthew Wilcox
2026-04-27 13:21 ` Andrew Morton
1 sibling, 0 replies; 19+ messages in thread
From: Matthew Wilcox @ 2026-04-27 13:12 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm
On Mon, Apr 27, 2026 at 01:43:16PM +0200, David Hildenbrand (Arm) wrote:
> Let's replace the last user of page_mapped() by folio_mapped() so we
> can get rid of page_mapped().
Yay!
> Replace the remaining occurrences of page_mapped() in rmap documentation
> by folio_mapped().
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> +++ b/mm/memory.c
> @@ -5464,7 +5464,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
> if (unlikely(PageHWPoison(vmf->page))) {
> vm_fault_t poisonret = VM_FAULT_HWPOISON;
> if (ret & VM_FAULT_LOCKED) {
> - if (page_mapped(vmf->page))
> + if (folio_mapped(folio))
> unmap_mapping_folio(folio);
The idiot who authored 01d1e0e6b7d9 really should have done this at the
time ... Oh, wait, I see what I was trying to do.
I believe my thinking was that we only needed to unmap the folio if
this specific page that had hardware poison was mapped. But no, we need
to unmap the entire folio if any page in it is mapped.
Does this affect recoverability from hwpoison?  I don't think so.  When
we detect hwpoison, the first thing we try to do is split the folio.
Of course that can fail, but if it does, we kill the process.
So yes, my R-b above stands.
^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: [PATCH 3/3] mm: remove page_mapped()
2026-04-27 11:43 ` [PATCH 3/3] mm: remove page_mapped() David Hildenbrand (Arm)
2026-04-27 13:12 ` Matthew Wilcox
@ 2026-04-27 13:21 ` Andrew Morton
2026-04-27 13:23 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 19+ messages in thread
From: Andrew Morton @ 2026-04-27 13:21 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Lorenzo Stoakes,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Rik van Riel, Harry Yoo, Jann Horn, Matthew Wilcox,
Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm,
Breno Leitao
On Mon, 27 Apr 2026 13:43:16 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> Let's replace the last user of page_mapped() by folio_mapped() so we
> can get rid of page_mapped().
>
> Replace the remaining occurrences of page_mapped() in rmap documentation
> by folio_mapped().
This broke Breno's "mm/memory-failure: add panic option for
unrecoverable pages"
(https://lore.kernel.org/20260424-ecc_panic-v5-2-a35f4b50425c@debian.org),
which added a new page_mapped() call. I made the below adjustment to
Breno's patch:
--- a/mm/memory-failure.c~mm-memory-failure-add-panic-option-for-unrecoverable-pages-fix
+++ a/mm/memory-failure.c
@@ -1353,7 +1353,7 @@ static bool panic_on_unrecoverable_mf(un
cpu_relax();
return page_count(p) == 0 &&
!PageLRU(p) &&
- !page_mapped(p) &&
+ !folio_mapped(page_folio(p)) &&
!page_folio(p)->mapping &&
!is_free_buddy_page(p);
default:
_
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 3/3] mm: remove page_mapped()
2026-04-27 13:21 ` Andrew Morton
@ 2026-04-27 13:23 ` David Hildenbrand (Arm)
2026-04-27 14:42 ` Breno Leitao
0 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-27 13:23 UTC (permalink / raw)
To: Andrew Morton
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Lorenzo Stoakes,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Rik van Riel, Harry Yoo, Jann Horn, Matthew Wilcox,
Liam R. Howlett, linux-sh, linux-kernel, bpf, linux-mm,
Breno Leitao
On 4/27/26 15:21, Andrew Morton wrote:
> On Mon, 27 Apr 2026 13:43:16 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
>
>> Let's replace the last user of page_mapped() by folio_mapped() so we
>> can get rid of page_mapped().
>>
>> Replace the remaining occurrences of page_mapped() in rmap documentation
>> by folio_mapped().
>
> This broke Breno's "mm/memory-failure: add panic option for
> unrecoverable pages"
> (https://lore.kernel.org/20260424-ecc_panic-v5-2-a35f4b50425c@debian.org),
> which added a new page_mapped() call. I made the below adjustment to
> Breno's patch:
>
> --- a/mm/memory-failure.c~mm-memory-failure-add-panic-option-for-unrecoverable-pages-fix
> +++ a/mm/memory-failure.c
> @@ -1353,7 +1353,7 @@ static bool panic_on_unrecoverable_mf(un
> cpu_relax();
> return page_count(p) == 0 &&
> !PageLRU(p) &&
> - !page_mapped(p) &&
> + !folio_mapped(page_folio(p)) &&
> !page_folio(p)->mapping &&
If we have a folio, we should really look up the folio once. Not 4 times.
Breno's patch likely needs some love. :)
--
Cheers,
David
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 3/3] mm: remove page_mapped()
2026-04-27 13:23 ` David Hildenbrand (Arm)
@ 2026-04-27 14:42 ` Breno Leitao
2026-04-27 14:59 ` Matthew Wilcox
0 siblings, 1 reply; 19+ messages in thread
From: Breno Leitao @ 2026-04-27 14:42 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Andrew Morton, Yoshinori Sato, Rich Felker,
John Paul Adrian Glaubitz, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman,
Kumar Kartikeya Dwivedi, Song Liu, Yonghong Song, Jiri Olsa,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh,
linux-kernel, bpf, linux-mm
Hello David,
On Mon, Apr 27, 2026 at 03:23:33PM +0200, David Hildenbrand (Arm) wrote:
> On 4/27/26 15:21, Andrew Morton wrote:
> > On Mon, 27 Apr 2026 13:43:16 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> >
> >> Let's replace the last user of page_mapped() by folio_mapped() so we
> >> can get rid of page_mapped().
> >>
> >> Replace the remaining occurrences of page_mapped() in rmap documentation
> >> by folio_mapped().
> >
> > This broke Breno's "mm/memory-failure: add panic option for
> > unrecoverable pages"
> > (https://lore.kernel.org/20260424-ecc_panic-v5-2-a35f4b50425c@debian.org),
> > which added a new page_mapped() call. I made the below adjustment to
> > Breno's patch:
> >
> > --- a/mm/memory-failure.c~mm-memory-failure-add-panic-option-for-unrecoverable-pages-fix
> > +++ a/mm/memory-failure.c
> > @@ -1353,7 +1353,7 @@ static bool panic_on_unrecoverable_mf(un
> > cpu_relax();
> > return page_count(p) == 0 &&
> > !PageLRU(p) &&
> > - !page_mapped(p) &&
> > + !folio_mapped(page_folio(p)) &&
> > !page_folio(p)->mapping &&
>
> If we have a folio, we should really look up the folio once. Not 4 times.
Why 4 times?
> Breno's patch likely needs some love. :)
Would something like the following give it all the love in the world?
folio = page_folio(p);
return page_count(p) == 0 &&
!PageLRU(p) &&
!folio_mapped(folio) &&
!folio->mapping &&
!is_free_buddy_page(p);
Thanks,
--breno
^ permalink raw reply [flat|nested] 19+ messages in thread

* Re: [PATCH 3/3] mm: remove page_mapped()
2026-04-27 14:42 ` Breno Leitao
@ 2026-04-27 14:59 ` Matthew Wilcox
0 siblings, 0 replies; 19+ messages in thread
From: Matthew Wilcox @ 2026-04-27 14:59 UTC (permalink / raw)
To: Breno Leitao
Cc: David Hildenbrand (Arm), Andrew Morton, Yoshinori Sato,
Rich Felker, John Paul Adrian Glaubitz, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Song Liu,
Yonghong Song, Jiri Olsa, Lorenzo Stoakes, Vlastimil Babka,
Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Rik van Riel,
Harry Yoo, Jann Horn, Liam R. Howlett, linux-sh, linux-kernel,
bpf, linux-mm
On Mon, Apr 27, 2026 at 07:42:46AM -0700, Breno Leitao wrote:
> > > @@ -1353,7 +1353,7 @@ static bool panic_on_unrecoverable_mf(un
> > > cpu_relax();
> > > return page_count(p) == 0 &&
> > > !PageLRU(p) &&
> > > - !page_mapped(p) &&
> > > + !folio_mapped(page_folio(p)) &&
> > > !page_folio(p)->mapping &&
> >
> > If we have a folio, we should really lookup the folio once. Not 4 times.
>
> Why 4 times?
Because there are page_folio() calls hidden in PageLRU, page_mapped()
and page_count().
> > Breno's patch likely needs some love. :)
>
> Would something like the following give it all the love in the world?
>
> folio = page_folio(p);
> return page_count(p) == 0 &&
> !PageLRU(p) &&
> !folio_mapped(folio) &&
> !folio->mapping &&
> !is_free_buddy_page(p);
No. You need to immerse yourself more deeply in the folio transition
;-)
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 0/3] mm: remove page_mapped()
2026-04-27 11:43 [PATCH 0/3] mm: remove page_mapped() David Hildenbrand (Arm)
` (2 preceding siblings ...)
2026-04-27 11:43 ` [PATCH 3/3] mm: remove page_mapped() David Hildenbrand (Arm)
@ 2026-04-27 20:59 ` David Hildenbrand (Arm)
2026-04-27 21:38 ` Alexei Starovoitov
3 siblings, 1 reply; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-27 20:59 UTC (permalink / raw)
To: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett
Cc: linux-sh, linux-kernel, bpf, linux-mm
On 4/27/26 13:43, David Hildenbrand (Arm) wrote:
> While preparing my slides for an LSF/MM talk, I realized that I did not
> yet remove page_mapped().
>
> So let's do that. In the BPF arena code it's unclear which memdesc we
> would want to allocate in the future: certainly something with a
> refcount, but likely none with a mapcount. So let's just rely on
> the page refcount instead to decide whether we want to try zapping the
> page from user page tables.
>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
I scanned the AI review and I think it found something that is not related to this
patch.
We use the page_mapped()->page_ref_count() check as an optimization to avoid
calling zap_vma_range(). We must be able to call it even without that optimization.
Just like the bulk zap call earlier
if (page_cnt > 1)
/* bulk zap if multiple pages being freed */
zap_pages(arena, full_uaddr, page_cnt);
It talks about concurrent "munmap(), unmap_region() executes unmap_vmas()"
racing with our zap_vma_range().
Looking into the details, arena_map_mmap() calls remember_vma(). We reject
mremap and VMA split. arena_vm_close() removes the VMA from the list. The
arena->lock protects our VMA list.
So in zap_pages, the VMA cannot go away. If we find a VMA, ->close could not
have been called yet.
In vma.c, we call remove_vma() after vms_clear_pte(). So after unmapping the
pages and freeing the page tables.
So munmap() can indeed race with zap_vma_range(), and the page_mapped() check
would not have changed anything about that really.
@BPF folks: does BPF take the mmap lock in read mode anywhere before calling
zap_vma_range()? It should do that.
--
Cheers,
David
^ permalink raw reply [flat|nested] 19+ messages in thread

* Re: [PATCH 0/3] mm: remove page_mapped()
2026-04-27 20:59 ` [PATCH 0/3] " David Hildenbrand (Arm)
@ 2026-04-27 21:38 ` Alexei Starovoitov
2026-04-28 5:37 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 19+ messages in thread
From: Alexei Starovoitov @ 2026-04-27 21:38 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh, LKML, bpf,
linux-mm
On Mon, Apr 27, 2026 at 9:59 PM David Hildenbrand (Arm)
<david@kernel.org> wrote:
>
> On 4/27/26 13:43, David Hildenbrand (Arm) wrote:
> > While preparing my slides for an LSF/MM talk, I realized that I did not
> > yet remove page_mapped().
> >
> > So let's do that. In the BPF arena code it's unclear which memdesc we
> > would want to allocate in the future: certainly something with a
> > refcount, but likely none with a mapcount. So let's just rely on
> > the page refcount instead to decide whether we want to try zapping the
> > page from user page tables.
> >
> > Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> > ---
>
> I scanned the AI review and I think it found something that is not related to this
> patch.
>
> We use the page_mapped()->page_ref_count() check as an optimization to avoid
> calling zap_vma_range(). We must be able to call it even without that optimization.
>
> Just like the bulk zap call earlier
>
> if (page_cnt > 1)
> /* bulk zap if multiple pages being freed */
> zap_pages(arena, full_uaddr, page_cnt);
>
> It talks about concurrent "munmap(), unmap_region() executes unmap_vmas()"
> racing with our zap_vma_range().
>
> Looking into the details, arena_map_mmap() calls remember_vma(). We reject
> mremap and VMA split. arena_vm_close() removes the VMA from the list. The
> arena->lock protects our VMA list.
>
> So in zap_pages, the VMA cannot go away. If we find a VMA, ->close could not
> have been called yet.
>
> In vma.c, we call remove_vma() after vms_clear_pte(). So after unmapping the
> pages and freeing the page tables.
>
> So munmap() can indeed race with zap_vma_range(), and the page_mapped() check
> would not have changed anything about that really.
>
>
> @BPF folks: does BPF take the mmap lock in read mode anywhere before calling
> zap_vma_range()? It should do that.
Yes, but do NOT. As I explained to Andrew.
It's a git mess. I don't want any more changes that cross trees.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 0/3] mm: remove page_mapped()
2026-04-27 21:38 ` Alexei Starovoitov
@ 2026-04-28 5:37 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 19+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-28 5:37 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: Yoshinori Sato, Rich Felker, John Paul Adrian Glaubitz,
Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Song Liu, Yonghong Song, Jiri Olsa, Andrew Morton,
Lorenzo Stoakes, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Rik van Riel, Harry Yoo,
Jann Horn, Matthew Wilcox, Liam R. Howlett, linux-sh, LKML, bpf,
linux-mm, Ryan Roberts
On 4/27/26 23:38, Alexei Starovoitov wrote:
> On Mon, Apr 27, 2026 at 9:59 PM David Hildenbrand (Arm)
> <david@kernel.org> wrote:
>>
>> On 4/27/26 13:43, David Hildenbrand (Arm) wrote:
>>> While preparing my slides for an LSF/MM talk, I realized that I did not
>>> yet remove page_mapped().
>>>
>>> So let's do that. In the BPF arena code it's unclear which memdesc we
>>> would want to allocate in the future: certainly something with a
>>> refcount, but likely none with a mapcount. So let's just rely on
>>> the page refcount instead to decide whether we want to try zapping the
>>> page from user page tables.
>>>
>>> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
>>> ---
>>
>> I scanned the AI review and I think it found something that is not related to this
>> patch.
>>
>> We use the page_mapped()->page_ref_count() check as an optimization to avoid
>> calling zap_vma_range(). We must be able to call it even without that optimization.
>>
>> Just like the bulk zap call earlier
>>
>> if (page_cnt > 1)
>> /* bulk zap if multiple pages being freed */
>> zap_pages(arena, full_uaddr, page_cnt);
>>
>> It talks about concurrent "munmap(), unmap_region() executes unmap_vmas()"
>> racing with our zap_vma_range().
>>
>> Looking into the details, arena_map_mmap() calls remember_vma(). We reject
>> mremap and VMA split. arena_vm_close() removes the VMA from the list. The
>> arena->lock protects our VMA list.
>>
>> So in zap_pages, the VMA cannot go away. If we find a VMA, ->close could not
>> have been called yet.
>>
>> In vma.c, we call remove_vma() after vms_clear_pte(). So after unmapping the
>> pages and freeing the page tables.
>>
>> So munmap() can indeed race with zap_vma_range(), and the page_mapped() check
>> would not have changed anything about that really.
>>
>>
>> @BPF folks: does BPF take the mmap lock in read mode anywhere before calling
>> zap_vma_range()? It should do that.
>
> Yes, but do NOT.
In general, do NOT talk to me like that.
As I discussed recently with Ryan, we should update the locking requirements for
zap_vma_range().
I assume friendly BPF folks will take care of fixing this.
--
Cheers,
David
^ permalink raw reply [flat|nested] 19+ messages in thread