public inbox for linux-mm@kvack.org
* [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
@ 2026-03-26 16:26 Gregory Price
  2026-03-26 17:07 ` Pedro Falcato
  2026-03-26 19:21 ` Matthew Wilcox
  0 siblings, 2 replies; 7+ messages in thread
From: Gregory Price @ 2026-03-26 16:26 UTC
  To: linux-mm, akpm, hughd
  Cc: david, ljs, Liam.Howlett, vbabka, rppt, surenb, mhocko,
	baolin.wang, linux-kernel, kernel-team, stable

Inflating a VM's balloon while vhost-user-net fork+exec's a helper
triggers "still mapped when deleted" on the memfd backing guest RAM:

  BUG: Bad page cache in process __balloon  pfn:6520704
  page dumped because: still mapped when deleted
  ...
  shmem_undo_range+0x3fa/0x570
  shmem_fallocate+0x366/0x4d0
  vfs_fallocate+0x13c/0x310

This BUG also resulted in guests seeing stale mappings backed by a
zeroed page, causing guest kernel panics.  I was unable to trace that
specific interaction, but it appears to be related to THP splitting.

Two races allow PTEs to be re-installed for a folio that fallocate
is about to remove from page cache:

Race 1 — fault-around (filemap_map_pages):

  fallocate              fault-around           fork
  --------               ------------           ----
  set i_private
  unmap_mapping_range()
  # zaps PTEs
                       filemap_map_pages()
                        # re-maps folio!
                                              dup_mmap()
                                              # child VMA
                                              # in tree
  shmem_undo_range()
    lock folio
    unmap_mapping_folio()
    # child VMA:
    #   no PTE, skip
                                            copy_page_range()
                                              # copies PTE
    # parent VMA:
    #   zaps PTE
  filemap_remove_folio()
    # mapcount=1, BUG!

filemap_map_pages() is called directly as .map_pages, bypassing
shmem_fault()'s i_private synchronization.

Race 2 — shmem_fault TOCTOU:

  fallocate                   shmem_fault
  --------                    -----------
                            check i_private → NULL
  set i_private
  unmap_mapping_range()
  # zaps PTEs
                            shmem_get_folio_gfp()
                              # finds folio in cache
                            finish_fault()
                              # installs PTE
  shmem_undo_range()
    truncate_inode_folio()
      # mapcount=1, BUG!

Fix both races with invalidate_lock.

This matches the existing pattern used by secretmem_fault(),
udf_page_mkwrite(), and zonefs_filemap_page_mkwrite(), all of
which take invalidate_lock shared under mmap_lock in their
fault handlers.
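
Schematically, the pattern is (a sketch of the locking only, not a
compilable unit):

	/* hole-punch/truncate side, e.g. shmem_fallocate(PUNCH_HOLE): */
	filemap_invalidate_lock(mapping);	/* exclusive */
	unmap_mapping_range(mapping, start, len, 0);	/* zap PTEs */
	/* ... remove folios from the page cache ... */
	filemap_invalidate_unlock(mapping);

	/* fault side, e.g. secretmem_fault(): */
	filemap_invalidate_lock_shared(mapping);	/* nests under mmap_lock */
	/* ... look up folio, install PTE ... */
	filemap_invalidate_unlock_shared(mapping);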

This also requires removing the rcu_read_lock() from
do_fault_around() so that .map_pages may use sleeping locks.

The outer rcu_read_lock is redundant for all in-tree .map_pages
implementations: every one either IS filemap_map_pages (which
takes rcu_read_lock) or is a thin wrapper around it.

Fixes: d7c1755179b8 ("mm: implement ->map_pages for shmem/tmpfs")
Cc: stable@vger.kernel.org
Signed-off-by: Gregory Price <gourry@gourry.net>
---
 mm/memory.c |  2 --
 mm/shmem.c  | 33 ++++++++++++++++++++++++++++++---
 2 files changed, 30 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e44469f9cf65..838583591fdf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5900,11 +5900,9 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 			return VM_FAULT_OOM;
 	}
 
-	rcu_read_lock();
 	ret = vmf->vma->vm_ops->map_pages(vmf,
 			vmf->pgoff + from_pte - pte_off,
 			vmf->pgoff + to_pte - pte_off);
-	rcu_read_unlock();
 
 	return ret;
 }
diff --git a/mm/shmem.c b/mm/shmem.c
index 4ecefe02881d..5c654b86f3cf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2731,7 +2731,8 @@ static vm_fault_t shmem_falloc_wait(struct vm_fault *vmf, struct inode *inode)
 static vm_fault_t shmem_fault(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
-	gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
+	struct address_space *mapping = inode->i_mapping;
+	gfp_t gfp = mapping_gfp_mask(mapping);
 	struct folio *folio = NULL;
 	vm_fault_t ret = 0;
 	int err;
@@ -2747,8 +2748,15 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	}
 
 	WARN_ON_ONCE(vmf->page != NULL);
+	/*
+	 * shmem_fallocate(PUNCH_HOLE) holds invalidate_lock exclusive across
+	 * unmap+truncate.  Take it shared here so shmem_fault cannot obtain
+	 * a folio in the process of being punched.
+	 */
+	filemap_invalidate_lock_shared(mapping);
 	err = shmem_get_folio_gfp(inode, vmf->pgoff, 0, &folio, SGP_CACHE,
 				  gfp, vmf, &ret);
+	filemap_invalidate_unlock_shared(mapping);
 	if (err)
 		return vmf_error(err);
 	if (folio) {
@@ -3683,11 +3691,13 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		inode->i_private = &shmem_falloc;
 		spin_unlock(&inode->i_lock);
 
+		filemap_invalidate_lock(mapping);
 		if ((u64)unmap_end > (u64)unmap_start)
 			unmap_mapping_range(mapping, unmap_start,
 					    1 + unmap_end - unmap_start, 0);
 		shmem_truncate_range(inode, offset, offset + len - 1);
 		/* No need to unmap again: hole-punching leaves COWed pages */
+		filemap_invalidate_unlock(mapping);
 
 		spin_lock(&inode->i_lock);
 		inode->i_private = NULL;
@@ -5268,9 +5278,26 @@ static const struct super_operations shmem_ops = {
 #endif
 };
 
+/*
+ * shmem_fallocate(PUNCH_HOLE) holds invalidate_lock for write across
+ * unmap+truncate.  Take it for read here so fault-around cannot re-map
+ * pages being punched.
+ */
+static vm_fault_t shmem_map_pages(struct vm_fault *vmf,
+				  pgoff_t start_pgoff, pgoff_t end_pgoff)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	vm_fault_t ret;
+
+	filemap_invalidate_lock_shared(mapping);
+	ret = filemap_map_pages(vmf, start_pgoff, end_pgoff);
+	filemap_invalidate_unlock_shared(mapping);
+	return ret;
+}
+
 static const struct vm_operations_struct shmem_vm_ops = {
 	.fault		= shmem_fault,
-	.map_pages	= filemap_map_pages,
+	.map_pages	= shmem_map_pages,
 #ifdef CONFIG_NUMA
 	.set_policy     = shmem_set_policy,
 	.get_policy     = shmem_get_policy,
@@ -5282,7 +5309,7 @@ static const struct vm_operations_struct shmem_vm_ops = {
 
 static const struct vm_operations_struct shmem_anon_vm_ops = {
 	.fault		= shmem_fault,
-	.map_pages	= filemap_map_pages,
+	.map_pages	= shmem_map_pages,
 #ifdef CONFIG_NUMA
 	.set_policy     = shmem_set_policy,
 	.get_policy     = shmem_get_policy,
-- 
2.53.0




* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 16:26 [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race Gregory Price
@ 2026-03-26 17:07 ` Pedro Falcato
  2026-03-26 18:37   ` Gregory Price
  2026-03-26 19:21 ` Matthew Wilcox
  1 sibling, 1 reply; 7+ messages in thread
From: Pedro Falcato @ 2026-03-26 17:07 UTC
  To: Gregory Price
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 11:26:11AM -0500, Gregory Price wrote:
> Inflating a VM's balloon while vhost-user-net fork+exec's a helper
> triggers "still mapped when deleted" on the memfd backing guest RAM:
> 
>   BUG: Bad page cache in process __balloon  pfn:6520704
>   page dumped because: still mapped when deleted
>   ...
>   shmem_undo_range+0x3fa/0x570
>   shmem_fallocate+0x366/0x4d0
>   vfs_fallocate+0x13c/0x310
> 
> This BUG also resulted in guests seeing stale mappings backed by a
> zeroed page, causing guest kernel panics.  I was unable to trace that
> specific interaction, but it appears to be related to THP splitting.
> 
> Two races allow PTEs to be re-installed for a folio that fallocate
> is about to remove from page cache:

Hmm, I don't see how your patch fixes anything.

> 
> Race 1 — fault-around (filemap_map_pages):
> 
>   fallocate              fault-around           fork
>   --------               ------------           ----
>   set i_private
>   unmap_mapping_range()
>   # zaps PTEs
>                        filemap_map_pages()
>                         # re-maps folio!
>                                               dup_mmap()
>                                               # child VMA
>                                               # in tree
>   shmem_undo_range()
>     lock folio
>     unmap_mapping_folio()
	spin_lock(ptl);
>     # child VMA:
>     #   no PTE, skip
	spin_unlock(ptl);
>                                             copy_page_range()
                                               spin_lock(dst_ptl);
					       spin_lock(src_ptl);
						/* does not copy PTE. either
						 * we find a zapped PTE, or unmap_mapping_folio()
						 * finds two mappings instead of one. */
>						# copies PTE
>     # parent VMA:
>     #   zaps PTE
>   filemap_remove_folio()
>     # mapcount=1, BUG!
> 
> filemap_map_pages() is called directly as .map_pages, bypassing
> shmem_fault()'s i_private synchronization.
> 
> Race 2 — shmem_fault TOCTOU:
> 
>   fallocate                   shmem_fault
>   --------                    -----------
>                             check i_private → NULL
>   set i_private
>   unmap_mapping_range()
>   # zaps PTEs
>                             shmem_get_folio_gfp()
>                               # finds folio in cache
>                             finish_fault()
>                               # installs PTE
>   shmem_undo_range()
>     truncate_inode_folio()
	truncate_inode_folio() zaps the PTEs, thus mapcount = 0.
	shmem folio is locked by both truncate and shmem_fault().
>       # mapcount=1, BUG!
> 
> Fix both races with invalidate_lock.
> 

I don't see what you're seeing? Note that both map_pages and fault()
take the folio lock (map_pages does a trylock) to exclude against truncate
as well.

-- 
Pedro



* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 17:07 ` Pedro Falcato
@ 2026-03-26 18:37   ` Gregory Price
  2026-03-26 19:16     ` Pedro Falcato
  0 siblings, 1 reply; 7+ messages in thread
From: Gregory Price @ 2026-03-26 18:37 UTC
  To: Pedro Falcato
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 05:07:42PM +0000, Pedro Falcato wrote:
> > Two races allow PTEs to be re-installed for a folio that fallocate
> > is about to remove from page cache:
> 
> Hmm, I don't see how your patch fixes anything.
> 

after looking at your comments below i realized race 2 actually requires
the fork as well, which means they're both essentially variations of the
same race, so hopefully i can simplify the change log.

> >   fallocate              fault-around           fork
> >   --------               ------------           ----
> >   set i_private
> >   unmap_mapping_range()
> >   # zaps PTEs
> >                        filemap_map_pages()
> >                         # re-maps folio!
> >                                               dup_mmap()
> >                                               # child VMA
> >                                               # in tree
> >   shmem_undo_range()
> >     lock folio
> >     unmap_mapping_folio()
                  ^^^ i_mmap_lock_read held, iterates VMAs
> 	spin_lock(ptl);
                  ^^^ child VMA's PTL
> >     # child VMA:
> >     #   no PTE, skip
> 	spin_unlock(ptl);
                    ^^^ child VMA done, iterator moves on
		        it will not re-visit the child.

> >                                             copy_page_range()
>                                                spin_lock(dst_ptl);
                                                   ^ Child PTL
> 					       spin_lock(src_ptl);
                                                   ^ Parent PTL
> 						/* does not copy PTE. either
> 						 * we find a zapped PTE, or unmap_mapping_folio()
> 						 * finds two mappings instead of one. */

At this point, unmap_mapping_folio only processed the child VMA
(no PTE, skip). The parent PTE *has not* been zapped.

copy_page_range() acquires src_ptl (parent) and reads a present PTE,
and boom copies it to child.

When it reaches the parent VMA next, it zaps the parent PTE,
but the child PTE (just installed) survives.  

> > 
> > Fix both races with invalidate_lock.
> > 
> 
> I don't see what you're seeing? Note that both map_pages and fault()
> take the folio lock (map_pages does a trylock) to exclude against truncate
> as well.
> 

The folio lock serializes map_pages/fault against truncate - but the
race isn't between those two. It's between truncate's unmap walk and
fork's copy_page_range - and copy_page_range doesn't take folio lock.

The easiest way to deal with this is to prevent these fork-inserted PTEs
from existing rather than try to make copy_page_range aware of
truncation (it already holds the PTL when it finds the PTE, so you can't
take the folio lock unless you drop/reacquire the PTL).

~Gregory



* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 18:37   ` Gregory Price
@ 2026-03-26 19:16     ` Pedro Falcato
  2026-03-26 19:48       ` Gregory Price
  0 siblings, 1 reply; 7+ messages in thread
From: Pedro Falcato @ 2026-03-26 19:16 UTC
  To: Gregory Price
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 01:37:17PM -0500, Gregory Price wrote:
> On Thu, Mar 26, 2026 at 05:07:42PM +0000, Pedro Falcato wrote:
> > > Two races allow PTEs to be re-installed for a folio that fallocate
> > > is about to remove from page cache:
> > 
> > Hmm, I don't see how your patch fixes anything.
> > 
> 
> after looking at your comments below i realized race 2 actually requires
> the fork as well, which means they're both essentially variations of the
> same race, so hopefully i can simplify the change log.

Well, then I don't see how changing shmem_fault() & map_pages() fixes fork.

> 
> > >   fallocate              fault-around           fork
> > >   --------               ------------           ----
> > >   set i_private
> > >   unmap_mapping_range()
> > >   # zaps PTEs
> > >                        filemap_map_pages()
> > >                         # re-maps folio!
> > >                                               dup_mmap()
> > >                                               # child VMA
> > >                                               # in tree
> > >   shmem_undo_range()
> > >     lock folio
> > >     unmap_mapping_folio()
>                   ^^^ i_mmap_lock_read held, iterates VMAs
> > 	spin_lock(ptl);
>                   ^^^ child VMA's PTL
> > >     # child VMA:
> > >     #   no PTE, skip
> > 	spin_unlock(ptl);
>                     ^^^ child VMA done, iterator moves on
> 		        it will not re-visit the child.
> 
> > >                                             copy_page_range()
> >                                                spin_lock(dst_ptl);
>                                                    ^ Child PTL
> > 					       spin_lock(src_ptl);
>                                                    ^ Parent PTL
> > 						/* does not copy PTE. either
> > 						 * we find a zapped PTE, or unmap_mapping_folio()
> > 						 * finds two mappings instead of one. */
> 
> At this point, unmap_mapping_folio only processed the child VMA
> (no PTE, skip). The parent PTE *has not* been zapped.
> 
> copy_page_range() acquires src_ptl (parent) and reads a present PTE,
> and boom copies it to child.

Sure, but can child - parent happen when traversing the i_mmap tree? I don't
think so? (in mm/mmap.c)
	/* insert tmp into the share list, just after mpnt */
	vma_interval_tree_insert_after(tmp, mpnt,
			&mapping->i_mmap);

The function itself is somewhat straightforward - find the leftmost node at the
right of 'prev' (our parent) and link ourselves. So an in-order traversal should
always go parent - child. Unless there's some awful tree rotation that can
happen and screw us in the meanwhile.

> 
> When it reaches the parent VMA next, it zaps the parent PTE,
> but the child PTE (just installed) survives.  
> 
> > > 
> > > Fix both races with invalidate_lock.
> > > 
> > 
> > I don't see what you're seeing? Note that both map_pages and fault()
> > take the folio lock (map_pages does a trylock) to exclude against truncate
> > as well.
> > 
> 
> The folio lock serializes map_pages/fault against truncate - but the
> race isn't between those two. It's between truncate's unmap walk and
> fork's copy_page_range - and copy_page_range doesn't take folio lock.

If we observe everything parent - child, there is no way this is broken - if
fork observes the parent pte set, zap will have to observe parent *and* child,
since they hold the corresponding pte locks, and traversal is done in order.
If fork observes the parent pte as none, zap will have already traversed the
parent, and as such there will be no additional mapping of the folio.

If this is broken, then every filesystem out there using filemap_fault() and
filemap_fault_around() has to be broken, and I hope that's not true :p

_If_ there is indeed breakage here regarding tree rotations, I would suggest:

diff --git a/mm/mmap.c b/mm/mmap.c
index 5754d1c36462..7b4e39063d67 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1833,12 +1833,12 @@ __latent_entropy int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
                        vma_interval_tree_insert_after(tmp, mpnt,
                                        &mapping->i_mmap);
                        flush_dcache_mmap_unlock(mapping);
-                       i_mmap_unlock_write(mapping);
                }
 
                if (!(tmp->vm_flags & VM_WIPEONFORK))
                        retval = copy_page_range(tmp, mpnt);
-
+               if (file)
+                       i_mmap_unlock_write(mapping);
                if (retval) {
                        mpnt = vma_next(&vmi);
                        goto loop_out;


which should protect against concurrent rmap.

-- 
Pedro



* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 16:26 [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race Gregory Price
  2026-03-26 17:07 ` Pedro Falcato
@ 2026-03-26 19:21 ` Matthew Wilcox
  2026-03-26 20:09   ` Gregory Price
  1 sibling, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2026-03-26 19:21 UTC
  To: Gregory Price
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 11:26:11AM -0500, Gregory Price wrote:
> This also requires removing the rcu_read_lock() from
> do_fault_around() so that .map_pages may use sleeping locks.

NACK.

->map_pages() is called when VM asks to map easy accessible pages.
Filesystem should find and map pages associated with offsets from "start_pgoff"
till "end_pgoff". ->map_pages() is called with the RCU lock held and must
not block.  If it's not possible to reach a page without blocking,
filesystem should skip it. Filesystem should use set_pte_range() to setup
page table entry. Pointer to entry associated with the page is passed in
"pte" field in vm_fault structure. Pointers to entries for other offsets
should be calculated relative to "pte".




* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 19:16     ` Pedro Falcato
@ 2026-03-26 19:48       ` Gregory Price
  0 siblings, 0 replies; 7+ messages in thread
From: Gregory Price @ 2026-03-26 19:48 UTC
  To: Pedro Falcato
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 07:16:05PM +0000, Pedro Falcato wrote:
> 
> Sure, but can child - parent happen when traversing the i_mmap tree? I don't
> think so? (in mm/mmap.c)
> 	/* insert tmp into the share list, just after mpnt */
> 	vma_interval_tree_insert_after(tmp, mpnt,
> 			&mapping->i_mmap);
> 
> The function itself is somewhat straightforward - find the leftmost node at the
> right of 'prev' (our parent) and link ourselves. So an in-order traversal should
> always go parent - child. Unless there's some awful tree rotation that can
> happen and screw us in the meanwhile.
> 

hm, i think you're right, i have this inverted.

But this patch objectively fixed my issue: I no longer see this BUG(),
I don't get soft lockups, and I don't get the guest corruption I was
seeing previously.  It could simply be that the added contention makes
the race less likely.

Let me dig into this and just smoke test your suggestion - but I think
your patch would cause some contention issues on unmaps.

It's been difficult to generate a reproducer for this without running
hundreds of VMs, whatever race is going on here is extremely narrow.

> 
> If this is broken, then every filesystem out there using filemap_fault() and
> filemap_fault_around() has to be broken, and I hope that's not true :p
> 

Me too, but i never rule anything out.

~Gregory



* Re: [PATCH] mm/shmem: use invalidate_lock to fix hole-punch race
  2026-03-26 19:21 ` Matthew Wilcox
@ 2026-03-26 20:09   ` Gregory Price
  0 siblings, 0 replies; 7+ messages in thread
From: Gregory Price @ 2026-03-26 20:09 UTC
  To: Matthew Wilcox
  Cc: linux-mm, akpm, hughd, david, ljs, Liam.Howlett, vbabka, rppt,
	surenb, mhocko, baolin.wang, linux-kernel, kernel-team, stable

On Thu, Mar 26, 2026 at 07:21:21PM +0000, Matthew Wilcox wrote:
> On Thu, Mar 26, 2026 at 11:26:11AM -0500, Gregory Price wrote:
> > This also requires removing the rcu_read_lock() from
> > do_fault_around() so that .map_pages may use sleeping locks.
> 
> NACK.
> 
> ->map_pages() is called when VM asks to map easy accessible pages.
> Filesystem should find and map pages associated with offsets from "start_pgoff"
> till "end_pgoff". ->map_pages() is called with the RCU lock held and must
> not block.  If it's not possible to reach a page without blocking,
> filesystem should skip it. Filesystem should use set_pte_range() to setup
> page table entry. Pointer to entry associated with the page is passed in
> "pte" field in vm_fault structure. Pointers to entries for other offsets
> should be calculated relative to "pte".
> 

Hm, I follow.  I was originally thinking this was a scoping issue, given
we take the rcu_read_lock shortly after the call anyway, but I see.

If the invalidate lock ends up being needed then i could leave rcu
and just use trylock/fallback to fault.

But I need to test a few things; nothing else protects filemap_map_pages
with the invalidate lock at the moment, yet only shmem appears broken.

~Gregory


