Linux-mm Archive on lore.kernel.org
* [PATCH 0/2] Insert instead of copy pages into shmem when shrinking
@ 2026-05-12 11:03 Thomas Hellström
  2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
  2026-05-12 11:03 ` [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout Thomas Hellström
  0 siblings, 2 replies; 12+ messages in thread
From: Thomas Hellström @ 2026-05-12 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Hugh Dickins, Baolin Wang,
	Brendan Jackman, Johannes Weiner, Zi Yan, Christian Koenig,
	Huang Rui, Matthew Auld, Matthew Brost, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	dri-devel, linux-mm, linux-kernel

To be able to easily maintain pools of pages mapped uncached or
write-combined, TTM doesn't use shmem directly for buffer object memory.
Instead, on shrinking, page contents are backed up to shmem objects so
that they can later be swapped out.

At shrink time that puts some strain on the memory reserves. To copy
a high-order page, one either has to dip far into the kernel
reserves (one high-order page size) before any memory can be released,
or one can choose to split the high-order page into order-0 pages and
free them as soon as they are copied. The latter approach is used
by TTM, but it tends to fragment higher-order pages.

One approach to get around this is to insert the higher-order pages
directly into shmem objects, so that if CONFIG_THP_SWAP is enabled,
they can be swapped out without splitting. At shrink time there are
then no additional memory allocations save for the shmem radix-tree
nodes.

Add a shmem interface to insert isolated pages, with enough
asserts to catch users of the interface inserting pages that
would confuse shmem. Then make TTM use that interface.

As an alternative, one could add an interface to insert pages
directly into the swap cache, but since the swap cache doesn't seem
intended for inserting pages for which we don't immediately
schedule a writeout, the shmem approach was chosen.

Thomas Hellström (2):
  mm/shmem: add shmem_insert_folio()
  drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout

 drivers/gpu/drm/ttm/ttm_backup.c |  92 ++++++++++-----------------
 drivers/gpu/drm/ttm/ttm_pool.c   |  67 ++++++++++++++------
 include/drm/ttm/ttm_backup.h     |  11 ++--
 include/linux/mm.h               |   1 +
 include/linux/shmem_fs.h         |   2 +
 mm/page_alloc.c                  |  21 +++++++
 mm/shmem.c                       | 105 +++++++++++++++++++++++++++++++
 7 files changed, 216 insertions(+), 83 deletions(-)

-- 
2.54.0



^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-12 11:03 [PATCH 0/2] Insert instead of copy pages into shmem when shrinking Thomas Hellström
@ 2026-05-12 11:03 ` Thomas Hellström
  2026-05-12 11:07   ` David Hildenbrand (Arm)
  2026-05-12 11:03 ` [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout Thomas Hellström
  1 sibling, 1 reply; 12+ messages in thread
From: Thomas Hellström @ 2026-05-12 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Hugh Dickins, Baolin Wang,
	Brendan Jackman, Johannes Weiner, Zi Yan, Christian Koenig,
	Huang Rui, Matthew Auld, Matthew Brost, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	dri-devel, linux-mm, linux-kernel

Introduce shmem_insert_folio(), which transfers an isolated folio
zero-copy into a shmem file's page cache.  The folio is charged to
memcg, inserted into the address space, and placed on the anon LRU
for normal reclaim.  An optional writeback parameter requests
immediate swap writeback.

Higher-order folios are promoted to compound folios before insertion,
enabling THP-sized swap entries with CONFIG_THP_SWAP=y.  On failure
the folio is returned to its original state and the caller retains
ownership.

Assisted-by: GitHub_Copilot:claude-sonnet-4.6
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/linux/mm.h       |   1 +
 include/linux/shmem_fs.h |   2 +
 mm/page_alloc.c          |  21 ++++++++
 mm/shmem.c               | 105 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 129 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index af23453e9dbd..e2e7b0c0998b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1676,6 +1676,7 @@ struct mmu_gather;
 struct inode;
 
 extern void prep_compound_page(struct page *page, unsigned int order);
+extern void undo_compound_page(struct page *page);
 
 static inline unsigned int folio_large_order(const struct folio *folio)
 {
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 93a0ba872ebe..2dc9355757fd 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -175,6 +175,8 @@ int shmem_get_folio(struct inode *inode, pgoff_t index, loff_t write_end,
 		struct folio **foliop, enum sgp_type sgp);
 struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
+int shmem_insert_folio(struct file *file, struct folio *folio, unsigned int order,
+		       pgoff_t index, bool writeback, gfp_t folio_gfp);
 
 static inline struct folio *shmem_read_folio(struct address_space *mapping,
 		pgoff_t index)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 227d58dc3de6..db82825a3348 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -705,6 +705,27 @@ void prep_compound_page(struct page *page, unsigned int order)
 	prep_compound_head(page, order);
 }
 
+/**
+ * undo_compound_page() - Reverse the effect of prep_compound_page().
+ * @page: The head page of a compound page to demote.
+ *
+ * Returns the pages to non-compound state as if prep_compound_page()
+ * had never been called.  split_page() must NOT have been called on
+ * the compound page; tail refcounts must be 0.  The caller must ensure
+ * no other users hold references to the compound page.
+ */
+void undo_compound_page(struct page *page)
+{
+	unsigned int i, nr = 1U << compound_order(page);
+
+	page[1].flags.f &= ~PAGE_FLAGS_SECOND;
+	for (i = 1; i < nr; i++) {
+		page[i].mapping = NULL;
+		clear_compound_head(&page[i]);
+	}
+	ClearPageHead(page);
+}
+
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
diff --git a/mm/shmem.c b/mm/shmem.c
index 3b5dc21b323c..45e80a74f77c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -937,6 +937,111 @@ int shmem_add_to_page_cache(struct folio *folio,
 	return 0;
 }
 
+/**
+ * shmem_insert_folio() - Insert an isolated folio into a shmem file.
+ * @file: The shmem file created with shmem_file_setup().
+ * @folio: The folio to insert. Must be isolated (not on LRU), unlocked,
+ *         have exactly one reference (the caller's), have no page-table
+ *         mappings, and have folio->mapping == NULL.
+ * @order: The allocation order of @folio.  If @order > 0 and @folio is
+ *         not already a large (compound) folio, it will be promoted to a
+ *         compound folio of this order inside this function.  This requires
+ *         the standard post-alloc state: head refcount == 1, tail
+ *         refcounts == 0 (i.e. split_page() must NOT have been called).
+ *         On failure the promotion is reversed and the folio is returned
+ *         to its original non-compound state.
+ * @index: Page-cache index at which to insert. Must be aligned to
+ *         (1 << @order) and within the file's size.
+ * @writeback: If true, attempt immediate writeback to swap after insertion.
+ *             Best-effort; failure is silently ignored.
+ * @folio_gfp: The GFP flags to use for memory-cgroup charging.
+ *
+ * The folio is inserted zero-copy into the shmem page cache and placed on
+ * the anon LRU, where it participates in normal kernel reclaim (written to
+ * swap under memory pressure).  Any previous content at @index is discarded.
+ * On success the caller should release their reference with folio_put() and
+ * track the (@file, @index) pair for later recovery via shmem_read_folio()
+ * and release via shmem_truncate_range().
+ *
+ * Return: 0 on success.  On failure the folio is returned to its original
+ * state and the caller retains ownership.
+ */
+int shmem_insert_folio(struct file *file, struct folio *folio, unsigned int order,
+		       pgoff_t index, bool writeback, gfp_t folio_gfp)
+{
+	struct address_space *mapping = file->f_mapping;
+	struct inode *inode = mapping->host;
+	bool promoted;
+	long nr_pages;
+	int ret;
+
+	promoted = order > 0 && !folio_test_large(folio);
+	if (promoted)
+		prep_compound_page(&folio->page, order);
+	nr_pages = folio_nr_pages(folio);
+
+	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
+	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+	VM_BUG_ON_FOLIO(folio->mapping, folio);
+	VM_BUG_ON(index != round_down(index, nr_pages));
+
+	folio_lock(folio);
+	__folio_set_swapbacked(folio);
+	folio_mark_uptodate(folio);
+
+	folio_gfp &= GFP_RECLAIM_MASK;
+	ret = mem_cgroup_charge(folio, NULL, folio_gfp);
+	if (ret)
+		goto err_unlock;
+
+	ret = shmem_add_to_page_cache(folio, mapping, index, NULL, folio_gfp);
+	if (ret == -EEXIST) {
+		shmem_truncate_range(inode,
+				     (loff_t)index << PAGE_SHIFT,
+				     ((loff_t)(index + nr_pages) << PAGE_SHIFT) - 1);
+		ret = shmem_add_to_page_cache(folio, mapping, index, NULL,
+					      folio_gfp);
+	}
+	if (ret)
+		goto err_uncharge;
+
+	folio_mark_dirty(folio);
+
+	ret = shmem_inode_acct_blocks(inode, nr_pages);
+	if (ret) {
+		filemap_remove_folio(folio);
+		goto err_uncharge;
+	}
+
+	shmem_recalc_inode(inode, nr_pages, 0);
+
+	if (writeback) {
+		ret = shmem_writeout(folio, NULL, NULL);
+		if (ret == AOP_WRITEPAGE_ACTIVATE) {
+			/* No swap slot available; reclaim will retry. */
+			folio_add_lru(folio);
+			folio_unlock(folio);
+		}
+		/* ret == 0 or ret < 0: folio unlocked by shmem_writeout */
+	} else {
+		folio_add_lru(folio);
+		folio_unlock(folio);
+	}
+
+	return 0;
+
+err_uncharge:
+	mem_cgroup_uncharge(folio);
+err_unlock:
+	__folio_clear_swapbacked(folio);
+	folio_unlock(folio);
+	if (promoted)
+		undo_compound_page(&folio->page);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(shmem_insert_folio);
+
 /*
  * Somewhat like filemap_remove_folio, but substitutes swap for @folio.
  */
-- 
2.54.0




* [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout
  2026-05-12 11:03 [PATCH 0/2] Insert instead of copy pages into shmem when shrinking Thomas Hellström
  2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
@ 2026-05-12 11:03 ` Thomas Hellström
  1 sibling, 0 replies; 12+ messages in thread
From: Thomas Hellström @ 2026-05-12 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Hugh Dickins, Baolin Wang,
	Brendan Jackman, Johannes Weiner, Zi Yan, Christian Koenig,
	Huang Rui, Matthew Auld, Matthew Brost, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	dri-devel, linux-mm, linux-kernel

Add ttm_backup_insert_folio(), a thin wrapper around shmem_insert_folio()
that returns a handle, for use by drivers with large isolated folios.

Replace the alloc+copy ttm_backup_backup_page() path in ttm_pool_backup()
with the zero-copy ttm_backup_insert_folio() path.

On success NR_GPU_ACTIVE is decremented and the caller's reference is
released; shmem takes ownership.  The alloc_gfp argument used for
allocating shmem backing pages is no longer needed.

If insertion fails for a higher-order page, it is split into order-0
pages with ttm_pool_split_for_swap() and the loop retries each page
individually.

Assisted-by: GitHub_Copilot:claude-sonnet-4.6
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_backup.c | 92 ++++++++++++--------------------
 drivers/gpu/drm/ttm/ttm_pool.c   | 67 ++++++++++++++++-------
 include/drm/ttm/ttm_backup.h     | 11 ++--
 3 files changed, 87 insertions(+), 83 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_backup.c b/drivers/gpu/drm/ttm/ttm_backup.c
index 81df4cb5606b..a37e9404b895 100644
--- a/drivers/gpu/drm/ttm/ttm_backup.c
+++ b/drivers/gpu/drm/ttm/ttm_backup.c
@@ -6,7 +6,6 @@
 #include <drm/ttm/ttm_backup.h>
 
 #include <linux/export.h>
-#include <linux/page-flags.h>
 #include <linux/swap.h>
 
 /*
@@ -68,73 +67,50 @@ int ttm_backup_copy_page(struct file *backup, struct page *dst,
 }
 
 /**
- * ttm_backup_backup_page() - Backup a page
+ * ttm_backup_insert_folio() - Zero-copy insert of an isolated folio into backup.
  * @backup: The struct backup pointer to use.
- * @page: The page to back up.
- * @writeback: Whether to perform immediate writeback of the page.
- * This may have performance implications.
- * @idx: A unique integer for each page and each struct backup.
- * This allows the backup implementation to avoid managing
- * its address space separately.
- * @page_gfp: The gfp value used when the page was allocated.
- * This is used for accounting purposes.
- * @alloc_gfp: The gfp to be used when allocating memory.
+ * @folio: The folio to insert. Must be isolated (not on LRU), unlocked,
+ *         have exactly one reference (the caller's), and have no page-table
+ *         mappings.  The folio must not be swapbacked or in the swapcache,
+ *         and folio->private must have been cleared by the caller.
+ * @order: The allocation order of @folio.  If @order > 0 and @folio is not
+ *         already a large folio, it is promoted to a compound folio of this
+ *         order (see shmem_insert_folio()).  split_page() must NOT have been
+ *         called; tail-page refcounts must be 0.
+ * @writeback: Whether to attempt immediate writeback to swap after insertion.
+ *             Best-effort; failure is silently ignored.
+ * @idx: Page-cache index within @backup.  Must be aligned to (1 << @order).
+ * @folio_gfp: The gfp value used when the folio was allocated.
+ *             Used for memory-cgroup charging.
  *
- * Context: If called from reclaim context, the caller needs to
- * assert that the shrinker gfp has __GFP_FS set, to avoid
- * deadlocking on lock_page(). If @writeback is set to true and
- * called from reclaim context, the caller also needs to assert
- * that the shrinker gfp has __GFP_IO set, since without it,
- * we're not allowed to start backup IO.
+ * Context: May be called from reclaim context.  If @writeback is true, the
+ * caller must assert that the shrinker gfp has __GFP_IO set.
  *
- * Return: A handle on success. Negative error code on failure.
+ * The folio is transferred zero-copy into the shmem page cache.  On success
+ * the caller should release their reference with folio_put() and track the
+ * handle for later recovery via ttm_backup_copy_page() and release via
+ * ttm_backup_drop().  Handles for sub-pages of a compound folio follow
+ * sequentially: handle + j addresses sub-page j.
  *
- * Note: This function could be extended to back up a folio and
- * implementations would then split the folio internally if needed.
- * Drawback is that the caller would then have to keep track of
- * the folio size- and usage.
+ * Return: A positive handle on success. Negative error code on failure;
+ *         the folio is returned to its original non-compound state and the
+ *         caller retains ownership.
  */
 s64
-ttm_backup_backup_page(struct file *backup, struct page *page,
-		       bool writeback, pgoff_t idx, gfp_t page_gfp,
-		       gfp_t alloc_gfp)
+ttm_backup_insert_folio(struct file *backup, struct folio *folio,
+			unsigned int order, bool writeback, pgoff_t idx,
+			gfp_t folio_gfp)
 {
-	struct address_space *mapping = backup->f_mapping;
-	unsigned long handle = 0;
-	struct folio *to_folio;
 	int ret;
 
-	to_folio = shmem_read_folio_gfp(mapping, idx, alloc_gfp);
-	if (IS_ERR(to_folio))
-		return PTR_ERR(to_folio);
-
-	folio_mark_accessed(to_folio);
-	folio_lock(to_folio);
-	folio_mark_dirty(to_folio);
-	copy_highpage(folio_file_page(to_folio, idx), page);
-	handle = ttm_backup_shmem_idx_to_handle(idx);
-
-	if (writeback && !folio_mapped(to_folio) &&
-	    folio_clear_dirty_for_io(to_folio)) {
-		folio_set_reclaim(to_folio);
-		ret = shmem_writeout(to_folio, NULL, NULL);
-		if (!folio_test_writeback(to_folio))
-			folio_clear_reclaim(to_folio);
-		/*
-		 * If writeout succeeds, it unlocks the folio.	errors
-		 * are otherwise dropped, since writeout is only best
-		 * effort here.
-		 */
-		if (ret)
-			folio_unlock(to_folio);
-	} else {
-		folio_unlock(to_folio);
-	}
-
-	folio_put(to_folio);
-
-	return handle;
+	WARN_ON_ONCE(folio_get_private(folio));
+	ret = shmem_insert_folio(backup, folio, order, idx, writeback, folio_gfp);
+	if (ret)
+		return ret;
+
+	return ttm_backup_shmem_idx_to_handle(idx);
 }
+EXPORT_SYMBOL_GPL(ttm_backup_insert_folio);
 
 /**
  * ttm_backup_fini() - Free the struct backup resources after last use.
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index d380a3c7fe40..8ea3a125c465 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -487,7 +487,7 @@ static void ttm_pool_split_for_swap(struct ttm_pool *pool, struct page *p)
 /**
  * DOC: Partial backup and restoration of a struct ttm_tt.
  *
- * Swapout using ttm_backup_backup_page() and swapin using
+ * Swapout using ttm_backup_insert_folio() and swapin using
  * ttm_backup_copy_page() may fail.
  * The former most likely due to lack of swap-space or memory, the latter due
  * to lack of memory or because of signal interruption during waits.
@@ -1045,12 +1045,11 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 {
 	struct file *backup = tt->backup;
 	struct page *page;
-	unsigned long handle;
-	gfp_t alloc_gfp;
 	gfp_t gfp;
 	int ret = 0;
 	pgoff_t shrunken = 0;
-	pgoff_t i, num_pages;
+	pgoff_t i, num_pages, npages;
+	unsigned long j;
 
 	if (WARN_ON(ttm_tt_is_backed_up(tt)))
 		return -EINVAL;
@@ -1070,7 +1069,8 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 			unsigned int order;
 
 			page = tt->pages[i];
-			if (unlikely(!page)) {
+			if (unlikely(!page ||
+				     ttm_backup_page_ptr_is_handle(page))) {
 				num_pages = 1;
 				continue;
 			}
@@ -1098,34 +1098,63 @@ long ttm_pool_backup(struct ttm_pool *pool, struct ttm_tt *tt,
 	else
 		gfp = GFP_HIGHUSER;
 
-	alloc_gfp = GFP_KERNEL | __GFP_HIGH | __GFP_NOWARN | __GFP_RETRY_MAYFAIL;
-
 	num_pages = tt->num_pages;
 
 	/* Pretend doing fault injection by shrinking only half of the pages. */
 	if (IS_ENABLED(CONFIG_FAULT_INJECTION) && should_fail(&backup_fault_inject, 1))
 		num_pages = DIV_ROUND_UP(num_pages, 2);
 
-	for (i = 0; i < num_pages; ++i) {
-		s64 shandle;
+	for (i = 0; i < num_pages; i += npages) {
+		unsigned int order;
+		s64 handle;
 
+		npages = 1;
 		page = tt->pages[i];
 		if (unlikely(!page))
 			continue;
 
-		ttm_pool_split_for_swap(pool, page);
+		/* Already-handled entry from a previous attempt. */
+		if (unlikely(ttm_backup_page_ptr_is_handle(page)))
+			continue;
+
+		order = ttm_pool_page_order(pool, page);
+		npages = 1UL << order;
 
-		shandle = ttm_backup_backup_page(backup, page, flags->writeback, i,
-						 gfp, alloc_gfp);
-		if (shandle < 0) {
-			/* We allow partially shrunken tts */
-			ret = shandle;
+		/*
+		 * If fault injection truncated num_pages mid-compound, skip
+		 * the partial tail rather than inserting it.
+		 */
+		if (unlikely(i + npages > num_pages))
+			break;
+
+		/*
+		 * Transfer this page zero-copy into shmem.  page->private
+		 * stores the TTM order; clear it before inserting.
+		 */
+		page->private = 0;
+		handle = ttm_backup_insert_folio(backup, page_folio(page),
+						 order, flags->writeback,
+						 i, gfp);
+		if (unlikely(handle < 0)) {
+			if (order) {
+				page->private = order;
+				ttm_pool_split_for_swap(pool, page);
+				npages = 0;
+				continue;
+			}
+			ret = (int)handle;
 			break;
 		}
-		handle = shandle;
-		tt->pages[i] = ttm_backup_handle_to_page_ptr(handle);
-		__free_pages_gpu_account(page, 0, false);
-		shrunken++;
+
+		/*
+		 * NR_GPU_ACTIVE is node-only; use mod_node_page_state()
+		 * directly after the folio becomes memcg-charged.
+		 */
+		mod_node_page_state(page_pgdat(page), NR_GPU_ACTIVE, -(1 << order));
+		folio_put(page_folio(page));
+		for (j = 0; j < npages; j++)
+			tt->pages[i + j] = ttm_backup_handle_to_page_ptr(handle + j);
+		shrunken += npages;
 	}
 
 	return shrunken ? shrunken : ret;
diff --git a/include/drm/ttm/ttm_backup.h b/include/drm/ttm/ttm_backup.h
index 29b9c855af77..0c2feed0bffb 100644
--- a/include/drm/ttm/ttm_backup.h
+++ b/include/drm/ttm/ttm_backup.h
@@ -13,9 +13,8 @@
  * ttm_backup_handle_to_page_ptr() - Convert handle to struct page pointer
  * @handle: The handle to convert.
  *
- * Converts an opaque handle received from the
- * ttm_backup_backup_page() function to an (invalid)
- * struct page pointer suitable for a struct page array.
+ * Converts an opaque handle received from ttm_backup_insert_folio()
+ * to an (invalid) struct page pointer suitable for a struct page array.
  *
  * Return: An (invalid) struct page pointer.
  */
@@ -59,9 +58,9 @@ int ttm_backup_copy_page(struct file *backup, struct page *dst,
 			 pgoff_t handle, bool intr, gfp_t additional_gfp);
 
 s64
-ttm_backup_backup_page(struct file *backup, struct page *page,
-		       bool writeback, pgoff_t idx, gfp_t page_gfp,
-		       gfp_t alloc_gfp);
+ttm_backup_insert_folio(struct file *backup, struct folio *folio,
+			unsigned int order, bool writeback, pgoff_t idx,
+			gfp_t folio_gfp);
 
 void ttm_backup_fini(struct file *backup);
 
-- 
2.54.0




* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
@ 2026-05-12 11:07   ` David Hildenbrand (Arm)
  2026-05-12 11:31     ` Thomas Hellström
  0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-12 11:07 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan,
	Christian Koenig, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, dri-devel, linux-mm, linux-kernel


>  
> +/**
> + * undo_compound_page() - Reverse the effect of prep_compound_page().
> + * @page: The head page of a compound page to demote.
> + *
> + * Returns the pages to non-compound state as if prep_compound_page()
> + * had never been called.  split_page() must NOT have been called on
> + * the compound page; tail refcounts must be 0.  The caller must ensure
> + * no other users hold references to the compound page.
> + */
> +void undo_compound_page(struct page *page)
> +{
> +	unsigned int i, nr = 1U << compound_order(page);
> +
> +	page[1].flags.f &= ~PAGE_FLAGS_SECOND;
> +	for (i = 1; i < nr; i++) {
> +		page[i].mapping = NULL;
> +		clear_compound_head(&page[i]);
> +	}
> +	ClearPageHead(page);
> +}
> +
>  static inline void set_buddy_order(struct page *page, unsigned int order)
>  {
>  	set_page_private(page, order);
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 3b5dc21b323c..45e80a74f77c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -937,6 +937,111 @@ int shmem_add_to_page_cache(struct folio *folio,
>  	return 0;
>  }
>  
> +/**
> + * shmem_insert_folio() - Insert an isolated folio into a shmem file.
> + * @file: The shmem file created with shmem_file_setup().
> + * @folio: The folio to insert. Must be isolated (not on LRU), unlocked,
> + *         have exactly one reference (the caller's), have no page-table
> + *         mappings, and have folio->mapping == NULL.
> + * @order: The allocation order of @folio.  If @order > 0 and @folio is
> + *         not already a large (compound) folio, it will be promoted to a
> + *         compound folio of this order inside this function.  This requires
> + *         the standard post-alloc state: head refcount == 1, tail
> + *         refcounts == 0 (i.e. split_page() must NOT have been called).
> + *         On failure the promotion is reversed and the folio is returned
> + *         to its original non-compound state.
> + * @index: Page-cache index at which to insert. Must be aligned to
> + *         (1 << @order) and within the file's size.
> + * @writeback: If true, attempt immediate writeback to swap after insertion.
> + *             Best-effort; failure is silently ignored.
> + * @folio_gfp: The GFP flags to use for memory-cgroup charging.
> + *
> + * The folio is inserted zero-copy into the shmem page cache and placed on
> + * the anon LRU, where it participates in normal kernel reclaim (written to
> + * swap under memory pressure).  Any previous content at @index is discarded.
> + * On success the caller should release their reference with folio_put() and
> + * track the (@file, @index) pair for later recovery via shmem_read_folio()
> + * and release via shmem_truncate_range().
> + *
> + * Return: 0 on success.  On failure the folio is returned to its original
> + * state and the caller retains ownership.
> + */
> +int shmem_insert_folio(struct file *file, struct folio *folio, unsigned int order,
> +		       pgoff_t index, bool writeback, gfp_t folio_gfp)
> +{
> +	struct address_space *mapping = file->f_mapping;
> +	struct inode *inode = mapping->host;
> +	bool promoted;
> +	long nr_pages;
> +	int ret;
> +
> +	promoted = order > 0 && !folio_test_large(folio);
> +	if (promoted)
> +		prep_compound_page(&folio->page, order);
> +	nr_pages = folio_nr_pages(folio);
> +
> +	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> +	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
> +	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> +	VM_BUG_ON_FOLIO(folio->mapping, folio);
> +	VM_BUG_ON(index != round_down(index, nr_pages));

No new VM_BUG_ON_FOLIO etc.

But in general, pushing in random allocated pages into shmem, converting them to
folios is not something I particularly enjoy seeing.

-- 
Cheers,

David



* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-12 11:07   ` David Hildenbrand (Arm)
@ 2026-05-12 11:31     ` Thomas Hellström
  2026-05-12 20:03       ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Hellström @ 2026-05-12 11:31 UTC (permalink / raw)
  To: David Hildenbrand (Arm), intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan,
	Christian Koenig, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, dri-devel, linux-mm, linux-kernel

Hi David,

Thanks for having a look.

On Tue, 2026-05-12 at 13:07 +0200, David Hildenbrand (Arm) wrote:
> 
> >  
> > +/**
> > + * undo_compound_page() - Reverse the effect of
> > prep_compound_page().
> > + * @page: The head page of a compound page to demote.
> > + *
> > + * Returns the pages to non-compound state as if
> > prep_compound_page()
> > + * had never been called.  split_page() must NOT have been called
> > on
> > + * the compound page; tail refcounts must be 0.  The caller must
> > ensure
> > + * no other users hold references to the compound page.
> > + */
> > +void undo_compound_page(struct page *page)
> > +{
> > +	unsigned int i, nr = 1U << compound_order(page);
> > +
> > +	page[1].flags.f &= ~PAGE_FLAGS_SECOND;
> > +	for (i = 1; i < nr; i++) {
> > +		page[i].mapping = NULL;
> > +		clear_compound_head(&page[i]);
> > +	}
> > +	ClearPageHead(page);
> > +}
> > +
> >  static inline void set_buddy_order(struct page *page, unsigned int
> > order)
> >  {
> >  	set_page_private(page, order);
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 3b5dc21b323c..45e80a74f77c 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -937,6 +937,111 @@ int shmem_add_to_page_cache(struct folio
> > *folio,
> >  	return 0;
> >  }
> >  
> > +/**
> > + * shmem_insert_folio() - Insert an isolated folio into a shmem
> > file.
> > + * @file: The shmem file created with shmem_file_setup().
> > + * @folio: The folio to insert. Must be isolated (not on LRU),
> > unlocked,
> > + *         have exactly one reference (the caller's), have no
> > page-table
> > + *         mappings, and have folio->mapping == NULL.
> > + * @order: The allocation order of @folio.  If @order > 0 and
> > @folio is
> > + *         not already a large (compound) folio, it will be
> > promoted to a
> > + *         compound folio of this order inside this function. 
> > This requires
> > + *         the standard post-alloc state: head refcount == 1, tail
> > + *         refcounts == 0 (i.e. split_page() must NOT have been
> > called).
> > + *         On failure the promotion is reversed and the folio is
> > returned
> > + *         to its original non-compound state.
> > + * @index: Page-cache index at which to insert. Must be aligned to
> > + *         (1 << @order) and within the file's size.
> > + * @writeback: If true, attempt immediate writeback to swap after
> > insertion.
> > + *             Best-effort; failure is silently ignored.
> > + * @folio_gfp: The GFP flags to use for memory-cgroup charging.
> > + *
> > + * The folio is inserted zero-copy into the shmem page cache and
> > placed on
> > + * the anon LRU, where it participates in normal kernel reclaim
> > (written to
> > + * swap under memory pressure).  Any previous content at @index is
> > discarded.
> > + * On success the caller should release their reference with
> > folio_put() and
> > + * track the (@file, @index) pair for later recovery via
> > shmem_read_folio()
> > + * and release via shmem_truncate_range().
> > + *
> > + * Return: 0 on success.  On failure the folio is returned to its
> > original
> > + * state and the caller retains ownership.
> > + */
> > +int shmem_insert_folio(struct file *file, struct folio *folio,
> > unsigned int order,
> > +		       pgoff_t index, bool writeback, gfp_t
> > folio_gfp)
> > +{
> > +	struct address_space *mapping = file->f_mapping;
> > +	struct inode *inode = mapping->host;
> > +	bool promoted;
> > +	long nr_pages;
> > +	int ret;
> > +
> > +	promoted = order > 0 && !folio_test_large(folio);
> > +	if (promoted)
> > +		prep_compound_page(&folio->page, order);
> > +	nr_pages = folio_nr_pages(folio);
> > +
> > +	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> > +	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
> > +	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> > +	VM_BUG_ON_FOLIO(folio->mapping, folio);
> > +	VM_BUG_ON(index != round_down(index, nr_pages));
> 
> No new VM_BUG_ON_FOLIO etc.

OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
or any other type of assert?

> 
> But in general, pushing in random allocated pages into shmem,
> converting them to
> folios is not something I particularly enjoy seeing.
> 

OK, let me understand the concern. The pages are allocated as
higher-order pages using alloc_pages(gfp, order), but typically not
promoted to compound pages until inserted here. Is it that promotion
that is of concern, or inserting pages of unknown origin into shmem?
Anything we can do to alleviate that concern?

Given the problem statement in the cover letter, would there be a
better direction to take here? We could, for example, bypass shmem and
insert the folios directly into the swap cache (although there is an
issue with the swap cache when the number of swap entries is close to
depletion).

https://patchwork.freedesktop.org/series/165518/
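As a side note on the index-alignment assert quoted above
(VM_BUG_ON(index != round_down(index, nr_pages))): for a power-of-two
folio size, round_down() just clears the low bits of the index. A
minimal userspace model (Python; round_down() semantics assumed from
the kernel macro, not taken from this patch):

```python
def round_down(x: int, y: int) -> int:
    """Userspace model of the kernel's round_down() macro for a
    power-of-two y: clear the low bits so x becomes a multiple of y."""
    return x & ~(y - 1)

# An order-4 folio covers 16 page-cache indices, so an insertion
# index must be a multiple of 16 to pass the alignment check:
nr_pages = 1 << 4
assert round_down(32, nr_pages) == 32  # aligned: insertion allowed
assert round_down(33, nr_pages) == 32  # unaligned: 33 != round_down(33, 16)
```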

Thanks,
Thomas




^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-12 11:31     ` Thomas Hellström
@ 2026-05-12 20:03       ` David Hildenbrand (Arm)
  2026-05-13  7:47         ` Christian König
  0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-12 20:03 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan,
	Christian Koenig, Huang Rui, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie,
	Simona Vetter, dri-devel, linux-mm, linux-kernel

On 5/12/26 13:31, Thomas Hellström wrote:
> Hi David,
> 
> Thanks for having a look.
> 
> On Tue, 2026-05-12 at 13:07 +0200, David Hildenbrand (Arm) wrote:
>>
>>>  
>>> +/**
>>> + * undo_compound_page() - Reverse the effect of
>>> prep_compound_page().
>>> + * @page: The head page of a compound page to demote.
>>> + *
>>> + * Returns the pages to non-compound state as if
>>> prep_compound_page()
>>> + * had never been called.  split_page() must NOT have been called
>>> on
>>> + * the compound page; tail refcounts must be 0.  The caller must
>>> ensure
>>> + * no other users hold references to the compound page.
>>> + */
>>> +void undo_compound_page(struct page *page)
>>> +{
>>> +	unsigned int i, nr = 1U << compound_order(page);
>>> +
>>> +	page[1].flags.f &= ~PAGE_FLAGS_SECOND;
>>> +	for (i = 1; i < nr; i++) {
>>> +		page[i].mapping = NULL;
>>> +		clear_compound_head(&page[i]);
>>> +	}
>>> +	ClearPageHead(page);
>>> +}
>>> +
>>>  static inline void set_buddy_order(struct page *page, unsigned int
>>> order)
>>>  {
>>>  	set_page_private(page, order);
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 3b5dc21b323c..45e80a74f77c 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -937,6 +937,111 @@ int shmem_add_to_page_cache(struct folio
>>> *folio,
>>>  	return 0;
>>>  }
>>>  
>>> +/**
>>> + * shmem_insert_folio() - Insert an isolated folio into a shmem
>>> file.
>>> + * @file: The shmem file created with shmem_file_setup().
>>> + * @folio: The folio to insert. Must be isolated (not on LRU),
>>> unlocked,
>>> + *         have exactly one reference (the caller's), have no
>>> page-table
>>> + *         mappings, and have folio->mapping == NULL.
>>> + * @order: The allocation order of @folio.  If @order > 0 and
>>> @folio is
>>> + *         not already a large (compound) folio, it will be
>>> promoted to a
>>> + *         compound folio of this order inside this function. 
>>> This requires
>>> + *         the standard post-alloc state: head refcount == 1, tail
>>> + *         refcounts == 0 (i.e. split_page() must NOT have been
>>> called).
>>> + *         On failure the promotion is reversed and the folio is
>>> returned
>>> + *         to its original non-compound state.
>>> + * @index: Page-cache index at which to insert. Must be aligned to
>>> + *         (1 << @order) and within the file's size.
>>> + * @writeback: If true, attempt immediate writeback to swap after
>>> insertion.
>>> + *             Best-effort; failure is silently ignored.
>>> + * @folio_gfp: The GFP flags to use for memory-cgroup charging.
>>> + *
>>> + * The folio is inserted zero-copy into the shmem page cache and
>>> placed on
>>> + * the anon LRU, where it participates in normal kernel reclaim
>>> (written to
>>> + * swap under memory pressure).  Any previous content at @index is
>>> discarded.
>>> + * On success the caller should release their reference with
>>> folio_put() and
>>> + * track the (@file, @index) pair for later recovery via
>>> shmem_read_folio()
>>> + * and release via shmem_truncate_range().
>>> + *
>>> + * Return: 0 on success.  On failure the folio is returned to its
>>> original
>>> + * state and the caller retains ownership.
>>> + */
>>> +int shmem_insert_folio(struct file *file, struct folio *folio,
>>> unsigned int order,
>>> +		       pgoff_t index, bool writeback, gfp_t
>>> folio_gfp)
>>> +{
>>> +	struct address_space *mapping = file->f_mapping;
>>> +	struct inode *inode = mapping->host;
>>> +	bool promoted;
>>> +	long nr_pages;
>>> +	int ret;
>>> +
>>> +	promoted = order > 0 && !folio_test_large(folio);
>>> +	if (promoted)
>>> +		prep_compound_page(&folio->page, order);
>>> +	nr_pages = folio_nr_pages(folio);
>>> +
>>> +	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>>> +	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
>>> +	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>>> +	VM_BUG_ON_FOLIO(folio->mapping, folio);
>>> +	VM_BUG_ON(index != round_down(index, nr_pages));
>>
>> No new VM_BUG_ON_FOLIO etc.
> 
> OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
> or any other type of assert?

VM_WARN_ON_FOLIO() is usually what you want, or VM_WARN_ON_ONCE().

> 
>>
>> But in general, pushing in random allocated pages into shmem,
>> converting them to
>> folios is not something I particularly enjoy seeing.
>>
> 
> OK, let me understand the concern. The pages are allocated as multi-
> page folios using alloc_pages(gfp, order), but typically not promoted
> to compound pages, until inserted here. Is it that promotion that is of
> concern or inserting pages of unknown origin into shmem? Anything we
> can do to alleviate that concern?

It's all rather questionable.

A couple of points:

a) The pages are allocated to be unmovable, but adding them to shmem effectively
   turns them movable. Now you interfere with the page allocator logic of
   placing movable and unmovable pages a reasonable way into
   pageblocks that group allocations of similar types.

b) A driver is not supposed to decide which folio size will be allocated for
   shmem. I am not even sure if there is fencing on
   CONFIG_TRANSPARENT_HUGEPAGE somewhere when ending up with large folios.
   Orders > PMD_ORDER are currently essentially unsupported, and I suspect
   your code would even allow for that (looking at ttm_pool_alloc_find_order).

   We also have some problems with the pagecache not actually supporting all
   MAX_PAGE_ORDER orders (see MAX_PAGECACHE_ORDER).

   You are bypassing shmem logic to decide on that completely.

   While these things might not actually cause harm for you today (although I
   suspect some of them might in shmem swapout code), we don't want drivers to
   make our life harder by doing completely unexpected things.

c) You pass folio + order, which is just the red flag that you are doing
   something extremely dodgy.

   You just cast something that is not a folio, and was not allocated to be a
   folio to a folio through page_folio(page). That will stop working completely
   in the future once we decouple struct page from struct folio.

   If it's not a folio with a proper set order, you should be passing page +
   order.

d) We are once more open-coding creation of a folio, by hand-crafting it
   ourselves.

   We have folio_alloc() and friends for a reason, where we, for example, do
   a page_rmappable_folio().

   I am pretty sure that you are missing a call to page_rmappable_folio(),
   resulting in the large folios not getting folio_set_large_rmappable() set.

e) undo_compound_page(). No words.



*maybe* it would be a little less bad if you would just allocate a compound page
in your driver and use page_rmappable_folio() in there.

That wouldn't change a) or b), though.


> 
> Given the problem statement in the cover-letter, would there be a
> better direction to take here? We could, for example, bypass shmem and
> insert the folios directly into the swap-cache, (although there is an
> issue with the swap-cache when the number of swap_entries are close to
> being depleted).

Good question.
We'd have to keep swapoff and all of that working. For example, in
try_to_unuse(), we special-case shmem_unuse() to handle non-anonymous pages.

But then, the whole swapcache operates on folios ... so I am not sure if there
is a lot to be won by re-implementing what shmem already does?

-- 
Cheers,

David



* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-12 20:03       ` David Hildenbrand (Arm)
@ 2026-05-13  7:47         ` Christian König
  2026-05-13  8:31           ` Thomas Hellström
  2026-05-13  8:37           ` David Hildenbrand (Arm)
  0 siblings, 2 replies; 12+ messages in thread
From: Christian König @ 2026-05-13  7:47 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Thomas Hellström, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

Hi David & Thomas,

On 5/12/26 22:03, David Hildenbrand (Arm) wrote:
> On 5/12/26 13:31, Thomas Hellström wrote:
...
>>>> +int shmem_insert_folio(struct file *file, struct folio *folio,
>>>> unsigned int order,
>>>> +		       pgoff_t index, bool writeback, gfp_t
>>>> folio_gfp)
>>>> +{
>>>> +	struct address_space *mapping = file->f_mapping;
>>>> +	struct inode *inode = mapping->host;
>>>> +	bool promoted;
>>>> +	long nr_pages;
>>>> +	int ret;
>>>> +
>>>> +	promoted = order > 0 && !folio_test_large(folio);
>>>> +	if (promoted)
>>>> +		prep_compound_page(&folio->page, order);
>>>> +	nr_pages = folio_nr_pages(folio);
>>>> +
>>>> +	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
>>>> +	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
>>>> +	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>>>> +	VM_BUG_ON_FOLIO(folio->mapping, folio);
>>>> +	VM_BUG_ON(index != round_down(index, nr_pages));
>>>
>>> No new VM_BUG_ON_FOLIO etc.
>>
>> OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
>> or any other type of assert?
> 
> VM_WARN_ON_FOLIO() is usually what you want, or VM_WARN_ON_ONCE().
> 
>>
>>>
>>> But in general, pushing in random allocated pages into shmem,
>>> converting them to
>>> folios is not something I particularly enjoy seeing.
>>>
>>
>> OK, let me understand the concern. The pages are allocated as multi-
>> page folios using alloc_pages(gfp, order), but typically not promoted
>> to compound pages, until inserted here. Is it that promotion that is of
>> concern or inserting pages of unknown origin into shmem? Anything we
>> can do to alleviate that concern?
> 
> It's all rather questionable.
> 
> A couple of points:
> 
> a) The pages are allocated to be unmovable, but adding them to shmem effectively
>    turns them movable. Now you interfere with the page allocator logic of
>    placing movable and unmovable pages a reasonable way into
>    pageblocks that group allocations of similar types.
> 
> b) A driver is not supposed to decide which folio size will be allocated for
>    shmem.

Exactly that is one of the major reasons why we aren't using shmem as the backing store for TTM buffers in the first place.

While HW today can usually work with everything down to 4k, it needs higher-order pages for optimal performance.

So, for example, for AMD GPUs you need 2M pages, otherwise the performance goes down by ~30% in quite a number of use cases.

Everything between 4k and 2M and above 2M is still preferred because it results in better L0/L1 reach, but if you can't get 2M, the L2 reach goes down so rapidly that people start to complain immediately.

And that stuff is very specific for each vendor and HW generation. Some have the sweet spot at 64k, some at 256k, most at 2M.

>    I am not even sure if there is a fencing on
>    CONFIG_TRANSPARENT_HUGEPAGE somewhere when ending up with large folios. order
>    > PMD_ORDER is currently essentially unsupported, and I suspect your code
>    would  even allow for that (looking at ttm_pool_alloc_find_order).
> 
>    We also have some problems with the pagecache not actually supporting all
>    MAX_PAGE_ORDER orders (see MAX_PAGECACHE_ORDER).
> 
>    You are bypassing shmem logic to decide on that completely.
> 
>    While these things might not actually cause harm for you today (although I
>    suspect some of them might in shmem swapout code), we don't want drivers to
>    make our life harder by doing completely unexpected things.

Yeah, but that is the requirement the HW has.

I mean, we can keep torturing the buddy allocator to give us 2M pages, but essentially we want to get away from those specialized solutions and have more of the functionality necessary to drive the HW in the common Linux memory management code, because that prevents vendors from re-implementing that stuff in their specific drivers over and over again.

Regards,
Christian.

> c) You pass folio + order, which is just the red flag that you are doing
>    something extremely dodgy.
> 
>    You just cast something that is not a folio, and was not allocated to be a
>    folio to a folio through page_folio(page). That will stop working completely
>    in the future once we decouple struct page from struct folio.
> 
>    If it's not a folio with a proper set order, you should be passing page +
>    order.
> 
> d) We are once more open-coding creation of a folio, by hand-crafting it
>    ourselves.
> 
>    We have folio_alloc() and friends for a reason. Where we, for example, do a
>    page_rmappable_folio().
> 
>    I am pretty sure that you are missing a call to page_rmappable_folio(),
>    resulting in the large folios not getting folio_set_large_rmappable() set.
> 
> e) undo_compound_page(). No words.
> 
> 
> 
> *maybe* it would be a little less bad if you would just allocate a compound page
> in your driver and use page_rmappable_folio() in there.
> 
> That wouldn't change a) or b), though.
> 
> 
>>
>> Given the problem statement in the cover-letter, would there be a
>> better direction to take here? We could, for example, bypass shmem and
>> insert the folios directly into the swap-cache, (although there is an
>> issue with the swap-cache when the number of swap_entries are close to
>> being depleted).
> 
> Good question.
> We'd have to keep swapoff and all of that working. For example, in
> try_to_unuse(), we special-case shmem_unuse() to handle non-anonymous pages.
> 
> But then, the whole swapcache operates on folios ... so I am not sure if there
> is a lot to be won by re-implementing what shmem already does?
> 




* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-13  7:47         ` Christian König
@ 2026-05-13  8:31           ` Thomas Hellström
  2026-05-13  9:30             ` David Hildenbrand (Arm)
  2026-05-13  8:37           ` David Hildenbrand (Arm)
  1 sibling, 1 reply; 12+ messages in thread
From: Thomas Hellström @ 2026-05-13  8:31 UTC (permalink / raw)
  To: Christian König, David Hildenbrand (Arm), intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

Hi, David & Christian,

Thanks for the feedback. I'll respin this to see if I can come up with
something that is more acceptable given the comments.

A couple of questions and comments below.

On Wed, 2026-05-13 at 09:47 +0200, Christian König wrote:
> Hi David & Thomas,
> 
> On 5/12/26 22:03, David Hildenbrand (Arm) wrote:
> > On 5/12/26 13:31, Thomas Hellström wrote:
> ...
> > > > > +int shmem_insert_folio(struct file *file, struct folio
> > > > > *folio,
> > > > > unsigned int order,
> > > > > +		       pgoff_t index, bool writeback, gfp_t
> > > > > folio_gfp)
> > > > > +{
> > > > > +	struct address_space *mapping = file->f_mapping;
> > > > > +	struct inode *inode = mapping->host;
> > > > > +	bool promoted;
> > > > > +	long nr_pages;
> > > > > +	int ret;
> > > > > +
> > > > > +	promoted = order > 0 && !folio_test_large(folio);
> > > > > +	if (promoted)
> > > > > +		prep_compound_page(&folio->page, order);
> > > > > +	nr_pages = folio_nr_pages(folio);
> > > > > +
> > > > > +	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> > > > > +	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
> > > > > +	VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
> > > > > +	VM_BUG_ON_FOLIO(folio->mapping, folio);
> > > > > +	VM_BUG_ON(index != round_down(index, nr_pages));
> > > > 
> > > > No new VM_BUG_ON_FOLIO etc.
> > > 
> > > OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
> > > or any other type of assert?
> > 
> > VM_WARN_ON_FOLIO() is usually what you want, or VM_WARN_ON_ONCE().
> > 
> > > 
> > > > 
> > > > But in general, pushing in random allocated pages into shmem,
> > > > converting them to
> > > > folios is not something I particularly enjoy seeing.
> > > > 
> > > 
> > > OK, let me understand the concern. The pages are allocated as
> > > multi-
> > > page folios using alloc_pages(gfp, order), but typically not
> > > promoted
> > > to compound pages, until inserted here. Is it that promotion that
> > > is of
> > > concern or inserting pages of unknown origin into shmem? Anything
> > > we
> > > can do to alleviate that concern?
> > 
> > It's all rather questionable.
> > 
> > A couple of points:
> > 
> > a) The pages are allocated to be unmovable, but adding them to
> > shmem effectively
> >    turns them movable. Now you interfere with the page allocator
> > logic of
> >    placing movable and unmovable pages a reasonable way into
> >    pageblocks that group allocations of similar types.
> > 
> > b) A driver is not supposed to decide which folio size will be
> > allocated for
> >    shmem.
> 
> Exactly that is one of the major reasons why we aren't using a shmem
> as backing store for TTM buffers in the first place.
> 
> While HW today can usually work with everything down to 4k it needs
> higher order pages for optimal performance.
> 
> So for example for AMD GPUs you need 2M pages or otherwise the
> performance goes down by ~30% in quite a number of use cases.
> 
> Everything between 4k and 2M and above 2M is still preferred because
> it results in better L0/L1 reach, but if you can't get 2M the L2
> reach goes down so rapidly that people start to complain immediately.
> 
> And that stuff is very specific for each vendor and HW generation.
> Some have the sweet spot at 64k, some at 256k, most at 2M.
> 
> >    I am not even sure if there is a fencing on
> >    CONFIG_TRANSPARENT_HUGEPAGE somewhere when ending up with large
> > folios. order
> >    > PMD_ORDER is currently essentially unsupported, and I suspect
> > your code
> >    would  even allow for that (looking at
> > ttm_pool_alloc_find_order).
> > 
> >    We also have some problems with the pagecache not actually
> > supporting all
> >    MAX_PAGE_ORDER orders (see MAX_PAGECACHE_ORDER).
> > 
> >    You are bypassing shmem logic to decide on that completely.
> > 
> >    While these things might not actually cause harm for you today
> > (although I
> >    suspect some of them might in shmem swapout code), we don't want
> > drivers to
> >    make our life harder by doing completely unexpected things.
> 
> Yeah but that is the requirement the HW has.
> 
> I mean we can keep torturing the buddy allocator to give us 2M pages,
> but essentially we want to get away from those specialized solutions
> and has more of the functionality necessary to driver the HW in the
> common Linux memory management code because that prevents vendors
> from re-implementing that stuff in their specific driver over and
> over again.

For the code at hand, if we insert an order-10 folio, shmem will split
it at writeout time, but spit out a warning (if enabled) at the same
time. For this particular use-case, I think it might make sense for the
drivers that use direct insertion to cap the page-allocator orders at
THP size (2M).
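The capping suggested here amounts to clamping the driver's requested
allocation order at PMD order before calling the page allocator. A
rough sketch of the arithmetic (Python; PAGE_SHIFT and PMD_ORDER
values assumed for x86-64 with 4K pages, not taken from this patch):

```python
PAGE_SHIFT = 12  # 4 KiB base pages (assumed, x86-64)
PMD_ORDER = 9    # 2 MiB THP = 512 base pages (assumed, x86-64)

def capped_order(requested_order: int) -> int:
    """Clamp a driver-requested allocation order at PMD (THP) order so
    shmem is never handed a folio it would have to split at writeout."""
    return min(requested_order, PMD_ORDER)

# An order-10 (4 MiB) request is capped to order 9 (2 MiB):
assert capped_order(10) == 9
assert (1 << capped_order(10)) << PAGE_SHIFT == 2 * 1024 * 1024
```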

> 
> Regards,
> Christian.
> 
> > c) You pass folio + order, which is just the red flag that you are
> > doing
> >    something extremely dodgy.
> > 
> >    You just cast something that is not a folio, and was not
> > allocated to be a
> >    folio to a folio through page_folio(page). That will stop
> > working completely
> >    in the future once we decouple struct page from struct folio.
> > 
> >    If it's not a folio with a proper set order, you should be
> > passing page +
> >    order.
> > 
> > d) We are once more open-coding creation of a folio, by hand-
> > crafting it
> >    ourselves.
> > 
> >    We have folio_alloc() and friends for a reason. Where we, for
> > example, do a
> >    page_rmappable_folio().
> > 
> >    I am pretty sure that you are missing a call to
> > page_rmappable_folio(),
> >    resulting in the large folios not getting
> > folio_set_large_rmappable() set.
> > 
> > e) undo_compound_page(). No words.
> > 
> > 
> > 
> > *maybe* it would be a little less bad if you would just allocate a
> > compound page
> > in your driver and use page_rmappable_folio() in there.

OK, yes, it sounds like a prerequisite for this is that the driver
actually allocates compound pages. It might be that the TTM comment
about *not* doing that is stale, but I need to check.

Would it be acceptable to export a function from core mm to split an
isolated folio?

> > 
> > That wouldn't change a) or b), though.
> > 
> > 
> > > 
> > > Given the problem statement in the cover-letter, would there be a
> > > better direction to take here? We could, for example, bypass
> > > shmem and
> > > insert the folios directly into the swap-cache, (although there
> > > is an
> > > issue with the swap-cache when the number of swap_entries are
> > > close to
> > > being depleted).
> > 
> > Good question.
> > We'd have to keep swapoff and all of that working. For example, in
> > try_to_unuse(), we special-case shmem_unuse() to handle non-
> > anonymous pages.
> > 
> > But then, the whole swapcache operates on folios ... so I am not
> > sure if there
> > is a lot to be won by re-implementing what shmem already does?
> > 

Still that would alleviate a) and b), right? At least as long as we
keep folio sizes within the swap cache limits?

Thanks,
Thomas




* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-13  7:47         ` Christian König
  2026-05-13  8:31           ` Thomas Hellström
@ 2026-05-13  8:37           ` David Hildenbrand (Arm)
  2026-05-13  8:51             ` Thomas Hellström
  1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-13  8:37 UTC (permalink / raw)
  To: Christian König, Thomas Hellström, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

On 5/13/26 09:47, Christian König wrote:
> Hi David & Thomas,
> 
> On 5/12/26 22:03, David Hildenbrand (Arm) wrote:
>> On 5/12/26 13:31, Thomas Hellström wrote:
> ...
>>>
>>> OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
>>> or any other type of assert?
>>
>> VM_WARN_ON_FOLIO() is usually what you want, or VM_WARN_ON_ONCE().
>>
>>>
>>>
>>> OK, let me understand the concern. The pages are allocated as multi-
>>> page folios using alloc_pages(gfp, order), but typically not promoted
>>> to compound pages, until inserted here. Is it that promotion that is of
>>> concern or inserting pages of unknown origin into shmem? Anything we
>>> can do to alleviate that concern?
>>
>> It's all rather questionable.
>>
>> A couple of points:
>>
>> a) The pages are allocated to be unmovable, but adding them to shmem effectively
>>    turns them movable. Now you interfere with the page allocator logic of
>>    placing movable and unmovable pages a reasonable way into
>>    pageblocks that group allocations of similar types.
>>
>> b) A driver is not supposed to decide which folio size will be allocated for
>>    shmem.
> 
> Exactly that is one of the major reasons why we aren't using a shmem as backing store for TTM buffers in the first place.

What was the problem with that the last time this was considered?

shmem nowadays supports THP (e.g., 2M) and even mTHP (e.g., 64K).

For internal mounts, it must be enabled accordingly
(/sys/kernel/mm/transparent_hugepage/.../shmem_enabled).

Some distributions still default to "never". I guess if an admin enables it, you
would just get THPs.

If "distro default" is the only problem, I guess we could think about how to
improve that. For example, just let internal GPU DRM objects allocate any folio
size available and supported etc.

Would that make it possible to just use shmem natively? (e.g., how would this
interact with shmem features like folio migration, would that be workable with
DRM objects?).

-- 
Cheers,

David



* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-13  8:37           ` David Hildenbrand (Arm)
@ 2026-05-13  8:51             ` Thomas Hellström
  2026-05-13 10:03               ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Hellström @ 2026-05-13  8:51 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Christian König, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

On Wed, 2026-05-13 at 10:37 +0200, David Hildenbrand (Arm) wrote:
> On 5/13/26 09:47, Christian König wrote:
> > Hi David & Thomas,
> > 
> > On 5/12/26 22:03, David Hildenbrand (Arm) wrote:
> > > On 5/12/26 13:31, Thomas Hellström wrote:
> > ...
> > > > 
> > > > OK, can eliminate those. Is VM_WARN_ON_FOLIO() preferred,
> > > > or any other type of assert?
> > > 
> > > VM_WARN_ON_FOLIO() is usually what you want, or
> > > VM_WARN_ON_ONCE().
> > > 
> > > > 
> > > > 
> > > > OK, let me understand the concern. The pages are allocated as
> > > > multi-
> > > > page folios using alloc_pages(gfp, order), but typically not
> > > > promoted
> > > > to compound pages, until inserted here. Is it that promotion
> > > > that is of
> > > > concern or inserting pages of unknown origin into shmem?
> > > > Anything we
> > > > can do to alleviate that concern?
> > > 
> > > It's all rather questionable.
> > > 
> > > A couple of points:
> > > 
> > > a) The pages are allocated to be unmovable, but adding them to
> > > shmem effectively
> > >    turns them movable. Now you interfere with the page allocator
> > > logic of
> > >    placing movable and unmovable pages a reasonable way into
> > >    pageblocks that group allocations of similar types.
> > > 
> > > b) A driver is not supposed to decide which folio size will be
> > > allocated for
> > >    shmem.
> > 
> > Exactly that is one of the major reasons why we aren't using a
> > shmem as backing store for TTM buffers in the first place.
> 
> What was the problem with that the last time this was considered?
> 
> shmem nowadays supports THP (e.g., 2M) and even mTHP (e.g., 64K).
> 
> For internal mounts, it must be enabled accordingly
> (/sys/kernel/mm/transparent_hugepage/.../shmem_enabled).
> 
> Some distributions still default to "never". I guess if an admin
> enables it, you
> would just get THPs.

FWIW, the i915 driver, which uses shmem "natively", uses a special
mount here that gives back THPs.

> 
> If "distro default" is the only problem, I guess we could think about
> how to
> improve that. For example, just let internal GPU DRM objects allocate
> any folio
> size available and supported etc.
> 
> Would that make it possible to just use shmem natively? (e.g., how
> would this
> interact with shmem features like folio migration, would that be
> workable with
> DRM objects?).

Currently the drivers that use shmem in this way use
"mapping_set_unevictable()" as long as the object is bound to the GPU.
Then shrinkers can unbind from GPU and revert that setting.

The problem (as also stated in the cover letter of this series) is for
drivers that need to change the caching of the pages to WC or UC.
That's an extremely costly operation, so TTM needs to pool such
allocations. That's where using shmem natively becomes very ugly,
because you can't really use a 1:1 mapping between shmem objects and
DRM objects anymore.

Thanks,
Thomas





* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-13  8:31           ` Thomas Hellström
@ 2026-05-13  9:30             ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-13  9:30 UTC (permalink / raw)
  To: Thomas Hellström, Christian König, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

Hi,

>>
>> Yeah but that is the requirement the HW has.
>>
>> I mean we can keep torturing the buddy allocator to give us 2M pages,
>> but essentially we want to get away from those specialized solutions
>> and has more of the functionality necessary to driver the HW in the
>> common Linux memory management code because that prevents vendors
>> from re-implementing that stuff in their specific driver over and
>> over again.
> 
> For the code at hand, if we insert an order 10 folio shmem will split
> it at writeout time but spit out a warning (if enabled) at the same
> time. For this particular use-case, I think it might make sense for the
> drivers that use direct insertion to cap the page-allocator orders to
> THP size (2M).

I think this just points at the bigger problem: shmem should be allocating
folios, not someone else on shmem's behalf.

> 
>>
>> Regards,
>> Christian.
>>
>>> c) You pass folio + order, which is just the red flag that you are
>>> doing
>>>    something extremely dodgy.
>>>
>>>    You just cast something that is not a folio, and was not
>>> allocated to be a
>>>    folio to a folio through page_folio(page). That will stop
>>> working completely
>>>    in the future once we decouple struct page from struct folio.
>>>
>>>    If it's not a folio with a proper set order, you should be
>>> passing page +
>>>    order.
>>>
>>> d) We are once more open-coding creation of a folio, by hand-
>>> crafting it
>>>    ourselves.
>>>
>>>    We have folio_alloc() and friends for a reason. Where we, for
>>> example, do a
>>>    page_rmappable_folio().
>>>
>>>    I am pretty sure that you are missing a call to
>>> page_rmappable_folio(),
>>>    resulting in the large folios not getting
>>> folio_set_large_rmappable() set.
>>>
>>> e) undo_compound_page(). No words.
>>>
>>>
>>>
>>> *maybe* it would be a little less bad if you would just allocate a
>>> compound page
>>> in your driver and use page_rmappable_folio() in there.
> 
> OK, yes it sounds like a prereq for this is that the driver actually
> allocates compound pages. It might be that the TTM comment about *not*
> doing that is stale, but need to check.
> 
> Would it be acceptable to export a function from core mm to split an
> isolated folio?

The point is: an allocated page, including an allocated compound page, is
logically not a folio. We have work going on to decouple both concepts completely.

We do have functions to split folios. But it should be given a proper folio, not
something that can currently be cast to a folio.
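
A minimal sketch of the "less bad" variant mentioned above (assuming
current mm internals: page_rmappable_folio() is declared in
mm/internal.h, so this only works for code that can see it; the
function name below is illustrative, not an existing TTM helper):

	/* Sketch only, not a tested patch: allocate a compound page in
	 * the driver and turn it into a proper folio, instead of
	 * hand-crafting folio metadata around pages that were never
	 * allocated as one.
	 */
	static struct folio *ttm_backup_alloc_folio(gfp_t gfp,
						    unsigned int order)
	{
		struct page *page = alloc_pages(gfp | __GFP_COMP, order);

		if (!page)
			return NULL;

		/* Marks large folios rmappable, as folio_alloc() would. */
		return page_rmappable_folio(page);
	}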

> 
>>>
>>> That wouldn't change a) or b), though.
>>>
>>>
>>>
>>> Good question.
>>> We'd have to keep swapoff and all of that working. For example, in
>>> try_to_unuse(), we special-case shmem_unuse() to handle non-
>>> anonymous pages.
>>>
>>> But then, the whole swapcache operates on folios ... so I am not
>>> sure if there
>>> is a lot to be won by re-implementing what shmem already does?
>>>
> 
> Still that would alleviate a) and b), right? At least as long as we
> keep folio sizes within the swap cache limits?

Let's hear from Christian what would be required for DRM to use shmem natively.
Maybe a possible solution would be a custom shmem-like internal thing
that can better deal with large folios.

-- 
Cheers,

David



* Re: [PATCH 1/2] mm/shmem: add shmem_insert_folio()
  2026-05-13  8:51             ` Thomas Hellström
@ 2026-05-13 10:03               ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-13 10:03 UTC (permalink / raw)
  To: Thomas Hellström, Christian König, intel-xe
  Cc: Andrew Morton, Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Hugh Dickins,
	Baolin Wang, Brendan Jackman, Johannes Weiner, Zi Yan, Huang Rui,
	Matthew Auld, Matthew Brost, Maarten Lankhorst, Maxime Ripard,
	Thomas Zimmermann, David Airlie, Simona Vetter, dri-devel,
	linux-mm, linux-kernel

On 5/13/26 10:51, Thomas Hellström wrote:
> On Wed, 2026-05-13 at 10:37 +0200, David Hildenbrand (Arm) wrote:
>> On 5/13/26 09:47, Christian König wrote:
>>> Hi David & Thomas,
>>>
>>> ...
>>>
>>> Exactly that is one of the major reasons why we aren't using
>>> shmem as backing store for TTM buffers in the first place.
>>
>> What was the problem with that the last time this was considered?
>>
>> shmem nowadays supports THP (e.g., 2M) and even mTHP (e.g., 64K).
>>
>> For internal mounts, it must be enabled accordingly
>> (/sys/kernel/mm/transparent_hugepage/.../shmem_enabled).
>>
>> Some distributions still default to "never". I guess if an admin
>> enables it, you
>> would just get THPs.
> 
> FWIW, the i915 driver which uses shmem "natively" uses a special mount
> here that gives back THPs.
> 
>>
>> If "distro default" is the only problem, I guess we could think about
>> how to
>> improve that. For example, just let internal GPU DRM objects allocate
>> any folio
>> size available and supported etc.
>>
>> Would that make it possible to just use shmem natively? (e.g., how
>> would this
>> interact with shmem features like folio migration, would that be
>> workable with
>> DRM objects?).
> 
> Currently the drivers that use shmem in this way use
> "mapping_set_unevictable()" as long as the object is bound to the GPU.
> Then shrinkers can unbind from GPU and revert that setting.

Right, but mapping_set_unevictable() only affects folio_evictable(),
i.e. reclaim behavior, not other properties (such as folio migration).
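
For illustration, the bind/unbind pattern described above looks roughly
like this (the gpu_obj_* helper names are made up; only the two
mapping_*_unevictable() calls are real pagemap.h API):

	/* Sketch: marking the mapping unevictable only keeps its folios
	 * off the reclaim LRUs; it does not prevent, e.g., migration. */
	static void gpu_obj_bind(struct address_space *mapping)
	{
		mapping_set_unevictable(mapping);
	}

	static void gpu_obj_shrinker_unbind(struct address_space *mapping)
	{
		/* After unbinding the object from the GPU. */
		mapping_clear_unevictable(mapping);
	}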

> 
> The problem, (as also stated in the cover letter of this series) is for
> drivers that need to change caching of the pages to WC or UC.

I assume you mean "To be able to easily maintain pools of pages mapped uncached
or write-combined".

Can you point me at the code that changes the caching of the pages?

> That's an
> extremely costly operation so TTM needs to pool such allocations.
> That's where using shmem natively becomes very ugly, because you can't
> really use a 1:1 mapping between shmem objects and DRM objects anymore.

So you would require different caching attributes within a DRM object?

-- 
Cheers,

David



end of thread

Thread overview: 12+ messages
2026-05-12 11:03 [PATCH 0/2] Insert instead of copy pages into shmem when shrinking Thomas Hellström
2026-05-12 11:03 ` [PATCH 1/2] mm/shmem: add shmem_insert_folio() Thomas Hellström
2026-05-12 11:07   ` David Hildenbrand (Arm)
2026-05-12 11:31     ` Thomas Hellström
2026-05-12 20:03       ` David Hildenbrand (Arm)
2026-05-13  7:47         ` Christian König
2026-05-13  8:31           ` Thomas Hellström
2026-05-13  9:30             ` David Hildenbrand (Arm)
2026-05-13  8:37           ` David Hildenbrand (Arm)
2026-05-13  8:51             ` Thomas Hellström
2026-05-13 10:03               ` David Hildenbrand (Arm)
2026-05-12 11:03 ` [PATCH 2/2] drm/ttm: Use ttm_backup_insert_folio() for zero-copy swapout Thomas Hellström
