public inbox for linux-mm@kvack.org
* [PATCH v3 0/7] Modify memfd_luo code
@ 2026-03-26  8:47 Chenghao Duan
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

While reading the code, I found several points that could be improved. Please review.

v3:
The v2 patches remain unchanged; v3 adds three new patches:
<mm/memfd_luo: fix physical address conversion in put_folios cleanup>
<mm/memfd_luo: remove folio from page cache when accounting fails>
<mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios>

These three patches address issues identified by an AI review;
the review can be found at:
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

v2:
https://lore.kernel.org/all/20260323140938.ba8943a5247c14b17bc70142@linux-foundation.org/

As suggested by Pratyush Yadav, added the patch
<mm/memfd: use folio_nr_pages() for shmem inode accounting>.
https://lore.kernel.org/all/2vxzqzpebzi2.fsf@kernel.org/

<mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path>
Same as v1 with no logic changes; the diff is only adjusted to apply on top of patch #1.

<mm/memfd_luo: remove unnecessary memset in zero-size memfd path>
Unchanged from v2.

<mm/memfd_luo: use i_size_write() to set inode size during retrieve>
Added a description of the consistency rationale to the commit log.

v1:
https://lore.kernel.org/all/20260319012845.29570-1-duanchenghao@kylinos.cn/

Chenghao Duan (7):
  mm/memfd: use folio_nr_pages() for shmem inode accounting
  mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
  mm/memfd_luo: remove unnecessary memset in zero-size memfd path
  mm/memfd_luo: use i_size_write() to set inode size during retrieve
  mm/memfd_luo: fix physical address conversion in put_folios cleanup
  mm/memfd_luo: remove folio from page cache when accounting fails
  mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios

 mm/memfd_luo.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

-- 
2.25.1




* [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

memfd_luo_retrieve_folios() called shmem_inode_acct_blocks() and
shmem_recalc_inode() with a hardcoded 1 instead of the actual folio
page count.  memfd may use large folios (THP/hugepages), causing
quota/limit under-accounting and incorrect stat output.

Fix by using folio_nr_pages(folio) for both functions.

The issue was found by an AI review, and the fix was suggested by Pratyush Yadav <pratyush@kernel.org>:
https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn

Suggested-by: Pratyush Yadav <pratyush@kernel.org>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index b8edb9f981d7..953440994ad2 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -395,6 +395,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
+	long npages;
 	int err = -EIO;
 	long i;
 
@@ -441,14 +442,15 @@ static int memfd_luo_retrieve_folios(struct file *file,
 		if (flags & MEMFD_LUO_FOLIO_DIRTY)
 			folio_mark_dirty(folio);
 
-		err = shmem_inode_acct_blocks(inode, 1);
+		npages = folio_nr_pages(folio);
+		err = shmem_inode_acct_blocks(inode, npages);
 		if (err) {
-			pr_err("shmem: failed to account folio index %ld: %d\n",
-			       i, err);
+			pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
+			       i, npages, err);
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, 1, 0);
+		shmem_recalc_inode(inode, npages, 0);
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
-- 
2.25.1




* [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
to improve performance when restoring large memfds.

Currently, shmem_recalc_inode() is called once per folio during restore,
repeating an expensive operation O(n) times. This patch accumulates the
number of successfully added pages and calls shmem_recalc_inode() once
after the loop completes, reducing this to a single call.

Additionally, fix the error path to also call shmem_recalc_inode() for
the folios that were successfully added before the error occurred.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 953440994ad2..2a01eaff03c2 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -395,7 +395,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
-	long npages;
+	long npages, nr_added_pages = 0;
 	int err = -EIO;
 	long i;
 
@@ -450,12 +450,14 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, npages, 0);
+		nr_added_pages += npages;
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return 0;
 
 unlock_folio:
@@ -474,6 +476,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return err;
 }
 
-- 
2.25.1




* [PATCH v3 3/7] mm/memfd_luo: remove unnecessary memset in zero-size memfd path
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

The memset(kho_vmalloc, 0, sizeof(*kho_vmalloc)) call in the zero-size
file handling path is unnecessary: the ser structure is allocated with
the __GFP_ZERO flag, so the memory is already zero-initialized.

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 2a01eaff03c2..bf827e574bec 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -103,7 +103,6 @@ static int memfd_luo_preserve_folios(struct file *file,
 	if (!size) {
 		*nr_foliosp = 0;
 		*out_folios_ser = NULL;
-		memset(kho_vmalloc, 0, sizeof(*kho_vmalloc));
 		return 0;
 	}
 
-- 
2.25.1




* [PATCH v3 4/7] mm/memfd_luo: use i_size_write() to set inode size during retrieve
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

Use i_size_write() instead of directly assigning to inode->i_size
when restoring the memfd size in memfd_luo_retrieve(). This keeps the
code consistent with other i_size updates, and on 32-bit SMP or
preemptible kernels i_size_write() provides the seqcount protection
needed to avoid torn 64-bit i_size reads.

No functional change intended on 64-bit systems.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index bf827e574bec..76edf9a3f5b5 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -499,7 +499,7 @@ static int memfd_luo_retrieve(struct liveupdate_file_op_args *args)
 	}
 
 	vfs_setpos(file, ser->pos, MAX_LFS_FILESIZE);
-	file->f_inode->i_size = ser->size;
+	i_size_write(file_inode(file), ser->size);
 
 	if (ser->nr_folios) {
 		folios_ser = kho_restore_vmalloc(&ser->folios);
-- 
2.25.1




* [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_retrieve_folios()'s put_folios cleanup path:

1. kho_restore_folio() expects a phys_addr_t (physical address) but
   receives a raw PFN (pfolio->pfn). This causes kho_restore_page() to
   check the wrong physical address (pfn << PAGE_SHIFT instead of the
   actual physical address).

2. This loop lacks the !pfolio->pfn check that exists in the main
   retrieval loop and memfd_luo_discard_folios(), which could
   incorrectly process sparse file holes where pfn=0.

Fix by converting PFN to physical address with PFN_PHYS() and adding
the !pfolio->pfn check, matching the pattern used elsewhere in this file.

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 76edf9a3f5b5..b4cea3670689 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -469,8 +469,13 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	 */
 	for (long j = i + 1; j < nr_folios; j++) {
 		const struct memfd_luo_folio_ser *pfolio = &folios_ser[j];
+		phys_addr_t phys;
+
+		if (!pfolio->pfn)
+			continue;
 
-		folio = kho_restore_folio(pfolio->pfn);
+		phys = PFN_PHYS(pfolio->pfn);
+		folio = kho_restore_folio(phys);
 		if (folio)
 			folio_put(folio);
 	}
-- 
2.25.1




* [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
after successfully adding the folio to the page cache, the code jumps
to unlock_folio without removing the folio from the page cache.

This leaves the folio permanently abandoned in the page cache:
- The folio was added via shmem_add_to_page_cache() which set up
  mapping, index, and incremented nrpages/shmem stats.
- folio_unlock() and folio_put() do not remove it from the cache.
- folio_add_lru() was never called, so it cannot be reclaimed.

Fix by adding a remove_from_cache label that calls filemap_remove_folio()
before unlocking, matching the error handling pattern in
shmem_alloc_and_add_folio().

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index b4cea3670689..f8e8f99b1848 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -446,7 +446,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 		if (err) {
 			pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
 			       i, npages, err);
-			goto unlock_folio;
+			goto remove_from_cache;
 		}
 
 		nr_added_pages += npages;
@@ -459,6 +459,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
 
 	return 0;
 
+remove_from_cache:
+	filemap_remove_folio(folio);
 unlock_folio:
 	folio_unlock(folio);
 	folio_put(folio);
-- 
2.25.1




* [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_preserve_folios(), two variables had types that could cause
silent data loss with large files:

1. 'size' was declared as 'long', truncating the 64-bit result of
   i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,
   causing the function to return early and discard all data.

2. 'max_folios' was declared as 'unsigned int', causing overflow for
   sparse files larger than 4TB. For example, a 16TB+4KB file would
   calculate 0x100000001 folios but truncate to 1 when assigned to
   max_folios, causing memfd_pin_folios() to pin only the first folio.

Fix by changing both variables to 'u64' so they can hold the full
64-bit values produced by i_size_read() and the folio count calculation.

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index f8e8f99b1848..4b4fa2f658d9 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -88,8 +88,8 @@ static int memfd_luo_preserve_folios(struct file *file,
 {
 	struct inode *inode = file_inode(file);
 	struct memfd_luo_folio_ser *folios_ser;
-	unsigned int max_folios;
-	long i, size, nr_pinned;
+	u64 size, max_folios;
+	long i, nr_pinned;
 	struct folio **folios;
 	int err = -EINVAL;
 	pgoff_t offset;
-- 
2.25.1



