public inbox for linux-mm@kvack.org
 help / color / mirror / Atom feed
* [PATCH v3 0/7] Modify memfd_luo code
@ 2026-03-26  8:47 Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

While reading the code, I found several points that could be improved. Please review.

v3:
v2 patches remain unchanged, and v3 adds 3 additional patches.
<mm/memfd_luo: fix physical address conversion in put_folios cleanup>
<mm/memfd_luo: remove folio from page cache when accounting fails>
<mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios>

These three patches address issues identified by the AI review,
with the review link as follows:
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

v2:
https://lore.kernel.org/all/20260323140938.ba8943a5247c14b17bc70142@linux-foundation.org/

As suggested by Pratyush Yadav, add patch
<mm/memfd: use folio_nr_pages() for shmem inode accounting>.
https://lore.kernel.org/all/2vxzqzpebzi2.fsf@kernel.org/

<mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path>
Same as v1 with no logic changes; the diff is adjusted to apply on top of patch #1.

<mm/memfd_luo: remove unnecessary memset in zero-size memfd path>
No modifications have been made.

<mm/memfd_luo: use i_size_write() to set inode size during retrieve>
Add consistency-related descriptions to the commit log.

v1:
https://lore.kernel.org/all/20260319012845.29570-1-duanchenghao@kylinos.cn/

Chenghao Duan (7):
  mm/memfd: use folio_nr_pages() for shmem inode accounting
  mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
  mm/memfd_luo: remove unnecessary memset in zero-size memfd path
  mm/memfd_luo: use i_size_write() to set inode size during retrieve
  mm/memfd_luo: fix physical address conversion in put_folios cleanup
  mm/memfd_luo: remove folio from page cache when accounting fails
  mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios

 mm/memfd_luo.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-04-02  1:23   ` Pasha Tatashin
  2026-04-02 10:59   ` Pratyush Yadav
  2026-03-26  8:47 ` [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Chenghao Duan
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

memfd_luo_retrieve_folios() called shmem_inode_acct_blocks() and
shmem_recalc_inode() with hardcoded 1 instead of the actual folio
page count.  memfd may use large folios (THP/hugepages), causing
quota/limit under-accounting and incorrect stat output.

Fix by using folio_nr_pages(folio) for both functions.

Issue found by AI review and suggested by Pratyush Yadav <pratyush@kernel.org>.
https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn

Suggested-by: Pratyush Yadav <pratyush@kernel.org>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index b8edb9f981d7..953440994ad2 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -395,6 +395,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
+	long npages;
 	int err = -EIO;
 	long i;
 
@@ -441,14 +442,15 @@ static int memfd_luo_retrieve_folios(struct file *file,
 		if (flags & MEMFD_LUO_FOLIO_DIRTY)
 			folio_mark_dirty(folio);
 
-		err = shmem_inode_acct_blocks(inode, 1);
+		npages = folio_nr_pages(folio);
+		err = shmem_inode_acct_blocks(inode, npages);
 		if (err) {
-			pr_err("shmem: failed to account folio index %ld: %d\n",
-			       i, err);
+			pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
+			       i, npages, err);
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, 1, 0);
+		shmem_recalc_inode(inode, npages, 0);
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-04-02 11:02   ` Pratyush Yadav
  2026-03-26  8:47 ` [PATCH v3 3/7] mm/memfd_luo: remove unnecessary memset in zero-size memfd path Chenghao Duan
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
to improve performance when restoring large memfds.

Currently, shmem_recalc_inode() is called for each folio during restore,
which amounts to O(n) expensive operations. This patch accumulates the
number of pages in successfully added folios and calls
shmem_recalc_inode() once after the loop completes, reducing this to O(1).

Additionally, fix the error path to also call shmem_recalc_inode() for
the folios that were successfully added before the error occurred.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 953440994ad2..2a01eaff03c2 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -395,7 +395,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	struct inode *inode = file_inode(file);
 	struct address_space *mapping = inode->i_mapping;
 	struct folio *folio;
-	long npages;
+	long npages, nr_added_pages = 0;
 	int err = -EIO;
 	long i;
 
@@ -450,12 +450,14 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			goto unlock_folio;
 		}
 
-		shmem_recalc_inode(inode, npages, 0);
+		nr_added_pages += npages;
 		folio_add_lru(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return 0;
 
 unlock_folio:
@@ -474,6 +476,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
 			folio_put(folio);
 	}
 
+	shmem_recalc_inode(inode, nr_added_pages, 0);
+
 	return err;
 }
 
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 3/7] mm/memfd_luo: remove unnecessary memset in zero-size memfd path
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 4/7] mm/memfd_luo: use i_size_write() to set inode size during retrieve Chenghao Duan
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

The memset(kho_vmalloc, 0, sizeof(*kho_vmalloc)) call in the zero-size
file handling path is unnecessary: the allocation of the ser structure
uses the __GFP_ZERO flag, so the memory is already zero-initialized.

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 2a01eaff03c2..bf827e574bec 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -103,7 +103,6 @@ static int memfd_luo_preserve_folios(struct file *file,
 	if (!size) {
 		*nr_foliosp = 0;
 		*out_folios_ser = NULL;
-		memset(kho_vmalloc, 0, sizeof(*kho_vmalloc));
 		return 0;
 	}
 
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 4/7] mm/memfd_luo: use i_size_write() to set inode size during retrieve
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
                   ` (2 preceding siblings ...)
  2026-03-26  8:47 ` [PATCH v3 3/7] mm/memfd_luo: remove unnecessary memset in zero-size memfd path Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-03-26  8:47 ` [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup Chenghao Duan
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

Use i_size_write() instead of directly assigning to inode->i_size
when restoring the memfd size in memfd_luo_retrieve(), to keep code
consistency.

No functional change intended.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index bf827e574bec..76edf9a3f5b5 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -499,7 +499,7 @@ static int memfd_luo_retrieve(struct liveupdate_file_op_args *args)
 	}
 
 	vfs_setpos(file, ser->pos, MAX_LFS_FILESIZE);
-	file->f_inode->i_size = ser->size;
+	i_size_write(file_inode(file), ser->size);
 
 	if (ser->nr_folios) {
 		folios_ser = kho_restore_vmalloc(&ser->folios);
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
                   ` (3 preceding siblings ...)
  2026-03-26  8:47 ` [PATCH v3 4/7] mm/memfd_luo: use i_size_write() to set inode size during retrieve Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-04-02  1:30   ` Pasha Tatashin
  2026-04-02 11:06   ` Pratyush Yadav
  2026-03-26  8:47 ` [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails Chenghao Duan
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_retrieve_folios()'s put_folios cleanup path:

1. kho_restore_folio() expects a phys_addr_t (physical address) but
   receives a raw PFN (pfolio->pfn). This causes kho_restore_page() to
   check the wrong physical address (the raw PFN is used where
   pfn << PAGE_SHIFT, the actual physical address, is expected).

2. This loop lacks the !pfolio->pfn check that exists in the main
   retrieval loop and memfd_luo_discard_folios(), which could
   incorrectly process sparse file holes where pfn=0.

Fix by converting PFN to physical address with PFN_PHYS() and adding
the !pfolio->pfn check, matching the pattern used elsewhere in this file.

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index 76edf9a3f5b5..b4cea3670689 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -469,8 +469,13 @@ static int memfd_luo_retrieve_folios(struct file *file,
 	 */
 	for (long j = i + 1; j < nr_folios; j++) {
 		const struct memfd_luo_folio_ser *pfolio = &folios_ser[j];
+		phys_addr_t phys;
+
+		if (!pfolio->pfn)
+			continue;
 
-		folio = kho_restore_folio(pfolio->pfn);
+		phys = PFN_PHYS(pfolio->pfn);
+		folio = kho_restore_folio(phys);
 		if (folio)
 			folio_put(folio);
 	}
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
                   ` (4 preceding siblings ...)
  2026-03-26  8:47 ` [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-04-02  1:32   ` Pasha Tatashin
  2026-04-02 11:52   ` Pratyush Yadav
  2026-03-26  8:47 ` [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios Chenghao Duan
  2026-03-26 23:36 ` [PATCH v3 0/7] Modify memfd_luo code Andrew Morton
  7 siblings, 2 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
after successfully adding the folio to the page cache, the code jumps
to unlock_folio without removing the folio from the page cache.

This leaves the folio permanently abandoned in the page cache:
- The folio was added via shmem_add_to_page_cache() which set up
  mapping, index, and incremented nrpages/shmem stats.
- folio_unlock() and folio_put() do not remove it from the cache.
- folio_add_lru() was never called, so it cannot be reclaimed.

Fix by adding a remove_from_cache label that calls filemap_remove_folio()
before unlocking, matching the error handling pattern in
shmem_alloc_and_add_folio().

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index b4cea3670689..f8e8f99b1848 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -446,7 +446,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
 		if (err) {
 			pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
 			       i, npages, err);
-			goto unlock_folio;
+			goto remove_from_cache;
 		}
 
 		nr_added_pages += npages;
@@ -459,6 +459,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
 
 	return 0;
 
+remove_from_cache:
+	filemap_remove_folio(folio);
 unlock_folio:
 	folio_unlock(folio);
 	folio_put(folio);
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
                   ` (5 preceding siblings ...)
  2026-03-26  8:47 ` [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails Chenghao Duan
@ 2026-03-26  8:47 ` Chenghao Duan
  2026-04-02  1:39   ` Pasha Tatashin
  2026-04-02 12:06   ` Pratyush Yadav
  2026-03-26 23:36 ` [PATCH v3 0/7] Modify memfd_luo code Andrew Morton
  7 siblings, 2 replies; 23+ messages in thread
From: Chenghao Duan @ 2026-03-26  8:47 UTC (permalink / raw)
  To: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm
  Cc: jianghaoran, duanchenghao

In memfd_luo_preserve_folios(), two variables had types that could cause
silent data loss with large files:

1. 'size' was declared as 'long', truncating the 64-bit result of
   i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,
   causing the function to return early and discard all data.

2. 'max_folios' was declared as 'unsigned int', causing overflow for
   sparse files larger than 4TB. For example, a 16TB+4KB file would
   calculate 0x100000001 folios but truncate to 1 when assigned to
   max_folios, causing memfd_pin_folios() to pin only the first folio.

Fix by changing both variables to 'u64' to match the types returned
by i_size_read() and the folio count calculations.

This issue was identified by the AI review.
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
---
 mm/memfd_luo.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
index f8e8f99b1848..4b4fa2f658d9 100644
--- a/mm/memfd_luo.c
+++ b/mm/memfd_luo.c
@@ -88,8 +88,8 @@ static int memfd_luo_preserve_folios(struct file *file,
 {
 	struct inode *inode = file_inode(file);
 	struct memfd_luo_folio_ser *folios_ser;
-	unsigned int max_folios;
-	long i, size, nr_pinned;
+	u64 size, max_folios;
+	long i, nr_pinned;
 	struct folio **folios;
 	int err = -EINVAL;
 	pgoff_t offset;
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 0/7] Modify memfd_luo code
  2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
                   ` (6 preceding siblings ...)
  2026-03-26  8:47 ` [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios Chenghao Duan
@ 2026-03-26 23:36 ` Andrew Morton
  7 siblings, 0 replies; 23+ messages in thread
From: Andrew Morton @ 2026-03-26 23:36 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, linux-kernel, linux-mm,
	jianghaoran

On Thu, 26 Mar 2026 16:47:20 +0800 Chenghao Duan <duanchenghao@kylinos.cn> wrote:

> Subject: [PATCH v3 0/7] Modify memfd_luo code

I'd like to see a more descriptive title than this.  Maybe "memfd_luo:
various fixes and cleanups".  This isn't a big deal and "Modify
memfd_luo code" is good enough ;)

> Date: Thu, 26 Mar 2026 16:47:20 +0800
> X-Mailer: git-send-email 2.25.1
> 
> While reading the code, I found several points that could be improved. Please review.
> 
> v3:
> v2 patches remain unchanged, and v3 adds 3 additional patches.
> <mm/memfd_luo: fix physical address conversion in put_folios cleanup>
> <mm/memfd_luo: remove folio from page cache when accounting fails>
> <mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios>
> 
> These three patches address issues identified by the AI review,
> with the review link as follows:
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

OK, I'll add this.  I'm trying to seriously slow things down now but
this series does address a bunch of issues.

Maintainers, there are a few more patches here than were in v2, so
please take a look?

Thanks.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting
  2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
@ 2026-04-02  1:23   ` Pasha Tatashin
  2026-04-02 10:59   ` Pratyush Yadav
  1 sibling, 0 replies; 23+ messages in thread
From: Pasha Tatashin @ 2026-04-02  1:23 UTC (permalink / raw)
  To: Chenghao Duan; +Cc: rppt, pratyush, akpm, linux-kernel, linux-mm, jianghaoran

> memfd_luo_retrieve_folios() called shmem_inode_acct_blocks() and
> shmem_recalc_inode() with hardcoded 1 instead of the actual folio
> page count.  memfd may use large folios (THP/hugepages), causing
> quota/limit under-accounting and incorrect stat output.
>
> Fix by using folio_nr_pages(folio) for both functions.
>
> Issue found by AI review and suggested by Pratyush Yadav <pratyush@kernel.org>.
> https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn
>
> Suggested-by: Pratyush Yadav <pratyush@kernel.org>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup
  2026-03-26  8:47 ` [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup Chenghao Duan
@ 2026-04-02  1:30   ` Pasha Tatashin
  2026-04-02 11:06   ` Pratyush Yadav
  1 sibling, 0 replies; 23+ messages in thread
From: Pasha Tatashin @ 2026-04-02  1:30 UTC (permalink / raw)
  To: Chenghao Duan; +Cc: rppt, pratyush, akpm, linux-kernel, linux-mm, jianghaoran

On Thu, Mar 26, 2026 at 4:48 AM Chenghao Duan <duanchenghao@kylinos.cn> wrote:
>
> In memfd_luo_retrieve_folios()'s put_folios cleanup path:
>
> 1. kho_restore_folio() expects a phys_addr_t (physical address) but
>    receives a raw PFN (pfolio->pfn). This causes kho_restore_page() to
>    check the wrong physical address (the raw PFN is used where
>    pfn << PAGE_SHIFT, the actual physical address, is expected).
>
> 2. This loop lacks the !pfolio->pfn check that exists in the main
>    retrieval loop and memfd_luo_discard_folios(), which could
>    incorrectly process sparse file holes where pfn=0.
>
> Fix by converting PFN to physical address with PFN_PHYS() and adding
> the !pfolio->pfn check, matching the pattern used elsewhere in this file.
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> ---
>  mm/memfd_luo.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
> index 76edf9a3f5b5..b4cea3670689 100644
> --- a/mm/memfd_luo.c
> +++ b/mm/memfd_luo.c
> @@ -469,8 +469,13 @@ static int memfd_luo_retrieve_folios(struct file *file,
>          */
>         for (long j = i + 1; j < nr_folios; j++) {
>                 const struct memfd_luo_folio_ser *pfolio = &folios_ser[j];
> +               phys_addr_t phys;
> +
> +               if (!pfolio->pfn)
> +                       continue;
>
> -               folio = kho_restore_folio(pfolio->pfn);
> +               phys = PFN_PHYS(pfolio->pfn);
> +               folio = kho_restore_folio(phys);

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Thanks,
Pasha

>                 if (folio)
>                         folio_put(folio);
>         }
> --
> 2.25.1
>


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
  2026-03-26  8:47 ` [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails Chenghao Duan
@ 2026-04-02  1:32   ` Pasha Tatashin
  2026-04-02 11:52   ` Pratyush Yadav
  1 sibling, 0 replies; 23+ messages in thread
From: Pasha Tatashin @ 2026-04-02  1:32 UTC (permalink / raw)
  To: Chenghao Duan; +Cc: rppt, pratyush, akpm, linux-kernel, linux-mm, jianghaoran

On Thu, Mar 26, 2026 at 4:48 AM Chenghao Duan <duanchenghao@kylinos.cn> wrote:
>
> In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
> after successfully adding the folio to the page cache, the code jumps
> to unlock_folio without removing the folio from the page cache.
>
> This leaves the folio permanently abandoned in the page cache:
> - The folio was added via shmem_add_to_page_cache() which set up
>   mapping, index, and incremented nrpages/shmem stats.
> - folio_unlock() and folio_put() do not remove it from the cache.
> - folio_add_lru() was never called, so it cannot be reclaimed.
>
> Fix by adding a remove_from_cache label that calls filemap_remove_folio()
> before unlocking, matching the error handling pattern in
> shmem_alloc_and_add_folio().
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> ---
>  mm/memfd_luo.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
> index b4cea3670689..f8e8f99b1848 100644
> --- a/mm/memfd_luo.c
> +++ b/mm/memfd_luo.c
> @@ -446,7 +446,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
>                 if (err) {
>                         pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
>                                i, npages, err);
> -                       goto unlock_folio;
> +                       goto remove_from_cache;
>                 }
>
>                 nr_added_pages += npages;
> @@ -459,6 +459,8 @@ static int memfd_luo_retrieve_folios(struct file *file,
>
>         return 0;
>
> +remove_from_cache:
> +       filemap_remove_folio(folio);
>  unlock_folio:

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>

Thanks,
Pasha


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
  2026-03-26  8:47 ` [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios Chenghao Duan
@ 2026-04-02  1:39   ` Pasha Tatashin
  2026-04-02 12:06   ` Pratyush Yadav
  1 sibling, 0 replies; 23+ messages in thread
From: Pasha Tatashin @ 2026-04-02  1:39 UTC (permalink / raw)
  To: Chenghao Duan; +Cc: rppt, pratyush, akpm, linux-kernel, linux-mm, jianghaoran

On Thu, Mar 26, 2026 at 4:48 AM Chenghao Duan <duanchenghao@kylinos.cn> wrote:
>
> In memfd_luo_preserve_folios(), two variables had types that could cause
> silent data loss with large files:
>
> 1. 'size' was declared as 'long', truncating the 64-bit result of
>    i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,

This is not an issue, KHO only supports 64-bit systems, but using the
correct type is a good idea anyway.

>    causing the function to return early and discard all data.
>
> 2. 'max_folios' was declared as 'unsigned int', causing overflow for
>    sparse files larger than 4TB. For example, a 16TB+4KB file would
>    calculate 0x100000001 folios but truncate to 1 when assigned to
>    max_folios, causing memfd_pin_folios() to pin only the first folio.
>
> Fix by changing both variables to 'u64' to match the types returned
> by i_size_read() and the folio count calculations.

Strictly speaking, i_size_read() returns loff_t, which is 'long long',
so s64 rather than u64, but anyway u64 works here.

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting
  2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
  2026-04-02  1:23   ` Pasha Tatashin
@ 2026-04-02 10:59   ` Pratyush Yadav
  1 sibling, 0 replies; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-02 10:59 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm,
	jianghaoran

On Thu, Mar 26 2026, Chenghao Duan wrote:

> memfd_luo_retrieve_folios() called shmem_inode_acct_blocks() and
> shmem_recalc_inode() with hardcoded 1 instead of the actual folio
> page count.  memfd may use large folios (THP/hugepages), causing
> quota/limit under-accounting and incorrect stat output.
>
> Fix by using folio_nr_pages(folio) for both functions.
>
> Issue found by AI review and suggested by Pratyush Yadav <pratyush@kernel.org>.
> https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn
>
> Suggested-by: Pratyush Yadav <pratyush@kernel.org>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

[...]

-- 
Regards,
Pratyush Yadav


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
  2026-03-26  8:47 ` [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Chenghao Duan
@ 2026-04-02 11:02   ` Pratyush Yadav
  0 siblings, 0 replies; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-02 11:02 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm,
	jianghaoran

On Thu, Mar 26 2026, Chenghao Duan wrote:

> Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
> to improve performance when restoring large memfds.
>
> Currently, shmem_recalc_inode() is called for each folio during restore,
> which amounts to O(n) expensive operations. This patch accumulates the
> number of pages in successfully added folios and calls
> shmem_recalc_inode() once after the loop completes, reducing this to O(1).
>
> Additionally, fix the error path to also call shmem_recalc_inode() for
> the folios that were successfully added before the error occurred.
>
> Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

BTW, can we also do the same for shmem_inode_acct_blocks() if the call
to it can also be aggregated in the same way? You don't have to do it in
this series, but possibly as a follow up.

[...]

-- 
Regards,
Pratyush Yadav


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup
  2026-03-26  8:47 ` [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup Chenghao Duan
  2026-04-02  1:30   ` Pasha Tatashin
@ 2026-04-02 11:06   ` Pratyush Yadav
  2026-04-02 17:43     ` Andrew Morton
  1 sibling, 1 reply; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-02 11:06 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm,
	jianghaoran

On Thu, Mar 26 2026, Chenghao Duan wrote:

> In memfd_luo_retrieve_folios()'s put_folios cleanup path:
>
> 1. kho_restore_folio() expects a phys_addr_t (physical address) but
>    receives a raw PFN (pfolio->pfn). This causes kho_restore_page() to
>    check the wrong physical address (the raw PFN is used where
>    pfn << PAGE_SHIFT, the actual physical address, is expected).
>
> 2. This loop lacks the !pfolio->pfn check that exists in the main
>    retrieval loop and memfd_luo_discard_folios(), which could
>    incorrectly process sparse file holes where pfn=0.
>
> Fix by converting PFN to physical address with PFN_PHYS() and adding
> the !pfolio->pfn check, matching the pattern used elsewhere in this file.
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

Andrew, can you please add:

Fixes: b3749f174d68 ("mm: memfd_luo: allow preserving memfd")
Cc: stable@vger.kernel.org

[...]

-- 
Regards,
Pratyush Yadav


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
  2026-03-26  8:47 ` [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails Chenghao Duan
  2026-04-02  1:32   ` Pasha Tatashin
@ 2026-04-02 11:52   ` Pratyush Yadav
  2026-04-02 17:54     ` Andrew Morton
  1 sibling, 1 reply; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-02 11:52 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm,
	jianghaoran

On Thu, Mar 26 2026, Chenghao Duan wrote:

> In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
> after successfully adding the folio to the page cache, the code jumps
> to unlock_folio without removing the folio from the page cache.
>
> This leaves the folio permanently abandoned in the page cache:
> - The folio was added via shmem_add_to_page_cache() which set up
>   mapping, index, and incremented nrpages/shmem stats.
> - folio_unlock() and folio_put() do not remove it from the cache.
> - folio_add_lru() was never called, so it cannot be reclaimed.

This is just not true. The folio is _not_ "permanently abandoned" in the
page cache. When fput() is called by memfd_luo_retrieve(), it will
eventually call shmem_undo_range() on the whole mapping and free all the
folios in there.

I went and looked at shmem_undo_range() and the accompanying accounting
logic, and all of it seems to be impervious to this type of superfluous
folio in the filemap. The main reason is that shmem_recalc_inode()
directly uses mapping->nrpages after truncation, so even if you don't
account for the folio, it doesn't matter as long as you get rid of the
whole file (which we do).

I think the only place I can see this causing trouble is maybe in LRU
accounting, though I really don't understand how any of that works so
dunno.

Anyway, I do think this patch is worth having. It keeps the filemap
clean and gets rid of the need for this complex reasoning to figure out
whether this is safe.

So I think the commit message needs reworking. Perhaps something like
the below:

    mm/memfd_luo: remove folio from page cache when accounting fails

    In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
    after successfully adding the folio to the page cache, the code jumps
    to unlock_folio without removing the folio from the page cache.

    While the folio eventually will be freed when the file is released by
    memfd_luo_retrieve(), it is a good idea to directly remove a folio that
    was not fully added to the file. This avoids the possibility of
    accounting mismatches in shmem or filemap core.

    Fix by adding a remove_from_cache label that calls filemap_remove_folio()
    before unlocking, matching the error handling pattern in
    shmem_alloc_and_add_folio().

    This issue was identified by the AI review.
    https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

With that,

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

>
> Fix by adding a remove_from_cache label that calls filemap_remove_folio()
> before unlocking, matching the error handling pattern in
> shmem_alloc_and_add_folio().
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
[...]

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
  2026-03-26  8:47 ` [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios Chenghao Duan
  2026-04-02  1:39   ` Pasha Tatashin
@ 2026-04-02 12:06   ` Pratyush Yadav
  2026-04-02 17:58     ` Andrew Morton
  1 sibling, 1 reply; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-02 12:06 UTC (permalink / raw)
  To: Chenghao Duan
  Cc: pasha.tatashin, rppt, pratyush, akpm, linux-kernel, linux-mm,
	jianghaoran

On Thu, Mar 26 2026, Chenghao Duan wrote:

> In memfd_luo_preserve_folios(), two variables had types that could cause
> silent data loss with large files:
>
> 1. 'size' was declared as 'long', truncating the 64-bit result of
>    i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,
>    causing the function to return early and discard all data.

As Pasha said, KHO and LUO are not expected to run on 32-bit systems.
Plus, since i_size_read() returns loff_t, why use u64 when you can just
match the type and use loff_t (which on 64-bit is long anyway)? I
don't get why u64 is any better than long or loff_t.

>
> 2. 'max_folios' was declared as 'unsigned int', causing overflow for
>    sparse files larger than 4TB. For example, a 16TB+4KB file would
>    calculate 0x100000001 folios but truncate to 1 when assigned to
>    max_folios, causing memfd_pin_folios() to pin only the first folio.

Using unsigned int was intentional. We pass max_folios to
memfd_pin_folios(), which expects an unsigned int. So this change is
pointless unless you go and update memfd_pin_folios() too.

I think making memfd_pin_folios() use unsigned long for max_folios makes
a lot of sense, so can you please go update that first before making
this change? And when you do, please match the type of the argument to
the type you use here instead of using u64. This can be a separate,
independent patch series.

>
> Fix by changing both variables to 'u64' to match the types returned
> by i_size_read() and the folio count calculations.
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
[...]

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup
  2026-04-02 11:06   ` Pratyush Yadav
@ 2026-04-02 17:43     ` Andrew Morton
  0 siblings, 0 replies; 23+ messages in thread
From: Andrew Morton @ 2026-04-02 17:43 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Chenghao Duan, pasha.tatashin, rppt, linux-kernel, linux-mm,
	jianghaoran

On Thu, 02 Apr 2026 11:06:23 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:

> > Fix by converting PFN to physical address with PFN_PHYS() and adding
> > the !pfolio->pfn check, matching the pattern used elsewhere in this file.
> >
> > This issue was identified by the AI review.
> > https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
> >
> > Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> 
> Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

Thanks.

> Andrew, can you please add:
> 
> Fixes: b3749f174d68 ("mm: memfd_luo: allow preserving memfd")
> Cc: stable@vger.kernel.org

Done.



* Re: [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
  2026-04-02 11:52   ` Pratyush Yadav
@ 2026-04-02 17:54     ` Andrew Morton
  2026-04-03  9:07       ` Pratyush Yadav
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2026-04-02 17:54 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Chenghao Duan, pasha.tatashin, rppt, linux-kernel, linux-mm,
	jianghaoran

On Thu, 02 Apr 2026 11:52:57 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:

> So I think the commit message needs reworking. Perhaps something like
> the below:
>
> ...
>
> With that,
> 
> Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

Thanks, I did this:

From: Chenghao Duan <duanchenghao@kylinos.cn>
Subject: mm/memfd_luo: remove folio from page cache when accounting fails
Date: Thu, 26 Mar 2026 16:47:26 +0800

In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
after successfully adding the folio to the page cache, the code jumps
to unlock_folio without removing the folio from the page cache.

While the folio eventually will be freed when the file is released by
memfd_luo_retrieve(), it is a good idea to directly remove a folio that
was not fully added to the file.  This avoids the possibility of
accounting mismatches in shmem or filemap core.

Fix by adding a remove_from_cache label that calls
filemap_remove_folio() before unlocking, matching the error handling
pattern in shmem_alloc_and_add_folio().

This issue was identified by AI review:
https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

[pratyush@kernel.org: changelog alterations]
  Link: https://lkml.kernel.org/r/2vxzzf3lfujq.fsf@kernel.org
Link: https://lkml.kernel.org/r/20260326084727.118437-7-duanchenghao@kylinos.cn
Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
Cc: Haoran Jiang <jianghaoran@kylinos.cn>
Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memfd_luo.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/memfd_luo.c~mm-memfd_luo-remove-folio-from-page-cache-when-accounting-fails
+++ a/mm/memfd_luo.c
@@ -461,7 +461,7 @@ static int memfd_luo_retrieve_folios(str
 		if (err) {
 			pr_err("shmem: failed to account folio index %ld(%ld pages): %d\n",
 			       i, npages, err);
-			goto unlock_folio;
+			goto remove_from_cache;
 		}
 
 		nr_added_pages += npages;
@@ -474,6 +474,8 @@ static int memfd_luo_retrieve_folios(str
 
 	return 0;
 
+remove_from_cache:
+	filemap_remove_folio(folio);
 unlock_folio:
 	folio_unlock(folio);
 	folio_put(folio);
_




* Re: [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
  2026-04-02 12:06   ` Pratyush Yadav
@ 2026-04-02 17:58     ` Andrew Morton
  2026-04-03  9:06       ` Pratyush Yadav
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2026-04-02 17:58 UTC (permalink / raw)
  To: Pratyush Yadav
  Cc: Chenghao Duan, pasha.tatashin, rppt, linux-kernel, linux-mm,
	jianghaoran

On Thu, 02 Apr 2026 12:06:58 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:

> On Thu, Mar 26 2026, Chenghao Duan wrote:
> 
> > In memfd_luo_preserve_folios(), two variables had types that could cause
> > silent data loss with large files:
> >
> > 1. 'size' was declared as 'long', truncating the 64-bit result of
> >    i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,
> >    causing the function to return early and discard all data.
> 
> As Pasha said, KHO and LUO are not expected to run on 32-bit systems.
> Plus, since i_size_read() returns loff_t, why use u64 when you can just
> match the type and just use loff_t (which on 64-bit is long anyway)? I
> don't get why u64 is any better than long or loff_t.
> 
> >
> > 2. 'max_folios' was declared as 'unsigned int', causing overflow for
> >    sparse files larger than 4TB. For example, a 16TB+4KB file would
> >    calculate 0x100000001 folios but truncate to 1 when assigned to
> >    max_folios, causing memfd_pin_folios() to pin only the first folio.
> 
> Using unsigned int was intentional. We pass max_folios to
> memfd_pin_folios(), which expects an unsigned int. So this change is
> pointless unless you go and update memfd_pin_folios() too.
> 
> I think making memfd_pin_folios() use unsigned long for max_folios makes
> a lot of sense, so can you please go update that first before making
> this change? And when you do, please match the type of the argument to
> the type you use here instead of using u64. This can be a separate,
> independent patch series.

Thanks.  I'll drop this patch.  The preceding six patches are looking
well-reviewed and ready to go?

Chenghao, please prepare any update for this patch against the
preceding six.  Or against tomorrow's mm-unstable or mm-new or
linux-next.




* Re: [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios
  2026-04-02 17:58     ` Andrew Morton
@ 2026-04-03  9:06       ` Pratyush Yadav
  0 siblings, 0 replies; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-03  9:06 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Pratyush Yadav, Chenghao Duan, pasha.tatashin, rppt, linux-kernel,
	linux-mm, jianghaoran

On Thu, Apr 02 2026, Andrew Morton wrote:

> On Thu, 02 Apr 2026 12:06:58 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:
>
>> On Thu, Mar 26 2026, Chenghao Duan wrote:
>> 
>> > In memfd_luo_preserve_folios(), two variables had types that could cause
>> > silent data loss with large files:
>> >
>> > 1. 'size' was declared as 'long', truncating the 64-bit result of
>> >    i_size_read(). On 32-bit systems a 4GB file would be truncated to 0,
>> >    causing the function to return early and discard all data.
>> 
>> As Pasha said, KHO and LUO are not expected to run on 32-bit systems.
>> Plus, since i_size_read() returns loff_t, why use u64 when you can just
>> match the type and just use loff_t (which on 64-bit is long anyway)? I
>> don't get why u64 is any better than long or loff_t.
>> 
>> >
>> > 2. 'max_folios' was declared as 'unsigned int', causing overflow for
>> >    sparse files larger than 4TB. For example, a 16TB+4KB file would
>> >    calculate 0x100000001 folios but truncate to 1 when assigned to
>> >    max_folios, causing memfd_pin_folios() to pin only the first folio.
>> 
>> Using unsigned int was intentional. We pass max_folios to
>> memfd_pin_folios(), which expects an unsigned int. So this change is
>> pointless unless you go and update memfd_pin_folios() too.
>> 
>> I think making memfd_pin_folios() use unsigned long for max_folios makes
>> a lot of sense, so can you please go update that first before making
>> this change? And when you do, please match the type of the argument to
>> the type you use here instead of using u64. This can be a separate,
>> independent patch series.
>
> Thanks.  I'll drop this patch.  The preceding six patches are looking
> well-reviewed and ready to go?

Yes. The first six patches are good to go.

I think the changes in this one can be split off as a separate series
since it will be a bit more involved.

>
> Chenghao, please prepare any update for this patch against the
> preceding six.  Or against tomorrow's mm-unstable or mm-new or
> linux-next.
>

-- 
Regards,
Pratyush Yadav



* Re: [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
  2026-04-02 17:54     ` Andrew Morton
@ 2026-04-03  9:07       ` Pratyush Yadav
  0 siblings, 0 replies; 23+ messages in thread
From: Pratyush Yadav @ 2026-04-03  9:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Pratyush Yadav, Chenghao Duan, pasha.tatashin, rppt, linux-kernel,
	linux-mm, jianghaoran

On Thu, Apr 02 2026, Andrew Morton wrote:

> On Thu, 02 Apr 2026 11:52:57 +0000 Pratyush Yadav <pratyush@kernel.org> wrote:
>
>> So I think the commit message needs reworking. Perhaps something like
>> the below:
>>
>> ...
>>
>> With that,
>> 
>> Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
>
> Thanks, I did this:

LGTM. Thanks!

[...]

-- 
Regards,
Pratyush Yadav




Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
2026-03-26  8:47 [PATCH v3 0/7] Modify memfd_luo code Chenghao Duan
2026-03-26  8:47 ` [PATCH v3 1/7] mm/memfd: use folio_nr_pages() for shmem inode accounting Chenghao Duan
2026-04-02  1:23   ` Pasha Tatashin
2026-04-02 10:59   ` Pratyush Yadav
2026-03-26  8:47 ` [PATCH v3 2/7] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Chenghao Duan
2026-04-02 11:02   ` Pratyush Yadav
2026-03-26  8:47 ` [PATCH v3 3/7] mm/memfd_luo: remove unnecessary memset in zero-size memfd path Chenghao Duan
2026-03-26  8:47 ` [PATCH v3 4/7] mm/memfd_luo: use i_size_write() to set inode size during retrieve Chenghao Duan
2026-03-26  8:47 ` [PATCH v3 5/7] mm/memfd_luo: fix physical address conversion in put_folios cleanup Chenghao Duan
2026-04-02  1:30   ` Pasha Tatashin
2026-04-02 11:06   ` Pratyush Yadav
2026-04-02 17:43     ` Andrew Morton
2026-03-26  8:47 ` [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails Chenghao Duan
2026-04-02  1:32   ` Pasha Tatashin
2026-04-02 11:52   ` Pratyush Yadav
2026-04-02 17:54     ` Andrew Morton
2026-04-03  9:07       ` Pratyush Yadav
2026-03-26  8:47 ` [PATCH v3 7/7] mm/memfd_luo: fix integer overflow in memfd_luo_preserve_folios Chenghao Duan
2026-04-02  1:39   ` Pasha Tatashin
2026-04-02 12:06   ` Pratyush Yadav
2026-04-02 17:58     ` Andrew Morton
2026-04-03  9:06       ` Pratyush Yadav
2026-03-26 23:36 ` [PATCH v3 0/7] Modify memfd_luo code Andrew Morton

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox