public inbox for linux-mm@kvack.org
From: Pratyush Yadav <pratyush@kernel.org>
To: Chenghao Duan <duanchenghao@kylinos.cn>
Cc: pasha.tatashin@soleen.com,  rppt@kernel.org,
	 pratyush@kernel.org, akpm@linux-foundation.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	 jianghaoran@kylinos.cn
Subject: Re: [PATCH v1 1/3] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
Date: Fri, 20 Mar 2026 10:02:49 +0000	[thread overview]
Message-ID: <2vxzikaqbyye.fsf@kernel.org> (raw)
In-Reply-To: <20260319012845.29570-2-duanchenghao@kylinos.cn> (Chenghao Duan's message of "Thu, 19 Mar 2026 09:28:43 +0800")

On Thu, Mar 19 2026, Chenghao Duan wrote:

> Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
> to improve performance when restoring large memfds.
>
> Currently, shmem_recalc_inode() is called for each folio during restore,
> which amounts to O(n) expensive calls. This patch counts the number of
> successfully added folios and calls shmem_recalc_inode() once after the
> loop completes, reducing this to a single O(1) call.
>
> Additionally, fix the error path to also call shmem_recalc_inode() for
> the folios that were successfully added before the error occurred.
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
> ---
>  mm/memfd_luo.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
> index b8edb9f981d7..5ddd3657d8be 100644
> --- a/mm/memfd_luo.c
> +++ b/mm/memfd_luo.c
> @@ -397,6 +397,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  	struct folio *folio;
>  	int err = -EIO;
>  	long i;
> +	u64 nr_added = 0;
>  
>  	for (i = 0; i < nr_folios; i++) {
>  		const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> @@ -448,12 +449,15 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  			goto unlock_folio;
>  		}
>  
> -		shmem_recalc_inode(inode, 1, 0);
> +		nr_added++;

https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn

AI review picked up a real bug here:

    Since memfd files can use large folios, should nr_added track the
    number of pages instead of the number of folios?

    shmem_recalc_inode() expects the number of pages. Passing the number
    of folios might under-account blocks and bypass tmpfs limits or
    quotas.

    Also, shmem_inode_acct_blocks() earlier in the loop is hardcoded to
    1, which might have the same issue.

If THP is being used, we should account for the number of pages rather
than the number of folios. Can you please also add a fix for this to
your series, so that we fix the bugs the existing code has before
refactoring it?

>  		folio_add_lru(folio);
>  		folio_unlock(folio);
>  		folio_put(folio);
>  	}
>  
> +	if (nr_added)
> +		shmem_recalc_inode(inode, nr_added, 0);

Nit: it is very likely that nr_added > 0, and shmem_recalc_inode() can
handle an nr_added of 0 just fine. So please drop this if check and
call it directly.

Other than this, the patch LGTM. Thanks for working on this!

> +
>  	return 0;
>  
>  unlock_folio:
> @@ -472,6 +476,9 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  			folio_put(folio);
>  	}
>  
> +	if (nr_added)
> +		shmem_recalc_inode(inode, nr_added, 0);
> +
>  	return err;
>  }

-- 
Regards,
Pratyush Yadav



Thread overview: 13+ messages
2026-03-19  1:28 [PATCH v1 0/3] Modify memfd_luo code Chenghao Duan
2026-03-19  1:28 ` [PATCH v1 1/3] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path Chenghao Duan
2026-03-19 15:28   ` Pasha Tatashin
2026-03-20  9:53     ` Pratyush Yadav
2026-03-20 10:02   ` Pratyush Yadav [this message]
2026-03-19  1:28 ` [PATCH v1 2/3] mm/memfd_luo: remove unnecessary memset in zero-size memfd path Chenghao Duan
2026-03-19 16:20   ` Pasha Tatashin
2026-03-20 10:04   ` Pratyush Yadav
2026-03-20 11:37   ` Mike Rapoport
2026-03-19  1:28 ` [PATCH v1 3/3] mm/memfd_luo: use i_size_write() to set inode size during retrieve Chenghao Duan
2026-03-19 16:24   ` Pasha Tatashin
2026-03-20  9:51   ` Pratyush Yadav
2026-03-20 11:35     ` Mike Rapoport
