From: Pratyush Yadav
To: Chenghao Duan
Cc: pasha.tatashin@soleen.com, rppt@kernel.org, pratyush@kernel.org,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, jianghaoran@kylinos.cn
Subject: Re: [PATCH v1 1/3] mm/memfd_luo: optimize shmem_recalc_inode calls in retrieve path
In-Reply-To: <20260319012845.29570-2-duanchenghao@kylinos.cn> (Chenghao Duan's message of "Thu, 19 Mar 2026 09:28:43 +0800")
References: <20260319012845.29570-1-duanchenghao@kylinos.cn>
	<20260319012845.29570-2-duanchenghao@kylinos.cn>
Date: Fri, 20 Mar 2026 10:02:49 +0000
Message-ID: <2vxzikaqbyye.fsf@kernel.org>

On Thu, Mar 19 2026, Chenghao Duan wrote:

> Move shmem_recalc_inode() out of the loop in memfd_luo_retrieve_folios()
> to improve performance when restoring large memfds.
>
> Currently, shmem_recalc_inode() is called once per folio during restore,
> which amounts to O(n) expensive operations. This patch counts the number
> of successfully added folios and calls shmem_recalc_inode() once after
> the loop completes, reducing this to O(1).
>
> Additionally, fix the error path to also call shmem_recalc_inode() for
> the folios that were successfully added before the error occurred.
>
> Signed-off-by: Chenghao Duan
> ---
>  mm/memfd_luo.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memfd_luo.c b/mm/memfd_luo.c
> index b8edb9f981d7..5ddd3657d8be 100644
> --- a/mm/memfd_luo.c
> +++ b/mm/memfd_luo.c
> @@ -397,6 +397,7 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  	struct folio *folio;
>  	int err = -EIO;
>  	long i;
> +	u64 nr_added = 0;
>
>  	for (i = 0; i < nr_folios; i++) {
>  		const struct memfd_luo_folio_ser *pfolio = &folios_ser[i];
> @@ -448,12 +449,15 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  			goto unlock_folio;
>  		}
>
> -		shmem_recalc_inode(inode, 1, 0);
> +		nr_added++;

https://sashiko.dev/#/patchset/20260319012845.29570-1-duanchenghao%40kylinos.cn

AI review picked up a real bug here: since memfd files can use large
folios, should nr_added track the number of pages instead of the number
of folios? shmem_recalc_inode() expects a number of pages, so passing a
folio count can under-account blocks and bypass tmpfs limits or quotas.

Also, shmem_inode_acct_blocks() earlier in the loop is hardcoded to 1,
which likely has the same issue: if THP is in use, it should account for
the folio's page count rather than 1.

Can you please also add a fix for this with your series? That way we fix
the bugs the code already has before refactoring it.

>  		folio_add_lru(folio);
>  		folio_unlock(folio);
>  		folio_put(folio);
>  	}
>
> +	if (nr_added)
> +		shmem_recalc_inode(inode, nr_added, 0);

Nit: it is very likely that nr_added > 0, and shmem_recalc_inode() can
handle nr_added == 0 anyway. So please drop this if check and call it
unconditionally.

Other than this, the patch LGTM. Thanks for working on this!

> +
>  	return 0;
>
>  unlock_folio:
> @@ -472,6 +476,9 @@ static int memfd_luo_retrieve_folios(struct file *file,
>  		folio_put(folio);
>  	}
>
> +	if (nr_added)
> +		shmem_recalc_inode(inode, nr_added, 0);
> +
>  	return err;
>  }

-- 
Regards,
Pratyush Yadav