From: Pratyush Yadav <pratyush@kernel.org>
To: Chenghao Duan <duanchenghao@kylinos.cn>
Cc: pasha.tatashin@soleen.com, rppt@kernel.org, pratyush@kernel.org,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, jianghaoran@kylinos.cn
Subject: Re: [PATCH v3 6/7] mm/memfd_luo: remove folio from page cache when accounting fails
In-Reply-To: <20260326084727.118437-7-duanchenghao@kylinos.cn> (Chenghao
	Duan's message of "Thu, 26 Mar 2026 16:47:26 +0800")
References: <20260326084727.118437-1-duanchenghao@kylinos.cn>
	<20260326084727.118437-7-duanchenghao@kylinos.cn>
Date: Thu, 02 Apr 2026 11:52:57 +0000
Message-ID: <2vxzzf3lfujq.fsf@kernel.org>
User-Agent: Gnus/5.13 (Gnus v5.13)
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Mar 26 2026, Chenghao Duan wrote:

> In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
> after successfully adding the folio to the page cache, the code jumps
> to unlock_folio without removing the folio from the page cache.
>
> This leaves the folio permanently abandoned in the page cache:
> - The folio was added via shmem_add_to_page_cache() which set up
>   mapping, index, and incremented nrpages/shmem stats.
> - folio_unlock() and folio_put() do not remove it from the cache.
> - folio_add_lru() was never called, so it cannot be reclaimed.

This is just not true. The folio is _not_ "permanently abandoned" in the
page cache. When fput() is called by memfd_luo_retrieve(), it will
eventually call shmem_undo_range() on the whole mapping and free all the
folios in there.

I went and looked at shmem_undo_range() and the accompanying accounting
logic, and all of that seems to be impervious to this type of
superfluous folio in the filemap.
The main reason is that shmem_recalc_inode() directly uses
mapping->nrpages after truncation, so even if you don't account for the
folio, as long as you get rid of the whole file (which we do) it doesn't
matter.

I think the only place I can see this causing trouble is maybe in LRU
accounting, though I really don't understand how any of that works, so
dunno.

Anyway, I do think this patch is worth having. It keeps the filemap
clean and gets rid of the need for this complex reasoning to figure out
whether this is safe. So I think the commit message needs reworking.
Perhaps something like the below:

    mm/memfd_luo: remove folio from page cache when accounting fails

    In memfd_luo_retrieve_folios(), when shmem_inode_acct_blocks() fails
    after successfully adding the folio to the page cache, the code
    jumps to unlock_folio without removing the folio from the page
    cache.

    While the folio will eventually be freed when the file is released
    by memfd_luo_retrieve(), it is a good idea to directly remove a
    folio that was not fully added to the file. This avoids the
    possibility of accounting mismatches in shmem or filemap core.

    Fix by adding a remove_from_cache label that calls
    filemap_remove_folio() before unlocking, matching the error
    handling pattern in shmem_alloc_and_add_folio().

    This issue was identified by AI review.
    https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn

With that,

Reviewed-by: Pratyush Yadav <pratyush@kernel.org>

>
> Fix by adding a remove_from_cache label that calls filemap_remove_folio()
> before unlocking, matching the error handling pattern in
> shmem_alloc_and_add_folio().
>
> This issue was identified by the AI review.
> https://sashiko.dev/#/patchset/20260323110747.193569-1-duanchenghao@kylinos.cn
>
> Signed-off-by: Chenghao Duan <duanchenghao@kylinos.cn>
[...]

-- 
Regards,
Pratyush Yadav