From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Kiryl Shutsemau,
	Al Viro, Baolin Wang, Christian Brauner, "Darrick J. Wong",
	Dave Chinner, David Hildenbrand, Hugh Dickins, Johannes Weiner,
	Liam Howlett, Lorenzo Stoakes, "Matthew Wilcox (Oracle)",
	Michal Hocko, Mike Rapoport, Rik van Riel, Shakeel Butt,
	Suren Baghdasaryan, Vlastimil Babka, Andrew Morton
Subject: [PATCH 6.17 077/175] mm/truncate: unmap large folio on split failure
Date: Thu, 27 Nov 2025 15:45:30 +0100
Message-ID: <20251127144045.778843047@linuxfoundation.org>
In-Reply-To: <20251127144042.945669935@linuxfoundation.org>
References: <20251127144042.945669935@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.17-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kiryl Shutsemau

commit fa04f5b60fda62c98a53a60de3a1e763f11feb41 upstream.

Accesses within a VMA, but beyond i_size rounded up to PAGE_SIZE, are
supposed to generate SIGBUS. This behavior might not be respected on
truncation.

During truncation, the kernel splits a large folio in order to reclaim
memory. As a side effect, it unmaps the folio and destroys PMD mappings
of the folio. The folio will then be refaulted as PTEs and SIGBUS
semantics are preserved.
However, if the split fails, PMD mappings are preserved and the user will
not receive SIGBUS on any accesses within the PMD.

Unmap the folio on split failure. It will lead to refault as PTEs and
preserve SIGBUS semantics.

Make an exception for shmem/tmpfs, which has for a long time been
intentionally mapped with PMDs across i_size.

Link: https://lkml.kernel.org/r/20251027115636.82382-3-kirill@shutemov.name
Fixes: b9a8a4195c7d ("truncate,shmem: Handle truncates that split large folios")
Signed-off-by: Kiryl Shutsemau
Cc: Al Viro
Cc: Baolin Wang
Cc: Christian Brauner
Cc: "Darrick J. Wong"
Cc: Dave Chinner
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Rik van Riel
Cc: Shakeel Butt
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/truncate.c |   35 +++++++++++++++++++++++++++++------
 1 file changed, 29 insertions(+), 6 deletions(-)

--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -177,6 +177,32 @@ int truncate_inode_folio(struct address_
 	return 0;
 }
 
+static int try_folio_split_or_unmap(struct folio *folio, struct page *split_at,
+		unsigned long min_order)
+{
+	enum ttu_flags ttu_flags =
+		TTU_SYNC |
+		TTU_SPLIT_HUGE_PMD |
+		TTU_IGNORE_MLOCK;
+	int ret;
+
+	ret = try_folio_split_to_order(folio, split_at, min_order);
+
+	/*
+	 * If the split fails, unmap the folio, so it will be refaulted
+	 * with PTEs to respect SIGBUS semantics.
+	 *
+	 * Make an exception for shmem/tmpfs that for long time
+	 * intentionally mapped with PMDs across i_size.
+	 */
+	if (ret && !shmem_mapping(folio->mapping)) {
+		try_to_unmap(folio, ttu_flags);
+		WARN_ON(folio_mapped(folio));
+	}
+
+	return ret;
+}
+
 /*
  * Handle partial folios. The folio may be entirely within the
  * range if a split has raced with us. If not, we zero the part of the
@@ -226,7 +252,7 @@ bool truncate_inode_partial_folio(struct
 
 	min_order = mapping_min_folio_order(folio->mapping);
 	split_at = folio_page(folio, PAGE_ALIGN_DOWN(offset) / PAGE_SIZE);
-	if (!try_folio_split_to_order(folio, split_at, min_order)) {
+	if (!try_folio_split_or_unmap(folio, split_at, min_order)) {
 		/*
 		 * try to split at offset + length to make sure folios within
 		 * the range can be dropped, especially to avoid memory waste
@@ -250,13 +276,10 @@ bool truncate_inode_partial_folio(struct
 	if (!folio_trylock(folio2))
 		goto out;
 
-	/*
-	 * make sure folio2 is large and does not change its mapping.
-	 * Its split result does not matter here.
-	 */
+	/* make sure folio2 is large and does not change its mapping */
 	if (folio_test_large(folio2) &&
 	    folio2->mapping == folio->mapping)
-		try_folio_split_to_order(folio2, split_at2, min_order);
+		try_folio_split_or_unmap(folio2, split_at2, min_order);
 
 	folio_unlock(folio2);
 out:
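
For reference, the user-visible semantics described above can be illustrated
from userspace. The sketch below is not part of the patch; the file name,
sizes, and use of MAP_SHARED are arbitrary choices for the example. It maps a
file, truncates it to a size that is not page-aligned, and touches an offset
past the page-rounded i_size, which is expected to raise SIGBUS. Whether the
mapping is actually PMD-mapped depends on the filesystem's large folio support
and kernel configuration; note the file is created on the current filesystem
rather than /tmp, since tmpfs is explicitly exempted by the patch.

/*
 * Illustrative sketch only: demonstrates SIGBUS for accesses beyond
 * i_size rounded up to PAGE_SIZE after truncation of a mapped file.
 * Run it on a regular (non-tmpfs) filesystem.
 */
#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN (2UL << 20)	/* 2 MiB mapping, PMD-sized on x86-64 */

static sigjmp_buf env;

static void sigbus_handler(int sig)
{
	siglongjmp(env, 1);
}

int main(void)
{
	const char *path = "./truncate-sigbus-test";
	char *map;
	int fd;

	signal(SIGBUS, sigbus_handler);

	fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Start with a file that covers the whole mapping. */
	if (ftruncate(fd, MAP_LEN)) {
		perror("ftruncate");
		return 1;
	}

	map = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch every page so the range may be backed by large folios. */
	memset(map, 0xaa, MAP_LEN);

	/*
	 * Truncate to a non-page-aligned size. Accesses below 5000 bytes
	 * rounded up to PAGE_SIZE stay valid; anything past that rounded
	 * boundary must deliver SIGBUS.
	 */
	if (ftruncate(fd, 5000)) {
		perror("ftruncate");
		return 1;
	}

	if (sigsetjmp(env, 1) == 0) {
		volatile char c = map[MAP_LEN - 1];	/* far past new EOF */
		(void)c;
		printf("no SIGBUS past EOF (unexpected)\n");
	} else {
		printf("SIGBUS past EOF (expected)\n");
	}

	munmap(map, MAP_LEN);
	close(fd);
	unlink(path);
	return 0;
}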