From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 93E79335067;
	Fri, 21 Nov 2025 13:59:46 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1763733586; cv=none;
	b=MXGijeMEKTSPuzxmlX0fWyuZpgbb2/ArbMK2eS/mDzTAhPOH8P0C7WEeP1WF2iqxe4wY8byjvcYAOi1NCzlto0Js7/FXb5xJCKkm2A37s9YtN9gawhYQ/8fI8jM1AEN2OgQLkQmk0p3lWUin3SJf2Yr+e8FHTkWWnPbq6LODZJ8=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1763733586; c=relaxed/simple;
	bh=jNDC92NLbOPos8xOkI0mO1YAydEQdpdKvLakbov0NtE=;
	h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version;
	b=huHvsd/Bm/NycyujB8EHnvxayFzrey6qngp9K0hlcfgDLyYOg9zWO/7XOCMxsnDLSbBNqTv1W+7zJwNIlRohgtlncQQ1KEpnYl/nKDoMaRHIAweP6gj0gmxgkOb7S+R0Ewuz9xYa2JyfYGKAu3jxRbzq25099loCbhgrpVbAe/I=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linuxfoundation.org header.i=@linuxfoundation.org
	header.b=ck/Bqvct; arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linuxfoundation.org header.i=@linuxfoundation.org
	header.b="ck/Bqvct"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E2BA8C4CEF1;
	Fri, 21 Nov 2025 13:59:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1763733586;
	bh=jNDC92NLbOPos8xOkI0mO1YAydEQdpdKvLakbov0NtE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=ck/BqvctbsEPAcIzUeMGY3GuXUUpin62Xt6GR/T8ziLnpnlJYw53KEpC4E71wxI1H
	 o38UFu/vDokGCHeZjC0RypkoPX/pYmuBtUJ1GHXLJuhFGkpBUKHNQuxjQ0i4D6ELs+
	 jxpSqOdyagNYp1sJ/KA4M2Z3ZpxAoXqKphuBaj4g=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Kiryl Shutsemau,
	"Darrick J. Wong",
	Al Viro,
	Baolin Wang,
	Christian Brauner,
	Dave Chinner,
	David Hildenbrand,
	Hugh Dickins,
	Johannes Weiner,
	Liam Howlett,
	Lorenzo Stoakes,
	"Matthew Wilcox (Oracle)",
	Michal Hocko,
	Mike Rapoport,
	Rik van Riel,
	Shakeel Butt,
	Suren Baghdasaryan,
	Vlastimil Babka,
	Andrew Morton
Subject: [PATCH 6.6 519/529] mm/memory: do not populate page table entries beyond i_size
Date: Fri, 21 Nov 2025 14:13:38 +0100
Message-ID: <20251121130249.511462152@linuxfoundation.org>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20251121130230.985163914@linuxfoundation.org>
References: <20251121130230.985163914@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.6-stable review patch. If anyone has any objections, please let me know.

------------------

From: Kiryl Shutsemau

commit 74207de2ba10c2973334906822dc94d2e859ffc5 upstream.

Patch series "Fix SIGBUS semantics with large folios", v3.

Accessing memory within a VMA, but beyond i_size rounded up to the next
page size, is supposed to generate SIGBUS.

Darrick reported[1] an xfstests regression in v6.18-rc1. generic/749
failed due to missing SIGBUS.
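To make the expected semantics concrete, here is a minimal userspace
sketch (illustration only: the file name and sizes are arbitrary, and
this is not the generic/749 reproducer). It maps two pages of a file
whose i_size is a single byte and touches the second page; a regular
filesystem is expected to deliver SIGBUS for that access, while
huge=always tmpfs historically has not, which is the deviation
discussed below.

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void sigbus_handler(int sig)
{
	static const char msg[] = "got SIGBUS as expected\n";

	(void)sig;
	(void)!write(STDOUT_FILENO, msg, sizeof(msg) - 1);
	_exit(0);
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int fd = open("sigbus-demo.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
	char *p;

	/* i_size is one byte: only the first page is backed by the file. */
	if (fd < 0 || ftruncate(fd, 1) < 0) {
		perror("setup");
		return 1;
	}

	/* Map two pages; the second page lies wholly beyond i_size. */
	p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	signal(SIGBUS, sigbus_handler);

	p[0] = 'x';	/* within i_size: no signal */
	p[page] = 'x';	/* beyond i_size: SIGBUS expected */

	printf("no SIGBUS delivered\n");
	return 1;
}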
This was caused by my recent changes that try to fault in the whole
folio where possible:

	19773df031bc ("mm/fault: try to map the entire file folio in finish_fault()")
	357b92761d94 ("mm/filemap: map entire large folio faultaround")

These changes did not consider i_size when setting up PTEs, leading to
xfstests breakage.

However, the problem has been present in the kernel for a long time,
since huge tmpfs was introduced in 2016. The kernel happily maps
PMD-sized folios as PMD without checking i_size. And huge=always tmpfs
allocates PMD-size folios on any write.

I considered this corner case when I implemented large tmpfs, and my
conclusion was that no one in their right mind should rely on receiving
a SIGBUS signal when accessing beyond i_size. I cannot imagine how it
could be useful for the workload.

But apparently filesystem folks care a lot about preserving strict
SIGBUS semantics.

generic/749 was introduced last year with reference to POSIX, but no
real workloads were mentioned. It also acknowledged the tmpfs deviation
from the test case.

POSIX indeed says[3]:

	References within the address range starting at pa and
	continuing for len bytes to whole pages following the end of an
	object shall result in delivery of a SIGBUS signal.

The patchset fixes the regression introduced by the recent changes as
well as the more subtle SIGBUS breakage due to split failure on
truncation.

This patch (of 2):

Accesses within a VMA, but beyond i_size rounded up to PAGE_SIZE, are
supposed to generate SIGBUS.

Recent changes attempted to fault in the full folio where possible.
They did not respect i_size, which led to populating PTEs beyond i_size
and breaking SIGBUS semantics.

Darrick reported generic/749 breakage because of this.

However, the problem existed before the recent changes. With
huge=always tmpfs, any write to a file leads to a PMD-size allocation.
The subsequent fault-in of the folio will install a PMD mapping
regardless of i_size.

Fix filemap_map_pages() and finish_fault() to not install:

  - PTEs beyond i_size;
  - PMD mappings across i_size.

Make an exception for shmem/tmpfs, which has for a long time been
intentionally mapped with PMDs across i_size.

Link: https://lkml.kernel.org/r/20251027115636.82382-1-kirill@shutemov.name
Link: https://lkml.kernel.org/r/20251027115636.82382-2-kirill@shutemov.name
Signed-off-by: Kiryl Shutsemau
Fixes: 6795801366da ("xfs: Support large folios")
Reported-by: "Darrick J. Wong"
Cc: Al Viro
Cc: Baolin Wang
Cc: Christian Brauner
Cc: Dave Chinner
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Rik van Riel
Cc: Shakeel Butt
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Kiryl Shutsemau
Signed-off-by: Greg Kroah-Hartman
---
 mm/filemap.c |   20 +++++++++++++++-----
 mm/memory.c  |   24 +++++++++++++++++++++++-
 2 files changed, 38 insertions(+), 6 deletions(-)

--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3614,13 +3614,27 @@ vm_fault_t filemap_map_pages(struct vm_f
 	struct folio *folio;
 	vm_fault_t ret = 0;
 	unsigned int nr_pages = 0, mmap_miss = 0, mmap_miss_saved;
+	bool can_map_large;
 
 	rcu_read_lock();
 	folio = next_uptodate_folio(&xas, mapping, end_pgoff);
 	if (!folio)
 		goto out;
 
-	if (filemap_map_pmd(vmf, folio, start_pgoff)) {
+	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
+	end_pgoff = min(end_pgoff, file_end);
+
+	/*
+	 * Do not allow to map with PTEs beyond i_size and with PMD
+	 * across i_size to preserve SIGBUS semantics.
+	 *
+	 * Make an exception for shmem/tmpfs that for long time
+	 * intentionally mapped with PMDs across i_size.
+	 */
+	can_map_large = shmem_mapping(mapping) ||
+			file_end >= folio_next_index(folio);
+
+	if (can_map_large && filemap_map_pmd(vmf, folio, start_pgoff)) {
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
@@ -3633,10 +3647,6 @@ vm_fault_t filemap_map_pages(struct vm_f
 		goto out;
 	}
 
-	file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE) - 1;
-	if (end_pgoff > file_end)
-		end_pgoff = file_end;
-
 	do {
 		unsigned long end;
 
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -67,6 +67,7 @@
 #include
 #include
 #include
+#include <linux/shmem_fs.h>
 #include
 #include
 #include
@@ -4435,6 +4436,8 @@ static bool vmf_pte_changed(struct vm_fa
 vm_fault_t finish_fault(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	bool needs_fallback = false;
+	struct folio *folio;
 	struct page *page;
 	vm_fault_t ret;
 
@@ -4444,6 +4447,8 @@ vm_fault_t finish_fault(struct vm_fault
 	else
 		page = vmf->page;
 
+	folio = page_folio(page);
+
 	/*
 	 * check even for read faults because we might have lost our CoWed
 	 * page
@@ -4454,8 +4459,25 @@ vm_fault_t finish_fault(struct vm_fault
 			return ret;
 	}
 
+	if (!needs_fallback && vma->vm_file) {
+		struct address_space *mapping = vma->vm_file->f_mapping;
+		pgoff_t file_end;
+
+		file_end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
+
+		/*
+		 * Do not allow to map with PTEs beyond i_size and with PMD
+		 * across i_size to preserve SIGBUS semantics.
+		 *
+		 * Make an exception for shmem/tmpfs that for long time
+		 * intentionally mapped with PMDs across i_size.
+		 */
+		needs_fallback = !shmem_mapping(mapping) &&
+				 file_end < folio_next_index(folio);
+	}
+
 	if (pmd_none(*vmf->pmd)) {
-		if (PageTransCompound(page)) {
+		if (!needs_fallback && PageTransCompound(page)) {
 			ret = do_set_pmd(vmf, page);
 			if (ret != VM_FAULT_FALLBACK)
 				return ret;
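
As a standalone illustration of the check that finish_fault() now
performs (this is not part of the applied diff: PAGE_SIZE, DIV_ROUND_UP
and the folio geometry are stubbed out, and needs_fallback() is a
hypothetical helper mirroring the new local variable, not a kernel
API):

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL			/* stand-in for the kernel constant */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Fall back to per-page PTEs when the folio extends past i_size,
 * unless the mapping is shmem/tmpfs. "folio_index" and "folio_nr_pages"
 * stand in for folio->index and folio_nr_pages(folio).
 */
static bool needs_fallback(unsigned long long i_size, bool is_shmem,
			   unsigned long folio_index,
			   unsigned long folio_nr_pages)
{
	unsigned long long file_end = DIV_ROUND_UP(i_size, PAGE_SIZE);

	return !is_shmem && file_end < folio_index + folio_nr_pages;
}

int main(void)
{
	/* 2MB folio (512 pages) at index 0, i_size = 1MB: do not map as PMD. */
	printf("regular fs, folio past i_size: %d\n",
	       needs_fallback(1ULL << 20, false, 0, 512));
	/* Same layout on shmem/tmpfs keeps the historical PMD mapping. */
	printf("shmem, folio past i_size:      %d\n",
	       needs_fallback(1ULL << 20, true, 0, 512));
	/* Folio entirely within i_size: PMD mapping is fine. */
	printf("folio within i_size:           %d\n",
	       needs_fallback(4ULL << 20, false, 0, 512));
	return 0;
}

filemap_map_pages() applies the same idea, with file_end expressed as
the last valid page index, and additionally clamps end_pgoff so that no
PTEs are installed beyond i_size.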