From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id D79E42F12B4;
	Fri, 17 Oct 2025 15:25:36 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1760714736; cv=none;
	b=hCy33cyc3QSK5tm26J6MV0rQqChK5YnZ/vOCfsDbmLcSX9wzYNQbIm6v0IpEY3EnfnVJHY+19QOBmqzjnLfxGv+KWccHAyRLf3YOS3gTf4h2gD7t20mBEUyiB1XIb7gIzI0P/b5Uoajow3/7GGpLk2z2Zy7Ti0VpRvGugco6B7c=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1760714736; c=relaxed/simple;
	bh=8u+WcFdeDgUdZPrD/84H96U385GQqcTzIbbiT7t7RzU=;
	h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version;
	b=ah0peO7pgQN2Z/3hohkssTXdnIHQ16T1Ppz7rR30wEIavWpigvmsh3hXkbonCRYvhza8d4SYLL5+snx9TZrPqBssoXDSXJqwwlr9IDj7bcJ2jozJMzH3EwnAZdLfm+tm4W/fpLXF9ieJhQKhesSTZ2ZknvMA1EKgF54nurVaYJE=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b=qW9OwIq9;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key)
	header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="qW9OwIq9"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EB5DEC4CEE7;
	Fri, 17 Oct 2025 15:25:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1760714736;
	bh=8u+WcFdeDgUdZPrD/84H96U385GQqcTzIbbiT7t7RzU=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=qW9OwIq9rC/PgpZGdoOfGaoRPFql16s0XXABMMGNnmy++vmPIEACGJ6c3jP5qHeBU
	 loWJP7G/xkoH2J5sP6QN26cr2TXdmAuwfg8wBmZjxh1yxhA9sqOyfT7HEXU5OB8mDJ
	 Z2/GdjwJx9Mx54hQhlPzf2G3NruySghgluAvDNYk=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
 Greg Kroah-Hartman, patches@lists.linux.dev, Lance Yang, Qun-wei Lin,
 David Hildenbrand, Zi Yan, Usama Arif, Catalin Marinas, Wei Yang,
 Alistair Popple, "andrew.yang", Baolin Wang, Barry Song, Byungchul Park,
 Charlie Jenkins, Chinwen Chang, Dev Jain, Domenico Cerasuolo,
 Gregory Price, "Huang, Ying", Hugh Dickins, Johannes Weiner,
 Joshua Hahn, Kairui Song, Kalesh Singh, Liam Howlett, Lorenzo Stoakes,
 Mariano Pache, Mathew Brost, "Matthew Wilcox (Oracle)", Mike Rapoport,
 Palmer Dabbelt, Rakie Kim, Rik van Riel, Roman Gushchin, Ryan Roberts,
 Samuel Holland, Shakeel Butt, Suren Baghdasaryan, Yu Zhao,
 Andrew Morton
Subject: [PATCH 6.12 205/277] mm/thp: fix MTE tag mismatch when replacing zero-filled subpages
Date: Fri, 17 Oct 2025 16:53:32 +0200
Message-ID: <20251017145154.609266655@linuxfoundation.org>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251017145147.138822285@linuxfoundation.org>
References: <20251017145147.138822285@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch. If anyone has any objections, please let me know.

------------------

From: Lance Yang

commit 1ce6473d17e78e3cb9a40147658231731a551828 upstream.

When both THP and MTE are enabled, splitting a THP and replacing its
zero-filled subpages with the shared zeropage can cause MTE tag mismatch
faults in userspace.

Remapping zero-filled subpages to the shared zeropage is unsafe, as the
zeropage has a fixed tag of zero, which may not match the tag expected by
the userspace pointer.

KSM already avoids this problem by using memcmp_pages(), which on arm64
intentionally reports MTE-tagged pages as non-identical to prevent unsafe
merging.
As suggested by David[1], this patch adopts the same pattern, replacing
the memchr_inv() byte-level check with a call to pages_identical().  This
leverages existing architecture-specific logic to determine if a page is
truly identical to the shared zeropage.

Having both the THP shrinker and KSM rely on pages_identical() makes the
design more future-proof, IMO.  Instead of handling quirks in generic
code, we just let the architecture decide what makes two pages identical.

[1] https://lore.kernel.org/all/ca2106a3-4bb2-4457-81af-301fd99fbef4@redhat.com

Link: https://lkml.kernel.org/r/20250922021458.68123-1-lance.yang@linux.dev
Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Signed-off-by: Lance Yang
Reported-by: Qun-wei Lin
Closes: https://lore.kernel.org/all/a7944523fcc3634607691c35311a5d59d1a3f8d4.camel@mediatek.com
Suggested-by: David Hildenbrand
Acked-by: Zi Yan
Acked-by: David Hildenbrand
Acked-by: Usama Arif
Reviewed-by: Catalin Marinas
Reviewed-by: Wei Yang
Cc: Alistair Popple
Cc: andrew.yang
Cc: Baolin Wang
Cc: Barry Song
Cc: Byungchul Park
Cc: Charlie Jenkins
Cc: Chinwen Chang
Cc: Dev Jain
Cc: Domenico Cerasuolo
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Joshua Hahn
Cc: Kairui Song
Cc: Kalesh Singh
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Mariano Pache
Cc: Mathew Brost
Cc: Matthew Wilcox (Oracle)
Cc: Mike Rapoport
Cc: Palmer Dabbelt
Cc: Rakie Kim
Cc: Rik van Riel
Cc: Roman Gushchin
Cc: Ryan Roberts
Cc: Samuel Holland
Cc: Shakeel Butt
Cc: Suren Baghdasaryan
Cc: Yu Zhao
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/huge_memory.c |   15 +++------------
 mm/migrate.c     |    8 +-------
 2 files changed, 4 insertions(+), 19 deletions(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3715,32 +3715,23 @@ static unsigned long deferred_split_coun
 static bool thp_underused(struct folio *folio)
 {
 	int num_zero_pages = 0, num_filled_pages = 0;
-	void *kaddr;
 	int i;
 
 	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
 		return false;
 
 	for (i = 0; i < folio_nr_pages(folio); i++) {
-		kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
-		if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
-			num_zero_pages++;
-			if (num_zero_pages > khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
+			if (++num_zero_pages > khugepaged_max_ptes_none)
 				return true;
-			}
 		} else {
 			/*
 			 * Another path for early exit once the number
 			 * of non-zero filled pages exceeds threshold.
 			 */
-			num_filled_pages++;
-			if (num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-				kunmap_local(kaddr);
+			if (++num_filled_pages >= HPAGE_PMD_NR - khugepaged_max_ptes_none)
 				return false;
-			}
 		}
-		kunmap_local(kaddr);
 	}
 	return false;
 }
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -202,9 +202,7 @@ static bool try_to_map_unused_to_zeropag
 			unsigned long idx)
 {
 	struct page *page = folio_page(folio, idx);
-	bool contains_data;
 	pte_t newpte;
-	void *addr;
 
 	if (PageCompound(page))
 		return false;
@@ -221,11 +219,7 @@ static bool try_to_map_unused_to_zeropag
 	 * this subpage has been non present. If the subpage is only zero-filled
 	 * then map it to the shared zeropage.
 	 */
-	addr = kmap_local_page(page);
-	contains_data = memchr_inv(addr, 0, PAGE_SIZE);
-	kunmap_local(addr);
-
-	if (contains_data)
+	if (!pages_identical(page, ZERO_PAGE(0)))
 		return false;
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),