From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id E153726CE32;
	Mon, 13 Apr 2026 16:27:20 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1776097641; cv=none;
	b=Z8nR82QfwjVxCxq3yBHRhlGznF5k3AYcLkyLQwU4RqNeuV0AAfE3Fw40ms+WiZCm1o9KCuGe52dxeB4RUh7ZDCw8UoQsxSIqbFBwjO6h2jHYNcb621no0plVSDRO1qDBNVFx/1xrVQTcXjTB6bWVUZMEflb041wCvzmzyOKn4Ig=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1776097641; c=relaxed/simple;
	bh=fupDeiP3fy2UCToG6LE822mqZ+KhM9BVvLRBQHdj+28=;
	h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version;
	b=hGZGISpEFgtzJWODpsf8VtoCEtca9ICUJOFScHWxdWO8HlICFraaAPdw5MMDUQCjGythH9PsHhoQ/L7P2zFwR2z23u1K98rk2p3W+8rF5LKBC8l87vXTqF7wVyo6SzwF3Vv7V8bFWuv2vHzNC7zCCwKr0HWA5V6P/vtkrJu6I8I=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b=R043QSV0;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org header.i=@linuxfoundation.org header.b="R043QSV0"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 774E1C2BCAF;
	Mon, 13 Apr 2026 16:27:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1776097640;
	bh=fupDeiP3fy2UCToG6LE822mqZ+KhM9BVvLRBQHdj+28=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=R043QSV08K9VfbVqGotjQ7RklLrhXp+icH6LJwEiPdMfSCqZ1GVKPOHflgPG5d0x9
	 kWmpvRi77hjftwv+sk5rXI9sNPIyXPtpLoO3yJFb1b360tEWhdLKVwePpsxF8eWwhC
	 2ElXy0cLy/AQ0Tnhc6PYx2bADCPZipOU9XYMMH+M=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Jane Chu,
	Harry Yoo,
	Oscar Salvador,
	David Hildenbrand,
	Jann Horn,
	Liu Shixin,
	Muchun Song,
	Andrew Morton,
	"David Hildenbrand (Arm)"
Subject: [PATCH 5.15 215/570] mm/hugetlb: fix copy_hugetlb_page_range() to use ->pt_share_count
Date: Mon, 13 Apr 2026 17:55:46 +0200
Message-ID: <20260413155838.509457242@linuxfoundation.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260413155830.386096114@linuxfoundation.org>
References: <20260413155830.386096114@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
List-Id:
List-Subscribe:
List-Unsubscribe:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jane Chu

commit 14967a9c7d247841b0312c48dcf8cd29e55a4cc8 upstream.

commit 59d9094df3d79 ("mm: hugetlb: independent PMD page table shared
count") introduced ->pt_share_count, dedicated to hugetlb PMD share
count tracking, but omitted fixing copy_hugetlb_page_range(), leaving
the function relying on page_count() for sharing detection, which no
longer works.

When lazy page table copying for hugetlb is disabled, that is, with
commit bcd51a3c679d ("hugetlb: lazy page table copies in fork()")
reverted, fork()'ing with hugetlb PMD sharing quickly locks up -

[ 239.446559] watchdog: BUG: soft lockup - CPU#75 stuck for 27s!
[ 239.446611] RIP: 0010:native_queued_spin_lock_slowpath+0x7e/0x2e0
[ 239.446631] Call Trace:
[ 239.446633]  <TASK>
[ 239.446636]  _raw_spin_lock+0x3f/0x60
[ 239.446639]  copy_hugetlb_page_range+0x258/0xb50
[ 239.446645]  copy_page_range+0x22b/0x2c0
[ 239.446651]  dup_mmap+0x3e2/0x770
[ 239.446654]  dup_mm.constprop.0+0x5e/0x230
[ 239.446657]  copy_process+0xd17/0x1760
[ 239.446660]  kernel_clone+0xc0/0x3e0
[ 239.446661]  __do_sys_clone+0x65/0xa0
[ 239.446664]  do_syscall_64+0x82/0x930
[ 239.446668]  ? count_memcg_events+0xd2/0x190
[ 239.446671]  ? syscall_trace_enter+0x14e/0x1f0
[ 239.446676]  ? syscall_exit_work+0x118/0x150
[ 239.446677]  ? arch_exit_to_user_mode_prepare.constprop.0+0x9/0xb0
[ 239.446681]  ? clear_bhb_loop+0x30/0x80
[ 239.446684]  ? clear_bhb_loop+0x30/0x80
[ 239.446686]  entry_SYSCALL_64_after_hwframe+0x76/0x7e

There are two options to resolve the potential latent issue:
  1. warn against PMD sharing in copy_hugetlb_page_range(),
  2. fix it.
This patch opts for the second option.  While at it, simplify the
comment; the details are no longer relevant.

Link: https://lkml.kernel.org/r/20250916004520.1604530-1-jane.chu@oracle.com
Fixes: 59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count")
Signed-off-by: Jane Chu
Reviewed-by: Harry Yoo
Acked-by: Oscar Salvador
Acked-by: David Hildenbrand
Cc: Jann Horn
Cc: Liu Shixin
Cc: Muchun Song
Signed-off-by: Andrew Morton
[ David: We don't have ptdesc and the wrappers, so work directly on the
  page->pt_share_count. CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING is still
  called CONFIG_ARCH_WANT_HUGE_PMD_SHARE. ]
Signed-off-by: David Hildenbrand (Arm)
Signed-off-by: Greg Kroah-Hartman
---
 mm/hugetlb.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4341,16 +4341,11 @@ int copy_hugetlb_page_range(struct mm_st
 			break;
 		}
 
-		/*
-		 * If the pagetables are shared don't copy or take references.
-		 *
-		 * dst_pte == src_pte is the common case of src/dest sharing.
-		 * However, src could have 'unshared' and dst shares with
-		 * another vma. So page_count of ptep page is checked instead
-		 * to reliably determine whether pte is shared.
-		 */
-		if (page_count(virt_to_page(dst_pte)) > 1)
+#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE
+		/* If the pagetables are shared, there is nothing to do */
+		if (atomic_read(&virt_to_page(dst_pte)->pt_share_count))
 			continue;
+#endif
 
 		dst_ptl = huge_pte_lock(h, dst, dst_pte);
 		src_ptl = huge_pte_lockptr(h, src, src_pte);
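
For readers following the change, here is a minimal standalone model of the
idea behind the fix. It is not kernel code: struct pt_page, pt_share() and
table_is_shared() are invented illustration names, and the real kernel
counter lives in struct page and is maintained by the PMD-sharing code.
The point it sketches is the one the changelog makes: after commit
59d9094df3d7, sharing a PMD page table no longer elevates the page's
general refcount, so the old "page_count() > 1" test never fires, while a
dedicated counter that moves only on share/unshare still answers the
question.

/*
 * Standalone sketch (hypothetical names, not kernel code) of a dedicated
 * share counter versus a general refcount for "is this table shared?".
 * Build: cc -std=c11 -o pt_share_model pt_share_model.c
 */
#include <stdatomic.h>
#include <stdio.h>

struct pt_page {
	atomic_int refcount;       /* general refcount: pins, GUP, ... */
	atomic_int pt_share_count; /* PMD-table sharers only */
};

/* A second VMA starts/stops sharing the table: only the dedicated
 * counter moves, the general refcount is untouched. */
static void pt_share(struct pt_page *p)   { atomic_fetch_add(&p->pt_share_count, 1); }
static void pt_unshare(struct pt_page *p) { atomic_fetch_sub(&p->pt_share_count, 1); }

/* Mirrors the fixed check: a shared table needs no copy in fork(). */
static int table_is_shared(const struct pt_page *p)
{
	return atomic_load(&p->pt_share_count) != 0;
}

int main(void)
{
	struct pt_page table = { 1, 0 };

	pt_share(&table); /* a second VMA now shares the PMD table */

	/* The old-style test misses the sharing (refcount is still 1),
	 * which is why copy_hugetlb_page_range() went on to lock what
	 * is really the same PTE lock twice; the new test catches it. */
	printf("old page_count()>1 test: %d\n",
	       atomic_load(&table.refcount) > 1);
	printf("new pt_share_count test: %d\n", table_is_shared(&table));

	pt_unshare(&table);
	return 0;
}

Run as written, the model prints 0 for the refcount-based test and 1 for
the share-counter test, matching the failure mode the trace above shows:
the sharing went undetected, so fork() tried to copy a shared table.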