From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3F72ADF59;
	Wed, 25 Mar 2026 00:03:12 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774396993; cv=none;
	b=eHLjQSMasueihvMBbf6HV8o+m9pAUkebtaonjIq21ywfQ8USSPx40hC1DASijrw5Hw5vcU/reEwEgSFFuyicd6EM3mp3yl2HgkmpbnKZG/eQxPjljrYGyZWFhaIi89zhgJ62hXpKTj0cvEtHDIxeZZ05at4u34jBChDS1QhDwkQ=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774396993;
	c=relaxed/simple;
	bh=9f1wmmDRQ/xST5bhgNGmjzcn3IciZmk6Nk6ssBF2BL4=;
	h=Date:From:To:Cc:Subject:Message-Id:In-Reply-To:References:Mime-Version:Content-Type;
	b=g3lx4frmZmrzdfggy7pul39xgPcHyPBgScD7vN+5wqkFiFxMkgRfIlTq/L+Nk/b5uKNgbkocLiEr57LldqljJ74ckxzAZCAd4ukFgdbMyS2r6v7kGmKppDot3/OAJhX0N3R75NoqHUL0ZNMpRPJw/L4C1J5DbwWJ5r/nTp1MGWw=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linux-foundation.org header.i=@linux-foundation.org header.b=TRskfX5O;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linux-foundation.org header.i=@linux-foundation.org header.b="TRskfX5O"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 45AD0C19424;
	Wed, 25 Mar 2026 00:03:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1774396992;
	bh=9f1wmmDRQ/xST5bhgNGmjzcn3IciZmk6Nk6ssBF2BL4=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=TRskfX5O3wTobE3L7tFsGmIeqmyMGpwrGJ6xjnepIeXmhMkDOQdMYtdjnoFJwU3WR
	 W0OO14ZyvCUXs267Rh00QnZGAhBMu6OpK/4ZU9l5xnCNiZ9Ar4VCWqu+FK4qFI48qu
	 qNWx8LsYrHLs4HTwFcfXlIKbWkn/VM2pqHBTquF4=
Date: Tue, 24 Mar 2026 17:03:11 -0700
From: Andrew Morton
To: Jianhui Zhou
Cc: jane.chu@oracle.com, Muchun Song, Oscar Salvador, Mike Rapoport,
	David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
	SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	stable@vger.kernel.org,
	syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Subject: Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
Message-Id: <20260324170311.dc5b54fe0765f2e680e3cc90@linux-foundation.org>
In-Reply-To:
References: <20260306140332.171078-1-jianhuizzzzz@gmail.com>
	<20260310110526.335749-1-jianhuizzzzz@gmail.com>
	<12e822c4-a4f2-4447-80b9-2eec35a03188@oracle.com>
X-Mailer: Sylpheed 3.8.0beta1 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
List-Id:
List-Subscribe:
List-Unsubscribe:
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou wrote:

> On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> > Just wondering whether making the shift explicit here instead of
> > introducing another hugetlb helper might be sufficient?
> >
> > 	idx >>= huge_page_order(hstate_vma(vma));
>
> That would work for hugetlb VMAs since both (address - vm_start) and
> vm_pgoff are guaranteed to be huge page aligned.  However, David
> suggested introducing hugetlb_linear_page_index() to provide a cleaner
> API that mirrors linear_page_index(), so I kept this approach.

Thanks.  Would anyone like to review this cc:stable patch for us?


From: Jianhui Zhou
Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
Date: Tue, 10 Mar 2026 19:05:26 +0800

In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash().  However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units.
This mismatch means that different addresses within the same huge page
can produce different hash values, leading to the use of different
mutexes for the same huge page.  This can cause races between faulting
threads, which can corrupt the reservation map and trigger the BUG_ON in
resv_map_release().

Fix this by introducing hugetlb_linear_page_index(), which returns the
page index in huge page granularity, and using it in place of
linear_page_index().

Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Signed-off-by: Jianhui Zhou
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: Andrea Arcangeli
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: JonasZhou
Cc: Mike Rapoport
Cc: Muchun Song
Cc: Oscar Salvador
Cc: Peter Xu
Cc: SeongJae Park
Cc: Sidhartha Kumar
Cc:
Signed-off-by: Andrew Morton
---

 include/linux/hugetlb.h |   17 +++++++++++++++++
 mm/userfaultfd.c        |    2 +-
 2 files changed, 18 insertions(+), 1 deletion(-)

--- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
+++ a/include/linux/hugetlb.h
@@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
 	return h->order + PAGE_SHIFT;
 }
 
+/**
+ * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
+ * page size granularity.
+ * @vma: the hugetlb VMA
+ * @address: the virtual address within the VMA
+ *
+ * Return: the page offset within the mapping in huge page units.
+ */
+static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
+						unsigned long address)
+{
+	struct hstate *h = hstate_vma(vma);
+
+	return ((address - vma->vm_start) >> huge_page_shift(h)) +
+		(vma->vm_pgoff >> huge_page_order(h));
+}
+
 static inline bool order_is_gigantic(unsigned int order)
 {
 	return order > MAX_PAGE_ORDER;
--- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
+++ a/mm/userfaultfd.c
@@ -573,7 +573,7 @@ retry:
 		 * in the case of shared pmds.  fault mutex prevents
 		 * races with other faulting threads.
 		 */
-		idx = linear_page_index(dst_vma, dst_addr);
+		idx = hugetlb_linear_page_index(dst_vma, dst_addr);
 		mapping = dst_vma->vm_file->f_mapping;
 		hash = hugetlb_fault_mutex_hash(mapping, idx);
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
_