From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 25 Mar 2026 21:10:37 +0200
From: Mike Rapoport
To: Andrew Morton
Cc: Jianhui Zhou, jane.chu@oracle.com, Muchun Song, Oscar Salvador,
	David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
	SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	stable@vger.kernel.org,
	syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Subject: Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
Message-ID: 
References: <20260306140332.171078-1-jianhuizzzzz@gmail.com>
	<20260310110526.335749-1-jianhuizzzzz@gmail.com>
	<12e822c4-a4f2-4447-80b9-2eec35a03188@oracle.com>
	<20260324170311.dc5b54fe0765f2e680e3cc90@linux-foundation.org>
In-Reply-To: <20260324170311.dc5b54fe0765f2e680e3cc90@linux-foundation.org>

On Tue, Mar 24, 2026 at 05:03:11PM -0700, Andrew Morton wrote:
> On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou wrote:
> 
> > On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> > > Just wondering whether making the shift explicit here instead of
> > > introducing another hugetlb helper might be sufficient?
> > > 
> > > 	idx >>= huge_page_order(hstate_vma(vma));
> > 
> > That would work for hugetlb VMAs since both (address - vm_start) and
> > vm_pgoff are guaranteed to be huge page aligned.  However, David
> > suggested introducing hugetlb_linear_page_index() to provide a cleaner
> > API that mirrors linear_page_index(), so I kept this approach.
> > 
> > Thanks.
> 
> Would anyone like to review this cc:stable patch for us?
> 
> 
> From: Jianhui Zhou
> Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
> Date: Tue, 10 Mar 2026 19:05:26 +0800
> 
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash().  However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units.  This mismatch means that different
> addresses within the same huge page can produce different hash values,
> leading to the use of different mutexes for the same huge page.  This can
> cause races between faulting threads, which can corrupt the reservation
> map and trigger the BUG_ON in resv_map_release().
> 
> Fix this by introducing hugetlb_linear_page_index(), which returns the
> page index in huge page granularity, and using it in place of
> linear_page_index().
> 
> Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
> Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
> Signed-off-by: Jianhui Zhou
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: Andrea Arcangeli
> Cc: David Hildenbrand
> Cc: Hugh Dickins
> Cc: Jonas Zhou
> Cc: Mike Rapoport
> Cc: Muchun Song
> Cc: Oscar Salvador
> Cc: Peter Xu
> Cc: SeongJae Park
> Cc: Sidhartha Kumar
> Cc: 
> Signed-off-by: Andrew Morton

Looks fine from the uffd perspective, and simple enough for stable@.
Acked-by: Mike Rapoport (Microsoft)

> ---
> 
>  include/linux/hugetlb.h |   17 +++++++++++++++++
>  mm/userfaultfd.c        |    2 +-
>  2 files changed, 18 insertions(+), 1 deletion(-)
> 
> --- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/include/linux/hugetlb.h
> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
>  	return h->order + PAGE_SHIFT;
>  }
>  
> +/**
> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
> + * page size granularity.
> + * @vma: the hugetlb VMA
> + * @address: the virtual address within the VMA
> + *
> + * Return: the page offset within the mapping in huge page units.
> + */
> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
> +						unsigned long address)
> +{
> +	struct hstate *h = hstate_vma(vma);
> +
> +	return ((address - vma->vm_start) >> huge_page_shift(h)) +
> +		(vma->vm_pgoff >> huge_page_order(h));
> +}
> +
>  static inline bool order_is_gigantic(unsigned int order)
>  {
>  	return order > MAX_PAGE_ORDER;
> --- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/mm/userfaultfd.c
> @@ -573,7 +573,7 @@ retry:
>  		 * in the case of shared pmds. fault mutex prevents
>  		 * races with other faulting threads.
>  		 */
> -		idx = linear_page_index(dst_vma, dst_addr);
> +		idx = hugetlb_linear_page_index(dst_vma, dst_addr);
>  		mapping = dst_vma->vm_file->f_mapping;
>  		hash = hugetlb_fault_mutex_hash(mapping, idx);
>  		mutex_lock(&hugetlb_fault_mutex_table[hash]);
> _

-- 
Sincerely yours,
Mike.