* [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
@ 2026-03-06 14:03 Jianhui Zhou
2026-03-06 16:53 ` Peter Xu
` (4 more replies)
0 siblings, 5 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-06 14:03 UTC (permalink / raw)
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
linux-mm, linux-kernel, Jonas Zhou, Jianhui Zhou,
syzbot+f525fd79634858f478e7, stable
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units (as calculated by
vma_hugecache_offset()). This mismatch means that different addresses
within the same huge page can produce different hash values, leading to
the use of different mutexes for the same huge page. This can cause
races between faulting threads, which can corrupt the reservation map
and trigger the BUG_ON in resv_map_release().
Fix this by replacing linear_page_index() with vma_hugecache_offset()
and applying huge_page_mask() to align the address properly. To make
vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
include/linux/hugetlb.h as a static inline function.
Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
---
include/linux/hugetlb.h | 17 +++++++++++++++++
mm/hugetlb.c | 11 -----------
mm/userfaultfd.c | 5 ++++-
3 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..3f994f3e839c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+/*
+ * Convert the address within this vma to the page offset within
+ * the mapping, huge page units here.
+ */
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
@@ -1197,6 +1208,12 @@ static inline unsigned int huge_page_shift(struct hstate *h)
return PAGE_SHIFT;
}
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return linear_page_index(vma, address);
+}
+
static inline bool hstate_is_gigantic(struct hstate *h)
{
return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..b87ed652c748 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
return chg;
}
-/*
- * Convert the address within this vma to the page offset within
- * the mapping, huge page units here.
- */
-static pgoff_t vma_hugecache_offset(struct hstate *h,
- struct vm_area_struct *vma, unsigned long address)
-{
- return ((address - vma->vm_start) >> huge_page_shift(h)) +
- (vma->vm_pgoff >> huge_page_order(h));
-}
-
/**
* vma_kernel_pagesize - Page size granularity for this VMA.
* @vma: The user mapping.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..8efebc47a410 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
pgoff_t idx;
u32 hash;
struct address_space *mapping;
+ struct hstate *h;
/*
* There is no default zero huge page for all huge page sizes as
@@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
goto out_unlock;
}
+ h = hstate_vma(dst_vma);
+
while (src_addr < src_start + len) {
VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
@@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread

* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
@ 2026-03-06 16:53 ` Peter Xu
2026-03-07 13:37 ` 周建辉
2026-03-07 13:59 ` Jianhui Zhou
2026-03-07 3:27 ` SeongJae Park
` (3 subsequent siblings)
4 siblings, 2 replies; 27+ messages in thread
From: Peter Xu @ 2026-03-06 16:53 UTC (permalink / raw)
To: Jianhui Zhou
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
David Hildenbrand, Andrea Arcangeli, Mike Kravetz, linux-mm,
linux-kernel, Jonas Zhou, syzbot+f525fd79634858f478e7, stable
On Fri, Mar 06, 2026 at 10:03:32PM +0800, Jianhui Zhou wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units (as calculated by
> vma_hugecache_offset()). This mismatch means that different addresses
> within the same huge page can produce different hash values, leading to
> the use of different mutexes for the same huge page. This can cause
> races between faulting threads, which can corrupt the reservation map
> and trigger the BUG_ON in resv_map_release().
>
> Fix this by replacing linear_page_index() with vma_hugecache_offset()
> and applying huge_page_mask() to align the address properly. To make
> vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
> include/linux/hugetlb.h as a static inline function.
>
> Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: stable@vger.kernel.org
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
Good catch.. only one trivial comment below.
> ---
> include/linux/hugetlb.h | 17 +++++++++++++++++
> mm/hugetlb.c | 11 -----------
> mm/userfaultfd.c | 5 ++++-
> 3 files changed, 21 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 65910437be1c..3f994f3e839c 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
> return h->order + PAGE_SHIFT;
> }
>
> +/*
> + * Convert the address within this vma to the page offset within
> + * the mapping, huge page units here.
> + */
> +static inline pgoff_t vma_hugecache_offset(struct hstate *h,
> + struct vm_area_struct *vma, unsigned long address)
> +{
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
> +
> static inline bool order_is_gigantic(unsigned int order)
> {
> return order > MAX_PAGE_ORDER;
> @@ -1197,6 +1208,12 @@ static inline unsigned int huge_page_shift(struct hstate *h)
> return PAGE_SHIFT;
> }
>
> +static inline pgoff_t vma_hugecache_offset(struct hstate *h,
> + struct vm_area_struct *vma, unsigned long address)
> +{
> + return linear_page_index(vma, address);
> +}
IIUC we don't need this; the userfaultfd.c reference should only happen
when CONFIG_HUGETLB_PAGE. Please double check.
Thanks,
> +
> static inline bool hstate_is_gigantic(struct hstate *h)
> {
> return false;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0beb6e22bc26..b87ed652c748 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
> return chg;
> }
>
> -/*
> - * Convert the address within this vma to the page offset within
> - * the mapping, huge page units here.
> - */
> -static pgoff_t vma_hugecache_offset(struct hstate *h,
> - struct vm_area_struct *vma, unsigned long address)
> -{
> - return ((address - vma->vm_start) >> huge_page_shift(h)) +
> - (vma->vm_pgoff >> huge_page_order(h));
> -}
> -
> /**
> * vma_kernel_pagesize - Page size granularity for this VMA.
> * @vma: The user mapping.
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 927086bb4a3c..8efebc47a410 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> pgoff_t idx;
> u32 hash;
> struct address_space *mapping;
> + struct hstate *h;
>
> /*
> * There is no default zero huge page for all huge page sizes as
> @@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> goto out_unlock;
> }
>
> + h = hstate_vma(dst_vma);
> +
> while (src_addr < src_start + len) {
> VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
>
> @@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> * in the case of shared pmds. fault mutex prevents
> * races with other faulting threads.
> */
> - idx = linear_page_index(dst_vma, dst_addr);
> + idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
> mapping = dst_vma->vm_file->f_mapping;
> hash = hugetlb_fault_mutex_hash(mapping, idx);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> --
> 2.43.0
>
--
Peter Xu
^ permalink raw reply [flat|nested] 27+ messages in thread

* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 16:53 ` Peter Xu
@ 2026-03-07 13:37 ` 周建辉
2026-03-07 13:59 ` Jianhui Zhou
1 sibling, 0 replies; 27+ messages in thread
From: 周建辉 @ 2026-03-07 13:37 UTC (permalink / raw)
To: Peter Xu
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
David Hildenbrand, Andrea Arcangeli, Mike Kravetz, linux-mm,
linux-kernel, Jonas Zhou, syzbot+f525fd79634858f478e7, stable
On Fri, Mar 06, 2026 at 04:53:00PM +0000, Peter Xu wrote:
> IIUC we don't need this; the userfaultfd.c reference should only happen
> when CONFIG_HUGETLB_PAGE. Please double check.
You are right. mfill_atomic_hugetlb() is guarded by
#ifdef CONFIG_HUGETLB_PAGE in mm/userfaultfd.c, so the stub under
!CONFIG_HUGETLB_PAGE is not needed. I will remove it in v2.
Thanks for the review!
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 16:53 ` Peter Xu
2026-03-07 13:37 ` 周建辉
@ 2026-03-07 13:59 ` Jianhui Zhou
1 sibling, 0 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-07 13:59 UTC (permalink / raw)
To: peterx
Cc: aarcange, akpm, david, jianhuizzzzz, jonaszhou, linux-kernel,
linux-mm, mike.kravetz, muchun.song, osalvador, rppt, stable,
syzbot+f525fd79634858f478e7
On Fri, Mar 06, 2026 at 04:53:00PM +0000, Peter Xu wrote:
> IIUC we don't need this; the userfaultfd.c reference should only happen
> when CONFIG_HUGETLB_PAGE. Please double check.
You are right. mfill_atomic_hugetlb() is guarded by
#ifdef CONFIG_HUGETLB_PAGE in mm/userfaultfd.c, so the stub under
!CONFIG_HUGETLB_PAGE is not needed. I will remove it in v2.
Thanks for the review!
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
2026-03-06 16:53 ` Peter Xu
@ 2026-03-07 3:27 ` SeongJae Park
2026-03-08 13:41 ` Jianhui Zhou
2026-03-07 14:35 ` [PATCH v2] " Jianhui Zhou
` (2 subsequent siblings)
4 siblings, 1 reply; 27+ messages in thread
From: SeongJae Park @ 2026-03-07 3:27 UTC (permalink / raw)
To: Jianhui Zhou
Cc: SeongJae Park, Muchun Song, Oscar Salvador, Andrew Morton,
Mike Rapoport, David Hildenbrand, Peter Xu, Andrea Arcangeli,
Mike Kravetz, linux-mm, linux-kernel, Jonas Zhou,
syzbot+f525fd79634858f478e7, stable
Hello Jianhui,
On Fri, 6 Mar 2026 22:03:32 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units (as calculated by
> vma_hugecache_offset()). This mismatch means that different addresses
> within the same huge page can produce different hash values, leading to
> the use of different mutexes for the same huge page. This can cause
> races between faulting threads, which can corrupt the reservation map
> and trigger the BUG_ON in resv_map_release().
>
> Fix this by replacing linear_page_index() with vma_hugecache_offset()
> and applying huge_page_mask() to align the address properly. To make
> vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
> include/linux/hugetlb.h as a static inline function.
>
> Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: stable@vger.kernel.org
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> ---
[...]
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
[...]
> +static inline pgoff_t vma_hugecache_offset(struct hstate *h,
> + struct vm_area_struct *vma, unsigned long address)
> +{
> + return linear_page_index(vma, address);
> +}
> +
I just found this patch makes the UML build fail as below.
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=8
ERROR:root:In file included from ../io_uring/rsrc.c:9:
../include/linux/hugetlb.h: In function ‘vma_hugecache_offset’:
../include/linux/hugetlb.h:1214:16: error: implicit declaration of function ‘linear_page_index’ [-Wimplicit-function-declaration]
1214 | return linear_page_index(vma, address);
| ^~~~~~~~~~~~~~~~~
Maybe we need to include pagemap.h? I confirmed the patch attached below
fixes the error on my setup.
Thanks,
SJ
[...]
=== >8 ===
From f55581ba154d6c8aaaf1f1d33cc317b5bf463147 Mon Sep 17 00:00:00 2001
From: SeongJae Park <sj@kernel.org>
Date: Fri, 6 Mar 2026 19:23:28 -0800
Subject: [PATCH] mm/hugetlb: include pagemap.h to fix build error
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Without this, UML build fails as below:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=8
ERROR:root:In file included from ../io_uring/rsrc.c:9:
../include/linux/hugetlb.h: In function ‘vma_hugecache_offset’:
../include/linux/hugetlb.h:1214:16: error: implicit declaration of function ‘linear_page_index’ [-Wimplicit-function-declaration]
1214 | return linear_page_index(vma, address);
| ^~~~~~~~~~~~~~~~~
Signed-off-by: SeongJae Park <sj@kernel.org>
---
include/linux/hugetlb.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3f994f3e839cf..63426bd716839 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -15,6 +15,7 @@
#include <linux/gfp.h>
#include <linux/userfaultfd_k.h>
#include <linux/nodemask.h>
+#include <linux/pagemap.h>
struct mmu_gather;
struct node;
--
2.47.3
^ permalink raw reply related [flat|nested] 27+ messages in thread

* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-07 3:27 ` SeongJae Park
@ 2026-03-08 13:41 ` Jianhui Zhou
2026-03-08 22:57 ` SeongJae Park
0 siblings, 1 reply; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-08 13:41 UTC (permalink / raw)
To: sj
Cc: aarcange, akpm, david, jianhuizzzzz, jonaszhou, linux-kernel,
linux-mm, mike.kravetz, muchun.song, osalvador, peterx, rppt,
stable, syzbot+f525fd79634858f478e7
On Sat, Mar 07, 2026 at 03:27:00AM +0000, SeongJae Park wrote:
> I just found this patch makes the UML build fail as below.
>
> ../include/linux/hugetlb.h:1214:16: error: implicit declaration of
> function 'linear_page_index' [-Wimplicit-function-declaration]
Thanks for catching this! As Peter pointed out, the
!CONFIG_HUGETLB_PAGE stub is actually unnecessary since
mfill_atomic_hugetlb() is only compiled when CONFIG_HUGETLB_PAGE is
enabled. I have removed it in v2, which also fixes this build error.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-08 13:41 ` Jianhui Zhou
@ 2026-03-08 22:57 ` SeongJae Park
0 siblings, 0 replies; 27+ messages in thread
From: SeongJae Park @ 2026-03-08 22:57 UTC (permalink / raw)
To: Jianhui Zhou
Cc: SeongJae Park, aarcange, akpm, david, jonaszhou, linux-kernel,
linux-mm, mike.kravetz, muchun.song, osalvador, peterx, rppt,
stable, syzbot+f525fd79634858f478e7
On Sun, 8 Mar 2026 21:41:51 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
> On Sat, Mar 07, 2026 at 03:27:00AM +0000, SeongJae Park wrote:
> > I just found this patch makes the UML build fail as below.
> >
> > ../include/linux/hugetlb.h:1214:16: error: implicit declaration of
> > function 'linear_page_index' [-Wimplicit-function-declaration]
>
> Thanks for catching this! As Peter pointed out, the
> !CONFIG_HUGETLB_PAGE stub is actually unnecessary since
> mfill_atomic_hugetlb() is only compiled when CONFIG_HUGETLB_PAGE is
> enabled. I have removed it in v2, which also fixes this build error.
Makes sense, thank you for the kind update, Jianhui!
I also confirmed that mm-new, updated with the v2, no longer hits the build error.
Thanks,
SJ
[...]
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
2026-03-06 16:53 ` Peter Xu
2026-03-07 3:27 ` SeongJae Park
@ 2026-03-07 14:35 ` Jianhui Zhou
2026-03-09 2:08 ` Hugh Dickins
2026-03-09 16:47 ` David Hildenbrand (Arm)
2026-03-09 3:30 ` [PATCH v3] " Jianhui Zhou
2026-03-10 11:05 ` [PATCH v4] " Jianhui Zhou
4 siblings, 2 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-07 14:35 UTC (permalink / raw)
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Jonas Zhou, linux-mm, linux-kernel, stable,
syzbot+f525fd79634858f478e7, Jianhui Zhou
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units (as calculated by
vma_hugecache_offset()). This mismatch means that different addresses
within the same huge page can produce different hash values, leading to
the use of different mutexes for the same huge page. This can cause
races between faulting threads, which can corrupt the reservation map
and trigger the BUG_ON in resv_map_release().
Fix this by replacing linear_page_index() with vma_hugecache_offset()
and applying huge_page_mask() to align the address properly. To make
vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
include/linux/hugetlb.h as a static inline function.
Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
---
v2:
- Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
(Peter Xu, SeongJae Park)
include/linux/hugetlb.h | 11 +++++++++++
mm/hugetlb.c | 11 -----------
mm/userfaultfd.c | 5 ++++-
3 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..f003afe0cc91 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+/*
+ * Convert the address within this vma to the page offset within
+ * the mapping, huge page units here.
+ */
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..b87ed652c748 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
return chg;
}
-/*
- * Convert the address within this vma to the page offset within
- * the mapping, huge page units here.
- */
-static pgoff_t vma_hugecache_offset(struct hstate *h,
- struct vm_area_struct *vma, unsigned long address)
-{
- return ((address - vma->vm_start) >> huge_page_shift(h)) +
- (vma->vm_pgoff >> huge_page_order(h));
-}
-
/**
* vma_kernel_pagesize - Page size granularity for this VMA.
* @vma: The user mapping.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..8efebc47a410 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
pgoff_t idx;
u32 hash;
struct address_space *mapping;
+ struct hstate *h;
/*
* There is no default zero huge page for all huge page sizes as
@@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
goto out_unlock;
}
+ h = hstate_vma(dst_vma);
+
while (src_addr < src_start + len) {
VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
@@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread

* Re: [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-07 14:35 ` [PATCH v2] " Jianhui Zhou
@ 2026-03-09 2:08 ` Hugh Dickins
2026-03-09 3:08 ` Jianhui Zhou
2026-03-09 16:47 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 27+ messages in thread
From: Hugh Dickins @ 2026-03-09 2:08 UTC (permalink / raw)
To: Jianhui Zhou
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Jonas Zhou, Sidhartha Kumar, linux-mm,
linux-kernel, stable, syzbot+f525fd79634858f478e7
On Sat, 7 Mar 2026, Jianhui Zhou wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units (as calculated by
> vma_hugecache_offset()). This mismatch means that different addresses
> within the same huge page can produce different hash values, leading to
> the use of different mutexes for the same huge page. This can cause
> races between faulting threads, which can corrupt the reservation map
> and trigger the BUG_ON in resv_map_release().
>
> Fix this by replacing linear_page_index() with vma_hugecache_offset()
> and applying huge_page_mask() to align the address properly. To make
> vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
> include/linux/hugetlb.h as a static inline function.
>
> Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
I have not thought it through, nor checked (someone else please do so
before this might reach stable trees); but I believe it's very likely
that that Fixes attribution to a 4.11 commit is wrong - more likely 6.7's
a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c").
Hugh
^ permalink raw reply [flat|nested] 27+ messages in thread* Re: [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-09 2:08 ` Hugh Dickins
@ 2026-03-09 3:08 ` Jianhui Zhou
0 siblings, 0 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-09 3:08 UTC (permalink / raw)
To: Hugh Dickins
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Jonas Zhou, Sidhartha Kumar, linux-mm,
linux-kernel, stable, syzbot+f525fd79634858f478e7
On Sun, Mar 08, 2026, Hugh Dickins wrote:
> I have not thought it through, nor checked (someone else please do so
> before this might reach stable trees); but I believe it's very likely
> that that Fixes attribution to a 4.11 commit is wrong - more likely 6.7's
> a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c").
You are right. Before a08c7193e4f1, linear_page_index() called
linear_hugepage_index() for hugetlb VMAs, which returned the index in
huge page units. The bug was introduced when a08c7193e4f1 removed that
special casing but missed updating the caller in mm/userfaultfd.c.
I will fix the Fixes tag in v3. Thanks!
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-07 14:35 ` [PATCH v2] " Jianhui Zhou
2026-03-09 2:08 ` Hugh Dickins
@ 2026-03-09 16:47 ` David Hildenbrand (Arm)
2026-03-10 10:24 ` Jianhui Zhou
1 sibling, 1 reply; 27+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-09 16:47 UTC (permalink / raw)
To: Jianhui Zhou, Muchun Song, Oscar Salvador, Andrew Morton,
Mike Rapoport
Cc: Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Jonas Zhou, linux-mm, linux-kernel, stable,
syzbot+f525fd79634858f478e7
On 3/7/26 15:35, Jianhui Zhou wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units (as calculated by
> vma_hugecache_offset()). This mismatch means that different addresses
> within the same huge page can produce different hash values, leading to
> the use of different mutexes for the same huge page. This can cause
> races between faulting threads, which can corrupt the reservation map
> and trigger the BUG_ON in resv_map_release().
>
> Fix this by replacing linear_page_index() with vma_hugecache_offset()
> and applying huge_page_mask() to align the address properly. To make
> vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
> include/linux/hugetlb.h as a static inline function.
>
> Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: stable@vger.kernel.org
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> ---
> v2:
> - Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
> (Peter Xu, SeongJae Park)
>
> include/linux/hugetlb.h | 11 +++++++++++
> mm/hugetlb.c | 11 -----------
> mm/userfaultfd.c | 5 ++++-
> 3 files changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 65910437be1c..f003afe0cc91 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
> return h->order + PAGE_SHIFT;
> }
>
> +/*
> + * Convert the address within this vma to the page offset within
> + * the mapping, huge page units here.
> + */
> +static inline pgoff_t vma_hugecache_offset(struct hstate *h,
> + struct vm_area_struct *vma, unsigned long address)
> +{
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
It's hard to put my disgust about the terminology "hugecache" into
words. Not your fault, but we should do better :)
If you're starting to use that from other MM code than hugetlb.c, please
find a better name.
Further, I wonder whether we can avoid passing in "struct hstate *h" and
simply call hstate_vma() internally.
Something like the following to mimic linear_page_index() ?
/**
* hugetlb_linear_page_index - linear_page_index() but in hugetlb page
* size granularity
* @vma: ...
* @address: ...
*
* Returns: ...
*/
static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
unsigned long address)
{
struct hstate *h = hstate_vma(vma);
...
}
--
Cheers,
David
^ permalink raw reply [flat|nested] 27+ messages in thread

* Re: [PATCH v2] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-09 16:47 ` David Hildenbrand (Arm)
@ 2026-03-10 10:24 ` Jianhui Zhou
0 siblings, 0 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-10 10:24 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Jonas Zhou, linux-mm, linux-kernel, stable,
syzbot+f525fd79634858f478e7
On Mon, Mar 09, 2026 at 05:47:26PM +0100, David Hildenbrand wrote:
> It's hard to put my disgust about the terminology "hugecache" into
> words. Not your fault, but we should do better :)
>
> If you're starting to use that from other MM code than hugetlb.c, please
> find a better name.
>
> Further, I wonder whether we can avoid passing in "struct hstate *h" and
> simply call hstate_vma() internally.
>
> Something like the following to mimic linear_page_index() ?
Agreed. I'll add hugetlb_linear_page_index() in include/linux/hugetlb.h
with hstate_vma() called internally, and keep vma_hugecache_offset() as
a static function in mm/hugetlb.c untouched. Will send v4.
Thanks!
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH v3] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
` (2 preceding siblings ...)
2026-03-07 14:35 ` [PATCH v2] " Jianhui Zhou
@ 2026-03-09 3:30 ` Jianhui Zhou
2026-03-10 11:05 ` [PATCH v4] " Jianhui Zhou
4 siblings, 0 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-09 3:30 UTC (permalink / raw)
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7,
Jianhui Zhou
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units (as calculated by
vma_hugecache_offset()). This mismatch means that different addresses
within the same huge page can produce different hash values, leading to
the use of different mutexes for the same huge page. This can cause
races between faulting threads, which can corrupt the reservation map
and trigger the BUG_ON in resv_map_release().
Fix this by replacing linear_page_index() with vma_hugecache_offset()
and applying huge_page_mask() to align the address properly. To make
vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
include/linux/hugetlb.h as a static inline function.
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
---
v3:
- Fix Fixes tag to a08c7193e4f1 (Hugh Dickins)
v2:
- Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
(Peter Xu, SeongJae Park)
include/linux/hugetlb.h | 11 +++++++++++
mm/hugetlb.c | 11 -----------
mm/userfaultfd.c | 5 ++++-
3 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..f003afe0cc91 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+/*
+ * Convert the address within this vma to the page offset within
+ * the mapping, huge page units here.
+ */
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..b87ed652c748 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
return chg;
}
-/*
- * Convert the address within this vma to the page offset within
- * the mapping, huge page units here.
- */
-static pgoff_t vma_hugecache_offset(struct hstate *h,
- struct vm_area_struct *vma, unsigned long address)
-{
- return ((address - vma->vm_start) >> huge_page_shift(h)) +
- (vma->vm_pgoff >> huge_page_order(h));
-}
-
/**
* vma_kernel_pagesize - Page size granularity for this VMA.
* @vma: The user mapping.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..8efebc47a410 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
pgoff_t idx;
u32 hash;
struct address_space *mapping;
+ struct hstate *h;
/*
* There is no default zero huge page for all huge page sizes as
@@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
goto out_unlock;
}
+ h = hstate_vma(dst_vma);
+
while (src_addr < src_start + len) {
VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
@@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
` (3 preceding siblings ...)
2026-03-09 3:30 ` [PATCH v3] " Jianhui Zhou
@ 2026-03-10 11:05 ` Jianhui Zhou
2026-03-10 19:47 ` jane.chu
4 siblings, 1 reply; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-10 11:05 UTC (permalink / raw)
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7,
Jianhui Zhou
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units. This mismatch means that different
addresses within the same huge page can produce different hash values,
leading to the use of different mutexes for the same huge page. This can
cause races between faulting threads, which can corrupt the reservation
map and trigger the BUG_ON in resv_map_release().
Fix this by introducing hugetlb_linear_page_index(), which returns the
page index in huge page granularity, and using it in place of
linear_page_index().
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
---
v4:
- Introduce hugetlb_linear_page_index() instead of exposing
vma_hugecache_offset(); call hstate_vma() internally to simplify
the API (David Hildenbrand)
v3:
- Fix Fixes tag to a08c7193e4f1 (Hugh Dickins)
v2:
- Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
(Peter Xu, SeongJae Park)
include/linux/hugetlb.h | 17 +++++++++++++++++
mm/userfaultfd.c | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..67d4f0924646 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+/**
+ * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
+ * page size granularity.
+ * @vma: the hugetlb VMA
+ * @address: the virtual address within the VMA
+ *
+ * Return: the page offset within the mapping in huge page units.
+ */
+static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ struct hstate *h = hstate_vma(vma);
+
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..5590989e18c7 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -573,7 +573,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = hugetlb_linear_page_index(dst_vma, dst_addr);
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-10 11:05 ` [PATCH v4] " Jianhui Zhou
@ 2026-03-10 19:47 ` jane.chu
2026-03-11 10:54 ` Jianhui Zhou
0 siblings, 1 reply; 27+ messages in thread
From: jane.chu @ 2026-03-10 19:47 UTC (permalink / raw)
To: Jianhui Zhou, Muchun Song, Oscar Salvador, Andrew Morton,
Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7
On 3/10/2026 4:05 AM, Jianhui Zhou wrote:
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units. This mismatch means that different
> addresses within the same huge page can produce different hash values,
> leading to the use of different mutexes for the same huge page. This can
> cause races between faulting threads, which can corrupt the reservation
> map and trigger the BUG_ON in resv_map_release().
>
> Fix this by introducing hugetlb_linear_page_index(), which returns the
> page index in huge page granularity, and using it in place of
> linear_page_index().
>
> Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: stable@vger.kernel.org
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> ---
> v4:
> - Introduce hugetlb_linear_page_index() instead of exposing
> vma_hugecache_offset(); call hstate_vma() internally to simplify
> the API (David Hildenbrand)
>
> v3:
> - Fix Fixes tag to a08c7193e4f1 (Hugh Dickins)
>
> v2:
> - Remove unnecessary !CONFIG_HUGETLB_PAGE stub for vma_hugecache_offset()
> (Peter Xu, SeongJae Park)
>
> include/linux/hugetlb.h | 17 +++++++++++++++++
> mm/userfaultfd.c | 2 +-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 65910437be1c..67d4f0924646 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(struct hstate *h)
> return h->order + PAGE_SHIFT;
> }
>
> +/**
> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
> + * page size granularity.
> + * @vma: the hugetlb VMA
> + * @address: the virtual address within the VMA
> + *
> + * Return: the page offset within the mapping in huge page units.
> + */
> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
> + unsigned long address)
> +{
> + struct hstate *h = hstate_vma(vma);
> +
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
> +
> static inline bool order_is_gigantic(unsigned int order)
> {
> return order > MAX_PAGE_ORDER;
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 927086bb4a3c..5590989e18c7 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -573,7 +573,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
> * in the case of shared pmds. fault mutex prevents
> * races with other faulting threads.
> */
> - idx = linear_page_index(dst_vma, dst_addr);
> + idx = hugetlb_linear_page_index(dst_vma, dst_addr);
Just wondering whether making the shift explicit here instead of
introducing another hugetlb helper might be sufficient?
idx >>= huge_page_order(hstate_vma(vma));
I mean huge_page_order() is already explicitly called in several places
outside hugetlb.
> mapping = dst_vma->vm_file->f_mapping;
> hash = hugetlb_fault_mutex_hash(mapping, idx);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
thanks,
-jane
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-10 19:47 ` jane.chu
@ 2026-03-11 10:54 ` Jianhui Zhou
2026-03-25 0:03 ` Andrew Morton
0 siblings, 1 reply; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-11 10:54 UTC (permalink / raw)
To: jane.chu
Cc: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport,
David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7
On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> Just wondering whether making the shift explicit here instead of
> introducing another hugetlb helper might be sufficient?
>
> idx >>= huge_page_order(hstate_vma(vma));
That would work for hugetlb VMAs since both (address - vm_start) and
vm_pgoff are guaranteed to be huge page aligned. However, David
suggested introducing hugetlb_linear_page_index() to provide a cleaner
API that mirrors linear_page_index(), so I kept this approach.
Thanks for the review!
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-11 10:54 ` Jianhui Zhou
@ 2026-03-25 0:03 ` Andrew Morton
2026-03-25 1:06 ` SeongJae Park
` (2 more replies)
0 siblings, 3 replies; 27+ messages in thread
From: Andrew Morton @ 2026-03-25 0:03 UTC (permalink / raw)
To: Jianhui Zhou
Cc: jane.chu, Muchun Song, Oscar Salvador, Mike Rapoport,
David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7
On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
> On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> > Just wondering whether making the shift explicit here instead of
> > introducing another hugetlb helper might be sufficient?
> >
> > idx >>= huge_page_order(hstate_vma(vma));
>
> That would work for hugetlb VMAs since both (address - vm_start) and
> vm_pgoff are guaranteed to be huge page aligned. However, David
> suggested introducing hugetlb_linear_page_index() to provide a cleaner
> API that mirrors linear_page_index(), so I kept this approach.
>
Thanks.
Would anyone like to review this cc:stable patch for us?
From: Jianhui Zhou <jianhuizzzzz@gmail.com>
Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
Date: Tue, 10 Mar 2026 19:05:26 +0800
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units. This mismatch means that different
addresses within the same huge page can produce different hash values,
leading to the use of different mutexes for the same huge page. This can
cause races between faulting threads, which can corrupt the reservation
map and trigger the BUG_ON in resv_map_release().
Fix this by introducing hugetlb_linear_page_index(), which returns the
page index in huge page granularity, and using it in place of
linear_page_index().
Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Hildenbrand <david@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: JonasZhou <JonasZhou@zhaoxin.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Peter Xu <peterx@redhat.com>
Cc: SeongJae Park <sj@kernel.org>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/hugetlb.h | 17 +++++++++++++++++
mm/userfaultfd.c | 2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
--- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
+++ a/include/linux/hugetlb.h
@@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
return h->order + PAGE_SHIFT;
}
+/**
+ * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
+ * page size granularity.
+ * @vma: the hugetlb VMA
+ * @address: the virtual address within the VMA
+ *
+ * Return: the page offset within the mapping in huge page units.
+ */
+static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
+ unsigned long address)
+{
+ struct hstate *h = hstate_vma(vma);
+
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
--- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
+++ a/mm/userfaultfd.c
@@ -573,7 +573,7 @@ retry:
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = hugetlb_linear_page_index(dst_vma, dst_addr);
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
_
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 0:03 ` Andrew Morton
@ 2026-03-25 1:06 ` SeongJae Park
2026-03-25 6:07 ` Jianhui Zhou
2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:10 ` Mike Rapoport
2 siblings, 1 reply; 27+ messages in thread
From: SeongJae Park @ 2026-03-25 1:06 UTC (permalink / raw)
To: Andrew Morton
Cc: SeongJae Park, Jianhui Zhou, jane.chu, Muchun Song,
Oscar Salvador, Mike Rapoport, David Hildenbrand, Peter Xu,
Andrea Arcangeli, Mike Kravetz, Hugh Dickins, Sidhartha Kumar,
Jonas Zhou, linux-mm, linux-kernel, stable,
syzbot+f525fd79634858f478e7
On Tue, 24 Mar 2026 17:03:11 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
> On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
>
> > On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> > > Just wondering whether making the shift explicit here instead of
> > > introducing another hugetlb helper might be sufficient?
> > >
> > > idx >>= huge_page_order(hstate_vma(vma));
> >
> > That would work for hugetlb VMAs since both (address - vm_start) and
> > vm_pgoff are guaranteed to be huge page aligned. However, David
> > suggested introducing hugetlb_linear_page_index() to provide a cleaner
> > API that mirrors linear_page_index(), so I kept this approach.
> >
>
> Thanks.
>
> Would anyone like to review this cc:stable patch for us?
>
>
> From: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
> Date: Tue, 10 Mar 2026 19:05:26 +0800
>
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units. This mismatch means that different
> addresses within the same huge page can produce different hash values,
> leading to the use of different mutexes for the same huge page. This can
> cause races between faulting threads, which can corrupt the reservation
> map and trigger the BUG_ON in resv_map_release().
>
> Fix this by introducing hugetlb_linear_page_index(), which returns the
> page index in huge page granularity, and using it in place of
> linear_page_index().
>
> Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
> Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: JonasZhou <JonasZhou@zhaoxin.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Muchun Song <muchun.song@linux.dev>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
I added trivial comments below, but looks good to me.
Acked-by: SeongJae Park <sj@kernel.org>
> ---
>
> include/linux/hugetlb.h | 17 +++++++++++++++++
> mm/userfaultfd.c | 2 +-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> --- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/include/linux/hugetlb.h
> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
> return h->order + PAGE_SHIFT;
> }
>
> +/**
> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
> + * page size granularity.
> + * @vma: the hugetlb VMA
> + * @address: the virtual address within the VMA
> + *
> + * Return: the page offset within the mapping in huge page units.
> + */
> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
> + unsigned long address)
> +{
> + struct hstate *h = hstate_vma(vma);
> +
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
Nit: the outermost parentheses feel odd to me.
> +}
> +
> static inline bool order_is_gigantic(unsigned int order)
> {
> return order > MAX_PAGE_ORDER;
> --- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/mm/userfaultfd.c
> @@ -573,7 +573,7 @@ retry:
> * in the case of shared pmds. fault mutex prevents
> * races with other faulting threads.
> */
> - idx = linear_page_index(dst_vma, dst_addr);
> + idx = hugetlb_linear_page_index(dst_vma, dst_addr);
> mapping = dst_vma->vm_file->f_mapping;
> hash = hugetlb_fault_mutex_hash(mapping, idx);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
Seems userfaultfd.c is the only caller of the new helper function. Why don't
you define the function in userfaultfd.c?
Thanks,
SJ
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 1:06 ` SeongJae Park
@ 2026-03-25 6:07 ` Jianhui Zhou
2026-03-25 8:49 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-25 6:07 UTC (permalink / raw)
To: SeongJae Park
Cc: Andrew Morton, jane.chu, Muchun Song, Oscar Salvador,
Mike Rapoport, David Hildenbrand, Peter Xu, Andrea Arcangeli,
Mike Kravetz, Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm,
linux-kernel, stable, syzbot+f525fd79634858f478e7
On Tue, Mar 25, 2026 at 01:06:00AM +0000, SeongJae Park wrote:
> Seems userfaultfd.c is the only caller of the new helper function. Why don't
> you define the function in userfaultfd.c?
I kept hugetlb_linear_page_index() in include/linux/hugetlb.h because
this is hugetlb-specific logic, not userfaultfd-specific logic.
The goal was simply to avoid open-coding the hugetlb index conversion
outside hugetlb code and to make the unit change explicit at the call site.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 6:07 ` Jianhui Zhou
@ 2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:08 ` Mike Rapoport
0 siblings, 1 reply; 27+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-25 8:49 UTC (permalink / raw)
To: Jianhui Zhou, SeongJae Park
Cc: Andrew Morton, jane.chu, Muchun Song, Oscar Salvador,
Mike Rapoport, Peter Xu, Andrea Arcangeli, Mike Kravetz,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
On 3/25/26 07:07, Jianhui Zhou wrote:
> On Tue, Mar 25, 2026 at 01:06:00AM +0000, SeongJae Park wrote:
>> Seems userfaultfd.c is the only caller of the new helper function. Why don't
>> you define the function in userfaultfd.c?
> I kept hugetlb_linear_page_index() in include/linux/hugetlb.h because
> this is hugetlb-specific logic, not userfaultfd-specific logic.
Yes, and see my comment about either removing it entirely again as a next
step, or actually also using it in hugetlb.c.
--
Cheers,
David
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 8:49 ` David Hildenbrand (Arm)
@ 2026-03-25 19:08 ` Mike Rapoport
0 siblings, 0 replies; 27+ messages in thread
From: Mike Rapoport @ 2026-03-25 19:08 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Jianhui Zhou, SeongJae Park, Andrew Morton, jane.chu, Muchun Song,
Oscar Salvador, Peter Xu, Andrea Arcangeli, Mike Kravetz,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
On Wed, Mar 25, 2026 at 09:49:54AM +0100, David Hildenbrand (Arm) wrote:
> On 3/25/26 07:07, Jianhui Zhou wrote:
> > On Tue, Mar 25, 2026 at 01:06:00AM +0000, SeongJae Park wrote:
> >> Seems userfaultfd.c is the only caller of the new helper function. Why don't
> >> you define the function in userfaultfd.c?
> > I kept hugetlb_linear_page_index() in include/linux/hugetlb.h because
> > this is hugetlb-specific logic, not userfaultfd-specific logic.
>
> Yes, and see my comment about either removing it entirely again as a next
> step, or actually also using it in hugetlb.c.
I think it's better to move the large hugetlb piece of mfill_atomic_hugetlb()
to hugetlb.c and get rid of the helper then.
And for now, keep it simple for easier backporting.
> --
> Cheers,
>
> David
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 0:03 ` Andrew Morton
2026-03-25 1:06 ` SeongJae Park
@ 2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:02 ` Mike Rapoport
2026-03-25 23:46 ` jane.chu
2026-03-25 19:10 ` Mike Rapoport
2 siblings, 2 replies; 27+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-25 8:49 UTC (permalink / raw)
To: Andrew Morton, Jianhui Zhou, Muchun Song, Oscar Salvador,
Mike Rapoport
Cc: jane.chu, Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
On 3/25/26 01:03, Andrew Morton wrote:
> On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
>
>> On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
>>> Just wondering whether making the shift explicit here instead of
>>> introducing another hugetlb helper might be sufficient?
>>>
>>> idx >>= huge_page_order(hstate_vma(vma));
>>
>> That would work for hugetlb VMAs since both (address - vm_start) and
>> vm_pgoff are guaranteed to be huge page aligned. However, David
>> suggested introducing hugetlb_linear_page_index() to provide a cleaner
>> API that mirrors linear_page_index(), so I kept this approach.
>>
>
> Thanks.
>
> Would anyone like to review this cc:stable patch for us?
I would hope the hugetlb+userfaultfd submaintainers could have a
detailed look! Moving them to "To:"
One reason this doesn't get more attention might be that the new revision
was posted as a reply to an old revision, which is an anti-pattern :)
>
>
> From: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
> Date: Tue, 10 Mar 2026 19:05:26 +0800
>
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units. This mismatch means that different
> addresses within the same huge page can produce different hash values,
> leading to the use of different mutexes for the same huge page. This can
> cause races between faulting threads, which can corrupt the reservation
> map and trigger the BUG_ON in resv_map_release().
>
> Fix this by introducing hugetlb_linear_page_index(), which returns the
> page index in huge page granularity, and using it in place of
> linear_page_index().
>
> Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
> Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: JonasZhou <JonasZhou@zhaoxin.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Muchun Song <muchun.song@linux.dev>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
> include/linux/hugetlb.h | 17 +++++++++++++++++
> mm/userfaultfd.c | 2 +-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> --- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/include/linux/hugetlb.h
> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
> return h->order + PAGE_SHIFT;
> }
>
> +/**
> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
> + * page size granularity.
> + * @vma: the hugetlb VMA
> + * @address: the virtual address within the VMA
> + *
> + * Return: the page offset within the mapping in huge page units.
> + */
> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
> + unsigned long address)
> +{
> + struct hstate *h = hstate_vma(vma);
> +
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
> +
> static inline bool order_is_gigantic(unsigned int order)
> {
> return order > MAX_PAGE_ORDER;
> --- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/mm/userfaultfd.c
> @@ -573,7 +573,7 @@ retry:
> * in the case of shared pmds. fault mutex prevents
> * races with other faulting threads.
> */
> - idx = linear_page_index(dst_vma, dst_addr);
> + idx = hugetlb_linear_page_index(dst_vma, dst_addr);
> mapping = dst_vma->vm_file->f_mapping;
> hash = hugetlb_fault_mutex_hash(mapping, idx);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> _
>
Let's take a look at other hugetlb_fault_mutex_hash() users:
* remove_inode_hugepages: uses folio->index >> huge_page_order(h)
-> hugetlb granularity
* hugetlbfs_fallocate(): start/index is in hugetlb granularity
-> hugetlb granularity
* memfd_alloc_folio(): idx >>= huge_page_order(h);
-> hugetlb granularity
* hugetlb_wp(): uses vma_hugecache_offset()
-> hugetlb granularity
* hugetlb_handle_userfault(): uses vmf->pgoff, which hugetlb_fault()
sets to vma_hugecache_offset()
-> hugetlb granularity
* hugetlb_no_page(): similarly uses vmf->pgoff
-> hugetlb granularity
* hugetlb_fault(): similarly uses vmf->pgoff
-> hugetlb granularity
So this change here looks good to me
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
But it raises the question:
(1) should we convert all that to just operate on the ordinary index,
such that we don't even need hugetlb_linear_page_index()? That would be
an add-on patch.
(2) Alternatively, could we replace all users of vma_hugecache_offset()
with the much cleaner hugetlb_linear_page_index()?
In general, I think we should look into making idx/vmf->pgoff consistent
with the remainder of MM, converting all code in hugetlb to do that.
Any takers?
--
Cheers,
David
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 8:49 ` David Hildenbrand (Arm)
@ 2026-03-25 19:02 ` Mike Rapoport
2026-03-25 23:46 ` jane.chu
1 sibling, 0 replies; 27+ messages in thread
From: Mike Rapoport @ 2026-03-25 19:02 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Andrew Morton, Jianhui Zhou, Muchun Song, Oscar Salvador,
jane.chu, Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
On Wed, Mar 25, 2026 at 09:49:09AM +0100, David Hildenbrand (Arm) wrote:
> On 3/25/26 01:03, Andrew Morton wrote:
> > On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
> >
> >> On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> >>> Just wondering whether making the shift explicit here instead of
> >>> introducing another hugetlb helper might be sufficient?
> >>>
> >>> idx >>= huge_page_order(hstate_vma(vma));
> >>
> >> That would work for hugetlb VMAs since both (address - vm_start) and
> >> vm_pgoff are guaranteed to be huge page aligned. However, David
> >> suggested introducing hugetlb_linear_page_index() to provide a cleaner
> >> API that mirrors linear_page_index(), so I kept this approach.
> >>
> >
> > Thanks.
> >
> > Would anyone like to review this cc:stable patch for us?
>
> I would hope the hugetlb+userfaultfd submaintainers could have a
> detailed look! Moving them to "To:"
Wouldn't help much with something deeply buried in a thread :)
> One of the issues why this doesn't get more attention might be posting a
> new revision as a reply to an old revision, which is an anti-pattern :)
Indeed.
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:02 ` Mike Rapoport
@ 2026-03-25 23:46 ` jane.chu
2026-03-26 9:18 ` David Hildenbrand (Arm)
1 sibling, 1 reply; 27+ messages in thread
From: jane.chu @ 2026-03-25 23:46 UTC (permalink / raw)
To: David Hildenbrand (Arm), Andrew Morton, Jianhui Zhou, Muchun Song,
Oscar Salvador, Mike Rapoport
Cc: Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
Hi, David,
On 3/25/2026 1:49 AM, David Hildenbrand (Arm) wrote:
[..]
>>
>> --- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
>> +++ a/include/linux/hugetlb.h
>> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
>> return h->order + PAGE_SHIFT;
>> }
>>
>> +/**
>> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
>> + * page size granularity.
>> + * @vma: the hugetlb VMA
>> + * @address: the virtual address within the VMA
>> + *
>> + * Return: the page offset within the mapping in huge page units.
>> + */
>> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
>> + unsigned long address)
>> +{
>> + struct hstate *h = hstate_vma(vma);
>> +
>> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
>> + (vma->vm_pgoff >> huge_page_order(h));
>> +}
>> +
>> static inline bool order_is_gigantic(unsigned int order)
>> {
>> return order > MAX_PAGE_ORDER;
>> --- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
>> +++ a/mm/userfaultfd.c
>> @@ -573,7 +573,7 @@ retry:
>> * in the case of shared pmds. fault mutex prevents
>> * races with other faulting threads.
>> */
>> - idx = linear_page_index(dst_vma, dst_addr);
>> + idx = hugetlb_linear_page_index(dst_vma, dst_addr);
>> mapping = dst_vma->vm_file->f_mapping;
>> hash = hugetlb_fault_mutex_hash(mapping, idx);
>> mutex_lock(&hugetlb_fault_mutex_table[hash]);
>> _
>>
>
> Let's take a look at other hugetlb_fault_mutex_hash() users:
>
> * remove_inode_hugepages: uses folio->index >> huge_page_order(h)
> -> hugetlb granularity
> * hugetlbfs_fallocate(): start/index is in hugetlb granularity
> -> hugetlb granularity
> * memfd_alloc_folio(): idx >>= huge_page_order(h);
> -> hugetlb granularity
> * hugetlb_wp(): uses vma_hugecache_offset()
> -> hugetlb granularity
> * hugetlb_handle_userfault(): uses vmf->pgoff, which hugetlb_fault()
> sets to vma_hugecache_offset()
> -> hugetlb granularity
> * hugetlb_no_page(): similarly uses vmf->pgoff
> -> hugetlb granularity
> * hugetlb_fault(): similarly uses vmf->pgoff
> -> hugetlb granularity
>
> So this change here looks good to me
>
> Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
>
>
> But it raises the question:
>
> (1) should we convert all that to just operate on the ordinary index,
> such that we don't even need hugetlb_linear_page_index()? That would be
> an addon patch.
>
Do you mean to convert all callers of hugetlb_linear_page_index() and
vma_hugecache_offset() to use index and huge_page_order(h)?
May I also suggest, to improve readability, renaming the
huge-page-granularity 'idx' to huge_idx or hidx?
> (2) Alternatively, could we replace all users of vma_hugecache_offset()
> by the much cleaner hugetlb_linear_page_index() ?
>
The difference between the two helpers is the hstate_vma() call in the
latter, which is about 5 pointer dereferences; I'm not sure of any
performance implication, though. At minimum, we could have
hugetlb_linear_page_index(vma, addr)
-> __hugetlb_linear_page_index(h, vma, addr)
basically renaming vma_hugecache_offset().
> In general, I think we should look into having idx/vmf->pgoff being
> consistent with the remainder of MM, converting all code in hugetlb to
> do that.
>
> Any takers?
>
I'd be happy to, just to make sure I understand the proposal clearly.
thanks!
-jane
^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 23:46 ` jane.chu
@ 2026-03-26 9:18 ` David Hildenbrand (Arm)
0 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-26 9:18 UTC (permalink / raw)
To: jane.chu, Andrew Morton, Jianhui Zhou, Muchun Song,
Oscar Salvador, Mike Rapoport
Cc: Peter Xu, Andrea Arcangeli, Mike Kravetz, SeongJae Park,
Hugh Dickins, Sidhartha Kumar, Jonas Zhou, linux-mm, linux-kernel,
stable, syzbot+f525fd79634858f478e7
On 3/26/26 00:46, jane.chu@oracle.com wrote:
> Hi, David,
>
> On 3/25/2026 1:49 AM, David Hildenbrand (Arm) wrote:
> [..]
[...]
>>
>> But it raises the question:
>>
> (1) should we convert all that to just operate on the ordinary index,
>> such that we don't even need hugetlb_linear_page_index()? That would be
>> an addon patch.
>>
>
> Do you mean to convert all callers of hugetlb_linear_page_index() and
> vma_hugepcache_offset() to use index and huge_page_order(h) ?
> May I add, to improve readability, rename the huge-page-granularity
> 'idx' to huge_idx or hidx ?
What I meant is that we change all hugetlb code to use the ordinary idx.
It's a bigger rework.
For example, we'd be getting rid of filemap_lock_hugetlb_folio() completely and
simply use filemap_lock_folio. As one example:
@@ -657,10 +657,9 @@ static void hugetlbfs_zero_partial_page(struct hstate *h,
loff_t start,
loff_t end)
{
- pgoff_t idx = start >> huge_page_shift(h);
struct folio *folio;
- folio = filemap_lock_hugetlb_folio(h, mapping, idx);
+ folio = filemap_lock_folio(mapping, start >> PAGE_SHIFT);
if (IS_ERR(folio))
return;
Other parts are more tricky, as we have to make sure that we get
an idx that points at the start of the folio.
Likely such a conversion could be done incrementally. But it's a bit of work.
We'd be getting rid of some more hugetlb special casing.
An alternative is passing in an address into hugetlb_linear_page_index(),
just letting it do the calculation itself (it can get the hstate from the mapping).
>
>> (2) Alternatively, could we replace all users of vma_hugecache_offset()
>> by the much cleaner hugetlb_linear_page_index() ?
>>
>
> The difference between the two helpers is the hstate_vma() call in the
> latter, which is about 5 pointer dereferences; I'm not sure of any
> performance implication, though.
hstate_vma() is really just hstate_file(vma->vm_file)->
hstate_inode(file_inode(f))->HUGETLBFS_SB(i->i_sb)->hstate;
So some pointer chasing.
Hard to believe that this would matter in any of this code :)
> At minimum, we could have
> hugetlb_linear_page_index(vma, addr)
> -> __hugetlb_linear_page_index(h, vma, addr)
> basically renaming vma_hugecache_offset().
I would only do that if it's really required for performance.
--
Cheers,
David
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [PATCH v4] mm/userfaultfd: fix hugetlb fault mutex hash calculation
2026-03-25 0:03 ` Andrew Morton
2026-03-25 1:06 ` SeongJae Park
2026-03-25 8:49 ` David Hildenbrand (Arm)
@ 2026-03-25 19:10 ` Mike Rapoport
2 siblings, 0 replies; 27+ messages in thread
From: Mike Rapoport @ 2026-03-25 19:10 UTC (permalink / raw)
To: Andrew Morton
Cc: Jianhui Zhou, jane.chu, Muchun Song, Oscar Salvador,
David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
SeongJae Park, Hugh Dickins, Sidhartha Kumar, Jonas Zhou,
linux-mm, linux-kernel, stable, syzbot+f525fd79634858f478e7
On Tue, Mar 24, 2026 at 05:03:11PM -0700, Andrew Morton wrote:
> On Wed, 11 Mar 2026 18:54:26 +0800 Jianhui Zhou <jianhuizzzzz@gmail.com> wrote:
>
> > On Tue, Mar 10, 2026 at 12:47:07PM -0700, jane.chu@oracle.com wrote:
> > > Just wondering whether making the shift explicit here instead of
> > > introducing another hugetlb helper might be sufficient?
> > >
> > > idx >>= huge_page_order(hstate_vma(vma));
> >
> > That would work for hugetlb VMAs since both (address - vm_start) and
> > vm_pgoff are guaranteed to be huge page aligned. However, David
> > suggested introducing hugetlb_linear_page_index() to provide a cleaner
> > API that mirrors linear_page_index(), so I kept this approach.
> >
>
> Thanks.
>
> Would anyone like to review this cc:stable patch for us?
>
>
> From: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Subject: mm/userfaultfd: fix hugetlb fault mutex hash calculation
> Date: Tue, 10 Mar 2026 19:05:26 +0800
>
> In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
> page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
> returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
> expects the index in huge page units. This mismatch means that different
> addresses within the same huge page can produce different hash values,
> leading to the use of different mutexes for the same huge page. This can
> cause races between faulting threads, which can corrupt the reservation
> map and trigger the BUG_ON in resv_map_release().
>
> Fix this by introducing hugetlb_linear_page_index(), which returns the
> page index in huge page granularity, and using it in place of
> linear_page_index().
>
> Link: https://lkml.kernel.org/r/20260310110526.335749-1-jianhuizzzzz@gmail.com
> Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
> Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
> Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: David Hildenbrand <david@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: JonasZhou <JonasZhou@zhaoxin.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Muchun Song <muchun.song@linux.dev>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: SeongJae Park <sj@kernel.org>
> Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Looks fine from uffd perspective, and simple enough for stable@.
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
>
> include/linux/hugetlb.h | 17 +++++++++++++++++
> mm/userfaultfd.c | 2 +-
> 2 files changed, 18 insertions(+), 1 deletion(-)
>
> --- a/include/linux/hugetlb.h~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/include/linux/hugetlb.h
> @@ -796,6 +796,23 @@ static inline unsigned huge_page_shift(s
> return h->order + PAGE_SHIFT;
> }
>
> +/**
> + * hugetlb_linear_page_index() - linear_page_index() but in hugetlb
> + * page size granularity.
> + * @vma: the hugetlb VMA
> + * @address: the virtual address within the VMA
> + *
> + * Return: the page offset within the mapping in huge page units.
> + */
> +static inline pgoff_t hugetlb_linear_page_index(struct vm_area_struct *vma,
> + unsigned long address)
> +{
> + struct hstate *h = hstate_vma(vma);
> +
> + return ((address - vma->vm_start) >> huge_page_shift(h)) +
> + (vma->vm_pgoff >> huge_page_order(h));
> +}
> +
> static inline bool order_is_gigantic(unsigned int order)
> {
> return order > MAX_PAGE_ORDER;
> --- a/mm/userfaultfd.c~mm-userfaultfd-fix-hugetlb-fault-mutex-hash-calculation
> +++ a/mm/userfaultfd.c
> @@ -573,7 +573,7 @@ retry:
> * in the case of shared pmds. fault mutex prevents
> * races with other faulting threads.
> */
> - idx = linear_page_index(dst_vma, dst_addr);
> + idx = hugetlb_linear_page_index(dst_vma, dst_addr);
> mapping = dst_vma->vm_file->f_mapping;
> hash = hugetlb_fault_mutex_hash(mapping, idx);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> _
>
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 27+ messages in thread
* [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation
@ 2026-03-06 13:59 Jianhui Zhou
0 siblings, 0 replies; 27+ messages in thread
From: Jianhui Zhou @ 2026-03-06 13:59 UTC (permalink / raw)
To: Muchun Song, Oscar Salvador, Andrew Morton, Mike Rapoport
Cc: David Hildenbrand, Peter Xu, Andrea Arcangeli, Mike Kravetz,
linux-mm, linux-kernel, Jonas Zhou, Jianhui Zhou,
syzbot+f525fd79634858f478e7, stable
From: Jianhui Zhou <jianhuizzzzz@gmail.com>
In mfill_atomic_hugetlb(), linear_page_index() is used to calculate the
page index for hugetlb_fault_mutex_hash(). However, linear_page_index()
returns the index in PAGE_SIZE units, while hugetlb_fault_mutex_hash()
expects the index in huge page units (as calculated by
vma_hugecache_offset()). This mismatch means that different addresses
within the same huge page can produce different hash values, leading to
the use of different mutexes for the same huge page. This can cause
races between faulting threads, which can corrupt the reservation map
and trigger the BUG_ON in resv_map_release().
Fix this by replacing linear_page_index() with vma_hugecache_offset()
and applying huge_page_mask() to align the address properly. To make
vma_hugecache_offset() available outside of mm/hugetlb.c, move it to
include/linux/hugetlb.h as a static inline function.
Fixes: 60d4d2d2b40e ("userfaultfd: hugetlbfs: add __mcopy_atomic_hugetlb for huge page UFFDIO_COPY")
Reported-by: syzbot+f525fd79634858f478e7@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=f525fd79634858f478e7
Cc: stable@vger.kernel.org
Signed-off-by: Jianhui Zhou <jianhuizzzzz@gmail.com>
---
include/linux/hugetlb.h | 17 +++++++++++++++++
mm/hugetlb.c | 11 -----------
mm/userfaultfd.c | 5 ++++-
3 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 65910437be1c..3f994f3e839c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -796,6 +796,17 @@ static inline unsigned huge_page_shift(struct hstate *h)
return h->order + PAGE_SHIFT;
}
+/*
+ * Convert the address within this vma to the page offset within
+ * the mapping, huge page units here.
+ */
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return ((address - vma->vm_start) >> huge_page_shift(h)) +
+ (vma->vm_pgoff >> huge_page_order(h));
+}
+
static inline bool order_is_gigantic(unsigned int order)
{
return order > MAX_PAGE_ORDER;
@@ -1197,6 +1208,12 @@ static inline unsigned int huge_page_shift(struct hstate *h)
return PAGE_SHIFT;
}
+static inline pgoff_t vma_hugecache_offset(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ return linear_page_index(vma, address);
+}
+
static inline bool hstate_is_gigantic(struct hstate *h)
{
return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0beb6e22bc26..b87ed652c748 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1006,17 +1006,6 @@ static long region_count(struct resv_map *resv, long f, long t)
return chg;
}
-/*
- * Convert the address within this vma to the page offset within
- * the mapping, huge page units here.
- */
-static pgoff_t vma_hugecache_offset(struct hstate *h,
- struct vm_area_struct *vma, unsigned long address)
-{
- return ((address - vma->vm_start) >> huge_page_shift(h)) +
- (vma->vm_pgoff >> huge_page_order(h));
-}
-
/**
* vma_kernel_pagesize - Page size granularity for this VMA.
* @vma: The user mapping.
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 927086bb4a3c..8efebc47a410 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -507,6 +507,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
pgoff_t idx;
u32 hash;
struct address_space *mapping;
+ struct hstate *h;
/*
* There is no default zero huge page for all huge page sizes as
@@ -564,6 +565,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
goto out_unlock;
}
+ h = hstate_vma(dst_vma);
+
while (src_addr < src_start + len) {
VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
@@ -573,7 +576,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
* in the case of shared pmds. fault mutex prevents
* races with other faulting threads.
*/
- idx = linear_page_index(dst_vma, dst_addr);
+ idx = vma_hugecache_offset(h, dst_vma, dst_addr & huge_page_mask(h));
mapping = dst_vma->vm_file->f_mapping;
hash = hugetlb_fault_mutex_hash(mapping, idx);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
--
2.43.0
^ permalink raw reply related [flat|nested] 27+ messages in thread
end of thread, other threads:[~2026-03-26 9:18 UTC | newest]
Thread overview: 27+ messages
2026-03-06 14:03 [PATCH] mm/userfaultfd: fix hugetlb fault mutex hash calculation Jianhui Zhou
2026-03-06 16:53 ` Peter Xu
2026-03-07 13:37 ` 周建辉
2026-03-07 13:59 ` Jianhui Zhou
2026-03-07 3:27 ` SeongJae Park
2026-03-08 13:41 ` Jianhui Zhou
2026-03-08 22:57 ` SeongJae Park
2026-03-07 14:35 ` [PATCH v2] " Jianhui Zhou
2026-03-09 2:08 ` Hugh Dickins
2026-03-09 3:08 ` Jianhui Zhou
2026-03-09 16:47 ` David Hildenbrand (Arm)
2026-03-10 10:24 ` Jianhui Zhou
2026-03-09 3:30 ` [PATCH v3] " Jianhui Zhou
2026-03-10 11:05 ` [PATCH v4] " Jianhui Zhou
2026-03-10 19:47 ` jane.chu
2026-03-11 10:54 ` Jianhui Zhou
2026-03-25 0:03 ` Andrew Morton
2026-03-25 1:06 ` SeongJae Park
2026-03-25 6:07 ` Jianhui Zhou
2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:08 ` Mike Rapoport
2026-03-25 8:49 ` David Hildenbrand (Arm)
2026-03-25 19:02 ` Mike Rapoport
2026-03-25 23:46 ` jane.chu
2026-03-26 9:18 ` David Hildenbrand (Arm)
2026-03-25 19:10 ` Mike Rapoport
-- strict thread matches above, loose matches on Subject: below --
2026-03-06 13:59 [PATCH] " Jianhui Zhou