From: Usama Arif <usama.arif@linux.dev>
To: Jane Chu <jane.chu@oracle.com>
Cc: Usama Arif <usama.arif@linux.dev>,
akpm@linux-foundation.org, david@kernel.org,
muchun.song@linux.dev, osalvador@suse.de,
lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, corbet@lwn.net, skhan@linuxfoundation.org,
hughd@google.com, baolin.wang@linux.alibaba.com,
peterx@redhat.com, linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/6] hugetlb: make hugetlb_fault_mutex_hash() take PAGE_SIZE index
Date: Fri, 10 Apr 2026 04:24:31 -0700 [thread overview]
Message-ID: <20260410112433.3248586-1-usama.arif@linux.dev> (raw)
In-Reply-To: <20260409234158.837786-4-jane.chu@oracle.com>
On Thu, 9 Apr 2026 17:41:54 -0600 Jane Chu <jane.chu@oracle.com> wrote:
> hugetlb_fault_mutex_hash() is used to serialize faults and page cache
> operations on the same hugetlb file offset. The helper currently expects
> its index argument in hugetlb page granularity, so callers have to
> open-code conversions from the PAGE_SIZE-based indices commonly used
> in the rest of MM helpers.
>
> Change hugetlb_fault_mutex_hash() to take a PAGE_SIZE-based index
> instead, and perform the hugetlb-granularity conversion inside the helper.
> Update all callers accordingly.
>
> This makes the helper interface consistent with filemap_get_folio()
> and linear_page_index(), while preserving the same lock selection for
> a given hugetlb file offset.
>
> Signed-off-by: Jane Chu <jane.chu@oracle.com>
> ---
> fs/hugetlbfs/inode.c | 19 ++++++++++---------
> mm/hugetlb.c | 28 +++++++++++++++++++---------
> mm/memfd.c | 11 ++++++-----
> mm/userfaultfd.c | 7 +++----
> 4 files changed, 38 insertions(+), 27 deletions(-)
>
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index cf79fb830377..e24e9bf54e14 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -575,7 +575,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
> struct address_space *mapping = &inode->i_data;
> const pgoff_t end = lend >> PAGE_SHIFT;
> struct folio_batch fbatch;
> - pgoff_t next, index;
> + pgoff_t next, idx;
> int i, freed = 0;
> bool truncate_op = (lend == LLONG_MAX);
>
> @@ -586,15 +586,15 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
> struct folio *folio = fbatch.folios[i];
> u32 hash = 0;
>
> - index = folio->index >> huge_page_order(h);
> - hash = hugetlb_fault_mutex_hash(mapping, index);
> + hash = hugetlb_fault_mutex_hash(mapping, folio->index);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
>
> /*
> * Remove folio that was part of folio_batch.
> */
> + idx = folio->index >> huge_page_order(h);
> remove_inode_single_folio(h, inode, mapping, folio,
> - index, truncate_op);
> + idx, truncate_op);
> freed++;
>
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> @@ -734,7 +734,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> struct mm_struct *mm = current->mm;
> loff_t hpage_size = huge_page_size(h);
> unsigned long hpage_shift = huge_page_shift(h);
> - pgoff_t start, index, end;
> + pgoff_t start, end, idx, index;
> int error;
> u32 hash;
>
> @@ -774,7 +774,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> vm_flags_init(&pseudo_vma, VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
> pseudo_vma.vm_file = file;
>
> - for (index = start; index < end; index++) {
> + for (idx = start; idx < end; idx++) {
> /*
> * This is supposed to be the vaddr where the page is being
> * faulted in, but we have no vaddr here.
> @@ -794,14 +794,15 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> }
>
> /* addr is the offset within the file (zero based) */
> - addr = index * hpage_size;
> + addr = idx * hpage_size;
>
> /* mutex taken here, fault path and hole punch */
> + index = idx << huge_page_order(h);
> hash = hugetlb_fault_mutex_hash(mapping, index);
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
>
> /* See if already present in mapping to avoid alloc/free */
> - folio = filemap_get_folio(mapping, index << huge_page_order(h));
> + folio = filemap_get_folio(mapping, index);
> if (!IS_ERR(folio)) {
> folio_put(folio);
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> @@ -824,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> }
> folio_zero_user(folio, addr);
> __folio_mark_uptodate(folio);
> - error = hugetlb_add_to_page_cache(folio, mapping, index);
> + error = hugetlb_add_to_page_cache(folio, mapping, idx);
> if (unlikely(error)) {
> restore_reserve_on_error(h, &pseudo_vma, addr, folio);
> folio_put(folio);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 38b39eaf46cc..9d5ae1f87850 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5515,7 +5515,7 @@ static vm_fault_t hugetlb_wp(struct vm_fault *vmf)
> */
> if (cow_from_owner) {
> struct address_space *mapping = vma->vm_file->f_mapping;
> - pgoff_t idx;
> + pgoff_t index;
> u32 hash;
>
> folio_put(old_folio);
> @@ -5528,8 +5528,9 @@ static vm_fault_t hugetlb_wp(struct vm_fault *vmf)
> *
> * Reacquire both after unmap operation.
> */
> - idx = vma_hugecache_offset(h, vma, vmf->address);
> - hash = hugetlb_fault_mutex_hash(mapping, idx);
> + index = linear_page_index(vma, vmf->address);
> + hash = hugetlb_fault_mutex_hash(mapping, index);
> +
> hugetlb_vma_unlock_read(vma);
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>
> @@ -5664,6 +5665,10 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_fault *vmf,
> unsigned long reason)
> {
> u32 hash;
> + pgoff_t index;
> +
> + index = linear_page_index((const struct vm_area_struct *)vmf, vmf->address);
This is supposed to be linear_page_index(vmf->vma, vmf->address), right?
Casting vmf itself to struct vm_area_struct * compiles, but it passes the
wrong pointer, so the computed index (and hence the fault mutex selected)
would be bogus.
Thread overview: 10+ messages
2026-04-09 23:41 [PATCH 0/6] hugetlb: normalize exported interfaces to use base-page indices Jane Chu
2026-04-09 23:41 ` [PATCH 1/6] hugetlb: open-code hugetlb folio lookup index conversion Jane Chu
2026-04-09 23:41 ` [PATCH 2/6] hugetlb: remove the hugetlb_linear_page_index() helper Jane Chu
2026-04-09 23:41 ` [PATCH 3/6] hugetlb: make hugetlb_fault_mutex_hash() take PAGE_SIZE index Jane Chu
2026-04-10 11:24 ` Usama Arif [this message]
2026-04-10 17:51 ` jane.chu
2026-04-09 23:41 ` [PATCH 4/6] hugetlb: drop vma_hugecache_offset() in favor of linear_page_index() Jane Chu
2026-04-09 23:41 ` [PATCH 5/6] hugetlb: make hugetlb_add_to_page_cache() use PAGE_SIZE-based index Jane Chu
2026-04-09 23:41 ` [PATCH 6/6] hugetlb: pass hugetlb reservation ranges in base-page indices Jane Chu
2026-04-10 6:45 ` [syzbot ci] Re: hugetlb: normalize exported interfaces to use " syzbot ci