* [PATCH v2] mm/hugetlb: add missing hugetlb_lock in __unmap_hugepage_range()
From: Jeongjun Park @ 2025-08-23 18:21 UTC (permalink / raw)
To: muchun.song, osalvador, david, akpm
Cc: leitao, linux-mm, linux-kernel, stable,
syzbot+417aeb05fd190f3a6da9, Jeongjun Park

When restoring a reservation for an anonymous page, we need to check
whether we are freeing a surplus huge page. However,
__unmap_hugepage_range() causes a data race because it reads
h->surplus_huge_pages without holding hugetlb_lock.
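
For illustration only, here is a minimal userspace sketch of this kind
of race, with a pthread mutex standing in for hugetlb_lock and a plain
counter standing in for h->surplus_huge_pages (none of this is kernel
code; build with -pthread):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for hugetlb_lock */
static unsigned long surplus_pages;      /* stand-in for h->surplus_huge_pages */

/* Writer: updates always run under the lock. */
static void *writer(void *arg)
{
        (void)arg;
        for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);
                surplus_pages += 1;
                pthread_mutex_unlock(&lock);
        }
        return NULL;
}

/*
 * Buggy reader: the unlocked read races with the writer above.
 * The fix is to take the same lock around the read.
 */
static bool racy_check(void)
{
        return surplus_pages == 0;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, writer, NULL);
        printf("no surplus? %d\n", racy_check());  /* data race */
        pthread_join(t, NULL);
        return 0;
}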

adjust_reservation is a boolean that indicates whether the reservation
for the anonymous page in the current folio should be restored, so it
must be reset to false on every iteration of the loop. However, it is
only initialized once, at its definition before the loop.

As a result, once adjust_reservation becomes true in any iteration,
reservations for anonymous pages are restored unconditionally in every
subsequent iteration, regardless of the folio's state.
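
A minimal standalone sketch of that stale-flag pattern (the two helpers
are hypothetical stand-ins for the per-folio work, not kernel
functions):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical helpers; only folio 0 actually needs a restore. */
static bool folio_needs_restore(int i) { return i == 0; }
static void restore_reservation(int i) { printf("restore folio %d\n", i); }

int main(void)
{
        bool adjust_reservation = false;   /* set once, before the loop */

        for (int i = 0; i < 4; i++) {
                if (folio_needs_restore(i))
                        adjust_reservation = true;

                /*
                 * Bug: the flag is never reset, so after the first
                 * folio that needs a restore, every later folio is
                 * "restored" too, regardless of its own state.
                 */
                if (adjust_reservation)
                        restore_reservation(i);
        }
        return 0;
}

As written this "restores" folios 0 through 3; resetting the flag at
the top of each iteration, as the patch below does, restores only
folio 0.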

To fix this, take the missing hugetlb_lock around the
h->surplus_huge_pages check, unlock the page table lock (ptl) earlier
so that hugetlb_lock is never acquired while ptl is held, and reset
adjust_reservation to false on each iteration of the loop.

Cc: <stable@vger.kernel.org>
Reported-by: syzbot+417aeb05fd190f3a6da9@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=417aeb05fd190f3a6da9
Fixes: df7a6d1f6405 ("mm/hugetlb: restore the reservation if needed")
Signed-off-by: Jeongjun Park <aha310510@gmail.com>
---
v2: Move the page table lock (ptl) unlock earlier and initialize adjust_reservation to false on each loop iteration
- Link to v1: https://lore.kernel.org/all/20250822055857.1142454-1-aha310510@gmail.com/
---
mm/hugetlb.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 753f99b4c718..eed59cfb5d21 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5851,7 +5851,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
spinlock_t *ptl;
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
- bool adjust_reservation = false;
+ bool adjust_reservation;
unsigned long last_addr_mask;
bool force_flush = false;
@@ -5944,6 +5944,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
sz);
hugetlb_count_sub(pages_per_huge_page(h), mm);
hugetlb_remove_rmap(folio);
+ spin_unlock(ptl);
/*
* Restore the reservation for anonymous page, otherwise the
@@ -5951,14 +5952,16 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* If there we are freeing a surplus, do not set the restore
* reservation bit.
*/
+ adjust_reservation = false;
+
+ spin_lock_irq(&hugetlb_lock);
if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
folio_test_anon(folio)) {
folio_set_hugetlb_restore_reserve(folio);
/* Reservation to be adjusted after the spin lock */
adjust_reservation = true;
}
-
- spin_unlock(ptl);
+ spin_unlock_irq(&hugetlb_lock);
/*
* Adjust the reservation for the region that will have the
--
* Re: [PATCH v2] mm/hugetlb: add missing hugetlb_lock in __unmap_hugepage_range()
From: Jeongjun Park @ 2025-08-26 5:38 UTC (permalink / raw)
To: muchun.song, osalvador, david, akpm
Cc: leitao, sidhartha.kumar, linux-mm, linux-kernel, stable,
syzbot+417aeb05fd190f3a6da9
Jeongjun Park <aha310510@gmail.com> wrote:
>
> When restoring a reservation for an anonymous page, we need to check
> whether we are freeing a surplus huge page. However,
> __unmap_hugepage_range() causes a data race because it reads
> h->surplus_huge_pages without holding hugetlb_lock.
>
> adjust_reservation is a boolean that indicates whether the reservation
> for the anonymous page in the current folio should be restored, so it
> must be reset to false on every iteration of the loop. However, it is
> only initialized once, at its definition before the loop.
>
> As a result, once adjust_reservation becomes true in any iteration,
> reservations for anonymous pages are restored unconditionally in every
> subsequent iteration, regardless of the folio's state.
>
> To fix this, take the missing hugetlb_lock around the
> h->surplus_huge_pages check, unlock the page table lock (ptl) earlier
> so that hugetlb_lock is never acquired while ptl is held, and reset
> adjust_reservation to false on each iteration of the loop.
>
> Cc: <stable@vger.kernel.org>
> Reported-by: syzbot+417aeb05fd190f3a6da9@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=417aeb05fd190f3a6da9
> Fixes: df7a6d1f6405 ("mm/hugetlb: restore the reservation if needed")
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Sorry, I forgot to add the Reviewed-by tag.
> Signed-off-by: Jeongjun Park <aha310510@gmail.com>