* [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
[not found] <20260109041345.3863089-1-willy@infradead.org>
@ 2026-01-09 4:13 ` Matthew Wilcox (Oracle)
2026-01-09 4:32 ` Lance Yang
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2026-01-09 4:13 UTC (permalink / raw)
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), Zi Yan, David Hildenbrand,
Lorenzo Stoakes, Rik van Riel, Liam R. Howlett, Vlastimil Babka,
Harry Yoo, Jann Horn, linux-mm, syzbot+2d9c96466c978346b55f,
Lance Yang, stable
Syzbot has found a deadlock (analyzed by Lance Yang):
1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock).
2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire
folio_lock.
migrate_pages()
-> migrate_hugetlbs()
-> unmap_and_move_huge_page() <- Takes folio_lock!
-> remove_migration_ptes()
-> __rmap_walk_file()
-> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)!
hugetlbfs_fallocate()
-> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)!
-> hugetlbfs_zero_partial_page()
-> filemap_lock_hugetlb_folio()
-> filemap_lock_folio()
-> __filemap_get_folio() <- Waits for folio_lock!
The migration path is the one taking locks in the wrong order according
to the documentation at the top of mm/rmap.c. So expand the scope of the
existing i_mmap_lock to cover the calls to remove_migration_ptes() too.
This is (mostly) how it used to be after commit c0d0381ade79. That was
removed by 336bf30eb765 for both file & anon hugetlb pages when it should
only have been removed for anon hugetlb pages.
Fixes: 336bf30eb765 ("hugetlbfs: fix anon huge page migration race")
Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
Debugged-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: stable@vger.kernel.org
---
mm/migrate.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 5169f9717f60..4688b9e38cd2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
int page_was_mapped = 0;
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
+ enum ttu_flags ttu = 0;
if (folio_ref_count(src) == 1) {
/* page was freed from under us. So we are done. */
@@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
goto put_anon;
if (folio_mapped(src)) {
- enum ttu_flags ttu = 0;
-
if (!folio_test_anon(src)) {
/*
* In shared mappings, try_to_unmap could potentially
@@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
try_to_migrate(src, ttu);
page_was_mapped = 1;
-
- if (ttu & TTU_RMAP_LOCKED)
- i_mmap_unlock_write(mapping);
}
if (!folio_mapped(src))
rc = move_to_new_folio(dst, src, mode);
if (page_was_mapped)
- remove_migration_ptes(src, !rc ? dst : src, 0);
+ remove_migration_ptes(src, !rc ? dst : src,
+ ttu ? RMP_LOCKED : 0);
+
+ if (ttu & TTU_RMAP_LOCKED)
+ i_mmap_unlock_write(mapping);
unlock_put_anon:
folio_unlock(dst);
--
2.47.3
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
2026-01-09 4:13 ` [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios Matthew Wilcox (Oracle)
@ 2026-01-09 4:32 ` Lance Yang
2026-01-09 13:50 ` David Hildenbrand (Red Hat)
2026-01-09 14:57 ` Zi Yan
2 siblings, 0 replies; 6+ messages in thread
From: Lance Yang @ 2026-01-09 4:32 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Zi Yan, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
syzbot+2d9c96466c978346b55f, stable, Andrew Morton
On 2026/1/9 12:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
>
> 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock).
> 2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire
> folio_lock.
>
> migrate_pages()
> -> migrate_hugetlbs()
> -> unmap_and_move_huge_page() <- Takes folio_lock!
> -> remove_migration_ptes()
> -> __rmap_walk_file()
> -> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)!
>
> hugetlbfs_fallocate()
> -> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)!
> -> hugetlbfs_zero_partial_page()
> -> filemap_lock_hugetlb_folio()
> -> filemap_lock_folio()
> -> __filemap_get_folio <- Waits for folio_lock!
>
> The migration path is the one taking locks in the wrong order according
> to the documentation at the top of mm/rmap.c. So expand the scope of the
> existing i_mmap_lock to cover the calls to remove_migration_ptes() too.
>
> This is (mostly) how it used to be after commit c0d0381ade79. That was
> removed by 336bf30eb765 for both file & anon hugetlb pages when it should
> only have been removed for anon hugetlb pages.
Cool. Thanks for the fix!
As someone new to hugetlb, I learned something about the lock ordering here.
Cheers,
Lance
>
> Fixes: 336bf30eb765 (hugetlbfs: fix anon huge page migration race)
> Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
> Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
> Debugged-by: Lance Yang <lance.yang@linux.dev>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: stable@vger.kernel.org
> ---
> mm/migrate.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5169f9717f60..4688b9e38cd2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> int page_was_mapped = 0;
> struct anon_vma *anon_vma = NULL;
> struct address_space *mapping = NULL;
> + enum ttu_flags ttu = 0;
>
> if (folio_ref_count(src) == 1) {
> /* page was freed from under us. So we are done. */
> @@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> goto put_anon;
>
> if (folio_mapped(src)) {
> - enum ttu_flags ttu = 0;
> -
> if (!folio_test_anon(src)) {
> /*
> * In shared mappings, try_to_unmap could potentially
> @@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
>
> try_to_migrate(src, ttu);
> page_was_mapped = 1;
> -
> - if (ttu & TTU_RMAP_LOCKED)
> - i_mmap_unlock_write(mapping);
> }
>
> if (!folio_mapped(src))
> rc = move_to_new_folio(dst, src, mode);
>
> if (page_was_mapped)
> - remove_migration_ptes(src, !rc ? dst : src, 0);
> + remove_migration_ptes(src, !rc ? dst : src,
> + ttu ? RMP_LOCKED : 0);
> +
> + if (ttu & TTU_RMAP_LOCKED)
> + i_mmap_unlock_write(mapping);
>
> unlock_put_anon:
> folio_unlock(dst);
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
2026-01-09 4:13 ` [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios Matthew Wilcox (Oracle)
2026-01-09 4:32 ` Lance Yang
@ 2026-01-09 13:50 ` David Hildenbrand (Red Hat)
2026-01-09 14:44 ` Matthew Wilcox
2026-01-09 14:57 ` Zi Yan
2 siblings, 1 reply; 6+ messages in thread
From: David Hildenbrand (Red Hat) @ 2026-01-09 13:50 UTC (permalink / raw)
To: Matthew Wilcox (Oracle), Andrew Morton
Cc: Zi Yan, Lorenzo Stoakes, Rik van Riel, Liam R. Howlett,
Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
syzbot+2d9c96466c978346b55f, Lance Yang, stable
On 1/9/26 05:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
>
> 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock).
> 2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire
> folio_lock.
>
> migrate_pages()
> -> migrate_hugetlbs()
> -> unmap_and_move_huge_page() <- Takes folio_lock!
> -> remove_migration_ptes()
> -> __rmap_walk_file()
> -> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)!
>
> hugetlbfs_fallocate()
> -> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)!
> -> hugetlbfs_zero_partial_page()
> -> filemap_lock_hugetlb_folio()
> -> filemap_lock_folio()
> -> __filemap_get_folio <- Waits for folio_lock!
As raised in the other patch I stumbled over first:
We now handle file-backed folios correctly I think. Could we somehow
also be in trouble for anon folios? Because there, we'd still take the
rmap lock after grabbing the folio lock.
>
> The migration path is the one taking locks in the wrong order according
> to the documentation at the top of mm/rmap.c. So expand the scope of the
> existing i_mmap_lock to cover the calls to remove_migration_ptes() too.
>
> This is (mostly) how it used to be after commit c0d0381ade79. That was
> removed by 336bf30eb765 for both file & anon hugetlb pages when it should
> only have been removed for anon hugetlb pages.
>
> Fixes: 336bf30eb765 (hugetlbfs: fix anon huge page migration race)
> Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
> Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
> Debugged-by: Lance Yang <lance.yang@linux.dev>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: stable@vger.kernel.org
> ---
> mm/migrate.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5169f9717f60..4688b9e38cd2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1458,6 +1458,7 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> int page_was_mapped = 0;
> struct anon_vma *anon_vma = NULL;
> struct address_space *mapping = NULL;
> + enum ttu_flags ttu = 0;
>
> if (folio_ref_count(src) == 1) {
> /* page was freed from under us. So we are done. */
> @@ -1498,8 +1499,6 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
> goto put_anon;
>
> if (folio_mapped(src)) {
> - enum ttu_flags ttu = 0;
> -
> if (!folio_test_anon(src)) {
> /*
> * In shared mappings, try_to_unmap could potentially
> @@ -1516,16 +1515,17 @@ static int unmap_and_move_huge_page(new_folio_t get_new_folio,
>
> try_to_migrate(src, ttu);
> page_was_mapped = 1;
> -
> - if (ttu & TTU_RMAP_LOCKED)
> - i_mmap_unlock_write(mapping);
> }
>
> if (!folio_mapped(src))
> rc = move_to_new_folio(dst, src, mode);
>
> if (page_was_mapped)
> - remove_migration_ptes(src, !rc ? dst : src, 0);
> + remove_migration_ptes(src, !rc ? dst : src,
> + ttu ? RMP_LOCKED : 0);
(ttu & TTU_RMAP_LOCKED) ? RMP_LOCKED : 0)
Would be cleaner, but I see how you clean that up in #2. :)
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
2026-01-09 13:50 ` David Hildenbrand (Red Hat)
@ 2026-01-09 14:44 ` Matthew Wilcox
0 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox @ 2026-01-09 14:44 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Andrew Morton, Zi Yan, Lorenzo Stoakes, Rik van Riel,
Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
syzbot+2d9c96466c978346b55f, Lance Yang, stable
On Fri, Jan 09, 2026 at 02:50:26PM +0100, David Hildenbrand (Red Hat) wrote:
> We now handle file-backed folios correctly I think. Could we somehow
> also be in trouble for anon folios? Because there, we'd still take the
> rmap lock after grabbing the folio lock.
We're now pretty far afield from my area of MM expertise, but since using
AI is now encouraged, I will confidently state that only file-backed
hugetlb folios have this inversion of the rmap lock and folio lock.
anon hugetlb folios follow the normal rules. And it's all because
of PMD sharing, which isn't needed in the anon case but is needed for
file-backed.
So once mshare is in, we can remove this wart.
> > if (page_was_mapped)
> > - remove_migration_ptes(src, !rc ? dst : src, 0);
> > + remove_migration_ptes(src, !rc ? dst : src,
> > + ttu ? RMP_LOCKED : 0);
>
> (ttu & TTU_RMAP_LOCKED) ? RMP_LOCKED : 0)
>
> Would be cleaner, but I see how you clean that up in #2. :)
Yes, that would be more future-proof, but this code has no future ;-)
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Thanks!
* Re: [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios
2026-01-09 4:13 ` [PATCH 1/2] migrate: Correct lock ordering for hugetlb file folios Matthew Wilcox (Oracle)
2026-01-09 4:32 ` Lance Yang
2026-01-09 13:50 ` David Hildenbrand (Red Hat)
@ 2026-01-09 14:57 ` Zi Yan
2 siblings, 0 replies; 6+ messages in thread
From: Zi Yan @ 2026-01-09 14:57 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
Liam R. Howlett, Vlastimil Babka, Harry Yoo, Jann Horn, linux-mm,
syzbot+2d9c96466c978346b55f, Lance Yang, stable
On 8 Jan 2026, at 23:13, Matthew Wilcox (Oracle) wrote:
> Syzbot has found a deadlock (analyzed by Lance Yang):
>
> 1) Task (5749): Holds folio_lock, then tries to acquire i_mmap_rwsem(read lock).
> 2) Task (5754): Holds i_mmap_rwsem(write lock), then tries to acquire
> folio_lock.
>
> migrate_pages()
> -> migrate_hugetlbs()
> -> unmap_and_move_huge_page() <- Takes folio_lock!
> -> remove_migration_ptes()
> -> __rmap_walk_file()
> -> i_mmap_lock_read() <- Waits for i_mmap_rwsem(read lock)!
>
> hugetlbfs_fallocate()
> -> hugetlbfs_punch_hole() <- Takes i_mmap_rwsem(write lock)!
> -> hugetlbfs_zero_partial_page()
> -> filemap_lock_hugetlb_folio()
> -> filemap_lock_folio()
> -> __filemap_get_folio <- Waits for folio_lock!
>
> The migration path is the one taking locks in the wrong order according
> to the documentation at the top of mm/rmap.c. So expand the scope of the
> existing i_mmap_lock to cover the calls to remove_migration_ptes() too.
>
> This is (mostly) how it used to be after commit c0d0381ade79. That was
> removed by 336bf30eb765 for both file & anon hugetlb pages when it should
> only have been removed for anon hugetlb pages.
>
> Fixes: 336bf30eb765 (hugetlbfs: fix anon huge page migration race)
> Reported-by: syzbot+2d9c96466c978346b55f@syzkaller.appspotmail.com
> Link: https://lore.kernel.org/all/68e9715a.050a0220.1186a4.000d.GAE@google.com
> Debugged-by: Lance Yang <lance.yang@linux.dev>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: stable@vger.kernel.org
> ---
> mm/migrate.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
LGTM.
Acked-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi