* FAILED: patch "[PATCH] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when" failed to apply to 6.1-stable tree
@ 2024-04-29 11:34 gregkh
2024-04-30 7:41 ` [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio() Miaohe Lin
2024-05-05 7:09 ` Miaohe Lin
0 siblings, 2 replies; 6+ messages in thread
From: gregkh @ 2024-04-29 11:34 UTC (permalink / raw)
To: linmiaohe, akpm, osalvador, stable; +Cc: stable
The patch below does not apply to the 6.1-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.
To reproduce the conflict and resubmit, you may use the following commands:
git fetch https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/ linux-6.1.y
git checkout FETCH_HEAD
git cherry-pick -x 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af
# <resolve conflicts, build, test, etc.>
git commit -s
git send-email --to '<stable@vger.kernel.org>' --in-reply-to '2024042912-visibly-carpool-70bd@gregkh' --subject-prefix 'PATCH 6.1.y' HEAD^..
Possible dependencies:
52ccdde16b65 ("mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()")
32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
d6ef19e25df2 ("mm/hugetlb: convert update_and_free_page() to folios")
cfd5082b5147 ("mm/hugetlb: convert remove_hugetlb_page() to folios")
1a7cdab59b22 ("mm/hugetlb: convert dissolve_free_huge_page() to folios")
911565b82853 ("mm/hugetlb: convert destroy_compound_gigantic_page() to folios")
cb67f4282bf9 ("mm,thp,rmap: simplify compound page mapcount handling")
dad6a5eb5556 ("mm,hugetlb: use folio fields in second tail page")
0356c4b96f68 ("mm/hugetlb: convert free_huge_page to folios")
de656ed376c4 ("mm/hugetlb_cgroup: convert set_hugetlb_cgroup*() to folios")
f074732d599e ("mm/hugetlb_cgroup: convert hugetlb_cgroup_from_page() to folios")
a098c977722c ("mm/hugetlb_cgroup: convert __set_hugetlb_cgroup() to folios")
4781593d5dba ("mm/hugetlb: unify clearing of RestoreReserve for private pages")
149562f75094 ("mm/hugetlb: add hugetlb_folio_subpool() helpers")
d340625f4849 ("mm: add private field of first tail to struct page and struct folio")
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af Mon Sep 17 00:00:00 2001
From: Miaohe Lin <linmiaohe@huawei.com>
Date: Fri, 19 Apr 2024 16:58:19 +0800
Subject: [PATCH] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when
dissolve_free_hugetlb_folio()
When I did memory failure tests recently, the below warning occurred:
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
Modules linked in: mce_inject hwpoison_inject
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
FS: 00007ff9f32aa740(0000) GS:ffffa1ce5fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff9f3134ba0 CR3: 00000008484e4000 CR4: 00000000000006f0
Call Trace:
<TASK>
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
Kernel panic - not syncing: kernel: panic_on_warn set ...
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
panic+0x326/0x350
check_panic_on_warn+0x4f/0x50
__warn+0x98/0x190
report_bug+0x18e/0x1a0
handle_bug+0x3d/0x70
exc_invalid_op+0x18/0x70
asm_exc_invalid_op+0x1a/0x20
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
After git bisecting and digging into the code, I believe the root cause is
that the _deferred_list field of struct folio is unioned with the
_hugetlb_subpool field. In __update_and_free_hugetlb_folio(),
folio->_deferred_list is initialized, which corrupts
folio->_hugetlb_subpool while the folio is still a hugetlb folio. Later,
free_huge_folio() dereferences the corrupted _hugetlb_subpool and the
above warning fires.
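For reference, the relevant part of struct folio looks roughly like this
(abridged from include/linux/mm_types.h; the exact field layout varies
across kernel versions, so treat this as an illustrative sketch):
	/* fields stored in the second tail page of the folio */
	union {
		struct {
			unsigned long _flags_2;
			unsigned long _head_2;
			void *_hugetlb_subpool;		/* aliases _deferred_list.next */
			void *_hugetlb_cgroup;		/* aliases _deferred_list.prev */
			void *_hugetlb_cgroup_rsvd;
			void *_hugetlb_hwpoison;
		};
		struct {
			unsigned long _flags_2a;
			unsigned long _head_2a;
			struct list_head _deferred_list;
		};
		struct page __page_2;
	};
So INIT_LIST_HEAD(&folio->_deferred_list) stores two self-pointers on top
of _hugetlb_subpool and _hugetlb_cgroup, and hugepage_subpool_put_pages()
later takes spool->lock through that garbage pointer, which is what
lockdep reports above.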
But it is assumed that the hugetlb flag must have been cleared before
folio_put() is called in update_and_free_hugetlb_folio(). This assumption
is broken by the below race:
CPU1 CPU2
dissolve_free_huge_page update_and_free_pages_bulk
update_and_free_hugetlb_folio hugetlb_vmemmap_restore_folios
folio_clear_hugetlb_vmemmap_optimized
clear_flag = folio_test_hugetlb_vmemmap_optimized
if (clear_flag) <-- False, it's already cleared.
__folio_clear_hugetlb(folio) <-- Hugetlb is not cleared.
folio_put
free_huge_folio <-- free_the_page is expected.
list_for_each_entry()
__folio_clear_hugetlb <-- Too late.
Fix this issue by directly checking whether the folio is hugetlb, instead
of checking clear_flag, to close the race window.
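In context, the fixed check looks roughly like this (a sketch based on
the hunk below, not a verbatim copy of mm/hugetlb.c):
	/*
	 * Do not rely on a hugetlb-flag snapshot (clear_dtor) taken
	 * earlier: a racing update_and_free_pages_bulk() may have
	 * already cleared the vmemmap-optimized flag while the hugetlb
	 * flag is still set. Testing the folio itself closes that
	 * window.
	 */
	if (folio_test_hugetlb(folio)) {
		spin_lock_irq(&hugetlb_lock);
		__clear_hugetlb_destructor(h, folio);
		spin_unlock_irq(&hugetlb_lock);
	}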
Link: https://lkml.kernel.org/r/20240419085819.1901645-1-linmiaohe@huawei.com
Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 05371bf54f96..ce7be5c24442 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1781,7 +1781,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
* If vmemmap pages were allocated above, then we need to clear the
* hugetlb destructor under the hugetlb lock.
*/
- if (clear_dtor) {
+ if (folio_test_hugetlb(folio)) {
spin_lock_irq(&hugetlb_lock);
__clear_hugetlb_destructor(h, folio);
spin_unlock_irq(&hugetlb_lock);
* [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()
2024-04-29 11:34 FAILED: patch "[PATCH] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when" failed to apply to 6.1-stable tree gregkh
@ 2024-04-30 7:41 ` Miaohe Lin
2024-04-30 8:19 ` Greg KH
2024-05-05 7:09 ` Miaohe Lin
1 sibling, 1 reply; 6+ messages in thread
From: Miaohe Lin @ 2024-04-30 7:41 UTC (permalink / raw)
To: stable; +Cc: Miaohe Lin, Oscar Salvador, Andrew Morton
When I did memory failure tests recently, the below warning occurred:
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
Modules linked in: mce_inject hwpoison_inject
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
FS: 00007ff9f32aa740(0000) GS:ffffa1ce5fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff9f3134ba0 CR3: 00000008484e4000 CR4: 00000000000006f0
Call Trace:
<TASK>
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
Kernel panic - not syncing: kernel: panic_on_warn set ...
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
panic+0x326/0x350
check_panic_on_warn+0x4f/0x50
__warn+0x98/0x190
report_bug+0x18e/0x1a0
handle_bug+0x3d/0x70
exc_invalid_op+0x18/0x70
asm_exc_invalid_op+0x1a/0x20
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
After git bisecting and digging into the code, I believe the root cause is
that the _deferred_list field of struct folio is unioned with the
_hugetlb_subpool field. In __update_and_free_hugetlb_folio(),
folio->_deferred_list is initialized, which corrupts
folio->_hugetlb_subpool while the folio is still a hugetlb folio. Later,
free_huge_folio() dereferences the corrupted _hugetlb_subpool and the
above warning fires.
But it is assumed that the hugetlb flag must have been cleared before
folio_put() is called in update_and_free_hugetlb_folio(). This assumption
is broken by the below race:
CPU1 CPU2
dissolve_free_huge_page update_and_free_pages_bulk
update_and_free_hugetlb_folio hugetlb_vmemmap_restore_folios
folio_clear_hugetlb_vmemmap_optimized
clear_flag = folio_test_hugetlb_vmemmap_optimized
if (clear_flag) <-- False, it's already cleared.
__folio_clear_hugetlb(folio) <-- Hugetlb is not cleared.
folio_put
free_huge_folio <-- free_the_page is expected.
list_for_each_entry()
__folio_clear_hugetlb <-- Too late.
Fix this issue by directly checking whether the folio is hugetlb, instead
of checking clear_flag, to close the race window.
Link: https://lkml.kernel.org/r/20240419085819.1901645-1-linmiaohe@huawei.com
Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af)
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/hugetlb.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 37288a7f0fa6..8573da127939 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1796,7 +1796,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
* If vmemmap pages were allocated above, then we need to clear the
* hugetlb destructor under the hugetlb lock.
*/
- if (clear_dtor) {
+ if (folio_test_hugetlb(folio)) {
spin_lock_irq(&hugetlb_lock);
__clear_hugetlb_destructor(h, page);
spin_unlock_irq(&hugetlb_lock);
--
2.33.0
* Re: [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()
2024-04-30 7:41 ` [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio() Miaohe Lin
@ 2024-04-30 8:19 ` Greg KH
2024-04-30 8:39 ` Miaohe Lin
0 siblings, 1 reply; 6+ messages in thread
From: Greg KH @ 2024-04-30 8:19 UTC (permalink / raw)
To: Miaohe Lin; +Cc: stable, Oscar Salvador, Andrew Morton
On Tue, Apr 30, 2024 at 03:41:46PM +0800, Miaohe Lin wrote:
> When I did memory failure tests recently, the below warning occurred:
>
> DEBUG_LOCKS_WARN_ON(1)
> WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
> Modules linked in: mce_inject hwpoison_inject
> CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> RIP: 0010:__lock_acquire+0xccb/0x1ca0
> RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
> RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
> RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
> RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
> R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
> R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
> FS: 00007ff9f32aa740(0000) GS:ffffa1ce5fc00000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007ff9f3134ba0 CR3: 00000008484e4000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> lock_acquire+0xbe/0x2d0
> _raw_spin_lock_irqsave+0x3a/0x60
> hugepage_subpool_put_pages.part.0+0xe/0xc0
> free_huge_folio+0x253/0x3f0
> dissolve_free_huge_page+0x147/0x210
> __page_handle_poison+0x9/0x70
> memory_failure+0x4e6/0x8c0
> hard_offline_page_store+0x55/0xa0
> kernfs_fop_write_iter+0x12c/0x1d0
> vfs_write+0x380/0x540
> ksys_write+0x64/0xe0
> do_syscall_64+0xbc/0x1d0
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7ff9f3114887
> RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
> RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
> RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
> R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
> </TASK>
> Kernel panic - not syncing: kernel: panic_on_warn set ...
> CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
> Call Trace:
> <TASK>
> panic+0x326/0x350
> check_panic_on_warn+0x4f/0x50
> __warn+0x98/0x190
> report_bug+0x18e/0x1a0
> handle_bug+0x3d/0x70
> exc_invalid_op+0x18/0x70
> asm_exc_invalid_op+0x1a/0x20
> RIP: 0010:__lock_acquire+0xccb/0x1ca0
> RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
> RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
> RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
> RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
> R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
> R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
> lock_acquire+0xbe/0x2d0
> _raw_spin_lock_irqsave+0x3a/0x60
> hugepage_subpool_put_pages.part.0+0xe/0xc0
> free_huge_folio+0x253/0x3f0
> dissolve_free_huge_page+0x147/0x210
> __page_handle_poison+0x9/0x70
> memory_failure+0x4e6/0x8c0
> hard_offline_page_store+0x55/0xa0
> kernfs_fop_write_iter+0x12c/0x1d0
> vfs_write+0x380/0x540
> ksys_write+0x64/0xe0
> do_syscall_64+0xbc/0x1d0
> entry_SYSCALL_64_after_hwframe+0x77/0x7f
> RIP: 0033:0x7ff9f3114887
> RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
> RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
> RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
> RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
> R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
> </TASK>
>
> After git bisecting and digging into the code, I believe the root cause is
> that the _deferred_list field of struct folio is unioned with the
> _hugetlb_subpool field. In __update_and_free_hugetlb_folio(),
> folio->_deferred_list is initialized, which corrupts
> folio->_hugetlb_subpool while the folio is still a hugetlb folio. Later,
> free_huge_folio() dereferences the corrupted _hugetlb_subpool and the
> above warning fires.
>
> But it is assumed that the hugetlb flag must have been cleared before
> folio_put() is called in update_and_free_hugetlb_folio(). This assumption
> is broken by the below race:
>
> CPU1 CPU2
> dissolve_free_huge_page update_and_free_pages_bulk
> update_and_free_hugetlb_folio hugetlb_vmemmap_restore_folios
> folio_clear_hugetlb_vmemmap_optimized
> clear_flag = folio_test_hugetlb_vmemmap_optimized
> if (clear_flag) <-- False, it's already cleared.
> __folio_clear_hugetlb(folio) <-- Hugetlb is not cleared.
> folio_put
> free_huge_folio <-- free_the_page is expected.
> list_for_each_entry()
> __folio_clear_hugetlb <-- Too late.
>
> Fix this issue by directly checking whether the folio is hugetlb, instead
> of checking clear_flag, to close the race window.
>
> Link: https://lkml.kernel.org/r/20240419085819.1901645-1-linmiaohe@huawei.com
> Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> (cherry picked from commit 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af)
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 37288a7f0fa6..8573da127939 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1796,7 +1796,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
> * If vmemmap pages were allocated above, then we need to clear the
> * hugetlb destructor under the hugetlb lock.
> */
> - if (clear_dtor) {
> + if (folio_test_hugetlb(folio)) {
> spin_lock_irq(&hugetlb_lock);
> __clear_hugetlb_destructor(h, page);
> spin_unlock_irq(&hugetlb_lock);
> --
> 2.33.0
>
>
You failed to at least test-build this change, why? :(
* Re: [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()
2024-04-30 8:19 ` Greg KH
@ 2024-04-30 8:39 ` Miaohe Lin
0 siblings, 0 replies; 6+ messages in thread
From: Miaohe Lin @ 2024-04-30 8:39 UTC (permalink / raw)
To: Greg KH; +Cc: stable, Oscar Salvador, Andrew Morton
On 2024/4/30 16:19, Greg KH wrote:
> On Tue, Apr 30, 2024 at 03:41:46PM +0800, Miaohe Lin wrote:
>> When I did memory failure tests recently, the below warning occurred:
>>
>> DEBUG_LOCKS_WARN_ON(1)
>> WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
>> Modules linked in: mce_inject hwpoison_inject
>> CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>> RIP: 0010:__lock_acquire+0xccb/0x1ca0
>> RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
>> RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
>> RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
>> RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
>> R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
>> R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
>> FS: 00007ff9f32aa740(0000) GS:ffffa1ce5fc00000(0000) knlGS:0000000000000000
>> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 00007ff9f3134ba0 CR3: 00000008484e4000 CR4: 00000000000006f0
>> Call Trace:
>> <TASK>
>> lock_acquire+0xbe/0x2d0
>> _raw_spin_lock_irqsave+0x3a/0x60
>> hugepage_subpool_put_pages.part.0+0xe/0xc0
>> free_huge_folio+0x253/0x3f0
>> dissolve_free_huge_page+0x147/0x210
>> __page_handle_poison+0x9/0x70
>> memory_failure+0x4e6/0x8c0
>> hard_offline_page_store+0x55/0xa0
>> kernfs_fop_write_iter+0x12c/0x1d0
>> vfs_write+0x380/0x540
>> ksys_write+0x64/0xe0
>> do_syscall_64+0xbc/0x1d0
>> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>> RIP: 0033:0x7ff9f3114887
>> RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
>> RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
>> RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
>> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
>> R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
>> </TASK>
>> Kernel panic - not syncing: kernel: panic_on_warn set ...
>> CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
>> Call Trace:
>> <TASK>
>> panic+0x326/0x350
>> check_panic_on_warn+0x4f/0x50
>> __warn+0x98/0x190
>> report_bug+0x18e/0x1a0
>> handle_bug+0x3d/0x70
>> exc_invalid_op+0x18/0x70
>> asm_exc_invalid_op+0x1a/0x20
>> RIP: 0010:__lock_acquire+0xccb/0x1ca0
>> RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
>> RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
>> RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
>> RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
>> R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
>> R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
>> lock_acquire+0xbe/0x2d0
>> _raw_spin_lock_irqsave+0x3a/0x60
>> hugepage_subpool_put_pages.part.0+0xe/0xc0
>> free_huge_folio+0x253/0x3f0
>> dissolve_free_huge_page+0x147/0x210
>> __page_handle_poison+0x9/0x70
>> memory_failure+0x4e6/0x8c0
>> hard_offline_page_store+0x55/0xa0
>> kernfs_fop_write_iter+0x12c/0x1d0
>> vfs_write+0x380/0x540
>> ksys_write+0x64/0xe0
>> do_syscall_64+0xbc/0x1d0
>> entry_SYSCALL_64_after_hwframe+0x77/0x7f
>> RIP: 0033:0x7ff9f3114887
>> RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
>> RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
>> RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
>> RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
>> R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
>> R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
>> </TASK>
>>
>> After git bisecting and digging into the code, I believe the root cause is
>> that the _deferred_list field of struct folio is unioned with the
>> _hugetlb_subpool field. In __update_and_free_hugetlb_folio(),
>> folio->_deferred_list is initialized, which corrupts
>> folio->_hugetlb_subpool while the folio is still a hugetlb folio. Later,
>> free_huge_folio() dereferences the corrupted _hugetlb_subpool and the
>> above warning fires.
>>
>> But it is assumed that the hugetlb flag must have been cleared before
>> folio_put() is called in update_and_free_hugetlb_folio(). This assumption
>> is broken by the below race:
>>
>> CPU1 CPU2
>> dissolve_free_huge_page update_and_free_pages_bulk
>> update_and_free_hugetlb_folio hugetlb_vmemmap_restore_folios
>> folio_clear_hugetlb_vmemmap_optimized
>> clear_flag = folio_test_hugetlb_vmemmap_optimized
>> if (clear_flag) <-- False, it's already cleared.
>> __folio_clear_hugetlb(folio) <-- Hugetlb is not cleared.
>> folio_put
>> free_huge_folio <-- free_the_page is expected.
>> list_for_each_entry()
>> __folio_clear_hugetlb <-- Too late.
>>
>> Fix this issue by directly checking whether the folio is hugetlb, instead
>> of checking clear_flag, to close the race window.
>>
>> Link: https://lkml.kernel.org/r/20240419085819.1901645-1-linmiaohe@huawei.com
>> Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
>> (cherry picked from commit 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af)
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
>> ---
>> mm/hugetlb.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 37288a7f0fa6..8573da127939 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1796,7 +1796,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
>> * If vmemmap pages were allocated above, then we need to clear the
>> * hugetlb destructor under the hugetlb lock.
>> */
>> - if (clear_dtor) {
>> + if (folio_test_hugetlb(folio)) {
>> spin_lock_irq(&hugetlb_lock);
>> __clear_hugetlb_destructor(h, page);
>> spin_unlock_irq(&hugetlb_lock);
>> --
>> 2.33.0
>>
>>
>
> You failed to at least test-build this change, why? :(
Oh, sorry, I lost my mind! I didn't see a conflict when I cherry-picked the
commit, so I thought the problem had been resolved in some other way. I
should have taken a rest before doing this. :( I will reproduce the issue
and test both patches before sending them out. Sorry for the noise.
Thanks.
* [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()
2024-04-29 11:34 FAILED: patch "[PATCH] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when" failed to apply to 6.1-stable tree gregkh
2024-04-30 7:41 ` [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio() Miaohe Lin
@ 2024-05-05 7:09 ` Miaohe Lin
2024-05-15 7:23 ` Greg KH
1 sibling, 1 reply; 6+ messages in thread
From: Miaohe Lin @ 2024-05-05 7:09 UTC (permalink / raw)
To: stable; +Cc: Miaohe Lin, Oscar Salvador, Andrew Morton
When I did memory failure tests recently, the below warning occurred:
DEBUG_LOCKS_WARN_ON(1)
WARNING: CPU: 8 PID: 1011 at kernel/locking/lockdep.c:232 __lock_acquire+0xccb/0x1ca0
Modules linked in: mce_inject hwpoison_inject
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
FS: 00007ff9f32aa740(0000) GS:ffffa1ce5fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff9f3134ba0 CR3: 00000008484e4000 CR4: 00000000000006f0
Call Trace:
<TASK>
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
Kernel panic - not syncing: kernel: panic_on_warn set ...
CPU: 8 PID: 1011 Comm: bash Kdump: loaded Not tainted 6.9.0-rc3-next-20240410-00012-gdb69f219f4be #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
panic+0x326/0x350
check_panic_on_warn+0x4f/0x50
__warn+0x98/0x190
report_bug+0x18e/0x1a0
handle_bug+0x3d/0x70
exc_invalid_op+0x18/0x70
asm_exc_invalid_op+0x1a/0x20
RIP: 0010:__lock_acquire+0xccb/0x1ca0
RSP: 0018:ffffa7a1c7fe3bd0 EFLAGS: 00000082
RAX: 0000000000000000 RBX: eb851eb853975fcf RCX: ffffa1ce5fc1c9c8
RDX: 00000000ffffffd8 RSI: 0000000000000027 RDI: ffffa1ce5fc1c9c0
RBP: ffffa1c6865d3280 R08: ffffffffb0f570a8 R09: 0000000000009ffb
R10: 0000000000000286 R11: ffffffffb0f2ad50 R12: ffffa1c6865d3d10
R13: ffffa1c6865d3c70 R14: 0000000000000000 R15: 0000000000000004
lock_acquire+0xbe/0x2d0
_raw_spin_lock_irqsave+0x3a/0x60
hugepage_subpool_put_pages.part.0+0xe/0xc0
free_huge_folio+0x253/0x3f0
dissolve_free_huge_page+0x147/0x210
__page_handle_poison+0x9/0x70
memory_failure+0x4e6/0x8c0
hard_offline_page_store+0x55/0xa0
kernfs_fop_write_iter+0x12c/0x1d0
vfs_write+0x380/0x540
ksys_write+0x64/0xe0
do_syscall_64+0xbc/0x1d0
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff9f3114887
RSP: 002b:00007ffecbacb458 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007ff9f3114887
RDX: 000000000000000c RSI: 0000564494164e10 RDI: 0000000000000001
RBP: 0000564494164e10 R08: 00007ff9f31d1460 R09: 000000007fffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000000c
R13: 00007ff9f321b780 R14: 00007ff9f3217600 R15: 00007ff9f3216a00
</TASK>
After git bisecting and digging into the code, I believe the root cause is
that the _deferred_list field of struct folio is unioned with the
_hugetlb_subpool field. In __update_and_free_hugetlb_folio(),
folio->_deferred_list is initialized, which corrupts
folio->_hugetlb_subpool while the folio is still a hugetlb folio. Later,
free_huge_folio() dereferences the corrupted _hugetlb_subpool and the
above warning fires.
But it is assumed that the hugetlb flag must have been cleared before
folio_put() is called in update_and_free_hugetlb_folio(). This assumption
is broken by the below race:
CPU1 CPU2
dissolve_free_huge_page update_and_free_pages_bulk
update_and_free_hugetlb_folio hugetlb_vmemmap_restore_folios
folio_clear_hugetlb_vmemmap_optimized
clear_flag = folio_test_hugetlb_vmemmap_optimized
if (clear_flag) <-- False, it's already cleared.
__folio_clear_hugetlb(folio) <-- Hugetlb is not cleared.
folio_put
free_huge_folio <-- free_the_page is expected.
list_for_each_entry()
__folio_clear_hugetlb <-- Too late.
Fix this issue by directly checking whether the folio is hugetlb, instead
of checking clear_flag, to close the race window.
Link: https://lkml.kernel.org/r/20240419085819.1901645-1-linmiaohe@huawei.com
Fixes: 32c877191e02 ("hugetlb: do not clear hugetlb dtor until allocating vmemmap")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from commit 52ccdde16b6540abe43b6f8d8e1e1ec90b0983af)
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
mm/hugetlb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 37288a7f0fa6..87d87c34cdf5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1761,7 +1761,6 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
{
int i;
struct page *subpage;
- bool clear_dtor = HPageVmemmapOptimized(page);
if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
return;
@@ -1796,7 +1795,7 @@ static void __update_and_free_page(struct hstate *h, struct page *page)
* If vmemmap pages were allocated above, then we need to clear the
* hugetlb destructor under the hugetlb lock.
*/
- if (clear_dtor) {
+ if (PageHuge(page)) {
spin_lock_irq(&hugetlb_lock);
__clear_hugetlb_destructor(h, page);
spin_unlock_irq(&hugetlb_lock);
--
2.33.0
* Re: [PATCH 6.1.y] mm/hugetlb: fix DEBUG_LOCKS_WARN_ON(1) when dissolve_free_hugetlb_folio()
2024-05-05 7:09 ` Miaohe Lin
@ 2024-05-15 7:23 ` Greg KH
0 siblings, 0 replies; 6+ messages in thread
From: Greg KH @ 2024-05-15 7:23 UTC (permalink / raw)
To: Miaohe Lin; +Cc: stable, Oscar Salvador, Andrew Morton
On Sun, May 05, 2024 at 03:09:31PM +0800, Miaohe Lin wrote:
> When I did memory failure tests recently, the below warning occurred:
>
Both backports now queued up, thanks.
greg k-h