* [syzbot ci] Re: Eliminate Dying Memory Cgroup
  [not found] <cover.1761658310.git.zhengqi.arch@bytedance.com>
@ 2025-10-28 20:58 ` syzbot ci
  2025-10-29  0:22   ` Harry Yoo
  0 siblings, 1 reply; 6+ messages in thread
From: syzbot ci @ 2025-10-28 20:58 UTC (permalink / raw)
  To: akpm, axelrasmussen, cgroups, chengming.zhou, david, hannes,
	harry.yoo, hughd, imran.f.khan, kamalesh.babulal, linux-kernel,
	linux-mm, lorenzo.stoakes, mhocko, muchun.song, nphamcs, qi.zheng,
	roman.gushchin, shakeel.butt, songmuchun, weixugc, yuanchu,
	zhengqi.arch, ziy
  Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v1] Eliminate Dying Memory Cgroup
https://lore.kernel.org/all/cover.1761658310.git.zhengqi.arch@bytedance.com
* [PATCH v1 01/26] mm: memcontrol: remove dead code of checking parent memory cgroup
* [PATCH v1 02/26] mm: workingset: use folio_lruvec() in workingset_refault()
* [PATCH v1 03/26] mm: rename unlock_page_lruvec_irq and its variants
* [PATCH v1 04/26] mm: vmscan: refactor move_folios_to_lru()
* [PATCH v1 05/26] mm: memcontrol: allocate object cgroup for non-kmem case
* [PATCH v1 06/26] mm: memcontrol: return root object cgroup for root memory cgroup
* [PATCH v1 07/26] mm: memcontrol: prevent memory cgroup release in get_mem_cgroup_from_folio()
* [PATCH v1 08/26] buffer: prevent memory cgroup release in folio_alloc_buffers()
* [PATCH v1 09/26] writeback: prevent memory cgroup release in writeback module
* [PATCH v1 10/26] mm: memcontrol: prevent memory cgroup release in count_memcg_folio_events()
* [PATCH v1 11/26] mm: page_io: prevent memory cgroup release in page_io module
* [PATCH v1 12/26] mm: migrate: prevent memory cgroup release in folio_migrate_mapping()
* [PATCH v1 13/26] mm: mglru: prevent memory cgroup release in mglru
* [PATCH v1 14/26] mm: memcontrol: prevent memory cgroup release in mem_cgroup_swap_full()
* [PATCH v1 15/26] mm: workingset: prevent memory cgroup release in lru_gen_eviction()
* [PATCH v1 16/26] mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}()
* [PATCH v1 17/26] mm: workingset: prevent lruvec release in workingset_refault()
* [PATCH v1 18/26] mm: zswap: prevent lruvec release in zswap_folio_swapin()
* [PATCH v1 19/26] mm: swap: prevent lruvec release in swap module
* [PATCH v1 20/26] mm: workingset: prevent lruvec release in workingset_activation()
* [PATCH v1 21/26] mm: memcontrol: prepare for reparenting LRU pages for lruvec lock
* [PATCH v1 22/26] mm: vmscan: prepare for reparenting traditional LRU folios
* [PATCH v1 23/26] mm: vmscan: prepare for reparenting MGLRU folios
* [PATCH v1 24/26] mm: memcontrol: refactor memcg_reparent_objcgs()
* [PATCH v1 25/26] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios
* [PATCH v1 26/26] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance helpers

and found the following issue:
WARNING in folio_memcg

Full report is available here:
https://ci.syzbot.org/series/0d48a77a-fb4f-485d-9fd6-086afd6fb650

***

WARNING in folio_memcg

tree:      mm-new
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base:      b227c04932039bccc21a0a89cd6df50fa57e4716
arch:      amd64
compiler:  Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
config:    https://ci.syzbot.org/builds/503d7034-ae99-44d1-8fb2-62e7ef5e1c7c/config
C repro:   https://ci.syzbot.org/findings/880c374a-1b49-436e-9be2-63d5e2c6b6ab/c_repro
syz repro: https://ci.syzbot.org/findings/880c374a-1b49-436e-9be2-63d5e2c6b6ab/syz_repro

exFAT-fs (loop0): failed to load upcase table (idx : 0x00010000, chksum : 0xe5674ec2, utbl_chksum : 0xe619d30d)
exFAT-fs (loop0): failed to load alloc-bitmap
exFAT-fs (loop0): failed to recognize exfat type
------------[ cut here ]------------
WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 obj_cgroup_memcg include/linux/memcontrol.h:380 [inline]
WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 folio_memcg+0x148/0x1c0 include/linux/memcontrol.h:434
Modules linked in:
CPU: 1 UID: 0 PID: 5965 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:obj_cgroup_memcg include/linux/memcontrol.h:380 [inline]
RIP: 0010:folio_memcg+0x148/0x1c0 include/linux/memcontrol.h:434
Code: 48 c1 e8 03 42 80 3c 20 00 74 08 48 89 df e8 5f c8 06 00 48 8b 03 5b 41 5c 41 5e 41 5f 5d e9 cf 89 2a 09 cc e8 a9 bb a0 ff 90 <0f> 0b 90 eb ca 44 89 f9 80 e1 07 80 c1 03 38 c1 0f 8c ef fe ff ff
RSP: 0018:ffffc90003ec66b0 EFLAGS: 00010293
RAX: ffffffff821f4b57 RBX: ffff888108b31480 RCX: ffff88816be91d00
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 0000000000000000 R08: ffff88816be91d00 R09: 0000000000000002
R10: 00000000fffffff0 R11: 0000000000000000 R12: dffffc0000000000
R13: 00000000ffffffe4 R14: ffffea0006d5f840 R15: ffffea0006d5f870
FS:  000055555db87500(0000) GS:ffff8882a9f35000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2ee63fff CR3: 000000010c308000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 zswap_compress mm/zswap.c:900 [inline]
 zswap_store_page mm/zswap.c:1430 [inline]
 zswap_store+0xfa2/0x1f80 mm/zswap.c:1541
 swap_writeout+0x6e8/0xf20 mm/page_io.c:275
 writeout mm/vmscan.c:651 [inline]
 pageout mm/vmscan.c:699 [inline]
 shrink_folio_list+0x34ec/0x4c40 mm/vmscan.c:1418
 reclaim_folio_list+0xeb/0x500 mm/vmscan.c:2196
 reclaim_pages+0x454/0x520 mm/vmscan.c:2233
 madvise_cold_or_pageout_pte_range+0x1974/0x1d00 mm/madvise.c:565
 walk_pmd_range mm/pagewalk.c:130 [inline]
 walk_pud_range mm/pagewalk.c:224 [inline]
 walk_p4d_range mm/pagewalk.c:262 [inline]
 walk_pgd_range+0xfe9/0x1d40 mm/pagewalk.c:303
 __walk_page_range+0x14c/0x710 mm/pagewalk.c:410
 walk_page_range_vma+0x393/0x440 mm/pagewalk.c:717
 madvise_pageout_page_range mm/madvise.c:624 [inline]
 madvise_pageout mm/madvise.c:649 [inline]
 madvise_vma_behavior+0x311f/0x3a10 mm/madvise.c:1352
 madvise_walk_vmas+0x51c/0xa30 mm/madvise.c:1669
 madvise_do_behavior+0x38e/0x550 mm/madvise.c:1885
 do_madvise+0x1bc/0x270 mm/madvise.c:1978
 __do_sys_madvise mm/madvise.c:1987 [inline]
 __se_sys_madvise mm/madvise.c:1985 [inline]
 __x64_sys_madvise+0xa7/0xc0 mm/madvise.c:1985
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fccac38efc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffd9cc58708 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007fccac5e5fa0 RCX: 00007fccac38efc9
RDX: 0000000000000015 RSI: 7fffffffffffffff RDI: 0000200000000000
RBP: 00007fccac411f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fccac5e5fa0 R14: 00007fccac5e5fa0 R15: 0000000000000003
 </TASK>

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [syzbot ci] Re: Eliminate Dying Memory Cgroup
  2025-10-28 20:58 ` [syzbot ci] Re: Eliminate Dying Memory Cgroup syzbot ci
@ 2025-10-29  0:22   ` Harry Yoo
  2025-10-29  0:25     ` syzbot ci
  2025-10-29  3:12     ` Qi Zheng
  0 siblings, 2 replies; 6+ messages in thread
From: Harry Yoo @ 2025-10-29 0:22 UTC (permalink / raw)
  To: syzbot ci
  Cc: akpm, axelrasmussen, cgroups, chengming.zhou, david, hannes,
	hughd, imran.f.khan, kamalesh.babulal, linux-kernel, linux-mm,
	lorenzo.stoakes, mhocko, muchun.song, nphamcs, qi.zheng,
	roman.gushchin, shakeel.butt, songmuchun, weixugc, yuanchu,
	zhengqi.arch, ziy, syzbot, syzkaller-bugs

On Tue, Oct 28, 2025 at 01:58:33PM -0700, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v1] Eliminate Dying Memory Cgroup
> https://lore.kernel.org/all/cover.1761658310.git.zhengqi.arch@bytedance.com
[...]
> and found the following issue:
> WARNING in folio_memcg
>
> Full report is available here:
> https://ci.syzbot.org/series/0d48a77a-fb4f-485d-9fd6-086afd6fb650
[...]
> ------------[ cut here ]------------
> WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 obj_cgroup_memcg include/linux/memcontrol.h:380 [inline]
> WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 folio_memcg+0x148/0x1c0 include/linux/memcontrol.h:434

This is understandable as the code snippet was added fairly recently
and is easy to miss during rebasing.

#syz test

diff --git a/mm/zswap.c b/mm/zswap.c
index a341814468b9..738d914e5354 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -896,11 +896,14 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	 * to the active LRU list in the case.
 	 */
 	if (comp_ret || !dlen || dlen >= PAGE_SIZE) {
+		rcu_read_lock();
 		if (!mem_cgroup_zswap_writeback_enabled(
 					folio_memcg(page_folio(page)))) {
+			rcu_read_unlock();
 			comp_ret = comp_ret ? comp_ret : -EINVAL;
 			goto unlock;
 		}
+		rcu_read_unlock();
 		comp_ret = 0;
 		dlen = PAGE_SIZE;
 		dst = kmap_local_page(page);

^ permalink raw reply related	[flat|nested] 6+ messages in thread
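The fix above hinges on one invariant: once the folio→memcg lookup goes through an object cgroup, folio_memcg() may only be called under rcu_read_lock(), and the lock must be dropped on both the early-return and fall-through paths. That control flow can be sketched as a plain userspace model; every name here (the depth counter, the model_* helpers, the -22 stand-in for -EINVAL) is an illustrative assumption, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the patched zswap_compress() error path.
 * rcu_read_lock()/rcu_read_unlock() are modeled as a depth counter so
 * we can check that every acquire is paired with a release on both
 * exit paths. */
static int rcu_depth;
static void model_rcu_read_lock(void)   { rcu_depth++; }
static void model_rcu_read_unlock(void) { rcu_depth--; }

/* Stand-in for mem_cgroup_zswap_writeback_enabled(folio_memcg(...)):
 * only valid while the model RCU lock is held. The assert plays the
 * role of the WARN in obj_cgroup_memcg(). */
static bool writeback_enabled_flag;
static bool model_writeback_enabled(void)
{
    assert(rcu_depth > 0); /* folio_memcg() requires RCU protection */
    return writeback_enabled_flag;
}

/* Mirrors the patched branch: on compression failure, consult the
 * memcg under RCU to decide between failing outright (-22 models
 * -EINVAL) and falling back to storing the page uncompressed (0). */
static int handle_compress_failure(int comp_ret)
{
    model_rcu_read_lock();
    if (!model_writeback_enabled()) {
        model_rcu_read_unlock();
        return comp_ret ? comp_ret : -22;
    }
    model_rcu_read_unlock();
    return 0; /* fall back to a PAGE_SIZE incompressible copy */
}
```

Running the model with the guard removed from either path trips the assert or leaves rcu_depth unbalanced, which is the userspace analogue of the syzbot WARNING.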
* Re: Re: [syzbot ci] Re: Eliminate Dying Memory Cgroup
  2025-10-29  0:22 ` Harry Yoo
@ 2025-10-29  0:25   ` syzbot ci
  2025-10-29  3:12   ` Qi Zheng
  1 sibling, 0 replies; 6+ messages in thread
From: syzbot ci @ 2025-10-29 0:25 UTC (permalink / raw)
  To: harry.yoo
  Cc: akpm, axelrasmussen, cgroups, chengming.zhou, david, hannes,
	harry.yoo, hughd, imran.f.khan, kamalesh.babulal, linux-kernel,
	linux-mm, lorenzo.stoakes, mhocko, muchun.song, nphamcs, qi.zheng,
	roman.gushchin, shakeel.butt, songmuchun, syzbot, syzkaller-bugs,
	weixugc, yuanchu, zhengqi.arch, ziy

Unknown command

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [syzbot ci] Re: Eliminate Dying Memory Cgroup
  2025-10-29  0:22 ` Harry Yoo
@ 2025-10-29  3:12   ` Qi Zheng
  1 sibling, 0 replies; 6+ messages in thread
From: Qi Zheng @ 2025-10-29 3:12 UTC (permalink / raw)
  To: Harry Yoo, syzbot ci
  Cc: akpm, axelrasmussen, cgroups, chengming.zhou, david, hannes,
	hughd, imran.f.khan, kamalesh.babulal, linux-kernel, linux-mm,
	lorenzo.stoakes, mhocko, muchun.song, nphamcs, roman.gushchin,
	shakeel.butt, songmuchun, weixugc, yuanchu, zhengqi.arch, ziy,
	syzbot, syzkaller-bugs

Hi Harry,

On 10/29/25 8:22 AM, Harry Yoo wrote:
> On Tue, Oct 28, 2025 at 01:58:33PM -0700, syzbot ci wrote:
>> syzbot ci has tested the following series
>>
>> [v1] Eliminate Dying Memory Cgroup
>> https://lore.kernel.org/all/cover.1761658310.git.zhengqi.arch@bytedance.com
[...]
>> and found the following issue:
>> WARNING in folio_memcg
>>
>> Full report is available here:
>> https://ci.syzbot.org/series/0d48a77a-fb4f-485d-9fd6-086afd6fb650
[...]
>> ------------[ cut here ]------------
>> WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 obj_cgroup_memcg include/linux/memcontrol.h:380 [inline]
>> WARNING: CPU: 1 PID: 5965 at ./include/linux/memcontrol.h:380 folio_memcg+0x148/0x1c0 include/linux/memcontrol.h:434
>
> This is understandable as the code snippet was added fairly recently
> and is easy to miss during rebasing.

My mistake, I should have rechecked it.

>
> #syz test
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index a341814468b9..738d914e5354 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -896,11 +896,14 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
>  	 * to the active LRU list in the case.
>  	 */
>  	if (comp_ret || !dlen || dlen >= PAGE_SIZE) {
> +		rcu_read_lock();
>  		if (!mem_cgroup_zswap_writeback_enabled(
>  					folio_memcg(page_folio(page)))) {
> +			rcu_read_unlock();
>  			comp_ret = comp_ret ? comp_ret : -EINVAL;
>  			goto unlock;
>  		}
> +		rcu_read_unlock();
>  		comp_ret = 0;
>  		dlen = PAGE_SIZE;
>  		dst = kmap_local_page(page);

LGTM, will do in the next version. Thanks!

^ permalink raw reply	[flat|nested] 6+ messages in thread
[parent not found: <cover.1768389889.git.zhengqi.arch@bytedance.com>]
* [syzbot ci] Re: Eliminate Dying Memory Cgroup
  [not found] <cover.1768389889.git.zhengqi.arch@bytedance.com>
@ 2026-01-14 17:07 ` syzbot ci
  2026-01-15  3:47   ` Qi Zheng
  0 siblings, 1 reply; 6+ messages in thread
From: syzbot ci @ 2026-01-14 17:07 UTC (permalink / raw)
  To: akpm, apais, axelrasmussen, cgroups, chengming.zhou, chenridong,
	david, hamzamahfooz, hannes, harry.yoo, hughd, imran.f.khan,
	kamalesh.babulal, lance.yang, linux-kernel, linux-mm,
	lorenzo.stoakes, mhocko, mkoutny, muchun.song, nphamcs, qi.zheng,
	roman.gushchin, shakeel.butt, songmuchun, weixugc, yosry.ahmed,
	yuanchu, zhengqi.arch, ziy
  Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v3] Eliminate Dying Memory Cgroup
https://lore.kernel.org/all/cover.1768389889.git.zhengqi.arch@bytedance.com
* [PATCH v3 01/30] mm: memcontrol: remove dead code of checking parent memory cgroup
* [PATCH v3 02/30] mm: workingset: use folio_lruvec() in workingset_refault()
* [PATCH v3 03/30] mm: rename unlock_page_lruvec_irq and its variants
* [PATCH v3 04/30] mm: vmscan: prepare for the refactoring the move_folios_to_lru()
* [PATCH v3 05/30] mm: vmscan: refactor move_folios_to_lru()
* [PATCH v3 06/30] mm: memcontrol: allocate object cgroup for non-kmem case
* [PATCH v3 07/30] mm: memcontrol: return root object cgroup for root memory cgroup
* [PATCH v3 08/30] mm: memcontrol: prevent memory cgroup release in get_mem_cgroup_from_folio()
* [PATCH v3 09/30] buffer: prevent memory cgroup release in folio_alloc_buffers()
* [PATCH v3 10/30] writeback: prevent memory cgroup release in writeback module
* [PATCH v3 11/30] mm: memcontrol: prevent memory cgroup release in count_memcg_folio_events()
* [PATCH v3 12/30] mm: page_io: prevent memory cgroup release in page_io module
* [PATCH v3 13/30] mm: migrate: prevent memory cgroup release in folio_migrate_mapping()
* [PATCH v3 14/30] mm: mglru: prevent memory cgroup release in mglru
* [PATCH v3 15/30] mm: memcontrol: prevent memory cgroup release in mem_cgroup_swap_full()
* [PATCH v3 16/30] mm: workingset: prevent memory cgroup release in lru_gen_eviction()
* [PATCH v3 17/30] mm: thp: prevent memory cgroup release in folio_split_queue_lock{_irqsave}()
* [PATCH v3 18/30] mm: zswap: prevent memory cgroup release in zswap_compress()
* [PATCH v3 19/30] mm: workingset: prevent lruvec release in workingset_refault()
* [PATCH v3 20/30] mm: zswap: prevent lruvec release in zswap_folio_swapin()
* [PATCH v3 21/30] mm: swap: prevent lruvec release in lru_gen_clear_refs()
* [PATCH v3 22/30] mm: workingset: prevent lruvec release in workingset_activation()
* [PATCH v3 23/30] mm: do not open-code lruvec lock
* [PATCH v3 24/30] mm: memcontrol: prepare for reparenting LRU pages for lruvec lock
* [PATCH v3 25/30] mm: vmscan: prepare for reparenting traditional LRU folios
* [PATCH v3 26/30] mm: vmscan: prepare for reparenting MGLRU folios
* [PATCH v3 27/30] mm: memcontrol: refactor memcg_reparent_objcgs()
* [PATCH v3 28/30] mm: memcontrol: prepare for reparenting state_local
* [PATCH v3 29/30] mm: memcontrol: eliminate the problem of dying memory cgroup for LRU folios
* [PATCH v3 30/30] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance helpers

and found the following issue:
UBSAN: array-index-out-of-bounds in reparent_memcg_lruvec_state_local

Full report is available here:
https://ci.syzbot.org/series/45c0b58d-255a-4579-9880-497bdbd4fb99

***

UBSAN: array-index-out-of-bounds in reparent_memcg_lruvec_state_local

tree:      linux-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next
base:      b775e489bec70895b7ef6b66927886bbac79598f
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/4d8819ab-0f94-42e8-bd70-87c7e83c37d2/config
syz repro: https://ci.syzbot.org/findings/7850f5dd-4ac7-4b74-85ff-a75ddddebbee/syz_repro

------------[ cut here ]------------
UBSAN: array-index-out-of-bounds in mm/memcontrol.c:530:3
index 33 is out of range for type 'long[33]'
CPU: 1 UID: 0 PID: 31 Comm: kworker/1:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: cgroup_offline css_killed_work_fn
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 ubsan_epilogue+0xa/0x30 lib/ubsan.c:233
 __ubsan_handle_out_of_bounds+0xe8/0xf0 lib/ubsan.c:455
 reparent_memcg_lruvec_state_local+0x34f/0x460 mm/memcontrol.c:530
 reparent_memcg1_lruvec_state_local+0xa7/0xc0 mm/memcontrol-v1.c:1917
 reparent_state_local mm/memcontrol.c:242 [inline]
 memcg_reparent_objcgs mm/memcontrol.c:299 [inline]
 mem_cgroup_css_offline+0xc7c/0xc90 mm/memcontrol.c:4054
 offline_css kernel/cgroup/cgroup.c:5760 [inline]
 css_killed_work_fn+0x12f/0x570 kernel/cgroup/cgroup.c:6055
 process_one_work+0x949/0x15a0 kernel/workqueue.c:3279
 process_scheduled_works kernel/workqueue.c:3362 [inline]
 worker_thread+0x9af/0xee0 kernel/workqueue.c:3443
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
---[ end trace ]---
Kernel panic - not syncing: UBSAN: panic_on_warn set ...
CPU: 1 UID: 0 PID: 31 Comm: kworker/1:1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: cgroup_offline css_killed_work_fn
Call Trace:
 <TASK>
 vpanic+0x1e0/0x670 kernel/panic.c:490
 panic+0xc5/0xd0 kernel/panic.c:627
 check_panic_on_warn+0x89/0xb0 kernel/panic.c:377
 __ubsan_handle_out_of_bounds+0xe8/0xf0 lib/ubsan.c:455
 reparent_memcg_lruvec_state_local+0x34f/0x460 mm/memcontrol.c:530
 reparent_memcg1_lruvec_state_local+0xa7/0xc0 mm/memcontrol-v1.c:1917
 reparent_state_local mm/memcontrol.c:242 [inline]
 memcg_reparent_objcgs mm/memcontrol.c:299 [inline]
 mem_cgroup_css_offline+0xc7c/0xc90 mm/memcontrol.c:4054
 offline_css kernel/cgroup/cgroup.c:5760 [inline]
 css_killed_work_fn+0x12f/0x570 kernel/cgroup/cgroup.c:6055
 process_one_work+0x949/0x15a0 kernel/workqueue.c:3279
 process_scheduled_works kernel/workqueue.c:3362 [inline]
 worker_thread+0x9af/0xee0 kernel/workqueue.c:3443
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Kernel Offset: disabled
Rebooting in 86400 seconds..

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.

^ permalink raw reply	[flat|nested] 6+ messages in thread
* Re: [syzbot ci] Re: Eliminate Dying Memory Cgroup
  2026-01-14 17:07 ` syzbot ci
@ 2026-01-15  3:47   ` Qi Zheng
  0 siblings, 0 replies; 6+ messages in thread
From: Qi Zheng @ 2026-01-15 3:47 UTC (permalink / raw)
  To: syzbot ci, akpm, apais, axelrasmussen, cgroups, chengming.zhou,
	chenridong, david, hamzamahfooz, hannes, harry.yoo, hughd,
	imran.f.khan, kamalesh.babulal, lance.yang, linux-kernel,
	linux-mm, lorenzo.stoakes, mhocko, mkoutny, muchun.song, nphamcs,
	roman.gushchin, shakeel.butt, songmuchun, weixugc, yosry.ahmed,
	yuanchu, zhengqi.arch, ziy
  Cc: syzbot, syzkaller-bugs

On 1/15/26 1:07 AM, syzbot ci wrote:
> syzbot ci has tested the following series
>
> [v3] Eliminate Dying Memory Cgroup
> https://lore.kernel.org/all/cover.1768389889.git.zhengqi.arch@bytedance.com
[...]
> and found the following issue:
> UBSAN: array-index-out-of-bounds in reparent_memcg_lruvec_state_local
>
> Full report is available here:
> https://ci.syzbot.org/series/45c0b58d-255a-4579-9880-497bdbd4fb99
[...]
> ------------[ cut here ]------------
> UBSAN: array-index-out-of-bounds in mm/memcontrol.c:530:3
> index 33 is out of range for type 'long[33]'

Oh, the size of lruvec_stats->state_local is NR_MEMCG_NODE_STAT_ITEMS,
but memcg1_stats contains MEMCG_SWAP, whose index falls outside that
array.

It seems that only the following items need to be reparented:

1) NR_LRU_LISTS
2) NR_SLAB_RECLAIMABLE_B + NR_SLAB_UNRECLAIMABLE_B

But for 2), since slab pages have been reparented for a long time, the
problem seems to have existed all along. So this patchset will only
handle 1).

> Call Trace:
>  <TASK>
>  __ubsan_handle_out_of_bounds+0xe8/0xf0 lib/ubsan.c:455
>  reparent_memcg_lruvec_state_local+0x34f/0x460 mm/memcontrol.c:530
>  reparent_memcg1_lruvec_state_local+0xa7/0xc0 mm/memcontrol-v1.c:1917
>  reparent_state_local mm/memcontrol.c:242 [inline]
>  memcg_reparent_objcgs mm/memcontrol.c:299 [inline]
>  mem_cgroup_css_offline+0xc7c/0xc90 mm/memcontrol.c:4054
[...]

^ permalink raw reply	[flat|nested] 6+ messages in thread
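Qi Zheng's diagnosis can be reproduced in miniature: the report says the per-node state array is `long[33]` and the offending index is 33, i.e. the memcg1 item table contains one entry (MEMCG_SWAP) that lands one slot past the end. The sketch below is a userspace model only — the item indices 0, 5, and 12 and the function shape are illustrative assumptions, not the kernel code; only the 33/33 pair comes from the report:

```c
#include <assert.h>
#include <stddef.h>

#define NR_MEMCG_NODE_STAT_ITEMS 33 /* array size, per the UBSAN report */
#define MEMCG_SWAP 33               /* extension item: index == array size */

/* Per-node local state, as in lruvec_stats->state_local. */
static long state_local[NR_MEMCG_NODE_STAT_ITEMS];

/* Items the memcg1 path wants to flush; MEMCG_SWAP is the one whose
 * index does not fit in state_local. The in-range indices here are
 * arbitrary placeholders. */
static const int memcg1_stats[] = { 0, 5, 12, MEMCG_SWAP };

/* Reparenting pass with the bounds check the code needs: move each
 * in-range counter into the parent and skip anything out of range.
 * Returns how many items were skipped. */
static int reparent_state_local(long *parent)
{
    int skipped = 0;

    for (size_t i = 0; i < sizeof(memcg1_stats) / sizeof(memcg1_stats[0]); i++) {
        int idx = memcg1_stats[i];

        if (idx < 0 || idx >= NR_MEMCG_NODE_STAT_ITEMS) {
            skipped++; /* would be the UBSAN OOB access without this guard */
            continue;
        }
        parent[idx] += state_local[idx];
        state_local[idx] = 0;
    }
    return skipped;
}
```

Without the `idx >= NR_MEMCG_NODE_STAT_ITEMS` guard, the last entry reads `state_local[33]`, exactly the access UBSAN flagged; the reply above instead proposes reparenting only the LRU counters, which are all in range by construction.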
Thread overview: 6+ messages
  [not found] <cover.1761658310.git.zhengqi.arch@bytedance.com>
  2025-10-28 20:58 ` [syzbot ci] Re: Eliminate Dying Memory Cgroup syzbot ci
  2025-10-29  0:22   ` Harry Yoo
  2025-10-29  0:25     ` syzbot ci
  2025-10-29  3:12     ` Qi Zheng
  [not found] <cover.1768389889.git.zhengqi.arch@bytedance.com>
  2026-01-14 17:07 ` [syzbot ci] Re: Eliminate Dying Memory Cgroup syzbot ci
  2026-01-15  3:47   ` Qi Zheng