* [syzbot ci] Re: Virtual Swap Space
  [not found] <20260318222953.441758-1-nphamcs@gmail.com>
@ 2026-03-19 21:36 ` syzbot ci
  2026-03-19 23:26   ` Nhat Pham
  0 siblings, 1 reply; 3+ messages in thread

From: syzbot ci @ 2026-03-19 21:36 UTC (permalink / raw)
To: akpm, apopple, axelrasmussen, baohua, baolin.wang, bhe, byungchul,
	cgroups, chengming.zhou, chrisl, corbet, david, dev.jain, gourry,
	hannes, hughd, jannh, joshua.hahnjy, kasong, kernel-team, lance.yang,
	lenb, liam.howlett, linux-doc, linux-kernel, linux-mm, linux-pm,
	lorenzo.stoakes, matthew.brost, mhocko, muchun.song, npache, nphamcs,
	pavel, peterx, peterz, pfalcato, rafael, rakie.kim, riel,
	roman.gushchin, rppt, ryan.roberts, shakeel.butt, shikemeng, surenb,
	tglx, vbabka, weixugc, ying.huang
Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v4] Virtual Swap Space
https://lore.kernel.org/all/20260318222953.441758-1-nphamcs@gmail.com
* [PATCH v4 01/21] mm/swap: decouple swap cache from physical swap infrastructure
* [PATCH v4 02/21] swap: rearrange the swap header file
* [PATCH v4 03/21] mm: swap: add an abstract API for locking out swapoff
* [PATCH v4 04/21] zswap: add new helpers for zswap entry operations
* [PATCH v4 05/21] mm/swap: add a new function to check if a swap entry is in swap cached.
* [PATCH v4 06/21] mm: swap: add a separate type for physical swap slots
* [PATCH v4 07/21] mm: create scaffolds for the new virtual swap implementation
* [PATCH v4 08/21] zswap: prepare zswap for swap virtualization
* [PATCH v4 09/21] mm: swap: allocate a virtual swap slot for each swapped out page
* [PATCH v4 10/21] swap: move swap cache to virtual swap descriptor
* [PATCH v4 11/21] zswap: move zswap entry management to the virtual swap descriptor
* [PATCH v4 12/21] swap: implement the swap_cgroup API using virtual swap
* [PATCH v4 13/21] swap: manage swap entry lifecycle at the virtual swap layer
* [PATCH v4 14/21] mm: swap: decouple virtual swap slot from backing store
* [PATCH v4 15/21] zswap: do not start zswap shrinker if there is no physical swap slots
* [PATCH v4 16/21] swap: do not unnecesarily pin readahead swap entries
* [PATCH v4 17/21] swapfile: remove zeromap bitmap
* [PATCH v4 18/21] memcg: swap: only charge physical swap slots
* [PATCH v4 19/21] swap: simplify swapoff using virtual swap
* [PATCH v4 20/21] swapfile: replace the swap map with bitmaps
* [PATCH v4 21/21] vswap: batch contiguous vswap free calls

and found the following issue:
possible deadlock in vswap_iter

Full report is available here:
https://ci.syzbot.org/series/f8238a2a-370e-404d-b3f7-5945b574bd63

***

possible deadlock in vswap_iter

tree:      bpf-next
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next.git
base:      05f7e89ab9731565d8a62e3b5d1ec206485eeb0b
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/cf1517a6-d391-46d8-bfbe-98e6be6b93ce/config
syz repro: https://ci.syzbot.org/findings/b4e84ae7-17d4-4bf8-9c3f-4c13b10a1e52/syz_repro

============================================
WARNING: possible recursive locking detected
syzkaller #0 Not tainted
--------------------------------------------
syz.1.18/6001 is trying to acquire lock:
ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock.h:351 [inline]
ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: vswap_iter+0xfa/0x1b0 mm/vswap.c:274

but task is already holding lock:
ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock.h:376 [inline]
ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: swap_cache_lock_irq+0xe2/0x190 mm/vswap.c:1529

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&cluster->lock);
  lock(&cluster->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by syz.1.18/6001:
 #0: ffff8881bb523440 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:391 [inline]
 #0: ffff8881bb523440 (&mm->mmap_lock){++++}-{4:4}, at: madvise_lock+0x152/0x2e0 mm/madvise.c:1789
 #1: ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock.h:376 [inline]
 #1: ffff88811fba0018 (&cluster->lock){+.+.}-{3:3}, at: swap_cache_lock_irq+0xe2/0x190 mm/vswap.c:1529
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: vswap_cgroup_record+0x41/0x440 mm/vswap.c:1909

stack backtrace:
CPU: 0 UID: 0 PID: 6001 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_deadlock_bug+0x279/0x290 kernel/locking/lockdep.c:3041
 check_deadlock kernel/locking/lockdep.c:3093 [inline]
 validate_chain kernel/locking/lockdep.c:3895 [inline]
 __lock_acquire+0x253f/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
 _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:351 [inline]
 vswap_iter+0xfa/0x1b0 mm/vswap.c:274
 vswap_cgroup_record+0xeb/0x440 mm/vswap.c:1910
 swap_cgroup_record+0xc5/0x130 mm/vswap.c:1933
 memcg1_swapout+0x358/0x9e0 mm/memcontrol-v1.c:623
 __remove_mapping+0x7d4/0xa70 mm/vmscan.c:762
 shrink_folio_list+0x287c/0x5160 mm/vmscan.c:1518
 reclaim_folio_list+0x100/0x400 mm/vmscan.c:2198
 reclaim_pages+0x45b/0x530 mm/vmscan.c:2235
 madvise_cold_or_pageout_pte_range+0x1eac/0x2220 mm/madvise.c:444
 walk_pmd_range mm/pagewalk.c:130 [inline]
 walk_pud_range mm/pagewalk.c:224 [inline]
 walk_p4d_range mm/pagewalk.c:262 [inline]
 walk_pgd_range+0x1032/0x1d30 mm/pagewalk.c:303
 __walk_page_range+0x14c/0x710 mm/pagewalk.c:410
 walk_page_range_vma_unsafe+0x309/0x410 mm/pagewalk.c:714
 madvise_pageout_page_range mm/madvise.c:622 [inline]
 madvise_pageout mm/madvise.c:647 [inline]
 madvise_vma_behavior+0x2951/0x43c0 mm/madvise.c:1366
 madvise_walk_vmas+0x57a/0xaf0 mm/madvise.c:1721
 madvise_do_behavior+0x386/0x540 mm/madvise.c:1937
 do_madvise+0x1fa/0x2e0 mm/madvise.c:2030
 __do_sys_madvise mm/madvise.c:2039 [inline]
 __se_sys_madvise mm/madvise.c:2037 [inline]
 __x64_sys_madvise+0xa6/0xc0 mm/madvise.c:2037
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2b3459c799
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2b35495028 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007f2b34815fa0 RCX: 00007f2b3459c799
RDX: 0000000000000015 RSI: 0000000000600000 RDI: 0000200000000000
RBP: 00007f2b34632c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2b34816038 R14: 00007f2b34815fa0 R15: 00007ffcfbbae8f8
 </TASK>

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.

^ permalink raw reply	[flat|nested] 3+ messages in thread
* Re: [syzbot ci] Re: Virtual Swap Space
  2026-03-19 21:36 ` [syzbot ci] Re: Virtual Swap Space syzbot ci
@ 2026-03-19 23:26   ` Nhat Pham
  0 siblings, 0 replies; 3+ messages in thread

From: Nhat Pham @ 2026-03-19 23:26 UTC (permalink / raw)
To: syzbot ci
Cc: akpm, apopple, axelrasmussen, baohua, baolin.wang, bhe, byungchul,
	cgroups, chengming.zhou, chrisl, corbet, david, dev.jain, gourry,
	hannes, hughd, jannh, joshua.hahnjy, kasong, kernel-team, lance.yang,
	lenb, liam.howlett, linux-doc, linux-kernel, linux-mm, linux-pm,
	lorenzo.stoakes, matthew.brost, mhocko, muchun.song, npache, pavel,
	peterx, peterz, pfalcato, rafael, rakie.kim, riel, roman.gushchin,
	rppt, ryan.roberts, shakeel.butt, shikemeng, surenb, tglx, vbabka,
	weixugc, ying.huang, syzbot, syzkaller-bugs

On Thu, Mar 19, 2026 at 2:36 PM syzbot ci
<syzbot+ci0215525ee2c0ed89@syzkaller.appspotmail.com> wrote:
>
> syzbot ci has tested the following series
>
> [v4] Virtual Swap Space
> https://lore.kernel.org/all/20260318222953.441758-1-nphamcs@gmail.com
[...]
> and found the following issue:
> possible deadlock in vswap_iter
>
> Full report is available here:
> https://ci.syzbot.org/series/f8238a2a-370e-404d-b3f7-5945b574bd63
[...]
>  spin_lock include/linux/spinlock.h:351 [inline]
>  vswap_iter+0xfa/0x1b0 mm/vswap.c:274
>  vswap_cgroup_record+0xeb/0x440 mm/vswap.c:1910
>  swap_cgroup_record+0xc5/0x130 mm/vswap.c:1933
>  memcg1_swapout+0x358/0x9e0 mm/memcontrol-v1.c:623

Good (syz)bot! We're already holding the cluster lock here - shouldn't
need to reacquire the lock. Should be an easy-ish fix.

^ permalink raw reply	[flat|nested] 3+ messages in thread
[parent not found: <20260208215839.87595-1-nphamcs@gmail.com>]
* [syzbot ci] Re: Virtual Swap Space
  [not found] <20260208215839.87595-1-nphamcs@gmail.com>
@ 2026-02-10 15:45 ` syzbot ci
  0 siblings, 0 replies; 3+ messages in thread

From: syzbot ci @ 2026-02-10 15:45 UTC (permalink / raw)
To: akpm, axelrasmussen, baohua, bhe, cgroups, chengming.zhou, chrisl,
	christophe.leroy, gourry, hannes, huang.ying.caritas, hughd, jannh,
	joshua.hahnjy, kasong, kernel-team, len.brown, linux-kernel, linux-mm,
	linux-pm, lorenzo.stoakes, mhocko, muchun.song, npache, nphamcs,
	osalvador, pavel, peterx, pfalcato, rafael, riel, roman.gushchin,
	ryan.roberts, shakeel.butt, shikemeng, viro, weixugc, yosry.ahmed,
	yuanchu, zhengqi.arch
Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v3] Virtual Swap Space
https://lore.kernel.org/all/20260208215839.87595-1-nphamcs@gmail.com
* [PATCH v3 01/20] mm/swap: decouple swap cache from physical swap infrastructure
* [PATCH v3 02/20] swap: rearrange the swap header file
* [PATCH v3 03/20] mm: swap: add an abstract API for locking out swapoff
* [PATCH v3 04/20] zswap: add new helpers for zswap entry operations
* [PATCH v3 05/20] mm/swap: add a new function to check if a swap entry is in swap cached.
* [PATCH v3 06/20] mm: swap: add a separate type for physical swap slots
* [PATCH v3 07/20] mm: create scaffolds for the new virtual swap implementation
* [PATCH v3 08/20] zswap: prepare zswap for swap virtualization
* [PATCH v3 09/20] mm: swap: allocate a virtual swap slot for each swapped out page
* [PATCH v3 10/20] swap: move swap cache to virtual swap descriptor
* [PATCH v3 11/20] zswap: move zswap entry management to the virtual swap descriptor
* [PATCH v3 12/20] swap: implement the swap_cgroup API using virtual swap
* [PATCH v3 13/20] swap: manage swap entry lifecycle at the virtual swap layer
* [PATCH v3 14/20] mm: swap: decouple virtual swap slot from backing store
* [PATCH v3 15/20] zswap: do not start zswap shrinker if there is no physical swap slots
* [PATCH v3 16/20] swap: do not unnecesarily pin readahead swap entries
* [PATCH v3 17/20] swapfile: remove zeromap bitmap
* [PATCH v3 18/20] memcg: swap: only charge physical swap slots
* [PATCH v3 19/20] swap: simplify swapoff using virtual swap
* [PATCH v3 20/20] swapfile: replace the swap map with bitmaps

and found the following issue:
possible deadlock in vswap_iter

Full report is available here:
https://ci.syzbot.org/series/b9defda6-daec-4c41-bbf9-7d3b7fabd7cb

***

possible deadlock in vswap_iter

tree:      bpf
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf.git
base:      05f7e89ab9731565d8a62e3b5d1ec206485eeb0b
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/f444cfbe-4ce0-4917-94aa-3a8bd96ee376/config
C repro:   https://ci.syzbot.org/findings/7b8c50b1-47d6-42e0-bcfc-814e7b3bb596/c_repro
syz repro: https://ci.syzbot.org/findings/7b8c50b1-47d6-42e0-bcfc-814e7b3bb596/syz_repro

loop0: detected capacity change from 0 to 764
============================================
WARNING: possible recursive locking detected
syzkaller #0 Not tainted
--------------------------------------------
syz-executor625/5806 is trying to acquire lock:
ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock.h:351 [inline]
ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: vswap_iter+0xfa/0x1b0 mm/vswap.c:274

but task is already holding lock:
ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock.h:376 [inline]
ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: swap_cache_lock_irq+0xe2/0x190 mm/vswap.c:1586

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&cluster->lock);
  lock(&cluster->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by syz-executor625/5806:
 #0: ffff888174bc2800 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:391 [inline]
 #0: ffff888174bc2800 (&mm->mmap_lock){++++}-{4:4}, at: madvise_lock+0x152/0x2e0 mm/madvise.c:1789
 #1: ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock.h:376 [inline]
 #1: ffff88811884c018 (&cluster->lock){+.+.}-{3:3}, at: swap_cache_lock_irq+0xe2/0x190 mm/vswap.c:1586
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #2: ffffffff8e55a360 (rcu_read_lock){....}-{1:3}, at: vswap_cgroup_record+0x40/0x290 mm/vswap.c:1925

stack backtrace:

***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.

^ permalink raw reply	[flat|nested] 3+ messages in thread