public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
@ 2025-09-02 11:40 syzbot
  2025-09-03  1:05 ` Dave Chinner
  0 siblings, 1 reply; 12+ messages in thread
From: syzbot @ 2025-09-02 11:40 UTC (permalink / raw)
  To: cem, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    8f5ae30d69d7 Linux 6.17-rc1
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=144aca42580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
userspace arch: arm64
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=17161662580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=124aca42580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/18a2e4bd0c4a/disk-8f5ae30d.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3b5395881b25/vmlinux-8f5ae30d.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e875f4e3b7ff/Image-8f5ae30d.gz.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/b51a434c3e2c/mount_1.gz
  fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=104aca42580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0391d34e801643e2809b@syzkaller.appspotmail.com

==================================================================
BUG: KASAN: slab-use-after-free in rht_key_hashfn include/linux/rhashtable.h:159 [inline]
BUG: KASAN: slab-use-after-free in rht_head_hashfn include/linux/rhashtable.h:174 [inline]
BUG: KASAN: slab-use-after-free in __rhashtable_remove_fast_one include/linux/rhashtable.h:1007 [inline]
BUG: KASAN: slab-use-after-free in __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
BUG: KASAN: slab-use-after-free in rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
BUG: KASAN: slab-use-after-free in xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
BUG: KASAN: slab-use-after-free in xfs_buf_rele+0x79c/0xcfc fs/xfs/xfs_buf.c:951
Read of size 4 at addr ffff0000ce9fe008 by task syz.2.1678/16850

CPU: 0 UID: 0 PID: 16850 Comm: syz.2.1678 Not tainted 6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0 PREEMPT 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 print_address_description+0xa8/0x238 mm/kasan/report.c:378
 print_report+0x68/0x84 mm/kasan/report.c:482
 kasan_report+0xb0/0x110 mm/kasan/report.c:595
 __asan_report_load4_noabort+0x20/0x2c mm/kasan/report_generic.c:380
 rht_key_hashfn include/linux/rhashtable.h:159 [inline]
 rht_head_hashfn include/linux/rhashtable.h:174 [inline]
 __rhashtable_remove_fast_one include/linux/rhashtable.h:1007 [inline]
 __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
 rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
 xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
 xfs_buf_rele+0x79c/0xcfc fs/xfs/xfs_buf.c:951
 xfs_buftarg_shrink_scan+0x1d8/0x270 fs/xfs/xfs_buf.c:1653
 do_shrink_slab+0x650/0x11b0 mm/shrinker.c:437
 shrink_slab+0xc68/0xfb8 mm/shrinker.c:664
 drop_slab_node mm/vmscan.c:441 [inline]
 drop_slab+0x120/0x248 mm/vmscan.c:459
 drop_caches_sysctl_handler+0x170/0x300 fs/drop_caches.c:68
 proc_sys_call_handler+0x460/0x7e8 fs/proc/proc_sysctl.c:600
 proc_sys_write+0x2c/0x3c fs/proc/proc_sysctl.c:626
 do_iter_readv_writev+0x4c0/0x724 fs/read_write.c:-1
 vfs_writev+0x29c/0x7cc fs/read_write.c:1057
 do_writev+0x128/0x290 fs/read_write.c:1103
 __do_sys_writev fs/read_write.c:1171 [inline]
 __se_sys_writev fs/read_write.c:1168 [inline]
 __arm64_sys_writev+0x80/0x94 fs/read_write.c:1168
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

Allocated by task 16829:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x40/0x78 mm/kasan/common.c:68
 kasan_save_alloc_info+0x44/0x54 mm/kasan/generic.c:562
 poison_kmalloc_redzone mm/kasan/common.c:388 [inline]
 __kasan_kmalloc+0x9c/0xb4 mm/kasan/common.c:405
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4365 [inline]
 __kvmalloc_node_noprof+0x38c/0x638 mm/slub.c:5052
 bucket_table_alloc lib/rhashtable.c:186 [inline]
 rhashtable_init_noprof+0x3b4/0xa10 lib/rhashtable.c:1075
 xfs_buf_cache_init+0x28/0x38 fs/xfs/xfs_buf.c:375
 xfs_perag_alloc fs/xfs/libxfs/xfs_ag.c:238 [inline]
 xfs_initialize_perag+0x208/0x5ac fs/xfs/libxfs/xfs_ag.c:279
 xfs_mountfs+0x81c/0x1c04 fs/xfs/xfs_mount.c:976
 xfs_fs_fill_super+0xe74/0x11f0 fs/xfs/xfs_super.c:1965
 get_tree_bdev_flags+0x360/0x414 fs/super.c:1692
 get_tree_bdev+0x2c/0x3c fs/super.c:1715
 xfs_fs_get_tree+0x28/0x38 fs/xfs/xfs_super.c:2012
 vfs_get_tree+0x90/0x28c fs/super.c:1815
 do_new_mount+0x278/0x7f4 fs/namespace.c:3805
 path_mount+0x5b4/0xde0 fs/namespace.c:4120
 do_mount fs/namespace.c:4133 [inline]
 __do_sys_mount fs/namespace.c:4344 [inline]
 __se_sys_mount fs/namespace.c:4321 [inline]
 __arm64_sys_mount+0x3e8/0x468 fs/namespace.c:4321
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

Freed by task 6692:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x40/0x78 mm/kasan/common.c:68
 kasan_save_free_info+0x58/0x70 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:243 [inline]
 __kasan_slab_free+0x74/0x98 mm/kasan/common.c:275
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2417 [inline]
 slab_free mm/slub.c:4680 [inline]
 kfree+0x17c/0x474 mm/slub.c:4879
 kvfree+0x30/0x40 mm/slub.c:5095
 bucket_table_free+0xec/0x1a4 lib/rhashtable.c:114
 rhashtable_free_and_destroy+0x70c/0x87c lib/rhashtable.c:1173
 rhashtable_destroy+0x28/0x38 lib/rhashtable.c:1184
 xfs_buf_cache_destroy+0x20/0x30 fs/xfs/xfs_buf.c:382
 xfs_perag_uninit+0x28/0x38 fs/xfs/libxfs/xfs_ag.c:116
 xfs_group_free+0x144/0x32c fs/xfs/libxfs/xfs_group.c:171
 xfs_free_perag_range+0x58/0x8c fs/xfs/libxfs/xfs_ag.c:133
 xfs_unmountfs+0x29c/0x310 fs/xfs/xfs_mount.c:1354
 xfs_fs_put_super+0x6c/0x144 fs/xfs/xfs_super.c:1247
 generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
 kill_block_super+0x44/0x90 fs/super.c:1766
 xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2317
 deactivate_locked_super+0xc4/0x12c fs/super.c:474
 deactivate_super+0xe0/0x100 fs/super.c:507
 cleanup_mnt+0x31c/0x3ac fs/namespace.c:1378
 __cleanup_mnt+0x20/0x30 fs/namespace.c:1385
 task_work_run+0x1dc/0x260 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 do_notify_resume+0x174/0x1f4 arch/arm64/kernel/entry-common.c:155
 exit_to_user_mode_prepare arch/arm64/kernel/entry-common.c:173 [inline]
 exit_to_user_mode arch/arm64/kernel/entry-common.c:182 [inline]
 el0_svc+0xb8/0x180 arch/arm64/kernel/entry-common.c:880
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

The buggy address belongs to the object at ffff0000ce9fe000
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 8 bytes inside of
 freed 512-byte region [ffff0000ce9fe000, ffff0000ce9fe200)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10e9fc
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
anon flags: 0x5ffc00000000040(head|node=0|zone=2|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 05ffc00000000040 ffff0000c0001c80 0000000000000000 dead000000000001
raw: 0000000000000000 0000000000100010 00000000f5000000 0000000000000000
head: 05ffc00000000040 ffff0000c0001c80 0000000000000000 dead000000000001
head: 0000000000000000 0000000000100010 00000000f5000000 0000000000000000
head: 05ffc00000000002 fffffdffc33a7f01 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000004
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff0000ce9fdf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff0000ce9fdf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff0000ce9fe000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                      ^
 ffff0000ce9fe080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff0000ce9fe100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
------------[ cut here ]------------
UBSAN: shift-out-of-bounds in lib/rhashtable.c:1192:34
shift exponent 4294901760 is too large for 32-bit type 'int'
CPU: 0 UID: 0 PID: 16850 Comm: syz.2.1678 Tainted: G    B               6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0 PREEMPT 
Tainted: [B]=BAD_PAGE
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 ubsan_epilogue+0x14/0x48 lib/ubsan.c:233
 __ubsan_handle_shift_out_of_bounds+0x2b0/0x34c lib/ubsan.c:494
 __rht_bucket_nested+0x460/0x594 lib/rhashtable.c:1192
 rht_bucket_var include/linux/rhashtable.h:296 [inline]
 __rhashtable_remove_fast_one include/linux/rhashtable.h:1008 [inline]
 __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
 rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
 xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
 xfs_buf_rele+0x690/0xcfc fs/xfs/xfs_buf.c:951
 xfs_buftarg_shrink_scan+0x1d8/0x270 fs/xfs/xfs_buf.c:1653
 do_shrink_slab+0x650/0x11b0 mm/shrinker.c:437
 shrink_slab+0xc68/0xfb8 mm/shrinker.c:664
 drop_slab_node mm/vmscan.c:441 [inline]
 drop_slab+0x120/0x248 mm/vmscan.c:459
 drop_caches_sysctl_handler+0x170/0x300 fs/drop_caches.c:68
 proc_sys_call_handler+0x460/0x7e8 fs/proc/proc_sysctl.c:600
 proc_sys_write+0x2c/0x3c fs/proc/proc_sysctl.c:626
 do_iter_readv_writev+0x4c0/0x724 fs/read_write.c:-1
 vfs_writev+0x29c/0x7cc fs/read_write.c:1057
 do_writev+0x128/0x290 fs/read_write.c:1103
 __do_sys_writev fs/read_write.c:1171 [inline]
 __se_sys_writev fs/read_write.c:1168 [inline]
 __arm64_sys_writev+0x80/0x94 fs/read_write.c:1168
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
---[ end trace ]---
------------[ cut here ]------------
UBSAN: shift-out-of-bounds in lib/rhashtable.c:1193:32
shift exponent 4294901760 is too large for 32-bit type 'unsigned int'
CPU: 0 UID: 0 PID: 16850 Comm: syz.2.1678 Tainted: G    B               6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0 PREEMPT 
Tainted: [B]=BAD_PAGE
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 ubsan_epilogue+0x14/0x48 lib/ubsan.c:233
 __ubsan_handle_shift_out_of_bounds+0x2b0/0x34c lib/ubsan.c:494
 __rht_bucket_nested+0x4a8/0x594 lib/rhashtable.c:1193
 rht_bucket_var include/linux/rhashtable.h:296 [inline]
 __rhashtable_remove_fast_one include/linux/rhashtable.h:1008 [inline]
 __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
 rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
 xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
 xfs_buf_rele+0x690/0xcfc fs/xfs/xfs_buf.c:951
 xfs_buftarg_shrink_scan+0x1d8/0x270 fs/xfs/xfs_buf.c:1653
 do_shrink_slab+0x650/0x11b0 mm/shrinker.c:437
 shrink_slab+0xc68/0xfb8 mm/shrinker.c:664
 drop_slab_node mm/vmscan.c:441 [inline]
 drop_slab+0x120/0x248 mm/vmscan.c:459
 drop_caches_sysctl_handler+0x170/0x300 fs/drop_caches.c:68
 proc_sys_call_handler+0x460/0x7e8 fs/proc/proc_sysctl.c:600
 proc_sys_write+0x2c/0x3c fs/proc/proc_sysctl.c:626
 do_iter_readv_writev+0x4c0/0x724 fs/read_write.c:-1
 vfs_writev+0x29c/0x7cc fs/read_write.c:1057
 do_writev+0x128/0x290 fs/read_write.c:1103
 __do_sys_writev fs/read_write.c:1171 [inline]
 __se_sys_writev fs/read_write.c:1168 [inline]
 __arm64_sys_writev+0x80/0x94 fs/read_write.c:1168
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
---[ end trace ]---
Unable to handle kernel paging request at virtual address dfff800000000000
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
Mem abort info:
  ESR = 0x0000000096000005
  EC = 0x25: DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
  FSC = 0x05: level 1 translation fault
Data abort info:
  ISV = 0, ISS = 0x00000005, ISS2 = 0x00000000
  CM = 0, WnR = 0, TnD = 0, TagAccess = 0
  GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
[dfff800000000000] address between user and kernel address ranges
Internal error: Oops: 0000000096000005 [#1]  SMP
Modules linked in:
CPU: 0 UID: 0 PID: 16850 Comm: syz.2.1678 Tainted: G    B               6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0 PREEMPT 
Tainted: [B]=BAD_PAGE
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
pstate: 83400005 (Nzcv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : __rht_bucket_nested+0xbc/0x594 lib/rhashtable.c:1198
lr : nested_table_top lib/rhashtable.c:72 [inline]
lr : __rht_bucket_nested+0xb0/0x594 lib/rhashtable.c:1197
sp : ffff8000a23a7300
x29: ffff8000a23a7310 x28: 0000000000000010 x27: 00000000c6aa9140
x26: dfff800000000000 x25: 0000000000000000 x24: 00000000c6aa9140
x23: ffff0000ce9fe080 x22: 00000000c4881028 x21: 00000000ffff0000
x20: ffff0000ce9fe000 x19: ffff0000ce9fe004 x18: 0000000000000000
x17: 64656e6769736e75 x16: ffff80008b007230 x15: ffff70001260d84c
x14: 1ffff0001260d84c x13: 0000000000000004 x12: ffffffffffffffff
x11: ffff70001260d84c x10: 0000000000ff0100 x9 : ffff8000975a4860
x8 : 0000000000000000 x7 : fffffffffffed948 x6 : ffff800080563af4
x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffff8000830c1284
x2 : 0000000000000000 x1 : 0000000000000008 x0 : 0000000000000000
Call trace:
 __rht_bucket_nested+0xbc/0x594 lib/rhashtable.c:1198 (P)
 rht_bucket_var include/linux/rhashtable.h:296 [inline]
 __rhashtable_remove_fast_one include/linux/rhashtable.h:1008 [inline]
 __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
 rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
 xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
 xfs_buf_rele+0x690/0xcfc fs/xfs/xfs_buf.c:951
 xfs_buftarg_shrink_scan+0x1d8/0x270 fs/xfs/xfs_buf.c:1653
 do_shrink_slab+0x650/0x11b0 mm/shrinker.c:437
 shrink_slab+0xc68/0xfb8 mm/shrinker.c:664
 drop_slab_node mm/vmscan.c:441 [inline]
 drop_slab+0x120/0x248 mm/vmscan.c:459
 drop_caches_sysctl_handler+0x170/0x300 fs/drop_caches.c:68
 proc_sys_call_handler+0x460/0x7e8 fs/proc/proc_sysctl.c:600
 proc_sys_write+0x2c/0x3c fs/proc/proc_sysctl.c:626
 do_iter_readv_writev+0x4c0/0x724 fs/read_write.c:-1
 vfs_writev+0x29c/0x7cc fs/read_write.c:1057
 do_writev+0x128/0x290 fs/read_write.c:1103
 __do_sys_writev fs/read_write.c:1171 [inline]
 __se_sys_writev fs/read_write.c:1168 [inline]
 __arm64_sys_writev+0x80/0x94 fs/read_write.c:1168
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
Code: 976eb328 f94002e8 8b190d19 d343ff28 (387a6908) 
---[ end trace 0000000000000000 ]---
----------------
Code disassembly (best guess):
   0:	976eb328 	bl	0xfffffffffdbacca0
   4:	f94002e8 	ldr	x8, [x23]
   8:	8b190d19 	add	x25, x8, x25, lsl #3
   c:	d343ff28 	lsr	x8, x25, #3
* 10:	387a6908 	ldrb	w8, [x8, x26] <-- trapping instruction


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2025-09-02 11:40 [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
@ 2025-09-03  1:05 ` Dave Chinner
  2025-09-03  6:08   ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: Dave Chinner @ 2025-09-03  1:05 UTC (permalink / raw)
  To: syzbot; +Cc: cem, linux-kernel, linux-xfs, syzkaller-bugs

On Tue, Sep 02, 2025 at 04:40:35AM -0700, syzbot wrote:
> Hello,
> 
> syzbot found the following issue on:
> 
> HEAD commit:    8f5ae30d69d7 Linux 6.17-rc1
> git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
> console output: https://syzkaller.appspot.com/x/log.txt?x=144aca42580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb
> dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
> compiler:       Debian clang version 20.1.7 (++20250616065708+6146a88f6049-1~exp1~20250616065826.132), Debian LLD 20.1.7
> userspace arch: arm64
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=17161662580000
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=124aca42580000
> 
> Downloadable assets:
> disk image: https://storage.googleapis.com/syzbot-assets/18a2e4bd0c4a/disk-8f5ae30d.raw.xz
> vmlinux: https://storage.googleapis.com/syzbot-assets/3b5395881b25/vmlinux-8f5ae30d.xz
> kernel image: https://storage.googleapis.com/syzbot-assets/e875f4e3b7ff/Image-8f5ae30d.gz.xz
> mounted in repro: https://storage.googleapis.com/syzbot-assets/b51a434c3e2c/mount_1.gz
>   fsck result: failed (log: https://syzkaller.appspot.com/x/fsck.log?x=104aca42580000)
> 
> IMPORTANT: if you fix the issue, please add the following tag to the commit:
> Reported-by: syzbot+0391d34e801643e2809b@syzkaller.appspotmail.com
> 
> ==================================================================
> BUG: KASAN: slab-use-after-free in rht_key_hashfn include/linux/rhashtable.h:159 [inline]
> BUG: KASAN: slab-use-after-free in rht_head_hashfn include/linux/rhashtable.h:174 [inline]
> BUG: KASAN: slab-use-after-free in __rhashtable_remove_fast_one include/linux/rhashtable.h:1007 [inline]
> BUG: KASAN: slab-use-after-free in __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
> BUG: KASAN: slab-use-after-free in rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
> BUG: KASAN: slab-use-after-free in xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
> BUG: KASAN: slab-use-after-free in xfs_buf_rele+0x79c/0xcfc fs/xfs/xfs_buf.c:951
> Read of size 4 at addr ffff0000ce9fe008 by task syz.2.1678/16850
> 
> CPU: 0 UID: 0 PID: 16850 Comm: syz.2.1678 Not tainted 6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0 PREEMPT 
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
> Call trace:
>  show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
>  __dump_stack+0x30/0x40 lib/dump_stack.c:94
>  dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
>  print_address_description+0xa8/0x238 mm/kasan/report.c:378
>  print_report+0x68/0x84 mm/kasan/report.c:482
>  kasan_report+0xb0/0x110 mm/kasan/report.c:595
>  __asan_report_load4_noabort+0x20/0x2c mm/kasan/report_generic.c:380
>  rht_key_hashfn include/linux/rhashtable.h:159 [inline]
>  rht_head_hashfn include/linux/rhashtable.h:174 [inline]
>  __rhashtable_remove_fast_one include/linux/rhashtable.h:1007 [inline]
>  __rhashtable_remove_fast include/linux/rhashtable.h:1093 [inline]
>  rhashtable_remove_fast include/linux/rhashtable.h:1122 [inline]
>  xfs_buf_rele_cached fs/xfs/xfs_buf.c:926 [inline]
>  xfs_buf_rele+0x79c/0xcfc fs/xfs/xfs_buf.c:951
>  xfs_buftarg_shrink_scan+0x1d8/0x270 fs/xfs/xfs_buf.c:1653
>  do_shrink_slab+0x650/0x11b0 mm/shrinker.c:437
>  shrink_slab+0xc68/0xfb8 mm/shrinker.c:664
>  drop_slab_node mm/vmscan.c:441 [inline]
>  drop_slab+0x120/0x248 mm/vmscan.c:459
>  drop_caches_sysctl_handler+0x170/0x300 fs/drop_caches.c:68

Yup, that's a real bug.

> Freed by task 6692:
>  kasan_save_stack mm/kasan/common.c:47 [inline]
>  kasan_save_track+0x40/0x78 mm/kasan/common.c:68
>  kasan_save_free_info+0x58/0x70 mm/kasan/generic.c:576
>  poison_slab_object mm/kasan/common.c:243 [inline]
>  __kasan_slab_free+0x74/0x98 mm/kasan/common.c:275
>  kasan_slab_free include/linux/kasan.h:233 [inline]
>  slab_free_hook mm/slub.c:2417 [inline]
>  slab_free mm/slub.c:4680 [inline]
>  kfree+0x17c/0x474 mm/slub.c:4879
>  kvfree+0x30/0x40 mm/slub.c:5095
>  bucket_table_free+0xec/0x1a4 lib/rhashtable.c:114
>  rhashtable_free_and_destroy+0x70c/0x87c lib/rhashtable.c:1173
>  rhashtable_destroy+0x28/0x38 lib/rhashtable.c:1184
>  xfs_buf_cache_destroy+0x20/0x30 fs/xfs/xfs_buf.c:382
>  xfs_perag_uninit+0x28/0x38 fs/xfs/libxfs/xfs_ag.c:116
>  xfs_group_free+0x144/0x32c fs/xfs/libxfs/xfs_group.c:171
>  xfs_free_perag_range+0x58/0x8c fs/xfs/libxfs/xfs_ag.c:133
>  xfs_unmountfs+0x29c/0x310 fs/xfs/xfs_mount.c:1354
>  xfs_fs_put_super+0x6c/0x144 fs/xfs/xfs_super.c:1247
>  generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
>  kill_block_super+0x44/0x90 fs/super.c:1766
>  xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2317

And this is it - we can't tear down the buffer cache hash table
until the buftarg shrinker has been shut down. This doesn't happen
until xfs_mount_free() is called from the VFS. Hence freeing the
rhashtable from xfs_perag_uninit() can race with the shrinker
processing a dispose list and removing items from the rhashtable
whilst unmount is "uninitialising" the perag structures and killing
the buffer cache rhashtable.

It is worth noting that xfs_buftarg_drain() does not guarantee that
the shrinker is not running - all it does is run until the LRU is
empty. If the shrinker is also running and is processing a dispose
list (i.e. buffers it has already removed from the LRU), then
xfs_buftarg_drain() will return whilst those buffers are still in
the buffer hash table and being processed by the shrinker.

Hence the unmount process can free the buftarg rhashtable whilst the
shrinker is still processing buffers during unmount. The buffers
still have passive refs to the perag whilst they are on the dispose
list, so this should have thrown refcount warnings in the log before
KASAN threw the UAF.

Yup, there it is:

[  256.307175][ T6692] XFS (loop3): Internal error atomic_read(&xg->xg_ref) != 0 at line 162 of file fs/xfs/libxfs/xfs_group.c.  Caller xfs_group_free+0x1d8/0x32c

I'm not sure when this got broken - it might even be a zero-day
rhashtable conversion bug.

I think that the buftarg rhashtable needs to be initialised before the
shrinker is registered, then freed from xfs_destroy_buftarg() after
the shrinker has been shut down, as it must live longer than the
buftarg shrinker instance. i.e. the buftarg rhashtable needs to have
the same life cycle as the buftarg LRU list....

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2025-09-03  1:05 ` Dave Chinner
@ 2025-09-03  6:08   ` Christoph Hellwig
  0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2025-09-03  6:08 UTC (permalink / raw)
  To: Dave Chinner; +Cc: syzbot, cem, linux-kernel, linux-xfs, syzkaller-bugs

On Wed, Sep 03, 2025 at 11:05:57AM +1000, Dave Chinner wrote:
> I think that the buftarg rhashtable needs to be initialised before the
> shrinker is registered, then freed from xfs_destroy_buftarg() after
> the shrinker has been shut down, as it must live longer than the
> buftarg shrinker instance. i.e. the buftarg rhashtable needs to have
> the same life cycle as the buftarg LRU list....

My RFC patch to switch back to a per-buftarg hashtable would do that:

https://git.infradead.org/?p=users/hch/xfs.git;a=commitdiff;h=e3cc537864a7ab980abfa18a3efe01a111aad1d7

I still haven't gotten around to doing serious performance testing on
it, and I'm busy right now.  But in about two weeks I'll probably have
both a bit of time and access to a big enough system to do serious
scalability testing on it.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2025-10-30  7:11 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2025-10-30  7:42 ` syzbot
  0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2025-10-30  7:42 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot tried to test the proposed patch but the build/boot failed:

failed to checkout kernel repo git://git.infradead.org/users/hch/xfs.git/xfs-buf-hash: failed to run ["git" "fetch" "--force" "679bdfc056221ae86d16104d6de6223afaafa4b7" "xfs-buf-hash"]: exit status 128


Tested on:

commit:         [unknown 
git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
kernel config:  https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       
userspace arch: arm64

Note: no patches were applied.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2025-10-30  8:01 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2025-10-30  8:47 ` syzbot
  0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2025-10-30  8:47 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_CHAINS too low!

BUG: MAX_LOCKDEP_CHAINS too low!
turning off the locking correctness validator.
CPU: 1 UID: 0 PID: 2577 Comm: kworker/u8:7 Not tainted syzkaller #0 PREEMPT 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/30/2025
Workqueue: xfs-cil/loop0 xlog_cil_push_work
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 add_chain_cache kernel/locking/lockdep.c:-1 [inline]
 lookup_chain_cache_add kernel/locking/lockdep.c:3855 [inline]
 validate_chain kernel/locking/lockdep.c:3876 [inline]
 __lock_acquire+0xf9c/0x30a4 kernel/locking/lockdep.c:5237
 lock_acquire+0x14c/0x2e0 kernel/locking/lockdep.c:5868
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x5c/0x7c kernel/locking/spinlock.c:162
 __wake_up_common_lock kernel/sched/wait.c:124 [inline]
 __wake_up+0x40/0x1a8 kernel/sched/wait.c:146
 xlog_cil_set_ctx_write_state+0x2a8/0x310 fs/xfs/xfs_log_cil.c:997
 xlog_write+0x1fc/0xe94 fs/xfs/xfs_log.c:2252
 xlog_cil_write_commit_record fs/xfs/xfs_log_cil.c:1118 [inline]
 xlog_cil_push_work+0x19ec/0x1f74 fs/xfs/xfs_log_cil.c:1434
 process_one_work+0x7e8/0x155c kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x958/0xed8 kernel/workqueue.c:3400
 kthread+0x5fc/0x75c kernel/kthread.c:463
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844


Tested on:

commit:         af1722bb xfs: switch (back) to a per-buftarg buffer hash
git tree:       git://git.infradead.org/users/hch/misc.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=1110bfe2580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=39f8a155475bc42d
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  6:06 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2026-01-19  7:38 ` syzbot
  0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-01-19  7:38 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_CHAINS too low!

BUG: MAX_LOCKDEP_CHAINS too low!
turning off the locking correctness validator.
CPU: 0 UID: 0 PID: 1610 Comm: kworker/u8:6 Not tainted syzkaller #0 PREEMPT 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Workqueue: xfs_iwalk-13497 xfs_pwork_work
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 add_chain_cache kernel/locking/lockdep.c:-1 [inline]
 lookup_chain_cache_add kernel/locking/lockdep.c:3855 [inline]
 validate_chain kernel/locking/lockdep.c:3876 [inline]
 __lock_acquire+0xf9c/0x30a4 kernel/locking/lockdep.c:5237
 lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x5c/0x7c kernel/locking/spinlock.c:162
 debug_object_activate+0x7c/0x460 lib/debugobjects.c:818
 debug_timer_activate kernel/time/timer.c:793 [inline]
 __mod_timer+0x8c4/0xd00 kernel/time/timer.c:1124
 add_timer_global+0x88/0xc0 kernel/time/timer.c:1283
 __queue_delayed_work+0x218/0x2c8 kernel/workqueue.c:2520
 queue_delayed_work_on+0xe4/0x194 kernel/workqueue.c:2555
 queue_delayed_work include/linux/workqueue.h:684 [inline]
 xfs_reclaim_work_queue+0x154/0x244 fs/xfs/xfs_icache.c:211
 xfs_perag_set_inode_tag+0x19c/0x4bc fs/xfs/xfs_icache.c:263
 xfs_inodegc_set_reclaimable+0x1e0/0x444 fs/xfs/xfs_icache.c:1917
 xfs_inode_mark_reclaimable+0x2c8/0x10f8 fs/xfs/xfs_icache.c:2252
 xfs_fs_destroy_inode+0x2fc/0x618 fs/xfs/xfs_super.c:712
 destroy_inode fs/inode.c:396 [inline]
 evict+0x7cc/0xa74 fs/inode.c:861
 iput_final fs/inode.c:1954 [inline]
 iput+0xc54/0xfdc fs/inode.c:2006
 xfs_irele+0xd0/0x2ac fs/xfs/xfs_inode.c:2662
 xfs_qm_dqusage_adjust+0x4f4/0x5b0 fs/xfs/xfs_qm.c:1411
 xfs_iwalk_ag_recs+0x404/0x7c8 fs/xfs/xfs_iwalk.c:209
 xfs_iwalk_run_callbacks+0x1c0/0x3e8 fs/xfs/xfs_iwalk.c:370
 xfs_iwalk_ag+0x6ac/0x82c fs/xfs/xfs_iwalk.c:473
 xfs_iwalk_ag_work+0xf8/0x1a0 fs/xfs/xfs_iwalk.c:620
 xfs_pwork_work+0x80/0x1a4 fs/xfs/xfs_pwork.c:47
 process_one_work+0x7c0/0x1558 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x958/0xed8 kernel/workqueue.c:3421
 kthread+0x5fc/0x75c kernel/kthread.c:463
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844


Tested on:

commit:         855e81db xfs: switch (back) to a per-buftarg buffer hash
git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=162bb63a580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=1707867b02964a26
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  7:44 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2026-01-19  8:34 ` syzbot
  2026-01-19  8:37   ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: syzbot @ 2026-01-19  8:34 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
BUG: MAX_LOCKDEP_KEYS too low!

BUG: MAX_LOCKDEP_KEYS too low!
turning off the locking correctness validator.
CPU: 1 UID: 0 PID: 7123 Comm: syz-executor Not tainted syzkaller #0 PREEMPT 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 register_lock_class+0x310/0x348 kernel/locking/lockdep.c:1332
 __lock_acquire+0xbc/0x30a4 kernel/locking/lockdep.c:5112
 lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
 touch_wq_lockdep_map+0xa8/0x164 kernel/workqueue.c:3940
 __flush_workqueue+0xfc/0x109c kernel/workqueue.c:3982
 drain_workqueue+0xa4/0x310 kernel/workqueue.c:4146
 destroy_workqueue+0xb4/0xd90 kernel/workqueue.c:5903
 xfs_destroy_mount_workqueues+0xac/0xdc fs/xfs/xfs_super.c:649
 xfs_fs_put_super+0x128/0x144 fs/xfs/xfs_super.c:1262
 generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
 kill_block_super+0x44/0x90 fs/super.c:1722
 xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2297
 deactivate_locked_super+0xc4/0x12c fs/super.c:474
 deactivate_super+0xe0/0x100 fs/super.c:507
 cleanup_mnt+0x31c/0x3ac fs/namespace.c:1318
 __cleanup_mnt+0x20/0x30 fs/namespace.c:1325
 task_work_run+0x1dc/0x260 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0x10c/0x18c kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 exit_to_user_mode_prepare_legacy include/linux/irq-entry-common.h:242 [inline]
 arm64_exit_to_user_mode arch/arm64/kernel/entry-common.c:81 [inline]
 el0_svc+0x17c/0x26c arch/arm64/kernel/entry-common.c:725
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:743
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
XFS (loop0): Unmounting Filesystem c496e05e-540d-4c72-b591-04d79d8b4eeb


Tested on:

commit:         3e548540 increase LOCKDEP_CHAINS_BITS
git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=101b0d22580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=6c6138f827b10ea4
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  8:34 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
@ 2026-01-19  8:37   ` Christoph Hellwig
  2026-01-19  8:53     ` Aleksandr Nogikh
  0 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2026-01-19  8:37 UTC (permalink / raw)
  To: syzbot; +Cc: hch, linux-kernel, linux-xfs, syzkaller-bugs

So I'm not sure what this test does that it always exhausts the lockdep
keys, but that makes it impossible to validate the original xfs report.

Is there a way to force running syzbot reproducers without lockdep?

Note that I've also had it running locally for quite a while, and even
with lockdep enabled I'm somehow not hitting the lockdep splat.
Although that is using my normal debug config and not the provided
one.

On Mon, Jan 19, 2026 at 12:34:03AM -0800, syzbot wrote:
> Hello,
> 
> syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> BUG: MAX_LOCKDEP_KEYS too low!
> 
> BUG: MAX_LOCKDEP_KEYS too low!
> turning off the locking correctness validator.
> CPU: 1 UID: 0 PID: 7123 Comm: syz-executor Not tainted syzkaller #0 PREEMPT 
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
> Call trace:
>  show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
>  __dump_stack+0x30/0x40 lib/dump_stack.c:94
>  dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
>  dump_stack+0x1c/0x28 lib/dump_stack.c:129
>  register_lock_class+0x310/0x348 kernel/locking/lockdep.c:1332
>  __lock_acquire+0xbc/0x30a4 kernel/locking/lockdep.c:5112
>  lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
>  touch_wq_lockdep_map+0xa8/0x164 kernel/workqueue.c:3940
>  __flush_workqueue+0xfc/0x109c kernel/workqueue.c:3982
>  drain_workqueue+0xa4/0x310 kernel/workqueue.c:4146
>  destroy_workqueue+0xb4/0xd90 kernel/workqueue.c:5903
>  xfs_destroy_mount_workqueues+0xac/0xdc fs/xfs/xfs_super.c:649
>  xfs_fs_put_super+0x128/0x144 fs/xfs/xfs_super.c:1262
>  generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
>  kill_block_super+0x44/0x90 fs/super.c:1722
>  xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2297
>  deactivate_locked_super+0xc4/0x12c fs/super.c:474
>  deactivate_super+0xe0/0x100 fs/super.c:507
>  cleanup_mnt+0x31c/0x3ac fs/namespace.c:1318
>  __cleanup_mnt+0x20/0x30 fs/namespace.c:1325
>  task_work_run+0x1dc/0x260 kernel/task_work.c:233
>  resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
>  __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
>  exit_to_user_mode_loop+0x10c/0x18c kernel/entry/common.c:75
>  __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
>  exit_to_user_mode_prepare_legacy include/linux/irq-entry-common.h:242 [inline]
>  arm64_exit_to_user_mode arch/arm64/kernel/entry-common.c:81 [inline]
>  el0_svc+0x17c/0x26c arch/arm64/kernel/entry-common.c:725
>  el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:743
>  el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
> XFS (loop0): Unmounting Filesystem c496e05e-540d-4c72-b591-04d79d8b4eeb
> 
> 
> Tested on:
> 
> commit:         3e548540 increase LOCKDEP_CHAINS_BITS
> git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
> console output: https://syzkaller.appspot.com/x/log.txt?x=101b0d22580000
> kernel config:  https://syzkaller.appspot.com/x/.config?x=6c6138f827b10ea4
> dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
> compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> userspace arch: arm64
> 
> Note: no patches were applied.
---end quoted text---

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  8:37   ` Christoph Hellwig
@ 2026-01-19  8:53     ` Aleksandr Nogikh
  2026-01-19  9:03       ` Christoph Hellwig
  0 siblings, 1 reply; 12+ messages in thread
From: Aleksandr Nogikh @ 2026-01-19  8:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: syzbot, linux-kernel, linux-xfs, syzkaller-bugs, Dmitry Vyukov

On Mon, Jan 19, 2026 at 9:37 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> So I'm not sure what this test does that it always triggers the lockdep
> keys, but that makes it impossible to validate the original xfs report.
>
> Is there a way to force running syzbot reproducers without lockdep?

Not directly, but you could explicitly modify lockdep's Kconfig in
your test patch to disable lockdep entirely.
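For example, a test patch along these lines should work — this is only a
hypothetical sketch, and the exact dependency list of PROVE_LOCKING in
lib/Kconfig.debug may differ in your tree:

```diff
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@
 config PROVE_LOCKING
 	bool "Lock debugging: prove locking correctness"
-	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
+	# force-disable for this syzbot test run only
+	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT && BROKEN
```

With the "depends on" made unsatisfiable, olddefconfig will silently drop
PROVE_LOCKING (and everything that selects it) from the provided config.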

>
> Note that I've also had it running locally for quite a while, an even
> with lockdep enabled I'm somehow not hitting the lockdep splat.
> Although that is using my normal debug config and not the provided
> one.

Hmm, yes, that sounds weird.

I wonder if it's because we run the reproducers in threaded mode when
handling #syz test commands on the syzbot side, which leads to even
more syscalls being executed in parallel. Or the system just got lucky
once when it was generating the reproducer - overall, "BUG:
MAX_LOCKDEP_KEYS too low!" [1] seems to be a popular sink for
different reproducers on our side :(

[1] https://syzkaller.appspot.com/bug?extid=a70a6358abd2c3f9550f

>
> On Mon, Jan 19, 2026 at 12:34:03AM -0800, syzbot wrote:
> > Hello,
> >
> > syzbot has tested the proposed patch but the reproducer is still triggering an issue:
> > BUG: MAX_LOCKDEP_KEYS too low!
> >
> > BUG: MAX_LOCKDEP_KEYS too low!
> > turning off the locking correctness validator.
> > CPU: 1 UID: 0 PID: 7123 Comm: syz-executor Not tainted syzkaller #0 PREEMPT
> > Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
> > Call trace:
> >  show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
> >  __dump_stack+0x30/0x40 lib/dump_stack.c:94
> >  dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
> >  dump_stack+0x1c/0x28 lib/dump_stack.c:129
> >  register_lock_class+0x310/0x348 kernel/locking/lockdep.c:1332
> >  __lock_acquire+0xbc/0x30a4 kernel/locking/lockdep.c:5112
> >  lock_acquire+0x140/0x2e0 kernel/locking/lockdep.c:5868
> >  touch_wq_lockdep_map+0xa8/0x164 kernel/workqueue.c:3940
> >  __flush_workqueue+0xfc/0x109c kernel/workqueue.c:3982
> >  drain_workqueue+0xa4/0x310 kernel/workqueue.c:4146
> >  destroy_workqueue+0xb4/0xd90 kernel/workqueue.c:5903
> >  xfs_destroy_mount_workqueues+0xac/0xdc fs/xfs/xfs_super.c:649
> >  xfs_fs_put_super+0x128/0x144 fs/xfs/xfs_super.c:1262
> >  generic_shutdown_super+0x12c/0x2b8 fs/super.c:643
> >  kill_block_super+0x44/0x90 fs/super.c:1722
> >  xfs_kill_sb+0x20/0x58 fs/xfs/xfs_super.c:2297
> >  deactivate_locked_super+0xc4/0x12c fs/super.c:474
> >  deactivate_super+0xe0/0x100 fs/super.c:507
> >  cleanup_mnt+0x31c/0x3ac fs/namespace.c:1318
> >  __cleanup_mnt+0x20/0x30 fs/namespace.c:1325
> >  task_work_run+0x1dc/0x260 kernel/task_work.c:233
> >  resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
> >  __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
> >  exit_to_user_mode_loop+0x10c/0x18c kernel/entry/common.c:75
> >  __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
> >  exit_to_user_mode_prepare_legacy include/linux/irq-entry-common.h:242 [inline]
> >  arm64_exit_to_user_mode arch/arm64/kernel/entry-common.c:81 [inline]
> >  el0_svc+0x17c/0x26c arch/arm64/kernel/entry-common.c:725
> >  el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:743
> >  el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
> > XFS (loop0): Unmounting Filesystem c496e05e-540d-4c72-b591-04d79d8b4eeb
> >
> >
> > Tested on:
> >
> > commit:         3e548540 increase LOCKDEP_CHAINS_BITS
> > git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
> > console output: https://syzkaller.appspot.com/x/log.txt?x=101b0d22580000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=6c6138f827b10ea4
> > dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
> > compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
> > userspace arch: arm64
> >
> > Note: no patches were applied.
> ---end quoted text---
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  8:53     ` Aleksandr Nogikh
@ 2026-01-19  9:03       ` Christoph Hellwig
  0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2026-01-19  9:03 UTC (permalink / raw)
  To: Aleksandr Nogikh
  Cc: Christoph Hellwig, syzbot, linux-kernel, linux-xfs,
	syzkaller-bugs, Dmitry Vyukov

On Mon, Jan 19, 2026 at 09:53:18AM +0100, Aleksandr Nogikh wrote:
> On Mon, Jan 19, 2026 at 9:37 AM Christoph Hellwig <hch@infradead.org> wrote:
> >
> > So I'm not sure what this test does that it always triggers the lockdep
> > keys, but that makes it impossible to validate the original xfs report.
> >
> > Is there a way to force running syzbot reproducers without lockdep?
> 
> Not directly, but you could explicitly modify lockdep's Kconfig in
> your test patch to disable lockdep entirely.

Alright, I'll give it a try.


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19  9:03 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2026-01-19  9:29 ` syzbot
  0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-01-19  9:29 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot tried to test the proposed patch but the build/boot failed:

./include/linux/srcu.h:197:2: error: call to undeclared function 'lock_sync'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
./include/linux/semaphore.h:52:28: error: field designator 'name' does not refer to any field in type 'struct lockdep_map'
./include/linux/semaphore.h:52:28: error: field designator 'wait_type_inner' does not refer to any field in type 'struct lockdep_map'


Tested on:

commit:         9f73447f disable lockdep
git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
kernel config:  https://syzkaller.appspot.com/x/.config?x=8c5ac3d8b8abfcb
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4)
  2026-01-19 14:45 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
@ 2026-01-19 15:17 ` syzbot
  0 siblings, 0 replies; 12+ messages in thread
From: syzbot @ 2026-01-19 15:17 UTC (permalink / raw)
  To: hch, linux-kernel, linux-xfs, syzkaller-bugs

Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-by: syzbot+0391d34e801643e2809b@syzkaller.appspotmail.com
Tested-by: syzbot+0391d34e801643e2809b@syzkaller.appspotmail.com

Tested on:

commit:         5dc79b07 disable lockdep
git tree:       git://git.infradead.org/users/hch/xfs.git xfs-buf-hash
console output: https://syzkaller.appspot.com/x/log.txt?x=16604bfc580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3433733714e92ec3
dashboard link: https://syzkaller.appspot.com/bug?extid=0391d34e801643e2809b
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Note: no patches were applied.
Note: testing is done by a robot and is best-effort only.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2026-01-19 15:17 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-09-02 11:40 [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2025-09-03  1:05 ` Dave Chinner
2025-09-03  6:08   ` Christoph Hellwig
  -- strict thread matches above, loose matches on Subject: below --
2025-10-30  7:11 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2025-10-30  7:42 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2025-10-30  8:01 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2025-10-30  8:47 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2026-01-19  6:06 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2026-01-19  7:38 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2026-01-19  7:44 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2026-01-19  8:34 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2026-01-19  8:37   ` Christoph Hellwig
2026-01-19  8:53     ` Aleksandr Nogikh
2026-01-19  9:03       ` Christoph Hellwig
2026-01-19  9:03 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2026-01-19  9:29 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot
2026-01-19 14:45 [syzbot] Monthly xfs report (Oct 2025) Christoph Hellwig
2026-01-19 15:17 ` [syzbot] [xfs?] KASAN: slab-use-after-free Read in xfs_buf_rele (4) syzbot

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox