public inbox for linux-btrfs@vger.kernel.org
From: syzbot <syzbot+c1c6edb02bea1da754d8@syzkaller.appspotmail.com>
To: clm@fb.com, dsterba@suse.com, josef@toxicpanda.com,
	 linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	 syzkaller-bugs@googlegroups.com
Subject: [syzbot] [btrfs?] possible deadlock in __btrfs_release_delayed_node (5)
Date: Fri, 19 Dec 2025 03:02:26 -0800
Message-ID: <694530c2.a70a0220.207337.010d.GAE@google.com>

Hello,

syzbot found the following issue on:

HEAD commit:    05c93f3395ed Merge branch 'for-next/core' into for-kernelci
git tree:       git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=12bacd92580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=3b5338ad1e59a06c
dashboard link: https://syzkaller.appspot.com/bug?extid=c1c6edb02bea1da754d8
compiler:       Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
userspace arch: arm64

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/6b5c913e373c/disk-05c93f33.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/15e75f1266ef/vmlinux-05c93f33.xz
kernel image: https://storage.googleapis.com/syzbot-assets/dd930129c578/Image-05c93f33.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c1c6edb02bea1da754d8@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
btrfs-cleaner/8725 is trying to acquire lock:
ffff0000d6826a48 (&delayed_node->mutex){+.+.}-{4:4}, at: __btrfs_release_delayed_node+0xa0/0x9b0 fs/btrfs/delayed-inode.c:290

but task is already holding lock:
ffff0000dbeba878 (btrfs-tree-00){++++}-{4:4}, at: btrfs_tree_read_lock_nested+0x44/0x2ec fs/btrfs/locking.c:145

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (btrfs-tree-00){++++}-{4:4}:
       __lock_release kernel/locking/lockdep.c:5574 [inline]
       lock_release+0x198/0x39c kernel/locking/lockdep.c:5889
       up_read+0x24/0x3c kernel/locking/rwsem.c:1632
       btrfs_tree_read_unlock+0xdc/0x298 fs/btrfs/locking.c:169
       btrfs_tree_unlock_rw fs/btrfs/locking.h:218 [inline]
       btrfs_search_slot+0xa6c/0x223c fs/btrfs/ctree.c:2133
       btrfs_lookup_inode+0xd8/0x38c fs/btrfs/inode-item.c:395
       __btrfs_update_delayed_inode+0x124/0xed0 fs/btrfs/delayed-inode.c:1032
       btrfs_update_delayed_inode fs/btrfs/delayed-inode.c:1118 [inline]
       __btrfs_commit_inode_delayed_items+0x15f8/0x1748 fs/btrfs/delayed-inode.c:1141
       __btrfs_run_delayed_items+0x1ac/0x514 fs/btrfs/delayed-inode.c:1176
       btrfs_run_delayed_items_nr+0x28/0x38 fs/btrfs/delayed-inode.c:1219
       flush_space+0x26c/0xb68 fs/btrfs/space-info.c:828
       do_async_reclaim_metadata_space+0x110/0x364 fs/btrfs/space-info.c:1158
       btrfs_async_reclaim_metadata_space+0x90/0xd8 fs/btrfs/space-info.c:1226
       process_one_work+0x7e8/0x155c kernel/workqueue.c:3263
       process_scheduled_works kernel/workqueue.c:3346 [inline]
       worker_thread+0x958/0xed8 kernel/workqueue.c:3427
       kthread+0x5fc/0x75c kernel/kthread.c:463
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844

-> #0 (&delayed_node->mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x1774/0x30a4 kernel/locking/lockdep.c:5237
       lock_acquire+0x14c/0x2e0 kernel/locking/lockdep.c:5868
       __mutex_lock_common+0x1d0/0x2678 kernel/locking/mutex.c:598
       __mutex_lock kernel/locking/mutex.c:760 [inline]
       mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:812
       __btrfs_release_delayed_node+0xa0/0x9b0 fs/btrfs/delayed-inode.c:290
       btrfs_release_delayed_node fs/btrfs/delayed-inode.c:315 [inline]
       btrfs_remove_delayed_node+0x68/0x84 fs/btrfs/delayed-inode.c:1326
       btrfs_evict_inode+0x578/0xe28 fs/btrfs/inode.c:5587
       evict+0x414/0x928 fs/inode.c:810
       iput_final fs/inode.c:1914 [inline]
       iput+0x95c/0xad4 fs/inode.c:1966
       iget_failed+0xec/0x134 fs/bad_inode.c:248
       btrfs_read_locked_inode+0xe1c/0x1234 fs/btrfs/inode.c:4101
       btrfs_iget+0x1b0/0x264 fs/btrfs/inode.c:5837
       btrfs_run_defrag_inode fs/btrfs/defrag.c:237 [inline]
       btrfs_run_defrag_inodes+0x520/0xdc4 fs/btrfs/defrag.c:309
       cleaner_kthread+0x21c/0x418 fs/btrfs/disk-io.c:1516
       kthread+0x5fc/0x75c kernel/kthread.c:463
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(btrfs-tree-00);
                               lock(&delayed_node->mutex);
                               lock(btrfs-tree-00);
  lock(&delayed_node->mutex);

 *** DEADLOCK ***

1 lock held by btrfs-cleaner/8725:
 #0: ffff0000dbeba878 (btrfs-tree-00){++++}-{4:4}, at: btrfs_tree_read_lock_nested+0x44/0x2ec fs/btrfs/locking.c:145

stack backtrace:
CPU: 0 UID: 0 PID: 8725 Comm: btrfs-cleaner Not tainted syzkaller #0 PREEMPT 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/03/2025
Call trace:
 show_stack+0x2c/0x3c arch/arm64/kernel/stacktrace.c:499 (C)
 __dump_stack+0x30/0x40 lib/dump_stack.c:94
 dump_stack_lvl+0xd8/0x12c lib/dump_stack.c:120
 dump_stack+0x1c/0x28 lib/dump_stack.c:129
 print_circular_bug+0x324/0x32c kernel/locking/lockdep.c:2043
 check_noncircular+0x154/0x174 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x1774/0x30a4 kernel/locking/lockdep.c:5237
 lock_acquire+0x14c/0x2e0 kernel/locking/lockdep.c:5868
 __mutex_lock_common+0x1d0/0x2678 kernel/locking/mutex.c:598
 __mutex_lock kernel/locking/mutex.c:760 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:812
 __btrfs_release_delayed_node+0xa0/0x9b0 fs/btrfs/delayed-inode.c:290
 btrfs_release_delayed_node fs/btrfs/delayed-inode.c:315 [inline]
 btrfs_remove_delayed_node+0x68/0x84 fs/btrfs/delayed-inode.c:1326
 btrfs_evict_inode+0x578/0xe28 fs/btrfs/inode.c:5587
 evict+0x414/0x928 fs/inode.c:810
 iput_final fs/inode.c:1914 [inline]
 iput+0x95c/0xad4 fs/inode.c:1966
 iget_failed+0xec/0x134 fs/bad_inode.c:248
 btrfs_read_locked_inode+0xe1c/0x1234 fs/btrfs/inode.c:4101
 btrfs_iget+0x1b0/0x264 fs/btrfs/inode.c:5837
 btrfs_run_defrag_inode fs/btrfs/defrag.c:237 [inline]
 btrfs_run_defrag_inodes+0x520/0xdc4 fs/btrfs/defrag.c:309
 cleaner_kthread+0x21c/0x418 fs/btrfs/disk-io.c:1516
 kthread+0x5fc/0x75c kernel/kthread.c:463
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:844


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup
