From: Chris Bainbridge <chris.bainbridge@gmail.com>
To: linux-btrfs@vger.kernel.org
Subject: WARNING: possible circular locking dependency detected
Date: Sat, 23 Nov 2024 20:20:28 +0000
Message-ID: <Z0I5DApYx_uqT2pb@debian.local>
I noticed this splat in the log of a new kernel build. I think it was
triggered when mounting a btrfs filesystem from a USB drive, but I
wasn't able to reproduce it easily.
[ 781.752971] ======================================================
[ 781.752973] WARNING: possible circular locking dependency detected
[ 781.752974] 6.12.0-08446-g228a1157fb9f #5 Not tainted
[ 781.752976] ------------------------------------------------------
[ 781.752977] kswapd0/141 is trying to acquire lock:
[ 781.752978] ffff991d17e61ce8 (&delayed_node->mutex){+.+.}-{4:4}, at: __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753030]
but task is already holding lock:
[ 781.753031] ffffffffa00c8100 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xc9/0xa80
[ 781.753037]
which lock already depends on the new lock.
[ 781.753038]
the existing dependency chain (in reverse order) is:
[ 781.753039]
-> #3 (fs_reclaim){+.+.}-{0:0}:
[ 781.753042] fs_reclaim_acquire+0xbd/0xf0
[ 781.753045] __kmalloc_node_noprof+0xa9/0x4f0
[ 781.753048] __kvmalloc_node_noprof+0x24/0x100
[ 781.753049] sbitmap_init_node+0x98/0x240
[ 781.753053] scsi_realloc_sdev_budget_map+0xdd/0x1d0
[ 781.753056] scsi_add_lun+0x458/0x760
[ 781.753058] scsi_probe_and_add_lun+0x15e/0x480
[ 781.753060] __scsi_scan_target+0xfb/0x230
[ 781.753062] scsi_scan_channel+0x65/0xc0
[ 781.753064] scsi_scan_host_selected+0xfb/0x160
[ 781.753066] do_scsi_scan_host+0x9d/0xb0
[ 781.753067] do_scan_async+0x1c/0x1a0
[ 781.753069] async_run_entry_fn+0x2d/0x120
[ 781.753072] process_one_work+0x210/0x730
[ 781.753075] worker_thread+0x193/0x350
[ 781.753077] kthread+0xf3/0x120
[ 781.753079] ret_from_fork+0x40/0x70
[ 781.753081] ret_from_fork_asm+0x11/0x20
[ 781.753084]
-> #2 (&q->q_usage_counter(io)#10){++++}-{0:0}:
[ 781.753087] blk_mq_submit_bio+0x970/0xb40
[ 781.753090] __submit_bio+0x116/0x200
[ 781.753093] submit_bio_noacct_nocheck+0x1bf/0x420
[ 781.753095] submit_bio_noacct+0x1cd/0x620
[ 781.753097] submit_bio+0x38/0x100
[ 781.753099] btrfs_submit_dev_bio+0xf1/0x340 [btrfs]
[ 781.753132] btrfs_submit_bio+0x132/0x170 [btrfs]
[ 781.753160] btrfs_submit_chunk+0x19a/0x650 [btrfs]
[ 781.753187] btrfs_submit_bbio+0x1b/0x30 [btrfs]
[ 781.753215] read_extent_buffer_pages+0x197/0x220 [btrfs]
[ 781.753254] btrfs_read_extent_buffer+0x95/0x1d0 [btrfs]
[ 781.753292] read_block_for_search+0x21c/0x3b0 [btrfs]
[ 781.753328] btrfs_search_slot+0x362/0x1030 [btrfs]
[ 781.753359] btrfs_init_root_free_objectid+0x88/0x120 [btrfs]
[ 781.753392] btrfs_get_root_ref+0x21a/0x3c0 [btrfs]
[ 781.753422] btrfs_get_fs_root+0x13/0x20 [btrfs]
[ 781.753450] btrfs_lookup_dentry+0x606/0x670 [btrfs]
[ 781.753482] btrfs_lookup+0x12/0x40 [btrfs]
[ 781.753511] __lookup_slow+0xf9/0x1a0
[ 781.753514] walk_component+0x10c/0x180
[ 781.753516] path_lookupat+0x67/0x1a0
[ 781.753518] filename_lookup+0xee/0x200
[ 781.753520] vfs_path_lookup+0x54/0x90
[ 781.753522] mount_subtree+0x8b/0x150
[ 781.753525] btrfs_get_tree+0x3a3/0x890 [btrfs]
[ 781.753557] vfs_get_tree+0x27/0x100
[ 781.753563] path_mount+0x4f3/0xc00
[ 781.753566] __x64_sys_mount+0x120/0x160
[ 781.753568] x64_sys_call+0x8a1/0xfb0
[ 781.753569] do_syscall_64+0x87/0x140
[ 781.753573] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 781.753578]
-> #1 (btrfs-tree-01){++++}-{4:4}:
[ 781.753581] lock_release+0x12f/0x2c0
[ 781.753586] up_write+0x1c/0x1f0
[ 781.753587] btrfs_tree_unlock+0x33/0xc0 [btrfs]
[ 781.753625] unlock_up+0x1ce/0x380 [btrfs]
[ 781.753657] btrfs_search_slot+0x33a/0x1030 [btrfs]
[ 781.753683] btrfs_lookup_inode+0x52/0xe0 [btrfs]
[ 781.753702] __btrfs_update_delayed_inode+0x6f/0x2f0 [btrfs]
[ 781.753723] btrfs_commit_inode_delayed_inode+0x123/0x130 [btrfs]
[ 781.753741] btrfs_evict_inode+0x2ff/0x440 [btrfs]
[ 781.753761] evict+0x11f/0x2d0
[ 781.753763] iput.part.0+0x1bb/0x290
[ 781.753764] iput+0x1c/0x30
[ 781.753765] do_unlinkat+0x2d2/0x320
[ 781.753767] __x64_sys_unlinkat+0x35/0x70
[ 781.753768] x64_sys_call+0x51b/0xfb0
[ 781.753769] do_syscall_64+0x87/0x140
[ 781.753771] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 781.753773]
-> #0 (&delayed_node->mutex){+.+.}-{4:4}:
[ 781.753774] __lock_acquire+0x1615/0x27d0
[ 781.753776] lock_acquire+0xc9/0x300
[ 781.753777] __mutex_lock+0xd9/0xe80
[ 781.753780] mutex_lock_nested+0x1b/0x30
[ 781.753781] __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753799] btrfs_remove_delayed_node+0x2a/0x40 [btrfs]
[ 781.753816] btrfs_evict_inode+0x1a5/0x440 [btrfs]
[ 781.753834] evict+0x11f/0x2d0
[ 781.753835] dispose_list+0x39/0x80
[ 781.753836] prune_icache_sb+0x59/0x90
[ 781.753838] super_cache_scan+0x14e/0x1d0
[ 781.753839] do_shrink_slab+0x176/0x7a0
[ 781.753841] shrink_slab+0x4b6/0x970
[ 781.753842] shrink_one+0x125/0x200
[ 781.753844] shrink_node+0xc75/0x13c0
[ 781.753846] balance_pgdat+0x50b/0xa80
[ 781.753847] kswapd+0x224/0x480
[ 781.753849] kthread+0xf3/0x120
[ 781.753850] ret_from_fork+0x40/0x70
[ 781.753852] ret_from_fork_asm+0x11/0x20
[ 781.753853]
other info that might help us debug this:
[ 781.753853] Chain exists of:
&delayed_node->mutex --> &q->q_usage_counter(io)#10 --> fs_reclaim
[ 781.753856] Possible unsafe locking scenario:
[ 781.753857]        CPU0                    CPU1
[ 781.753858]        ----                    ----
[ 781.753858]   lock(fs_reclaim);
[ 781.753859]                                lock(&q->q_usage_counter(io)#10);
[ 781.753861]                                lock(fs_reclaim);
[ 781.753862]   lock(&delayed_node->mutex);
[ 781.753863]
*** DEADLOCK ***
[ 781.753864] 2 locks held by kswapd0/141:
[ 781.753865] #0: ffffffffa00c8100 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xc9/0xa80
[ 781.753868] #1: ffff991d1dbc30e0 (&type->s_umount_key#30){++++}-{4:4}, at: super_cache_scan+0x39/0x1d0
[ 781.753872]
stack backtrace:
[ 781.753873] CPU: 13 UID: 0 PID: 141 Comm: kswapd0 Not tainted 6.12.0-08446-g228a1157fb9f #5
[ 781.753875] Hardware name: HP HP Pavilion Aero Laptop 13-be0xxx/8916, BIOS F.12 04/11/2023
[ 781.753876] Call Trace:
[ 781.753877] <TASK>
[ 781.753880] dump_stack_lvl+0x8d/0xe0
[ 781.753883] dump_stack+0x10/0x20
[ 781.753885] print_circular_bug+0x27d/0x350
[ 781.753887] check_noncircular+0x14c/0x170
[ 781.753889] __lock_acquire+0x1615/0x27d0
[ 781.753892] lock_acquire+0xc9/0x300
[ 781.753894] ? __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753912] __mutex_lock+0xd9/0xe80
[ 781.753914] ? __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753931] ? __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753948] mutex_lock_nested+0x1b/0x30
[ 781.753949] ? mutex_lock_nested+0x1b/0x30
[ 781.753951] __btrfs_release_delayed_node.part.0+0x3f/0x280 [btrfs]
[ 781.753968] btrfs_remove_delayed_node+0x2a/0x40 [btrfs]
[ 781.753984] btrfs_evict_inode+0x1a5/0x440 [btrfs]
[ 781.754003] ? lock_release+0xda/0x2c0
[ 781.754007] evict+0x11f/0x2d0
[ 781.754009] dispose_list+0x39/0x80
[ 781.754010] prune_icache_sb+0x59/0x90
[ 781.754011] super_cache_scan+0x14e/0x1d0
[ 781.754014] do_shrink_slab+0x176/0x7a0
[ 781.754016] shrink_slab+0x4b6/0x970
[ 781.754018] ? mark_held_locks+0x4d/0x80
[ 781.754019] ? shrink_slab+0x2fe/0x970
[ 781.754021] ? shrink_slab+0x383/0x970
[ 781.754023] shrink_one+0x125/0x200
[ 781.754025] ? shrink_node+0xc59/0x13c0
[ 781.754027] shrink_node+0xc75/0x13c0
[ 781.754029] ? shrink_node+0xaa7/0x13c0
[ 781.754031] ? mem_cgroup_iter+0x352/0x470
[ 781.754034] balance_pgdat+0x50b/0xa80
[ 781.754035] ? balance_pgdat+0x50b/0xa80
[ 781.754037] ? finish_task_switch.isra.0+0xd7/0x3a0
[ 781.754042] kswapd+0x224/0x480
[ 781.754044] ? sugov_hold_freq+0xc0/0xc0
[ 781.754046] ? balance_pgdat+0xa80/0xa80
[ 781.754047] kthread+0xf3/0x120
[ 781.754049] ? kthread_insert_work_sanity_check+0x60/0x60
[ 781.754051] ret_from_fork+0x40/0x70
[ 781.754052] ? kthread_insert_work_sanity_check+0x60/0x60
[ 781.754054] ret_from_fork_asm+0x11/0x20
[ 781.754057] </TASK>
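For what it's worth, the "possible unsafe locking scenario" above is the
usual reclaim-vs-eviction inversion collapsed down to two CPUs. Below is a
deliberately simplified userspace sketch of the same cycle, with made-up
pthread locks standing in for fs_reclaim and &delayed_node->mutex (so it is
only an illustration of the AB-BA pattern, not the actual btrfs or block
layer code paths):

/* cc -pthread deadlock-sketch.c — two threads take the same two locks in
 * opposite order; if the timing lines up they block each other forever,
 * which is the situation lockdep warns about before it can happen. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t reclaim_lock  = PTHREAD_MUTEX_INITIALIZER; /* stands in for fs_reclaim */
static pthread_mutex_t delayed_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for &delayed_node->mutex */

/* CPU0 in the scenario: enter reclaim, then try to take the node mutex. */
static void *kswapd_like(void *arg)
{
	pthread_mutex_lock(&reclaim_lock);
	usleep(1000);                        /* widen the race window */
	pthread_mutex_lock(&delayed_mutex);  /* blocks if the other thread holds it */
	pthread_mutex_unlock(&delayed_mutex);
	pthread_mutex_unlock(&reclaim_lock);
	return NULL;
}

/* CPU1: hold the node mutex, then do something that ends up in reclaim. */
static void *evict_like(void *arg)
{
	pthread_mutex_lock(&delayed_mutex);
	usleep(1000);
	pthread_mutex_lock(&reclaim_lock);   /* blocks if kswapd_like holds it: deadlock */
	pthread_mutex_unlock(&reclaim_lock);
	pthread_mutex_unlock(&delayed_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, kswapd_like, NULL);
	pthread_create(&b, NULL, evict_like, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock this run (it is timing dependent)");
	return 0;
}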