* XFS Deadlock on Linux 6.12.82
@ 2026-04-22 15:25 Ammar Faizi
From: Ammar Faizi @ 2026-04-22 15:25 UTC (permalink / raw)
To: Linux XFS Mailing List, Linux FSdevel Mailing List,
Linux Kernel Mailing List
Cc: Ammar Faizi, Yichun Zhang, Junlong Li, Alviro Iskandar Setiawan,
gwml
Hi,
While running Linux 6.12.82 with CONFIG_PROVE_LOCKING enabled, I
encountered the following lockdep splat. Based on the call trace, the
potential deadlock appears to be related to the XFS subsystem.
```
[ 795.914491] ======================================================
[ 795.918006] WARNING: possible circular locking dependency detected
[ 795.921528] 6.12.82+ #4 Tainted: G E
[ 795.924362] ------------------------------------------------------
[ 795.927870] kswapd0/1023 is trying to acquire lock:
[ 795.930669] ff11000211da9798 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 795.934476]
but task is already holding lock:
[ 795.936480] ffffffffb7cb9a40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xa91/0x14b0
[ 795.939189]
which lock already depends on the new lock.
[ 795.941972]
the existing dependency chain (in reverse order) is:
[ 795.944572]
-> #1 (fs_reclaim){+.+.}-{0:0}:
[ 795.946509] __lock_acquire+0xbcd/0x1a20
[ 795.948046] lock_acquire.part.0+0xf7/0x320
[ 795.949659] fs_reclaim_acquire+0xc9/0x110
[ 795.951251] __kmalloc_noprof+0xcd/0x570
[ 795.952779] xfs_attr_shortform_list+0x52f/0x1420 [xfs]
[ 795.954877] xfs_attr_list+0x1e2/0x290 [xfs]
[ 795.956635] xfs_vn_listxattr+0xf8/0x190 [xfs]
[ 795.958467] listxattr+0x7b/0xf0
[ 795.959761] __x64_sys_flistxattr+0x135/0x1c0
[ 795.961451] do_syscall_64+0x90/0x170
[ 795.962893] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 795.964806]
-> #0 (&xfs_nondir_ilock_class){++++}-{3:3}:
[ 795.967128] check_prev_add+0x1b5/0x23e0
[ 795.968663] validate_chain+0xb1a/0xf60
[ 795.970170] __lock_acquire+0xbcd/0x1a20
[ 795.971695] lock_acquire.part.0+0xf7/0x320
[ 795.973315] down_read_nested+0x92/0x470
[ 795.974841] xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 795.976878] xfs_inode_mark_reclaimable+0x1ae/0x270 [xfs]
[ 795.979035] destroy_inode+0xb9/0x1a0
[ 795.980495] evict+0x53f/0x840
[ 795.981742] prune_icache_sb+0x19d/0x2d0
[ 795.983284] super_cache_scan+0x30d/0x4f0
[ 795.984840] do_shrink_slab+0x319/0xc90
[ 795.986343] shrink_slab_memcg+0x450/0x960
[ 795.987932] shrink_slab+0x40b/0x500
[ 795.989342] shrink_one+0x403/0x830
[ 795.990722] shrink_many+0x345/0xd30
[ 795.992130] shrink_node+0xe0c/0x1440
[ 795.993570] balance_pgdat+0xa10/0x14b0
[ 795.995082] kswapd+0x392/0x520
[ 795.996356] kthread+0x293/0x350
[ 795.997655] ret_from_fork+0x31/0x70
[ 795.999073] ret_from_fork_asm+0x1a/0x30
[ 796.000600]
other info that might help us debug this:
[ 796.003350] Possible unsafe locking scenario:
[ 796.005392]        CPU0                    CPU1
[ 796.006963]        ----                    ----
[ 796.008535]   lock(fs_reclaim);
[ 796.009629]                                lock(&xfs_nondir_ilock_class);
[ 796.011953]                                lock(fs_reclaim);
[ 796.013901]   rlock(&xfs_nondir_ilock_class);
[ 796.015425]
*** DEADLOCK ***
[ 796.017452] 2 locks held by kswapd0/1023:
[ 796.018833] #0: ffffffffb7cb9a40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0xa91/0x14b0
[ 796.021650] #1: ff1100209f65c0e0 (&type->s_umount_key#56){++++}-{3:3}, at: super_cache_scan+0x7d/0x4f0
[ 796.024855]
stack backtrace:
[ 796.026377] CPU: 21 UID: 0 PID: 1023 Comm: kswapd0 Kdump: loaded Tainted: G E 6.12.82+ #4
[ 796.026381] Tainted: [E]=UNSIGNED_MODULE
[ 796.026382] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-4.fc41 04/01/2014
[ 796.026384] Call Trace:
[ 796.026388] <TASK>
[ 796.026396] dump_stack_lvl+0x5d/0x80
[ 796.026400] print_circular_bug.cold+0x38/0x48
[ 796.026404] check_noncircular+0x306/0x3f0
[ 796.026407] ? __pfx_check_noncircular+0x10/0x10
[ 796.026409] ? unwind_next_frame+0x1180/0x19e0
[ 796.026411] ? ret_from_fork_asm+0x1a/0x30
[ 796.026415] check_prev_add+0x1b5/0x23e0
[ 796.026418] validate_chain+0xb1a/0xf60
[ 796.026420] ? __pfx_validate_chain+0x10/0x10
[ 796.026423] ? validate_chain+0x14e/0xf60
[ 796.026425] __lock_acquire+0xbcd/0x1a20
[ 796.026428] lock_acquire.part.0+0xf7/0x320
[ 796.026432] ? xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 796.026556] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 796.026558] ? trace_lock_acquire+0x12f/0x1a0
[ 796.026560] ? find_held_lock+0x2d/0x110
[ 796.026561] ? xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 796.026652] ? lock_acquire+0x31/0xc0
[ 796.026654] ? xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 796.026741] down_read_nested+0x92/0x470
[ 796.026745] ? xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 796.026831] ? __pfx_down_read_nested+0x10/0x10
[ 796.026834] ? trace_xfs_ilock+0xff/0x160 [xfs]
[ 796.026943] xfs_can_free_eofblocks+0x344/0x600 [xfs]
[ 796.027045] ? __pfx_xfs_can_free_eofblocks+0x10/0x10 [xfs]
[ 796.027132] ? do_raw_spin_lock+0x12e/0x270
[ 796.027135] ? xfs_inode_mark_reclaimable+0x1a2/0x270 [xfs]
[ 796.027236] xfs_inode_mark_reclaimable+0x1ae/0x270 [xfs]
[ 796.027328] destroy_inode+0xb9/0x1a0
[ 796.027331] evict+0x53f/0x840
[ 796.027333] ? __pfx_evict+0x10/0x10
[ 796.027335] ? do_raw_spin_unlock+0x14a/0x1f0
[ 796.027337] ? _raw_spin_unlock+0x23/0x40
[ 796.027339] ? list_lru_walk_one+0xaa/0xf0
[ 796.027341] prune_icache_sb+0x19d/0x2d0
[ 796.027344] ? prune_dcache_sb+0xe3/0x160
[ 796.027346] ? __pfx_prune_icache_sb+0x10/0x10
[ 796.027348] ? __pfx_prune_dcache_sb+0x10/0x10
[ 796.027350] ? lock_release+0xda/0x140
[ 796.027353] super_cache_scan+0x30d/0x4f0
[ 796.027356] do_shrink_slab+0x319/0xc90
[ 796.027359] shrink_slab_memcg+0x450/0x960
[ 796.027360] ? shrink_slab_memcg+0x16b/0x960
[ 796.027362] ? __pfx_shrink_slab_memcg+0x10/0x10
[ 796.027365] ? try_to_shrink_lruvec+0x48e/0x8a0
[ 796.027367] shrink_slab+0x40b/0x500
[ 796.027369] ? __pfx_shrink_slab+0x10/0x10
[ 796.027371] ? shrink_many+0x320/0xd30
[ 796.027373] ? __pfx_try_to_shrink_lruvec+0x10/0x10
[ 796.027375] ? __pfx___lock_release.isra.0+0x10/0x10
[ 796.027376] ? __pfx___lock_release.isra.0+0x10/0x10
[ 796.027378] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 796.027380] shrink_one+0x403/0x830
[ 796.027383] shrink_many+0x345/0xd30
[ 796.027385] ? shrink_many+0x320/0xd30
[ 796.027387] ? shrink_many+0xa3/0xd30
[ 796.027390] shrink_node+0xe0c/0x1440
[ 796.027392] ? percpu_ref_put_many.constprop.0+0x7a/0x1d0
[ 796.027395] ? __pfx___lock_release.isra.0+0x10/0x10
[ 796.027396] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 796.027399] ? __pfx_shrink_node+0x10/0x10
[ 796.027401] ? percpu_ref_put_many.constprop.0+0x7f/0x1d0
[ 796.027403] ? mem_cgroup_iter+0x598/0x880
[ 796.027405] balance_pgdat+0xa10/0x14b0
[ 796.027409] ? __pfx_balance_pgdat+0x10/0x10
[ 796.027410] ? find_held_lock+0x2d/0x110
[ 796.027412] ? __pfx___schedule+0x10/0x10
[ 796.027415] ? __pfx___lock_release.isra.0+0x10/0x10
[ 796.027416] ? __pfx_lock_acquire.part.0+0x10/0x10
[ 796.027418] ? trace_lock_acquire+0x12f/0x1a0
[ 796.027420] ? set_pgdat_percpu_threshold+0x1c9/0x340
[ 796.027423] ? __pfx_kswapd_try_to_sleep+0x10/0x10
[ 796.027425] ? do_raw_spin_lock+0x12e/0x270
[ 796.027427] ? __pfx_kswapd+0x10/0x10
[ 796.027430] kswapd+0x392/0x520
[ 796.027432] ? __pfx_kswapd+0x10/0x10
[ 796.027434] ? __kthread_parkme+0x86/0x140
[ 796.027437] ? __pfx_kswapd+0x10/0x10
[ 796.027439] kthread+0x293/0x350
[ 796.027441] ? __pfx_kthread+0x10/0x10
[ 796.027443] ret_from_fork+0x31/0x70
[ 796.027445] ? __pfx_kthread+0x10/0x10
[ 796.027446] ret_from_fork_asm+0x1a/0x30
[ 796.027450] </TASK>
```
Here is my partition layout:
```
[root@localhost ~]# lsblk
NAME                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                     8:0    0   128G  0 disk
├─sda1                  8:1    0     2M  0 part
├─sda2                  8:2    0     2G  0 part /boot
└─sda3                  8:3    0   126G  0 part
  ├─rl-pool00_tmeta   253:0    0   128M  0 lvm
  │ └─rl-pool00-tpool 253:2    0 125.7G  0 lvm
  │   ├─rl-root       253:3    0 113.3G  0 lvm  /
  │   ├─rl-swap       253:4    0     4G  0 lvm  [SWAP]
  │   └─rl-pool00     253:5    0 125.7G  1 lvm
  └─rl-pool00_tdata   253:1    0 125.7G  0 lvm
    └─rl-pool00-tpool 253:2    0 125.7G  0 lvm
      ├─rl-root       253:3    0 113.3G  0 lvm  /
      ├─rl-swap       253:4    0     4G  0 lvm  [SWAP]
      └─rl-pool00     253:5    0 125.7G  1 lvm
sr0                    11:0    1   3.1G  0 rom
[root@localhost ~]# mount | grep xfs
/dev/mapper/rl-root on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=64k,sunit=128,swidth=128,noquota)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,nosuid,noexec,relatime)
/dev/sda2 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
[root@localhost ~]#
```
Please let me know if any more information is needed to debug this issue.
Thank you!
--
Ammar Faizi
* Re: XFS Deadlock on Linux 6.12.82
From: Dave Chinner @ 2026-04-22 20:52 UTC (permalink / raw)
To: Ammar Faizi
Cc: Linux XFS Mailing List, Linux FSdevel Mailing List,
Linux Kernel Mailing List, Yichun Zhang, Junlong Li,
Alviro Iskandar Setiawan, gwml
On Wed, Apr 22, 2026 at 10:25:05PM +0700, Ammar Faizi wrote:
> Hi,
>
> While running Linux 6.12.82 with CONFIG_PROVE_LOCKING enabled, I
> encountered the following lockdep splat. Based on the call trace, the
> potential deadlock appears to be related to the XFS subsystem.
Well-known false positive.

Lockdep knows nothing about inode reference counts and how they
interact with memory reclaim: a referenced, locked inode doing a
memory allocation cannot be found by memory reclaim, which only
processes unreferenced inodes, so reclaim cannot deadlock on inode
locks held by referenced inodes doing memory allocation.
-Dave.
--
Dave Chinner
dgc@kernel.org