public inbox for linux-xfs@vger.kernel.org
* lockdep recursive locking warning on for-next
@ 2021-02-18 18:14 Brian Foster
  2021-02-18 18:49 ` Darrick J. Wong
  0 siblings, 1 reply; 9+ messages in thread
From: Brian Foster @ 2021-02-18 18:14 UTC (permalink / raw)
  To: Darrick J. Wong; +Cc: linux-xfs

Hi Darrick,

I'm seeing the warning below via xfs/167 on a test machine. It looks
like it's just complaining about nested freeze protection between the
scan invocation and an underlying transaction allocation for an inode
eofblocks trim. I suppose we could either refactor xfs_trans_alloc() to
drop and reacquire freeze protection around the scan, or alternatively
call __sb_writers_release() and __sb_writers_acquired() around the scan
to retain freeze protection and quiet lockdep. Hm?
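
Roughly the second option, as an untested sketch (the scan helper name
and exact placement inside xfs_trans_alloc() are assumptions about the
current for-next code, and error handling is elided):

```c
	/*
	 * We already hold sb_internal (taken at SB_FREEZE_FS level by
	 * sb_start_intwrite() above).  Tell lockdep we are conceptually
	 * handing that writer reference to the scan, so the nested
	 * sb_start_intwrite() in the per-inode eofblocks trim doesn't
	 * look like recursive locking.  The freeze count itself is
	 * unchanged, so we still can't race with a freezer.
	 */
	__sb_writers_release(mp->m_super, SB_FREEZE_FS);
	xfs_blockgc_free_space(mp, NULL);	/* hypothetical scan call */
	__sb_writers_acquired(mp->m_super, SB_FREEZE_FS);
```

The first option would instead sb_end_intwrite() before the scan and
retry the transaction allocation from the top, at the cost of actually
dropping freeze protection across the scan.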

BTW, the stack report also had me wondering whether we had or need any
nesting protection in these new scan invocations. For example, if we
have an fs with a bunch of tagged inodes and concurrent allocation
activity, would anything prevent an in-scan transaction allocation from
jumping back into the scan code to complete outstanding work? It looks
like that might not be possible right now because neither scan reserves
blocks, but they do both use transactions and that's quite a subtle
balance..

Brian

[  316.631387] ============================================
[  316.636697] WARNING: possible recursive locking detected
[  316.642010] 5.11.0-rc4 #35 Tainted: G        W I      
[  316.647148] --------------------------------------------
[  316.652462] fsstress/17733 is trying to acquire lock:
[  316.657515] ffff8e0fd1d90650 (sb_internal){++++}-{0:0}, at: xfs_free_eofblocks+0x104/0x1d0 [xfs]
[  316.666405] 
               but task is already holding lock:
[  316.672239] ffff8e0fd1d90650 (sb_internal){++++}-{0:0}, at: xfs_trans_alloc_inode+0x5f/0x160 [xfs]
[  316.681269] 
...
               stack backtrace:
[  316.774735] CPU: 38 PID: 17733 Comm: fsstress Tainted: G        W I       5.11.0-rc4 #35
[  316.782819] Hardware name: Dell Inc. PowerEdge R740/01KPX8, BIOS 1.6.11 11/20/2018
[  316.790386] Call Trace:
[  316.792844]  dump_stack+0x8b/0xb0
[  316.796168]  __lock_acquire.cold+0x159/0x2ab
[  316.800441]  lock_acquire+0x116/0x370
[  316.804106]  ? xfs_free_eofblocks+0x104/0x1d0 [xfs]
[  316.809045]  ? rcu_read_lock_sched_held+0x3f/0x80
[  316.813750]  ? kmem_cache_alloc+0x287/0x2b0
[  316.817937]  xfs_trans_alloc+0x1ad/0x310 [xfs]
[  316.822445]  ? xfs_free_eofblocks+0x104/0x1d0 [xfs]
[  316.827376]  xfs_free_eofblocks+0x104/0x1d0 [xfs]
[  316.832134]  xfs_blockgc_scan_inode+0x24/0x60 [xfs]
[  316.837074]  xfs_inode_walk_ag+0x202/0x4b0 [xfs]
[  316.841754]  ? xfs_inode_free_cowblocks+0xf0/0xf0 [xfs]
[  316.847040]  ? __lock_acquire+0x382/0x1e10
[  316.851142]  ? xfs_inode_free_cowblocks+0xf0/0xf0 [xfs]
[  316.856425]  xfs_inode_walk+0x66/0xc0 [xfs]
[  316.860670]  xfs_trans_alloc+0x160/0x310 [xfs]
[  316.865179]  xfs_trans_alloc_inode+0x5f/0x160 [xfs]
[  316.870119]  xfs_alloc_file_space+0x105/0x300 [xfs]
[  316.875048]  ? down_write_nested+0x30/0x70
[  316.879148]  xfs_file_fallocate+0x270/0x460 [xfs]
[  316.883913]  ? lock_acquire+0x116/0x370
[  316.887752]  ? __x64_sys_fallocate+0x3e/0x70
[  316.892026]  ? selinux_file_permission+0x105/0x140
[  316.896820]  vfs_fallocate+0x14d/0x3d0
[  316.900572]  __x64_sys_fallocate+0x3e/0x70
[  316.904669]  do_syscall_64+0x33/0x40
[  316.908250]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
...



Thread overview: 9+ messages
2021-02-18 18:14 lockdep recursive locking warning on for-next Brian Foster
2021-02-18 18:49 ` Darrick J. Wong
2021-02-18 19:12   ` Brian Foster
2021-02-18 19:31     ` Darrick J. Wong
2021-02-18 20:26       ` Brian Foster
2021-02-19  4:14       ` Dave Chinner
2021-02-19  4:28         ` Darrick J. Wong
2021-02-19  5:48           ` Dave Chinner
2021-02-22 22:53             ` Dave Chinner
