public inbox for linux-xfs@vger.kernel.org
* XFS performance oddity
@ 2010-11-23 12:24 Nick Piggin
  2010-11-23 12:30 ` Nick Piggin
  2010-11-23 20:58 ` Dave Chinner
  0 siblings, 2 replies; 7+ messages in thread
From: Nick Piggin @ 2010-11-23 12:24 UTC (permalink / raw)
  To: xfs

Hi,

Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
ends up with XFS in funny patterns.
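For reference, the essential file pattern can be approximated in a few lines — zero-size files, fsync on close, then an unlink pass. This is only an illustrative sketch (file count, naming, and single-threadedness are mine; the actual run used fs_mark with parallel workers):

```python
import os
import tempfile

def churn(dirpath, nfiles=1000):
    """Create zero-size files with fsync-on-close, then unlink them.

    Rough stand-in for the fs_mark create phase (0-byte files, fsync
    on close) followed by the parallel rm seen in the traces below;
    the file count and names are illustrative only.
    """
    paths = []
    for i in range(nfiles):
        path = os.path.join(dirpath, "file.%06d" % i)
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
        os.fsync(fd)          # fsync before close, as in the benchmark
        os.close(fd)
        paths.append(path)
    for path in paths:        # unlink phase: each unlink needs a
        os.unlink(path)       # transaction reservation in the XFS log
    return nfiles

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(churn(d, 100))
```

Each unlink here is a separate log transaction, which is why the stalls show up under xfs_trans_reserve in the dump.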

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
 2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
 1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
 0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
 0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
 1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
 0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
 0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
 0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
 0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
 1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
 0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
 6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
 9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
 0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0

Got a sysrq-w dump of the blocked tasks (below). Any ideas?


[  495.828231] SysRq : Show Blocked State
[  495.828457]   task                        PC stack   pid father
[  495.828639] rm            D 0000000000000000  3856  4267   1860 0x00000000
[  495.828942]  ffff8800444fbc58 0000000000000046 ffff880000000000 00000000ffffffff
[  495.829061]  0000000000000282 ffff8800444fa000 ffff8800444fa000 000000000000d6c8
[  495.829061]  ffff8800444fa010 ffff8800444fbfd8 ffff88012bc25d00 ffff880124333e00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3208  4268   1860 0x00000000
[  495.829061]  ffff880075ce7c58 0000000000000046 ffff880000000000 00000000ffffffff
[  495.829061]  0000000000000282 ffff880075ce6000 ffff880075ce6000 000000000000d6c8
[  495.829061]  ffff880075ce6010 ffff880075ce7fd8 ffff88012bd29f00 ffff880124330f80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3448  4269   1860 0x00000000
[  495.829061]  ffff880052411c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880052410000 ffff880052410000 000000000000d6c8
[  495.829061]  ffff880052410010 ffff880052411fd8 ffff88012bcadd00 ffff880124335d00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3648  4270   1860 0x00000000
[  495.829061]  ffff88007361dc98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff88007361c000 ffff88007361c000 000000000000d6c8
[  495.829061]  ffff88007361c010 ffff88007361dfd8 ffff88012bc45d00 ffff880124330000
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4271   1860 0x00000000
[  495.829061]  ffff88005e9d3c58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff88005e9d2000 ffff88005e9d2000 000000000000d6c8
[  495.829061]  ffff88005e9d2010 ffff88005e9d3fd8 ffff88012bcd5d00 ffff880124334d80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4272   1860 0x00000000
[  495.829061]  ffff88007543bc58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff88007543a000 ffff88007543a000 000000000000d6c8
[  495.829061]  ffff88007543a010 ffff88007543bfd8 ffff88012bd91f00 ffff880124331f00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4273   1860 0x00000000
[  495.829061]  ffff880076efbc98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880076efa000 ffff880076efa000 000000000000d6c8
[  495.829061]  ffff880076efa010 ffff880076efbfd8 ffff88012bd2dd00 ffff880124336c80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3280  4274   1860 0x00000000
[  495.829061]  ffff8800767f7c58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff8800767f6000 ffff8800767f6000 000000000000d6c8
[  495.829061]  ffff8800767f6010 ffff8800767f7fd8 ffff88012bd95d00 ffff880124332e80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3488  4275   1860 0x00000000
[  495.829061]  ffff880075299c58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff880075298000 ffff880075298000 000000000000d6c8
[  495.829061]  ffff880075298010 ffff880075299fd8 ffffffff81c0b020 ffff8801e9be3e00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3056  4276   1860 0x00000000
[  495.829061]  ffff8800459b9c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff8800459b8000 ffff8800459b8000 000000000000d6c8
[  495.829061]  ffff8800459b8010 ffff8800459b9fd8 ffff88012bdd1f00 ffff8801e9be5d00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3224  4277   1860 0x00000000
[  495.829061]  ffff880081a65c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880081a64000 ffff880081a64000 000000000000d6c8
[  495.829061]  ffff880081a64010 ffff880081a65fd8 ffff88012bc01f00 ffff8801e9be6c80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3488  4278   1860 0x00000000
[  495.829061]  ffff88007fcfbc98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff88007fcfa000 ffff88007fcfa000 000000000000d6c8
[  495.829061]  ffff88007fcfa010 ffff88007fcfbfd8 ffff88012bcd1f00 ffff8801e9be2e80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3360  4279   1860 0x00000000
[  495.829061]  ffff880082201c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880082200000 ffff880082200000 000000000000d6c8
[  495.829061]  ffff880082200010 ffff880082201fd8 ffff88012be11f00 ffff8801e9be0000
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3456  4280   1860 0x00000000
[  495.829061]  ffff88004478dc58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff88004478c000 ffff88004478c000 000000000000d6c8
[  495.829061]  ffff88004478c010 ffff88004478dfd8 ffff88012bd05d00 ffff8801e9be0f80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3216  4281   1860 0x00000000
[  495.829061]  ffff8800478a9c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff8800478a8000 ffff8800478a8000 000000000000d6c8
[  495.829061]  ffff8800478a8010 ffff8800478a9fd8 ffff88012bd01f00 ffff8801ea026c80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3488  4282   1860 0x00000000
[  495.829061]  ffff88003e82dc98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff88003e82c000 ffff88003e82c000 000000000000d6c8
[  495.829061]  ffff88003e82c010 ffff88003e82dfd8 ffff88012bc6dd00 ffff8801ea025d00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4283   1860 0x00000000
[  495.829061]  ffff88003ccafc58 0000000000000046 ffff88003ccafbd8 0000000000000000
[  495.829061]  0000000000000282 ffff88003ccae000 ffff88003ccae000 000000000000d6c8
[  495.829061]  ffff88003ccae010 ffff88003ccaffd8 ffff88012bc05d00 ffff8800777a8000
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4284   1860 0x00000000
[  495.829061]  ffff880071d57c58 0000000000000046 ffff880000000000 00000000ffffffff
[  495.829061]  0000000000000282 ffff880071d56000 ffff880071d56000 000000000000d6c8
[  495.829061]  ffff880071d56010 ffff880071d57fd8 ffff88012bd55d00 ffff8800777a8f80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3488  4285   1860 0x00000000
[  495.829061]  ffff88004bd59c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff88004bd58000 ffff88004bd58000 000000000000d6c8
[  495.829061]  ffff88004bd58010 ffff88004bd59fd8 ffff88012bc69f00 ffff8800777a9f00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3568  4286   1860 0x00000000
[  495.829061]  ffff880075ebbc98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880075eba000 ffff880075eba000 000000000000d6c8
[  495.829061]  ffff880075eba010 ffff880075ebbfd8 ffff88012bd51f00 ffff8800777aae80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  2624  4287   1860 0x00000000
[  495.829061]  ffff880037f1fc58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff880037f1e000 ffff880037f1e000 000000000000d6c8
[  495.829061]  ffff880037f1e010 ffff880037f1ffd8 ffff88012bca9f00 ffff8800777abe00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3440  4288   1860 0x00000000
[  495.829061]  ffff880070d21c58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff880070d20000 ffff880070d20000 000000000000d6c8
[  495.829061]  ffff880070d20010 ffff880070d21fd8 ffff88012bc41f00 ffff8800777acd80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  3056  4289   1860 0x00000000
[  495.829061]  ffff880047dd3c58 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000282 ffff880047dd2000 ffff880047dd2000 000000000000d6c8
[  495.829061]  ffff880047dd2010 ffff880047dd3fd8 ffff88012bc05d00 ffff8800777add00
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079358>] xlog_grant_log_space+0x2c8/0x8a0 [xfs]
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a61>] ? xfs_iunlock+0x121/0x1f0 [xfs]
[  495.829061]  [<ffffffffa008eb0c>] xfs_remove+0x12c/0x470 [xfs]
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffff8112f1e0>] ? __d_lookup+0x1c0/0x1f0
[  495.829061]  [<ffffffff8112f020>] ? __d_lookup+0x0/0x1f0
[  495.829061]  [<ffffffff81123d58>] ? vfs_unlink+0x68/0xf0
[  495.829061]  [<ffffffffa009f303>] xfs_vn_unlink+0x43/0x90 [xfs]
[  495.829061]  [<ffffffff81123d84>] vfs_unlink+0x94/0xf0
[  495.829061]  [<ffffffff81126893>] do_unlinkat+0x193/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b
[  495.829061] rm            D 0000000000000000  2728  4290   1860 0x00000000
[  495.829061]  ffff880056577c98 0000000000000046 ffff880000000000 0000000000000000
[  495.829061]  0000000000000286 ffff880056576000 ffff880056576000 000000000000d6c8
[  495.829061]  ffff880056576010 ffff880056577fd8 ffff88012bc21f00 ffff8800777aec80
[  495.829061] Call Trace:
[  495.829061]  [<ffffffffa0079228>] xlog_grant_log_space+0x198/0x8a0 [xfs]
[  495.829061]  [<ffffffff81606d3d>] ? sub_preempt_count+0x9d/0xd0
[  495.829061]  [<ffffffff810426e0>] ? default_wake_function+0x0/0x10
[  495.829061]  [<ffffffffa007a9bb>] xfs_log_reserve+0x16b/0x2b0 [xfs]
[  495.829061]  [<ffffffffa0088be4>] xfs_trans_reserve+0xe4/0x290 [xfs]
[  495.829061]  [<ffffffffa0067a1d>] ? xfs_iunlock+0xdd/0x1f0 [xfs]
[  495.829061]  [<ffffffffa009058c>] xfs_inactive+0x22c/0x520 [xfs]
[  495.829061]  [<ffffffff8127dfd6>] ? __init_rwsem+0x56/0x70
[  495.829061]  [<ffffffffa009ff8b>] xfs_fs_evict_inode+0xeb/0x180 [xfs]
[  495.829061]  [<ffffffff81130aa2>] evict+0x22/0xb0
[  495.829061]  [<ffffffff81131075>] iput+0x1c5/0x2b0
[  495.829061]  [<ffffffff81126825>] do_unlinkat+0x125/0x1d0
[  495.829061]  [<ffffffff8111d231>] ? sys_newfstatat+0x31/0x50
[  495.829061]  [<ffffffff816028c9>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[  495.829061]  [<ffffffff81126a3d>] sys_unlinkat+0x1d/0x40
[  495.829061]  [<ffffffff8100312b>] system_call_fastpath+0x16/0x1b

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: XFS performance oddity
  2010-11-23 12:24 XFS performance oddity Nick Piggin
@ 2010-11-23 12:30 ` Nick Piggin
  2010-11-23 20:58 ` Dave Chinner
  1 sibling, 0 replies; 7+ messages in thread
From: Nick Piggin @ 2010-11-23 12:30 UTC (permalink / raw)
  To: Nick Piggin; +Cc: xfs

On Tue, Nov 23, 2010 at 11:24:49PM +1100, Nick Piggin wrote:
> Hi,
> 
> Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
> ends up with XFS in funny patterns.
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> 24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
> 21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
>  2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
>  1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
>  0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
>  0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
>  1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
>  0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
>  0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
>  1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
> 23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
>  6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
> 23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
>  9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
>  0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0
> 
> Got a dump of sleeping tasks. Any ideas?

Oh, this is with a lot of kernel debug options, including xfs debugging
turned on. So it might not be an interesting performance measurement.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: XFS performance oddity
  2010-11-23 12:24 XFS performance oddity Nick Piggin
  2010-11-23 12:30 ` Nick Piggin
@ 2010-11-23 20:58 ` Dave Chinner
  2010-11-24  0:50   ` Nick Piggin
  1 sibling, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2010-11-23 20:58 UTC (permalink / raw)
  To: Nick Piggin; +Cc: xfs

On Tue, Nov 23, 2010 at 11:24:49PM +1100, Nick Piggin wrote:
> Hi,
> 
> Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
> ends up with XFS in funny patterns.
> 
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> 24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
> 21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
>  2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
>  1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
>  0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
>  0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
>  1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
>  0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
>  0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
>  1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
>  0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
> 23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
>  6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
> 23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
>  9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
>  0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0
> 
> Got a dump of sleeping tasks. Any ideas?

It is stuck waiting for log space to be freed up. Generally this is
caused by log IO completion not occurring or an unflushable object
preventing the tail from being moved forward.  What:

	- kernel are you running?
	- is the time resolution of the above output?
	- is the output of mkfs.xfs?
	- are your mount options?
	- is the fs_mark command line?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: XFS performance oddity
  2010-11-23 20:58 ` Dave Chinner
@ 2010-11-24  0:50   ` Nick Piggin
  2010-11-24  3:15     ` Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Nick Piggin @ 2010-11-24  0:50 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Nick Piggin, xfs

On Wed, Nov 24, 2010 at 07:58:04AM +1100, Dave Chinner wrote:
> On Tue, Nov 23, 2010 at 11:24:49PM +1100, Nick Piggin wrote:
> > Hi,
> > 
> > Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
> > ends up with XFS in funny patterns.
> > 
> > procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> > 24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
> > 21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
> >  2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
> >  1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
> >  0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
> >  0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
> >  1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
> >  0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
> >  0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
> >  0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
> >  0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
> >  1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
> >  0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
> > 23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
> >  6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
> > 23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
> >  9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
> >  0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0
> > 
> > Got a dump of sleeping tasks. Any ideas?
> 
> It is stuck waiting for log space to be freed up. Generally this is
> caused by log IO completion not occurring or an unflushable object
> preventing the tail from being moved forward.  What:

Yeah, it's strange; it seems like it hits some timeout or gets kicked
along by background writeback or something. Missed wakeup somewhere?

BTW, I reproduced similar traces with debug options turned off, and
with delaylog.
 

> 	- kernel are you running?

2.6.37-rc3 vanilla

> 	- is the time resolution of the above output?

1 second

> 	- is the output of mkfs.xfs?

meta-data=/dev/ram0              isize=256    agcount=16, agsize=65536 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=1048576, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

> 	- are your mount options?

mount -o delaylog,logbsize=262144,nobarrier /dev/ram0 mnt

> 	- is the fs_mark command line?

../fs_mark -S1 -k -n 1000  -L 100 -s 0  -d scratch/0  -d scratch/1  -d
scratch/2  -d scratch/3  -d scratch/4  -d scratch/5  -d scratch/6  -d
scratch/7 -d scratch/8 -d scratch/9 -d scratch/10 -d scratch/11 -d
scratch/12 -d scratch/13 -d scratch/14 -d scratch/15 -d scratch/16 -d
scratch/17 -d scratch/18 -d scratch/19 -d scratch/20 -d scratch/21 -d
scratch/22 -d scratch/23
for f in scratch/* ; do rm -rf $f & done ; wait

Ran it again, and yes, it has locked up for a long, long time. It seems
to be in the rm phase, but I think I've seen a similar stall (although not
as long) in the fs_mark phase too.



procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 6  0      0 547088    420 222872    0    0  1720 736397 23423 384539  7 32 61  0
 2  0      0 613604    420 225772    0    0  3960 492183 14276 228470  6 32 61  0
11  0      0 796704    420 241464    0    0   548 607070 19239 340874  5 20 75  0
14  0      0 695524    420 259748    0    0     8 758151 23038 382252  7 32 61  0
19  0      0 604932    420 276504    0    0    80 784919 23011 368598  7 31 63  0
24  0      0 474676    468 280416    0    0 17068 561785 21134 321450  6 28 62  3
25  0    484 189040    296 346560  164  560 130348 132672 68459 93936  5 85  9  0
 1  1    484 197768    300 375956   20    0 30292 153619 9570 116342  1 61 35  3
 2  0    484 205788    300 374448    0    0     0     0  241  381  0  1 90  9
22  0    484 363124    300 392992    0    0 18572 99025 9520 129093  0 17 82  0
 1  1    484 541336    300 396900    0    0  6236 59965 4118 82291  0 33 60  6
 1  0    484 542916    300 397108    0    0     0     0   61   90  0  0 99  1
 0  0    484 542916    300 397056    0    0     0     0   71   74  0  0 100  0
 0  0    484 542916    300 397056    0    0     0     0   43   58  0  0 100  0
 0  0    484 543196    300 397056    0    0     0     0   63   66  0  0 100  0
 0  0    484 543196    300 397056    0    0     0     0   44   60  0  0 100  0
 0  0    484 543652    300 397320    0    0     0     0   68   81  0  0 100  0
 0  0    484 544040    300 397320    0    0     0     0   44   65  0  0 100  0
 0  0    484 544040    300 397320    0    0     0     0   64   68  0  0 100  0
 0  0    484 544040    300 397320    0    0     0     0   44   59  0  0 100  0
 1  0    484 544288    300 397320    0    0     0     0   65   69  0  0 100  0
 1  0    484 544288    300 397320    0    0     0     0   45   62  0  0 100  0
 1  0    484 544296    300 397320    0    0     0     0   64   65  0  0 100  0
 0  0    484 544544    300 397320    0    0     0     0   45   62  0  0 100  0
 0  0    484 544544    300 397320    0    0     0     0   64   70  0  0 100  0

Every blocked task (sampled via several sysrq+w dumps) is stuck in
xlog_grant_log_space.

[  226.558091] rm            D 0000000000000008     0  3933   1510
[  226.558091] Call Trace:
[  226.558091]  [<ffffffffa004cbc8>] xlog_grant_log_space+0x158/0x3e0 [xfs]
[  226.558091]  [<ffffffffa00609e5>] ? kmem_zone_zalloc+0x35/0x50 [xfs]
[  226.558091]  [<ffffffff8103a1c0>] ? default_wake_function+0x0/0x10
[  226.558091]  [<ffffffffa004cf32>] xfs_log_reserve+0xe2/0xf0 [xfs]
[  226.558091]  [<ffffffffa005a5eb>] xfs_trans_reserve+0x9b/0x210 [xfs]
[  226.558091]  [<ffffffffa005a987>] ? xfs_trans_alloc+0x97/0xb0 [xfs]
[  226.558091]  [<ffffffffa005f1d7>] xfs_inactive+0x277/0x470 [xfs]
[  226.558091]  [<ffffffffa006b023>] xfs_fs_evict_inode+0xa3/0xb0 [xfs]
[  226.558091]  [<ffffffff810e3062>] evict+0x22/0xb0
[  226.558091]  [<ffffffff810e3c55>] iput+0x1c5/0x2c0
[  226.558091]  [<ffffffff810da420>] do_unlinkat+0x120/0x1d0
[  226.558091]  [<ffffffff810d15e1>] ? sys_newfstatat+0x31/0x50
[  226.558091]  [<ffffffff810da63d>] sys_unlinkat+0x1d/0x40
[  226.558091]  [<ffffffff81002deb>] system_call_fastpath+0x16/0x1b

Occasional 1-2s bursts of bi/bo activity seem to happen every
30s, so it's probably some writeback thread kicking in (or an xfs
periodic thread?).

... and it eventually just recovered after 5 minutes.

Thanks,
Nick

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: XFS performance oddity
  2010-11-24  0:50   ` Nick Piggin
@ 2010-11-24  3:15     ` Dave Chinner
  2010-11-24  5:28       ` [PATCH] " Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2010-11-24  3:15 UTC (permalink / raw)
  To: Nick Piggin; +Cc: xfs

On Wed, Nov 24, 2010 at 11:50:03AM +1100, Nick Piggin wrote:
> On Wed, Nov 24, 2010 at 07:58:04AM +1100, Dave Chinner wrote:
> > On Tue, Nov 23, 2010 at 11:24:49PM +1100, Nick Piggin wrote:
> > > Hi,
> > > 
> > > Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
> > > ends up with XFS in funny patterns.
> > > 
> > > procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> > > 24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
> > > 21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
> > >  2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
> > >  1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
> > >  0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
> > >  0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
> > >  1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
> > >  0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
> > >  0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
> > >  0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
> > >  0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
> > >  1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
> > >  0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
> > > 23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
> > >  6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
> > > 23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
> > >  9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
> > >  0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0
> > > 
> > > Got a dump of sleeping tasks. Any ideas?
> > 
> > It is stuck waiting for log space to be freed up. Generally this is
> > caused by log IO completion not occurring or an unflushable object
> > preventing the tail from being moved forward.  What:
> 
> > Yeah, it's strange; it seems like it hits some timeout or gets kicked
> > along by background writeback or something. Missed wakeup somewhere?

No idea yet.

> > 	- is the output of mkfs.xfs?
> 
> meta-data=/dev/ram0              isize=256    agcount=16, agsize=65536 blks
>          =                       sectsz=512   attr=2
> data     =                       bsize=4096   blocks=1048576, imaxpct=25
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=16384, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0

Ok, small log, small AGs.

> > 	- are your mount options?
> 
> mount -o delaylog,logbsize=262144,nobarrier /dev/ram0 mnt
> 
> > 	- is the fs_mark command line?
> 
> ../fs_mark -S1 -k -n 1000  -L 100 -s 0  -d scratch/0  -d scratch/1  -d
> scratch/2  -d scratch/3  -d scratch/4  -d scratch/5  -d scratch/6  -d
> scratch/7 -d scratch/8 -d scratch/9 -d scratch/10 -d scratch/11 -d
> scratch/12 -d scratch/13 -d scratch/14 -d scratch/15 -d scratch/16 -d
> scratch/17 -d scratch/18 -d scratch/19 -d scratch/20 -d scratch/21 -d
> scratch/22 -d scratch/23
> for f in scratch/* ; do rm -rf $f & done ; wait

Ok, so you are effectively doing a concurrent synchronous create of 2.4M
zero byte files.

BTW, how many CPU cores does your machine have? If it's more than 8, then
you're probably getting a fair bit of serialisation on the per-ag
structures. I normally use agcount=num_cpus * 2 for scalability
testing when running one load thread per CPU.

> Ran it again, and yes it has locked up for a long long time, it seems
> to be in the rm phase, but I think I've seen similar stall (although not
> so long) in the fs_mark phase too.

Ok, I've just reproduced a couple of short hangs (a few seconds)
during the rm phase so I should be able to track it down.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH] Re: XFS performance oddity
  2010-11-24  3:15     ` Dave Chinner
@ 2010-11-24  5:28       ` Dave Chinner
  2010-11-25  4:54         ` Nick Piggin
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2010-11-24  5:28 UTC (permalink / raw)
  To: Nick Piggin; +Cc: xfs

On Wed, Nov 24, 2010 at 02:15:24PM +1100, Dave Chinner wrote:
> On Wed, Nov 24, 2010 at 11:50:03AM +1100, Nick Piggin wrote:
> > On Wed, Nov 24, 2010 at 07:58:04AM +1100, Dave Chinner wrote:
> > > On Tue, Nov 23, 2010 at 11:24:49PM +1100, Nick Piggin wrote:
> > > > Hi,
> > > > 
> > > > Running parallel fs_mark (0 size inodes, fsync on close) on a ramdisk
> > > > ends up with XFS in funny patterns.
> > > > 
> > > > procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
> > > >  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
> > > > 24  1   6576 166396    252 393676  132  140 16900 80666 21308 104333  1 84 14  1
> > > > 21  0   6712 433856    256 387080  100  224  9152 53487 13677 53732  0 55 45  0
> > > >  2  0   7068 463496    248 389100    0  364  2940 17896 4485 26122  0 33 65  2
> > > >  1  0   7068 464340    248 388928    0    0     0     0   66  207  0  0 100  0
> > > >  0  0   7068 464340    248 388928    0    0     0     0   79  200  0  0 100  0
> > > >  0  0   7068 464544    248 388928    0    0     0     0   65  199  0  0 100  0
> > > >  1  0   7068 464748    248 388928    0    0     0     0   79  201  0  0 100  0
> > > >  0  0   7068 465064    248 388928    0    0     0     0   66  202  0  0 100  0
> > > >  0  0   7068 465312    248 388928    0    0     0     0   80  200  0  0 100  0
> > > >  0  0   7068 465500    248 388928    0    0     0     0   65  199  0  0 100  0
> > > >  0  0   7068 465500    248 388928    0    0     0     0   80  202  0  0 100  0
> > > >  1  0   7068 465500    248 388928    0    0     0     0   66  203  0  0 100  0
> > > >  0  0   7068 465500    248 388928    0    0     0     0   79  200  0  0 100  0
> > > > 23  0   7068 460332    248 388800    0    0  1416  8896 1981 7142  0  1 99  0
> > > >  6  0   6968 360248    248 403736   56    0 15568 95171 19438 110825  1 79 21  0
> > > > 23  0   6904 248736    248 419704  392    0 17412 118270 20208 111396  1 82 17  0
> > > >  9  0   6884 266116    248 435904  128    0 14956 79756 18554 118020  1 76 23  0
> > > >  0  0   6848 219640    248 445760  212    0  9932 51572 12622 76491  0 60 40  0
> > > > 
> > > > Got a dump of sleeping tasks. Any ideas?
> > > 
> > > It is stuck waiting for log space to be freed up. Generally this is
> > > caused by log IO completion not occurring or an unflushable object
> > > preventing the tail from being moved forward.  What:
> > 
> > Yeah, it's strange; it seems like it hits some timeout or gets kicked
> > along by background writeback or something. Missed wakeup somewhere?
> 
> No idea yet.

It's a maze of twisty passages. Essentially, though, the problem is
that the inode (or inode buffer) at the tail of the log cannot be flushed
because the inode buffer is stale, pinned and locked. Hence it
cannot be flushed by the xfsaild (can't lock the buffer) and
everything stops because there is no log space available.

There's a burst of activity every 30s because the xfssyncd forces
the log which causes unpinning, unlocking and freeing of the
troublesome buffer, which then frees up some log space.
Unfortunately, once in this state, there's a good chance it will
immediately get stuck on another buffer as soon as the load ramps up
again.

I think the problem is in _xfs_buf_find() where we do the trylock
operation on the buffer. In xfs_buf_lock() (where we block on the
lock) we do this:

        if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
		xfs_log_force(bp->b_target->bt_mount, 0);

That is, if the buffer is pinned and stale, we do an async log force
before going to sleep on the lock. This prevents having to wait for
a long time for the buffer to become unlocked and was introduced in
commit ed3b4d6c ("xfs: Improve scalability of busy extent tracking").
The reason was:


    The only problem with this approach is that when a metadata buffer is
    marked stale (e.g. a directory block is removed), then buffer remains
    pinned and locked until the log goes to disk. The issue here is that
    if that stale buffer is reallocated in a subsequent transaction, the
    attempt to lock that buffer in the transaction will hang waiting
    the log to go to disk to unlock and unpin the buffer. Hence if
    someone tries to lock a pinned, stale, locked buffer we need to
    push on the log to get it unlocked ASAP. Effectively we are trading
    off a guaranteed log force for a much less common trigger for log
    force to occur.

What we are seeing here is exactly the same problem, only it hasn't
been noticed because we are doing a trylock operation and most of
the time failure to obtain the lock does not matter. However, it
certainly appears that we need to also trigger a log force if the
trylock fails on a pinned, stale buffer.

The patch below should fix the problem.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

xfs: push stale, pinned buffers on trylock failures

From: Dave Chinner <dchinner@redhat.com>

As reported by Nick Piggin, XFS is suffering from long pauses under
highly concurrent workloads when hosted on ramdisks. The problem is
that an inode buffer is stuck in the pinned state in memory and as a
result either the inode buffer or one of the inodes within the
buffer is stopping the tail of the log from being moved forward.

The system remains in this state until a periodic log force issued
by xfssyncd causes the buffer to be unpinned. The main problem is
that these are stale buffers, and are hence held locked until the
transaction/checkpoint that marked them stale has been committed to
disk. When the filesystem gets into this state, only the xfssyncd
can cause the async transactions to be committed to disk and hence
unpin the inode buffer.

This problem was encountered when scaling the busy extent list, but
only the blocking lock interface was fixed to solve the problem.
Extend the same fix to the buffer trylock operations - if we fail to
lock a pinned, stale buffer, then force the log immediately so that
when the next attempt to lock it comes around, it will have been
unpinned.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
---
 fs/xfs/linux-2.6/xfs_buf.c |   36 +++++++++++++++++++-----------------
 1 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index aa1d353..94e69a7 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -488,29 +488,21 @@ found:
 	spin_unlock(&pag->pag_buf_lock);
 	xfs_perag_put(pag);
 
-	/* Attempt to get the semaphore without sleeping,
-	 * if this does not work then we need to drop the
-	 * spinlock and do a hard attempt on the semaphore.
+	/*
+	 * Attempt to get the semaphore without sleeping first. if we fail then
+	 * do a blocking lock if requested.
 	 */
-	if (down_trylock(&bp->b_sema)) {
+	if (xfs_buf_cond_lock(bp)) {
 		if (!(flags & XBF_TRYLOCK)) {
 			/* wait for buffer ownership */
 			xfs_buf_lock(bp);
 			XFS_STATS_INC(xb_get_locked_waited);
 		} else {
-			/* We asked for a trylock and failed, no need
-			 * to look at file offset and length here, we
-			 * know that this buffer at least overlaps our
-			 * buffer and is locked, therefore our buffer
-			 * either does not exist, or is this buffer.
-			 */
+			/* We asked for a trylock and failed.  */
 			xfs_buf_rele(bp);
 			XFS_STATS_INC(xb_busy_locked);
 			return NULL;
 		}
-	} else {
-		/* trylock worked */
-		XB_SET_OWNER(bp);
 	}
 
 	if (bp->b_flags & XBF_STALE) {
@@ -876,10 +868,18 @@ xfs_buf_rele(
  */
 
 /*
- *	Locks a buffer object, if it is not already locked.
- *	Note that this in no way locks the underlying pages, so it is only
- *	useful for synchronizing concurrent use of buffer objects, not for
- *	synchronizing independent access to the underlying pages.
+ *	Locks a buffer object, if it is not already locked.  Note that this in
+ *	no way locks the underlying pages, so it is only useful for
+ *	synchronizing concurrent use of buffer objects, not for synchronizing
+ *	independent access to the underlying pages.
+ *
+ *	If we come across a stale, pinned, locked buffer, we know that we are
+ *	being asked to lock a buffer that has been reallocated. Because it is
+ *	pinned, we know that the log has not been pushed to disk and hence it
+ *	will still be locked.  Rather than continuing to have trylock attempts
+ *	fail until someone else pushes the log, push it ourselves before
+ *	returning.  This means that the xfsaild will not get stuck trying
+ *	to push on stale inode buffers.
  */
 int
 xfs_buf_cond_lock(
@@ -890,6 +890,8 @@ xfs_buf_cond_lock(
 	locked = down_trylock(&bp->b_sema) == 0;
 	if (locked)
 		XB_SET_OWNER(bp);
+	else if (atomic_read(&bp->b_pin_count) && (bp->b_flags & XBF_STALE))
+		xfs_log_force(bp->b_target->bt_mount, 0);
 
 	trace_xfs_buf_cond_lock(bp, _RET_IP_);
 	return locked ? 0 : -EBUSY;

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: [PATCH] Re: XFS performance oddity
  2010-11-24  5:28       ` [PATCH] " Dave Chinner
@ 2010-11-25  4:54         ` Nick Piggin
  0 siblings, 0 replies; 7+ messages in thread
From: Nick Piggin @ 2010-11-25  4:54 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Nick Piggin, xfs

On Wed, Nov 24, 2010 at 04:28:23PM +1100, Dave Chinner wrote:
> xfs: push stale, pinned buffers on trylock failures
> 
> From: Dave Chinner <dchinner@redhat.com>
> 
> As reported by Nick Piggin, XFS is suffering from long pauses under
> highly concurrent workloads when hosted on ramdisks. The problem is
> that an inode buffer is stuck in the pinned state in memory and as a
> result either the inode buffer or one of the inodes within the
> buffer is stopping the tail of the log from being moved forward.
> 
> The system remains in this state until a periodic log force issued
> by xfssyncd causes the buffer to be unpinned. The main problem is
> that these are stale buffers, and are hence held locked until the
> transaction/checkpoint that marked them stale has been committed to
> disk. When the filesystem gets into this state, only the xfssyncd
> can cause the async transactions to be committed to disk and hence
> unpin the inode buffer.
> 
> This problem was encountered when scaling the busy extent list, but
> only the blocking lock interface was fixed to solve the problem.
> Extend the same fix to the buffer trylock operations - if we fail to
> lock a pinned, stale buffer, then force the log immediately so that
> when the next attempt to lock it comes around, it will have been
> unpinned.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>

No more stalls, thanks.


end of thread

Thread overview: 7+ messages
2010-11-23 12:24 XFS performance oddity Nick Piggin
2010-11-23 12:30 ` Nick Piggin
2010-11-23 20:58 ` Dave Chinner
2010-11-24  0:50   ` Nick Piggin
2010-11-24  3:15     ` Dave Chinner
2010-11-24  5:28       ` [PATCH] " Dave Chinner
2010-11-25  4:54         ` Nick Piggin
