public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* File system remain unresponsive until the system is rebooted.
@ 2012-01-30  9:18 Supratik Goswami
  2012-01-30 17:23 ` Peter Grandi
  2012-01-31  1:31 ` Dave Chinner
  0 siblings, 2 replies; 18+ messages in thread
From: Supratik Goswami @ 2012-01-30  9:18 UTC (permalink / raw)
  To: xfs

Hi

We are using RAID-0 volumes as PVs in our LVM stack and XFS as the filesystem.

The kernel logged the call trace below while the filesystem was being
expanded with the "xfs_growfs" command.
We have used xfs_growfs at least 3 times before but never came across
this situation.

The file system remained unresponsive until we rebooted the system
and increased the size of the filesystem again.
This time it worked fine. Can you please tell us why xfs_growfs suddenly hung?
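For reference, the expansion follows the usual LVM + XFS online-grow sequence; a minimal sketch (the volume and mount point names below are placeholders, not our actual ones):

```shell
# Sketch of the typical online-grow sequence for LVM + XFS.
# /dev/vg0/data and /mnt/data are placeholder names.

# 1. Extend the logical volume (by 50G here, as an example):
lvextend -L +50G /dev/vg0/data

# 2. Grow the mounted XFS filesystem to fill the LV.
#    Note xfs_growfs takes the mount point, not the block device:
xfs_growfs /mnt/data
```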

Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550724] flush-254:0   D
00000001016ff564     0 31679      2 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550728]
ffff8801de5df388 0000000000000246 ffff88002a3f57d0 ffff8801de5df308
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550732]
ffff8801deb59a10 ffff8801de5df350 ffff8801def9ec70 ffff8801de5dffd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550734]
ffff8801def9e8c0 ffff8801def9e8c0 ffff8801def9e8c0 ffff8801de5dffd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550737] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550748]
[<ffffffff812604c7>] ? xfs_btree_insert+0x67/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550754]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550757]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550761]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550764]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550770]
[<ffffffff81257f20>] xfs_bmap_btalloc+0x300/0xa90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550774]
[<ffffffff812513c5>] ? xfs_bmap_search_multi_extents+0xa5/0x110
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550784]
[<ffffffff81251609>] ? xfs_bmap_search_extents+0x69/0xf0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550787]
[<ffffffff812586cc>] xfs_bmap_alloc+0x1c/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550790]
[<ffffffff8125928f>] xfs_bmapi+0xb9f/0x1290
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550796]
[<ffffffff8127a205>] xfs_iomap_write_allocate+0x1c5/0x3c0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550799]
[<ffffffff8127af46>] xfs_iomap+0x2a6/0x2e0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550804]
[<ffffffff81293998>] xfs_map_blocks+0x28/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550807]
[<ffffffff81294d7a>] xfs_page_state_convert+0x3da/0x720
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550809]
[<ffffffff81295211>] xfs_vm_writepage+0x71/0x120
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550815]
[<ffffffff810b5b62>] __writepage+0x12/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550817]
[<ffffffff810b6917>] write_cache_pages+0x1d7/0x3d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550820]
[<ffffffff810b5b50>] ? __writepage+0x0/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550823]
[<ffffffff810b6b2f>] generic_writepages+0x1f/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550826]
[<ffffffff81294020>] xfs_vm_writepages+0x70/0x90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550829]
[<ffffffff810b6b5c>] do_writepages+0x1c/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550834]
[<ffffffff81111693>] writeback_single_inode+0x103/0x3f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550837]
[<ffffffff81111c64>] writeback_sb_inodes+0x184/0x2a0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550840]
[<ffffffff81111deb>] writeback_inodes_wb+0x6b/0x1f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550843]
[<ffffffff811121db>] wb_writeback+0x26b/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550847]
[<ffffffff8104c73a>] ? del_timer_sync+0x1a/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550850]
[<ffffffff811123bf>] wb_do_writeback+0x17f/0x190
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550853]
[<ffffffff8111241b>] bdi_writeback_task+0x4b/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550858]
[<ffffffff810c72f0>] ? bdi_start_fn+0x0/0x110
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550861]
[<ffffffff810c7371>] bdi_start_fn+0x81/0x110
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550863]
[<ffffffff810c72f0>] ? bdi_start_fn+0x0/0x110
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550868]
[<ffffffff81059e2e>] kthread+0x8e/0xa0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550873]
[<ffffffff8100a71a>] child_rip+0xa/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550876]
[<ffffffff81059da0>] ? kthread+0x0/0xa0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550878]
[<ffffffff8100a710>] ? child_rip+0x0/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550897] cm-httpserver D
00000001016fed9a     0 16726      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550900]
ffff8801dd869d28 0000000000000286 ffff8801dca1a8c0 ffff8801dd869ca8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550903]
ffff8801b1583032 ffff8801dd869cf0 ffff8801dd7f6870 ffff8801dd869fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550905]
ffff8801dd7f64c0 ffff8801dd7f64c0 ffff8801dd7f64c0 ffff8801dd869fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550908] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550916]
[<ffffffff812de22a>] ? security_inode_permission+0x1a/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550920]
[<ffffffff810fa91d>] ? __link_path_walk+0xed/0xf90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550922]
[<ffffffff814b026c>] __mutex_lock_slowpath+0x11c/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550928]
[<ffffffff8130e314>] ? apparmor_file_alloc_security+0x24/0x90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550930]
[<ffffffff814b005e>] mutex_lock+0x1e/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550933]
[<ffffffff810fce3d>] do_filp_open+0x3cd/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550936]
[<ffffffff810f8fdc>] ? path_put+0x2c/0x40
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550939]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550944]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550947]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550950]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550953]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550977] cm-gdoc       D
0000000000000002     0 16809      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550980]
ffff8801dcd65810 0000000000000286 000000008102a929 ffff8801dcd65790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550983]
ffff880002c67bd0 ffff8801dcd657d8 ffff88012679e8f0 ffff8801dcd65fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550985]
ffff88012679e540 ffff88012679e540 ffff88012679e540 ffff8801dcd65fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550990] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550995]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550997]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551003]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551006]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551008]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551011]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551013]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551017]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551020]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551026]
[<ffffffff8102cddd>] ? update_sd_lb_stats+0x1fd/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551029]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551032]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551035]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551038]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551041]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551044]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551047]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551050]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551052]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551054]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551057]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551059]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551062]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551065]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551067]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551070]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551078] cm-gdoc       D
0000000000000001     0 16816      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551081]
ffff8801dea61a58 0000000000000286 ffff8800087f1078 ffff8801dea619d8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551083]
ffff880004a73000 ffff8801dea61a20 ffff8801dcc3ca70 ffff8801dea61fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551086]
ffff8801dcc3c6c0 ffff8801dcc3c6c0 ffff8801dcc3c6c0 ffff8801dea61fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551088] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551094]
[<ffffffff8127efc0>] ? xlog_state_get_iclog_space+0x60/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551097]
[<ffffffff814afcb5>] schedule_timeout+0x1e5/0x2c0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551099]
[<ffffffff8127efc0>] ? xlog_state_get_iclog_space+0x60/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551102]
[<ffffffff8127f8a8>] ? xlog_write+0x5d8/0x690
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551104]
[<ffffffff814b0cc8>] __down+0x78/0xf0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551109]
[<ffffffff8105eb6c>] down+0x3c/0x50
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551112]
[<ffffffff81296e0e>] xfs_buf_lock+0x1e/0x60
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551114]
[<ffffffff81297699>] _xfs_buf_find+0x149/0x270
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551117]
[<ffffffff8129781b>] xfs_buf_get_flags+0x5b/0x170
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551119]
[<ffffffff81297943>] xfs_buf_read_flags+0x13/0xa0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551122]
[<ffffffff8128cad9>] xfs_trans_read_buf+0x1c9/0x300
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551126]
[<ffffffff8126d3c1>] ? xfs_dir2_sf_removename+0x121/0x190
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551128]
[<ffffffff812708ef>] xfs_read_agi+0x6f/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551131]
[<ffffffff81276736>] xfs_iunlink+0x46/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551134]
[<ffffffff81042582>] ? current_fs_time+0x22/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551137]
[<ffffffff811113cd>] ? __mark_inode_dirty+0x3d/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551139]
[<ffffffff8129b90d>] ? xfs_ichgtime+0x1d/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551142]
[<ffffffff8128dd19>] xfs_droplink+0x59/0x70
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551144]
[<ffffffff8128f364>] xfs_remove+0x294/0x350
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551148]
[<ffffffff810f821e>] ? generic_permission+0x1e/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551150]
[<ffffffff8129b213>] xfs_vn_unlink+0x43/0x90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551152]
[<ffffffff810f98c0>] vfs_unlink+0xa0/0xf0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551154]
[<ffffffff810fa755>] ? lookup_hash+0x35/0x50
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551156]
[<ffffffff810fc16e>] do_unlinkat+0x19e/0x1d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551159]
[<ffffffff810efe6d>] ? fput+0x1d/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551162]
[<ffffffff810ec218>] ? filp_close+0x58/0x90
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551164]
[<ffffffff810fc1b1>] sys_unlink+0x11/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551167]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551169]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551177] cm-gdoc       D
00000001016fea9d     0 16817      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551180]
ffff8801ddbad810 0000000000000286 0000000000000000 ffff8801ddbad790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551183]
ffff880002c67bd0 ffff8801ddbad7d8 ffff8801de886ab0 ffff8801ddbadfd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551185]
ffff8801de886700 ffff8801de886700 ffff8801de886700 ffff8801ddbadfd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551190] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551192]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551195]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551198]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551200]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551203]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551205]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551208]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551210]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551212]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551217]
[<ffffffff8143ccc4>] ? ip_finish_output+0x134/0x310
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551220]
[<ffffffff8143cf58>] ? ip_output+0xb8/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551222]
[<ffffffff814b08ad>] ? schedule_hrtimeout_range+0xcd/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551225]
[<ffffffff8143bf10>] ? ip_local_out+0x20/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551227]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551230]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551233]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551236]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551239]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551241]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551244]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551246]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551249]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551251]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551253]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551256]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551259]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551261]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551264]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551267]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551274] cm-gdoc       D
00000001016fea61     0 16818      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551277]
ffff8801dd593810 0000000000000286 0000000000000000 ffff8801dd593790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551280]
ffff880002c67bd0 ffff8801dd5937d8 ffff8801ded64af0 ffff8801dd593fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551284]
ffff8801ded64740 ffff8801ded64740 ffff8801ded64740 ffff8801dd593fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551287] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551290]
[<ffffffff8100a505>] ? hypervisor_callback+0x25/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551293]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551295]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551298]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551300]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551302]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551305]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551307]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551310]
[<ffffffff8102cddd>] ? update_sd_lb_stats+0x1fd/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551312]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551315]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551318]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551321]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551324]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551326]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551329]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551331]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551333]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551336]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551338]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551340]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551343]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551346]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551348]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551351]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551359] cm-gdoc       D
00000001016fea55     0 16831      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551362]
ffff8801dde39810 0000000000000286 0000000000000000 ffff8801dde39790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551364]
ffff880002c67bd0 ffff8801dde397d8 ffff8801de888cb0 ffff8801dde39fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551367]
ffff8801de888900 ffff8801de888900 ffff8801de888900 ffff8801dde39fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551369] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551374]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551377]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551379]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551382]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551384]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551387]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551389]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551391]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551394]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551396]
[<ffffffff8143ccc4>] ? ip_finish_output+0x134/0x310
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551399]
[<ffffffff8143cf58>] ? ip_output+0xb8/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551401]
[<ffffffff814b08ad>] ? schedule_hrtimeout_range+0xcd/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551404]
[<ffffffff8143bf10>] ? ip_local_out+0x20/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551406]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551409]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551412]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551415]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551417]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551420]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551423]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551426]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551428]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551430]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551433]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551435]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551438]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551441]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551443]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551446]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551453] cm-gdoc       D
0000000000000001     0 16837      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551456]
ffff8801dd5a7810 0000000000000286 0000000000000000 ffff8801dd5a7790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551459]
ffff880004a73000 ffff8801dd5a77d8 ffff8801dd59e4f0 ffff8801dd5a7fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551461]
ffff8801dd59e140 ffff8801dd59e140 ffff8801dd59e140 ffff8801dd5a7fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551464] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551469]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551472]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551474]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551477]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551479]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551482]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551484]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551486]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551489]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551491]
[<ffffffff8143ccc4>] ? ip_finish_output+0x134/0x310
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551494]
[<ffffffff8143cf58>] ? ip_output+0xb8/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551496]
[<ffffffff814b08ad>] ? schedule_hrtimeout_range+0xcd/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551499]
[<ffffffff8143bf10>] ? ip_local_out+0x20/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551501]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551504]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551507]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551509]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551512]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551515]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551518]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551520]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551522]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551525]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551527]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551530]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551533]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551535]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551538]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551541]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551548] cm-gdoc       D
0000000000000001     0 16838      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551551]
ffff8801b15e9810 0000000000000286 0000000000000000 ffff8801b15e9790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551554]
ffff880004a73000 ffff8801b15e97d8 ffff8801deb9e530 ffff8801b15e9fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551556]
ffff8801deb9e180 ffff8801deb9e180 ffff8801deb9e180 ffff8801b15e9fd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551558] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551563]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551566]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551569]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551571]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551574]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551576]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551578]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551581]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551583]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551586]
[<ffffffff8143ccc4>] ? ip_finish_output+0x134/0x310
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551588]
[<ffffffff8143cf58>] ? ip_output+0xb8/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551591]
[<ffffffff814b08ad>] ? schedule_hrtimeout_range+0xcd/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551593]
[<ffffffff8143bf10>] ? ip_local_out+0x20/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551596]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551598]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551601]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551604]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551606]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551609]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551612]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551614]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551616]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551618]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551621]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551623]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551626]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551629]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551631]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551634]
[<ffffffff81009b50>] ? system_call+0x0/0x52
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551641] cm-gdoc       D
0000000000000002     0 16859      1 0x00000000
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551644]
ffff88012646b810 0000000000000286 0000000000000000 ffff88012646b790
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551647]
ffff880002c67bd0 ffff88012646b7d8 ffff8801b1690a70 ffff88012646bfd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551649]
ffff8801b16906c0 ffff8801b16906c0 ffff8801b16906c0 ffff88012646bfd8
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551652] Call Trace:
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551656]
[<ffffffff81038c0f>] ? find_busiest_group+0x5f/0x440
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551659]
[<ffffffff814b172d>] rwsem_down_failed_common+0xbd/0x240
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551662]
[<ffffffff813a37ee>] ? notify_remote_via_irq+0x5e/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551664]
[<ffffffff814b1906>] rwsem_down_read_failed+0x26/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551667]
[<ffffffff813424b4>] call_rwsem_down_read_failed+0x14/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551669]
[<ffffffff814b0ab2>] ? down_read+0x12/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551671]
[<ffffffff81271882>] xfs_ialloc_ag_select+0x92/0x370
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551674]
[<ffffffff81421a95>] ? sch_direct_xmit+0x95/0x200
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551676]
[<ffffffff81272465>] xfs_dialloc+0x415/0x940
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551679]
[<ffffffff8143ccc4>] ? ip_finish_output+0x134/0x310
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551682]
[<ffffffff8143cf58>] ? ip_output+0xb8/0xc0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551684]
[<ffffffff814b08ad>] ? schedule_hrtimeout_range+0xcd/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551687]
[<ffffffff8143bf10>] ? ip_local_out+0x20/0x30
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551689]
[<ffffffff81275ead>] xfs_ialloc+0x5d/0x700
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551692]
[<ffffffff812806ec>] ? xlog_grant_log_space+0x3fc/0x450
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551695]
[<ffffffff8128dded>] xfs_dir_ialloc+0x7d/0x2d0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551697]
[<ffffffff81280814>] ? xfs_log_reserve+0xd4/0xe0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551700]
[<ffffffff8128f9d3>] xfs_create+0x3e3/0x5f0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551703]
[<ffffffff81103361>] ? __d_lookup+0xb1/0x180
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551705]
[<ffffffff8129afd2>] xfs_vn_mknod+0xa2/0x1b0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551708]
[<ffffffff8129b0fb>] xfs_vn_create+0xb/0x10
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551710]
[<ffffffff810f966f>] vfs_create+0xaf/0xd0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551712]
[<ffffffff810fa20c>] __open_namei_create+0xbc/0x100
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551715]
[<ffffffff810fd4d6>] do_filp_open+0xa66/0xba0
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551717]
[<ffffffff811089bb>] ? alloc_fd+0x4b/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551720]
[<ffffffff810ec404>] do_sys_open+0x64/0x160
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551723]
[<ffffffff810ec52b>] sys_open+0x1b/0x20
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551726]
[<ffffffff81009bb8>] system_call_fastpath+0x16/0x1b
Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.551728]
[<ffffffff81009b50>] ? system_call+0x0/0x52

-- 
Warm Regards

Supratik

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: File system remain unresponsive until the system is rebooted.
  2012-01-30  9:18 File system remain unresponsive until the system is rebooted Supratik Goswami
@ 2012-01-30 17:23 ` Peter Grandi
  2012-01-31  1:31 ` Dave Chinner
  1 sibling, 0 replies; 18+ messages in thread
From: Peter Grandi @ 2012-01-30 17:23 UTC (permalink / raw)
  To: Linux fs XFS

> We are using RAID-0 volumes as PV's in our LVM stack and XFS
> as the filesystem.

LVM is in general a bad idea, and I have found that it
occasionally interacts not so well with XFS and other
filesystems under resource pressure.

It also seems from one of the backtraces that you are
complicating all this further by running under Xen,
perhaps on sparsely allocated virtual disks.

> [ ... ] The files system remained unresponsive until we
> rebooted the system and again increased the size of the
> filesystem. [ ... ]

Good luck. I know some people who also went the whole VM/LVM/XFS
stack way and had lots of problems. It is what I call the
"syntactic" approach: expecting that every syntactically valid
combination of features is going to work, and work well. Sure it
should :-).

Most of the hangs seem to happen during resource allocation,
and at least one is triggered by the flusher:

> Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550853] [<ffffffff8111241b>] bdi_writeback_task+0x4b/0xe0
> Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550858] [<ffffffff810c72f0>] ? bdi_start_fn+0x0/0x110
> Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550861] [<ffffffff810c7371>] bdi_start_fn+0x81/0x110
> Jan 26 03:05:47 ip-10-0-1-153 kernel: [241565.550863] [<ffffffff810c72f0>] ? bdi_start_fn+0x0/0x110

It could be that there is intense pressure on kernel memory,
often due to excessively loose flusher parameters.
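
The flusher parameters in question are the VM dirty-page writeback tunables. A sketch of tighter settings, with purely illustrative values (not a recommendation for this particular system):

```shell
# Illustrative values only; persist them via /etc/sysctl.conf
sysctl -w vm.dirty_background_ratio=5     # start background writeback earlier
sysctl -w vm.dirty_ratio=10               # throttle writers sooner
sysctl -w vm.dirty_expire_centisecs=1500  # flush dirty pages after ~15s
```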


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-30  9:18 File system remain unresponsive until the system is rebooted Supratik Goswami
  2012-01-30 17:23 ` Peter Grandi
@ 2012-01-31  1:31 ` Dave Chinner
  2012-01-31  5:04   ` Supratik Goswami
  1 sibling, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2012-01-31  1:31 UTC (permalink / raw)
  To: Supratik Goswami; +Cc: xfs

On Mon, Jan 30, 2012 at 02:48:58PM +0530, Supratik Goswami wrote:
> Hi
> 
> We are using RAID-0 volumes as PV's in our LVM stack and XFS as the filesystem.
> 
> The kernel logged the below call trace when the filesystem was being
> expanded using "xfs_growfs" command.
> We have used xfs_growfs at least 3 times earlier but did not come across
> this situation.
> 
> The filesystem remained unresponsive until we rebooted the system
> and again increased the size of the filesystem.
> This time it worked fine. Can you please tell us why xfs_growfs hung suddenly?

What kernel?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  1:31 ` Dave Chinner
@ 2012-01-31  5:04   ` Supratik Goswami
  2012-01-31  7:19     ` Emmanuel Florac
                       ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: Supratik Goswami @ 2012-01-31  5:04 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

We are using Amazon EC2 instances.

ubuntu@ip-10-0-0-10:~$ uname -a
Linux ip-10-0-0-10 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux
On Tue, Jan 31, 2012 at 7:01 AM, Dave Chinner <david@fromorbit.com> wrote:
> On Mon, Jan 30, 2012 at 02:48:58PM +0530, Supratik Goswami wrote:
>> Hi
>>
>> We are using RAID-0 volumes as PV's in our LVM stack and XFS as the filesystem.
>>
>> The kernel logged the below call trace when the filesystem was being
>> expanded using "xfs_growfs" command.
>> We have used xfs_growfs at least 3 times earlier but did not come across
>> this situation.
>>
>> The filesystem remained unresponsive until we rebooted the system
>> and again increased the size of the filesystem.
>> This time it worked fine. Can you please tell us why xfs_growfs hung suddenly?
>
> What kernel?
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com



-- 
Warm Regards

Supratik


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  5:04   ` Supratik Goswami
@ 2012-01-31  7:19     ` Emmanuel Florac
  2012-01-31  9:04     ` Stan Hoeppner
  2012-01-31 19:44     ` Dave Chinner
  2 siblings, 0 replies; 18+ messages in thread
From: Emmanuel Florac @ 2012-01-31  7:19 UTC (permalink / raw)
  To: Supratik Goswami; +Cc: xfs

On Tue, 31 Jan 2012 10:34:10 +0530, you wrote:

> We are using Amazon EC2 instances.
> 

You can't know for sure what's happening behind the scenes. The most
common problem with EC2 instances is IO starvation, so this is hardly
surprising.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  5:04   ` Supratik Goswami
  2012-01-31  7:19     ` Emmanuel Florac
@ 2012-01-31  9:04     ` Stan Hoeppner
  2012-01-31 11:08       ` Emmanuel Florac
  2012-01-31 20:50       ` Dave Chinner
  2012-01-31 19:44     ` Dave Chinner
  2 siblings, 2 replies; 18+ messages in thread
From: Stan Hoeppner @ 2012-01-31  9:04 UTC (permalink / raw)
  To: xfs

On 1/30/2012 11:04 PM, Supratik Goswami wrote:
> We are using Amazon EC2 instances.

               ^^^^^^^^^^
I'd have never thought I would see those words on this list, except
maybe as a joke, or as an example of one of the worst possible
platforms for XFS.

I wish EC2 had been asked about during the QA session after Dave's
presentation.  I'm guessing some laughter would have been involved. ;)

-- 
Stan


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  9:04     ` Stan Hoeppner
@ 2012-01-31 11:08       ` Emmanuel Florac
  2012-02-01 12:31         ` Peter Grandi
  2012-01-31 20:50       ` Dave Chinner
  1 sibling, 1 reply; 18+ messages in thread
From: Emmanuel Florac @ 2012-01-31 11:08 UTC (permalink / raw)
  To: stan; +Cc: xfs

On Tue, 31 Jan 2012 03:04:18 -0600
Stan Hoeppner <stan@hardwarefreak.com> wrote:

> I'd have never thought I would see those words on this list, except
> maybe as a joke, or as an example of one of the the worst possible
> platforms for XFS.

Oh come on, be nice for once :) People are constantly brainwashed about
how "cloud computing" will solve financial crisis and world hunger,
and didn't you notice that Amazon now sells an over-the-top storage
platform that'll cover everyone's needs real soon now? If you believe
their marketing, of course. Don't forget to check:
http://aws.amazon.com/storagegateway/
 
> I wish EC2 had been asked about during the QA session after Dave's
> presentation.  I'm guessing some laughter would have been involved. ;)

Is there a filesystem that's really suitable for EC2? What about
workloads? My impression is that EC2 is fine for whatever doesn't need
any QoS. Prototyping, for instance. 

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  5:04   ` Supratik Goswami
  2012-01-31  7:19     ` Emmanuel Florac
  2012-01-31  9:04     ` Stan Hoeppner
@ 2012-01-31 19:44     ` Dave Chinner
  2 siblings, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2012-01-31 19:44 UTC (permalink / raw)
  To: Supratik Goswami; +Cc: xfs

On Tue, Jan 31, 2012 at 10:34:10AM +0530, Supratik Goswami wrote:
> We are using Amazon EC2 instances.
> 
> ubuntu@ip-10-0-0-10:~$ uname -a
> Linux ip-10-0-0-10 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 18:09:30 UTC 2011 x86_64 GNU/Linux

The growfs hang problem was fixed in 2.6.34.

On earlier kernels, if you do a grow while the system is under
allocation load it could deadlock. Growing on a mostly idle
filesystem was generally OK, but under heavy load problems could
occur. This hang is what xfstests 104 exercises...
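
The race Dave describes (a grow concurrent with allocation load) is roughly what xfstests 104 drives. A minimal sketch of that kind of exerciser, in Python for illustration; the mount point is hypothetical, `xfs_growfs` must exist on the system, and this is not the actual xfstests script:

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def allocation_workload(directory, nfiles, size=4096):
    """Create and fsync many small files to keep the block allocator busy."""
    for i in range(nfiles):
        path = os.path.join(directory, "f%06d" % i)
        with open(path, "wb") as f:
            f.write(b"\0" * size)
            f.flush()
            os.fsync(f.fileno())
    return nfiles

def grow_under_load(mountpoint, workers=4, nfiles=1000):
    """Run xfs_growfs while file creation is in flight; on pre-2.6.34
    kernels this combination could deadlock against the allocations."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(allocation_workload, mountpoint, nfiles)
                   for _ in range(workers)]
        # Grow the (already enlarged) underlying LV mid-workload.
        subprocess.run(["xfs_growfs", mountpoint], check=True)
        return sum(f.result() for f in futures)
```

The workload half runs on any filesystem; only `grow_under_load` needs a real XFS mount whose backing device has been extended first.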

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31  9:04     ` Stan Hoeppner
  2012-01-31 11:08       ` Emmanuel Florac
@ 2012-01-31 20:50       ` Dave Chinner
  2012-02-01  0:20         ` Stan Hoeppner
  1 sibling, 1 reply; 18+ messages in thread
From: Dave Chinner @ 2012-01-31 20:50 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

On Tue, Jan 31, 2012 at 03:04:18AM -0600, Stan Hoeppner wrote:
> On 1/30/2012 11:04 PM, Supratik Goswami wrote:
> > We are using Amazon EC2 instances.
> 
>                ^^^^^^^^^^
> I'd have never thought I would see those words on this list, except
> maybe as a joke, or as an example of one of the worst possible
> platforms for XFS.

I don't agree with you there. If the workload works best on XFS, it
doesn't matter what the underlying storage device is. e.g. if it's a
fsync heavy workload, it will still perform better on XFS on EC2
than btrfs on EC2...

> I wish EC2 had been asked about during the QA session after Dave's
> presentation.  I'm guessing some laughter would have been involved. ;)

You'd be wrong about that. There are as many good uses of cloud
services as there are bad ones, yet the same decisions about storage
need to be made even when services are remotely hosted....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31 20:50       ` Dave Chinner
@ 2012-02-01  0:20         ` Stan Hoeppner
  2012-02-01 11:40           ` Peter Grandi
  0 siblings, 1 reply; 18+ messages in thread
From: Stan Hoeppner @ 2012-02-01  0:20 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On 1/31/2012 2:50 PM, Dave Chinner wrote:
> On Tue, Jan 31, 2012 at 03:04:18AM -0600, Stan Hoeppner wrote:
>> On 1/30/2012 11:04 PM, Supratik Goswami wrote:
>>> We are using Amazon EC2 instances.
>>
>>                ^^^^^^^^^^
>> I'd have never thought I would see those words on this list, except
>> maybe as a joke, or as an example of one of the worst possible
>> platforms for XFS.
> 
> I don't agree with you there. If the workload works best on XFS, it
> doesn't matter what the underlying storage device is. e.g. if it's a
> fsync heavy workload, it will still perform better on XFS on EC2
> than btrfs on EC2...
> 
>> I wish EC2 had been asked about during the QA session after Dave's
>> presentation.  I'm guessing some laughter would have been involved. ;)
> 
> You'd be wrong about that. There are as many good uses of cloud
> services as there are bad ones, yet the same decisions about storage
> need to be made even when services are remotely hosted....

Maybe I should have elaborated a bit.  My thinking is that workloads
that would require XFS, or benefit most from it, are probably going to
need more guarantees WRT bandwidth and IOPS being available
consistently, vs sharing said resources with other systems in the cloud
infrastructure.  Additionally, you have driven the point home many times
WRT tuning XFS to the underlying hardware, specifically stripe
alignment.  I'd bet alignment would be a bit tricky to achieve in a
cloud environment such as EC2.

In summary, I wasn't saying XFS is bad on EC2.  I was simply saying EC2
is probably bad for the typical workloads where XFS best flexes its muscles.

-- 
Stan


* Re: File system remain unresponsive until the system is rebooted.
  2012-02-01  0:20         ` Stan Hoeppner
@ 2012-02-01 11:40           ` Peter Grandi
  2012-02-01 23:55             ` Dave Chinner
  2012-02-02 22:54             ` Peter Grandi
  0 siblings, 2 replies; 18+ messages in thread
From: Peter Grandi @ 2012-02-01 11:40 UTC (permalink / raw)
  To: Linux fs XFS

[ ... ]

>>> We are using Amazon EC2 instances.

>>> [ ... ]  one of the worst possible platforms for XFS.

>> I don't agree with you there. If the workload works best on
>> XFS, it doesn't matter what the underlying storage device is.
>> e.g. if it's a fsync heavy workload, it will still perform
>> better on XFS on EC2 than btrfs on EC2...

There are special cases, but «fsync heavy» is a bit of bad
example.

In general file system designs are not at all independent of the
expected storage platform, and some designs are far better than
others for specific storage platforms, and viceversa. This goes
all the way back to the 4BSD filesystem being specifically
optimized for rotational latency.

[ ... ]

>> You'd be wrong about that. There are as many good uses of
>> cloud services as there are bad ones,

VMs are not "cloud" services, those are more like remotely
hosted services, used via SOAP/REST. VMs are more like
colocation on the cheap.

>> yet the same decisions about storage need to be made even
>> when services are remotely hosted....

The basic problem with VM platforms is that they have completely
different latency (and somewhat different bandwidth) and
scheduling characteristics from "real" hardware, in particular
the relative costs of several operations are very different than
on "real" hardware, and the design tradeoffs that are good for
"real" hardware may not be relevant or may even be bad for VMs.

In addition VM "disks" can be implemented in crazy ways, like
with sparse files, and those impact severely achievable
performance levels.

> [ ... ] workloads that would require XFS, or benefit most from
> it, are probably going to need more guarantees WRT bandwidth
> and IOPS being available consistently, vs sharing said
> resources with other systems in the cloud infrastructure.

This is almost there, but «consistently» is a bit of an
understatement. It is not just that in VMs resources are
shared and subject to externally induced loads.

What matters is that the storage layer performance envelope has
roughly the same tradeoffs as those for which a certain design
has been aimed at. Even differently shaped hardware, like flash
SSD, can have very different performance envelopes than rotating
disks, or sets of rotating disks. A VM running on its own on
a certain platform still has different latencies and tradeoffs
than the underlying platform.

> Additionally, you have driven the point home many times WRT
> tuning XFS to the underlying hardware, specifically stripe
> alignment.

That as usual only matters for RMW-oriented storage layers, and
we don't really know what storage layer EC2 uses (hopefully not
one with RMW problems as parity RAID is known to be quite ill
suited to VM disks).
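
For reference, stripe alignment is communicated to XFS at mkfs time via the stripe unit/width options; the geometry below is hypothetical:

```shell
# Hypothetical geometry: RAID-0 of 4 devices with a 64KiB chunk size
mkfs.xfs -d su=64k,sw=4 /dev/vg0/data
# Check what an existing filesystem believes its alignment is
xfs_info /mnt/data     # look at the sunit/swidth values in the output
```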

[ ... ]

> [ ... ] EC2 is probably bad for the typical workloads where
> XFS best flexes its muscles.

That's probably a good point but not quite the apposite one
here.

In the case raised by the OP, he had a large delay and "forgot"
to say he was running the system under layers (of unknown
structure) of virtualization.

In that case the latency (and bandwidth) profiles of both the
computing and the storage platforms can be very different from
those XFS has been aimed at, and I would not be surprised by
starvation or locking problems. Eventually DaveC pointed out a
known locking one during 'growfs', so not dependent on the
latency profile of the platform.


* Re: File system remain unresponsive until the system is rebooted.
  2012-01-31 11:08       ` Emmanuel Florac
@ 2012-02-01 12:31         ` Peter Grandi
  2012-02-01 14:31           ` Emmanuel Florac
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Grandi @ 2012-02-01 12:31 UTC (permalink / raw)
  To: Linux fs XFS

[ ... ]

> Is there a filesystem that's really suitable for EC2? What
> about workloads? my impression is that EC2 is fine for
> whatever doesn't need any QoS. Prototyping, for instance.

That's one use, but it is wider than that. Services like that
are good for "embarrassingly parallel" workloads, where the QoS of
*a single* element does not matter, or even the *performance*
(or the *reliability*) of a single element is less important, at
least compared to the ability to throw a lot of cheap ones at a
problem.

Largely the same domain of application as the Google platform,
where their "embarrassingly parallel" workload is log generation
and analysis.

Which suggests that on EC2 simpler is better, and 'ext2' might be
most appropriate for non-shared applications. XFS, being, like JFS,
a rather general purpose design, also looks appropriate, even if, as
mentioned in another reply, it is aimed at massive and highly
parallel storage layers with highly threaded applications.

Aside note: I think that on most VM systems using "virtual disks"
of any sort, except to store the OS filetree (which is mostly RO), is a
bad idea, and I suffered a lot last year dealing with a rather
hastily thrown together setup of that sort.

In that case I eliminated all but the root filetree VM disks and
replaced them with filetrees exported via NFS from XFS on the
underlying VM host itself (that is, not over the network).

This improved performance tremendously (in part because in most VM
layers virtual NICs are more efficient than virtual disk adapters),
and in particular gave much faster check/repair and much reduced crazy
latencies during backups, because I could run check/repair and the
backups *on the real machine*, where XFS performed a lot better
without the VM overheads and "skewed" latencies.
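
A minimal sketch of that arrangement, with made-up paths and addresses:

```shell
# On the VM host, /etc/exports -- export an XFS-backed tree to the guests
/srv/guestdata  10.0.0.0/24(rw,sync,no_subtree_check)

# In a guest's /etc/fstab -- mount it over the (virtual) network:
# host:/srv/guestdata  /data  nfs  rw,hard,intr  0  0
```

Check/repair and backups then run against the XFS filesystem on the host itself, not through a virtual disk adapter.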


* Re: File system remain unresponsive until the system is rebooted.
  2012-02-01 12:31         ` Peter Grandi
@ 2012-02-01 14:31           ` Emmanuel Florac
  2012-02-01 22:20             ` Peter Grandi
  0 siblings, 1 reply; 18+ messages in thread
From: Emmanuel Florac @ 2012-02-01 14:31 UTC (permalink / raw)
  To: Peter Grandi; +Cc: Linux fs XFS

On Wed, 1 Feb 2012 12:31:53 +0000
pg_xf2@xf2.for.sabi.co.UK (Peter Grandi) wrote:

> In that case I eliminated all but the root filetree VM disks and
> replaced them with filetrees exported via NFS from XFS on the
> underlying VM host itself (that is, not over the network).
> 
> This improved performance tremendously (in part because in most VM
> layers virtual NICs are more efficient than virtual disk adapters)
> but in particular much faster check/repair and much reduced crazy
> latencies during backups, because I could run check/repair and the
> backups *on the real machine*, where XFS performed a lot better
> without the VM overheads and "skewed" latencies.
> 

Thank you for all the good info. To add a last note, I use iSCSI to
export lvm LVs to VMs from the host, and it works fine. Exporting
files living on an XFS works well enough, too, though slightly slower.

It can be useful particularly for Windows VMs, because many Windows
apps really behave poorly with network shares (or refuse to use them
altogether).

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: File system remain unresponsive until the system is rebooted.
  2012-02-01 14:31           ` Emmanuel Florac
@ 2012-02-01 22:20             ` Peter Grandi
  0 siblings, 0 replies; 18+ messages in thread
From: Peter Grandi @ 2012-02-01 22:20 UTC (permalink / raw)
  To: Linux fs XFS

>>> [ ... ] my impression is that EC2 is fine for whatever
>>> doesn't need any QoS. Prototyping, for instance. [ ... ]

>> [ ... ] *performance* (or the *reliability*) of a single
>> element is less important, at least compared to the ability
>> to throw a lot of cheap ones at a problem.

BTW, here I am not implying that EC2 allows one to «throw a lot
of cheap ones at a problem», because the published "retail" price
list is fairly expensive. But I guess that if one wants to buy «a
lot» of VMs as a bulk purchase, Amazon can do a deal.

>> In that case I eliminated all but the root filetree VM disks
>> and replaced them with filetrees exported via NFS from XFS on
>> the underlying VM host itself (that is not over the network).
>> [ ... ] because I could run check/repair and the backups *on
>> the real machine*, where XFS performed a lot better without
>> the VM overheads and "skewed" latencies.

> [ ... ] iSCSI to export lvm LVs to VMs from the host, and it
> works fine. Exporting files living on an XFS works well
> enough, too, though slightly slower.

iSCSI is a good alternative because it uses the better NIC
emulation in most VM layers, but I think that NFS is really a
better alternative overall, if suitable, because it gives the
inestimable option of running all the heavy hitting "maintenance"
stuff on the server itself, without any overheads, while
otherwise you must run it inside each VM.

Even if NFS has three problems that iSCSI does not have:

* It is a bit of a not awesome network filesystem, with a number
  of limitations, but NFSv4 seems OK-ish.

* It has a reputation of not playing that well with XFS, but
  IIRC the stack issues happen only on 32b systems.

* While the server side performs fairly well in Linux,
  the NFS client in Linux has some non-trivial performance
  issues.

The problem is that there aren't much better network filesystems
around. Samba/SMB have a particularly rich and well done Linux
implementation, and are fully POSIX compatible, but performance
can be disappointing with the client in older kernels. A number
of sites have been discovering Gluster, and now that it is a Red
Hat product I guess we will hear more of it, especially in
relation to XFS.

BTW an attractive alternative to my usual favorite filesystems,
JFS and XFS, is the somewhat underestimated OCFS2, which is
well-maintained, and which can work pretty well in standalone
more, but also in share-disk mode, and that might be useful with
iSCSI to do backups etc. on another system than the client VM,
for example the server itself.

Also, an alternative to VMs is often using the pretty good
Linux-VServer.org "containers" (extended 'chroot's in effect),
which have zero overheads and where the only limitation is that
all "containers" must share the same running kernel, and can
share the same filesystem, as with exporting over NFS but without the
networking overhead. Xen (or UML) style paravirtualization is
next best (no need to emulate complicated "real" devices).

> It can be useful particularly for Windows VMs, because many
> Windows apps really behave poorly with network shares (or
> refuse to use them altogether).

That's a good point, and then one can also use the iSCSI daemon
on Linux to turn it into a SAN server, but I guess you have been there
and done that.


* Re: File system remain unresponsive until the system is rebooted.
  2012-02-01 11:40           ` Peter Grandi
@ 2012-02-01 23:55             ` Dave Chinner
  2012-02-02 22:54             ` Peter Grandi
  1 sibling, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2012-02-01 23:55 UTC (permalink / raw)
  To: Peter Grandi; +Cc: Linux fs XFS

On Wed, Feb 01, 2012 at 11:40:19AM +0000, Peter Grandi wrote:
> [ ... ]
> 
> >>> We are using Amazon EC2 instances.
> 
> >>> [ ... ]  one of the worst possible platforms for XFS.
> 
> >> I don't agree with you there. If the workload works best on
> >> XFS, it doesn't matter what the underlying storage device is.
> >> e.g. if it's a fsync heavy workload, it will still perform
> >> better on XFS on EC2 than btrfs on EC2...
> 
> There are special cases, but «fsync heavy» is a bit of bad
> example.

It's actually a really good example of where XFS will be better
than other filesystems.  Why? Because XFS does less log IO due to
aggregation of log writes during concurrent fsyncs. The more latency
there is on a log write, the more aggregation that occurs.  On a
platform where the IO subsystem is going to give you unpredictable
IO latencies, that's exactly what you want.

Sure, it was designed to optimise spinning rust performance, but
that same design is also optimal for virtual devices with
unpredictable IO latency...
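
Dave's "concurrent fsync" pattern can be exercised with a small harness; this is an illustrative sketch, not from the thread, and the file names and worker counts are made up. On XFS, the fsyncs issued by the concurrent threads below are candidates for being batched into shared log writes:

```python
import os
import threading
import time

def fsync_worker(path, iterations, counter, lock):
    """Append a record and fsync it repeatedly -- the pattern whose
    log writes XFS can batch when many threads do it concurrently."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        for _ in range(iterations):
            os.write(fd, b"record\n")
            os.fsync(fd)  # forces a log write; concurrent callers
                          # can share one physical log IO on XFS
            with lock:
                counter[0] += 1
    finally:
        os.close(fd)

def run_fsync_load(directory, threads=8, iterations=50):
    """Run the workers concurrently; return total ops and wall time."""
    counter, lock = [0], threading.Lock()
    workers = [threading.Thread(target=fsync_worker,
                                args=(os.path.join(directory, "w%d.dat" % i),
                                      iterations, counter, lock))
               for i in range(threads)]
    start = time.monotonic()
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return counter[0], time.monotonic() - start
```

Comparing the wall time of the same run on different filesystems (or different storage latencies) shows the effect being discussed.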

> In general file system designs are not at all independent of the
> expected storage platform, and some designs are far better than
> others for specific storage platforms, and viceversa.

Sure, but filesystems also have inherent capabilities that are
independent of the underlying storage. In these cases, the
underlying storage really doesn't matter if the filesystem can't do
what the application needs.  Allocation parallelism, CPU
parallelism, minimal concurrent fsync latency, etc are all
characteristics of filesystems that are independent of the
underlying storage. If you need those characteristics in your
remotely hosted VMs, then XFS is what you want regardless of how
much storage capability you buy for those VMs....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: File system remain unresponsive until the system is rebooted.
  2012-02-01 11:40           ` Peter Grandi
  2012-02-01 23:55             ` Dave Chinner
@ 2012-02-02 22:54             ` Peter Grandi
  2012-02-03  1:32               ` Dave Chinner
  2012-02-15 11:38               ` Michael Monnerie
  1 sibling, 2 replies; 18+ messages in thread
From: Peter Grandi @ 2012-02-02 22:54 UTC (permalink / raw)
  To: Linux fs XFS

[ ... ]

>>>>> We are using Amazon EC2 instances.

>>>>> [ ... ]  one of the worst possible platforms for XFS.

>>>> I don't agree with you there. If the workload works best on
>>>> XFS, it doesn't matter what the underlying storage device
>>>> is.  e.g. if it's a fsync heavy workload, it will still
>>>> perform better on XFS on EC2 than btrfs on EC2...

>> There are special cases, but «fsync heavy» is a bit of bad
>> example.

> It's actually a really good example of where XFS will be
> better than other filesystems.

But this is better at being less bad. Because we are talking here
about «fsync heavy» workloads on a VM, and these should not be
run on a VM if performance matters. That's why I wrote about a
«bad example» on which to discuss XFS for a VM.

But even with «fsync heavy» workloads in general your argument is
not exactly appropriate:

> Why? Because XFS does less log IO due to aggregation of log
> writes during concurrent fsyncs.

But «fsync heavy» does not necessarily mean «concurrent fsyncs»,
for me it typically means logging or database apps where every
'write' is 'fsync'ed, even if there is a single thread. But let's
imagine for a moment we were talking about the special case where
«fsync heavy» involves a high degree of concurrency.

> The more latency there is on a log write, the more aggregation
> that occurs.

This seems to describe hardcoding in XFS a decision to trade
worse latency for better throughput, understandable as XFS was
after all quite clearly aimed at high throughput (or isochronous
throughput), rather than low latency (except for metadata, and
that has been "fixed" with 'delaylog').

Unless you mean that if the latency is low, then aggregation does
not take place, but then it is hard for me to see how that can be
*predicted*. I am assuming that in the above you refer to:

https://lwn.net/Articles/476267/
  the XFS transaction subsystem is that most transactions are
  asynchronous. That is, they don't commit to disk until either a
  log buffer is filled (a log buffer can hold multiple transactions)
  or a synchronous operation forces the log buffers holding the
  transactions to disk. This means that XFS is doing aggregation of
  transactions in memory - batching them, if you like - to minimise
  the impact of the log IO on transaction throughput.
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/ch07s04s02.html
  The delaylog mount option also improves sustained metadata
  modification performance by reducing the number of changes to the
  log. It achieves this by aggregating individual changes in memory
  before writing them to the log: frequently modified metadata is
  written to the log periodically instead of on every modification.
  This option increases the memory usage of tracking dirty metadata
  and increases the potential lost operations when a crash occurs,
  but can improve metadata modification speed and scalability by an
  order of magnitude or more. Use of this option does not reduce
  data or metadata integrity when fsync, fdatasync or sync are used
  to ensure data and metadata is written to disk.

BTW curious note in the latter:

  However, under fsync-heavy workloads, small log buffers can be
  noticeably faster than large buffers with a large stripe unit
  alignment.

> On a platform where the IO subsystem is going to give you
> unpredictable IO latencies, that's exactly what you want.

This is then the argument that on platforms with bad latency that
decision still works well, because then you might as well go for
throughput.

But if someone really aims to run some kind of «fsync heavy»
workload on a high-latency and highly-variable latency VM,
usually their aim is to *minimize* the additional latency the
filesystem imposes, because «fsync heavy» workloads tend to be
transactional, and persisting data without delay is part of their
goal.

> Sure, it was designed to optimise spinning rust performance,
> but that same design is also optimal for virtual devices with
> unpredictable IO latency...

Ahhhh, now the «bad example» has become a worse one :-).

The argument you are making here is one for crass layering
violation: that the filesystem code should embed storage-layer
specific optimizations within it, and then one might get lucky
with other storage layers of similar profile. Tsk tsk :-). At
least it is not as breathtakingly inane as putting plug/unplug in
the block IO subsystem.

But even on spinning rust, and on real host, and even forgiving
the layering violation, I question the aim to get better
throughput at the expense of worse latency for «fsync heavy»
loads, and even for the type of workloads for which this tradeoff
is good.

Because *my* argument is that how often 'fsync' "happens" should
be a decision by the application programmer; if they want higher
throughput at the cost of higher latency, they should issue it
less frequently, as 'fsync' should be executed with as low a
latency as possible.
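To put that in concrete terms, here is a toy sketch (plain Python, nothing to do with XFS internals; `write_records` and its batching knob are entirely hypothetical) of the application-side version of this tradeoff:

```python
import os
import tempfile

def write_records(records, batch_size=1):
    """Append records to a file, issuing one fsync per batch.

    batch_size=1 models the transactional case: every write is
    persisted immediately, at the cost of one fsync each. A larger
    batch_size trades fsync latency for throughput, at the
    application's discretion. Returns the number of fsync calls.
    """
    syncs = 0
    pending = 0
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            for rec in records:
                f.write(rec)
                pending += 1
                if pending == batch_size:
                    f.flush()
                    os.fsync(f.fileno())
                    syncs += 1
                    pending = 0
            if pending:  # persist the final partial batch
                f.flush()
                os.fsync(f.fileno())
                syncs += 1
    finally:
        os.unlink(path)
    return syncs
```

With a batch size of 1, ten records cost ten fsyncs; with a batch size of 5 the same ten records cost two. It is the same throughput/latency knob, but turned by the application rather than by the filesystem.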

Your underlying argument for XFS and its handling of «fsync
heavy» workloads (and it is the same argument for 'delaylog' I
guess) seems to me that applications issue 'fsync' too often, and
thus we can briefly hold them back to bunch them up, and people
like the extra throughput more than they dislike the extra latency.

Which reminds me of a discussion I had some time ago with some
misguided person who argued that 'fsync' and Linux barriers only
require ordering constraints, and don't imply any actual writing
to persistent storage, or within any specific timeframe, where
instead I was persuaded that their main purpose (no matter what
POSIX says :->) is to commit to persistent storage as quickly as
possible.

It looks like XFS has gone more the way of something like
his position, because admittedly in practice keeping commits a
bit looser does deliver better throughput (hints of O_PONIES
here).

But again, that's not what should be happening. Perhaps POSIX
should have provided :-) two barrier operations, a purely
ordering one, and a commit-now one. And application writers would
use them at the right times. And ponies for everybody :-).

>> In general file system designs are not at all independent of
>> the expected storage platform, and some designs are far better
>> than others for specific storage platforms, and viceversa.

> Sure, but filesystems also have inherent capabilities that are
> independent of the underlying storage.

But the example you make is not a «capability», it is the
hardcoded assumption that it is better to trade worse latency for
better throughput, which only makes sense for workloads that
don't want tight latency, or else storage layers that don't
support it.

> In these cases, the underlying storage really doesn't matter if
> the filesystem can't do what the application needs.  Allocation
> parallelism, CPU parallelism, minimal concurrent fsync latency,

But you seemed to be describing above that XFS is good at "maximal
concurrent fsync throughput" by disregarding «minimal concurrent
fsync latency» (as in «less log IO due to aggregation of log
writes during concurrent fsyncs. The more latency there is on a
log write, the more aggregation»).

> etc are all characteristics of filesystems that are independent
> of the underlying storage.

Ahhhh, but this is a totally different argument from embedding
specific latency/throughput tradeoffs in the storage layer.

This is an argument that a well designed filesystem that does
not have bottlenecks on any aspect of the performance envelope is
a good general purpose one. Well, you can try to design one :-).

XFS comes close, like JFS and OCFS2, but it does have, as you
have pointed out above, workload-specific (which can turn into
storage-friendly) tradeoffs. And since Red Hat's acquisition of
GlusterFS I guess (or at least I hope) that XFS will be even more
central to their strategy.

BTW, as to that, I did a brief search and found this amusing
article, yet another proof that reality surpasses imagination:

  http://bioteam.net/2010/07/playing-with-nfs-glusterfs-on-amazon-cc1-4xlarge-ec2-instance-types/

Ah I was totally unaware of the AWS Compute Cluster service.

> If you need those characteristics in your remotely hosted VMs,
> then XFS is what you want regardless of how much storage
> capability you buy for those VMs....

Possibly, but also from a practical viewpoint that is again a
moderately bizarre argument, because workloads requiring high
levels of «Allocation parallelism, CPU parallelism, minimal
concurrent fsync latency» beg to be run on an Altix, or similar,
not on a bunch of random EC2 shared hosts running Xen VMs.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: File system remain unresponsive until the system is rebooted.
  2012-02-02 22:54             ` Peter Grandi
@ 2012-02-03  1:32               ` Dave Chinner
  2012-02-15 11:38               ` Michael Monnerie
  1 sibling, 0 replies; 18+ messages in thread
From: Dave Chinner @ 2012-02-03  1:32 UTC (permalink / raw)
  To: Peter Grandi; +Cc: Linux fs XFS

On Thu, Feb 02, 2012 at 10:54:09PM +0000, Peter Grandi wrote:
> [ ... ]
> 
> >>>>> We are using Amazon EC2 instances.
> 
> >>>>> [ ... ]  one of the worst possible platforms for XFS.
> 
> >>>> I don't agree with you there. If the workload works best on
> >>>> XFS, it doesn't matter what the underlying storage device
> >>>> is.  e.g. if it's a fsync heavy workload, it will still
> >>>> perform better on XFS on EC2 than btrfs on EC2...
> 
> >> There are special cases, but «fsync heavy» is a bit of bad
> >> example.
> 
> > It's actually a really good example of where XFS will be
> > better than other filesystems.
> 
> But this is better at being less bad. Because we are talking here
> about «fsync heavy» workloads on a VM, and these should not be
> run on a VM if performance matters. That's why I wrote about a
> «bad example» on which to discuss XFS for a VM.

Whether or not you should put a workload that does fsyncs in a VM is
a completely different argument altogether. It's not a meaningful
argument to make when we are talking about how filesystems deal with
unpredictable storage latencies or what filesystem to use in a
virtualised environment.

> But even with «fsync heavy» workloads in general your argument is
> not exactly appropriate:
> 
> > Why? Because XFS does less log IO due to aggregation of log
> > writes during concurrent fsyncs.
> 
> But «fsync heavy» does not necessarily mean «concurrent fsyncs»;
> to me it typically means logging or database apps where every
> 'write' is 'fsync'ed, even if there is a single thread.

It doesn't matter if there are concurrent fsyncs - XFS will aggregate all
transactions while there is one fsync or anything else that triggers
log forces in progress. It's a generic solution to the "we're doing
too many synchronous transactions really close together" problem.

> But let's
> imagine for a moment we were talking about the special case where
> «fsync heavy» involves a high degree of concurrency.
> 
> > The more latency there is on a log write, the more aggregation
> > that occurs.
> 
> This seems to describe hardcoding in XFS a decision to trade
> worse latency for better throughput,

Except it doesn't. XFS's mechanism is well known to -minimise-
journal latency without increasing individual or maximum latencies
as load increases. This then translates directly into higher
sustained throughputs because less time is spent by applications
waiting for IO completions because there is less IO being done.

Yes, you can trade off latency for throughput - that's easy to do -
but a well designed system achieves high throughput by minimising
the impact of unavoidable latencies. That's what the XFS journal does.
And quite frankly, it doesn't matter what the source of the latency
is or whether it is unpredictable. If you can't avoid it, you have
to design to minimise the impact.

> understandable as XFS was
> after all quite clearly aimed at high throughput (or isochronous
> throughput), rather than low latency (except for metadata, and
> that has been "fixed" with 'delaylog').

I like how you say "fixed" in a way that implies you don't believe
that it is fixed...

> Unless you mean that if the latency is low, then aggregation does
> not take place,

That's exactly what I'm saying.

> but then it is hard for me to see how that can be
> *predicted*.

That's because it doesn't need to be predicted.  We *know* if a
journal write is currently in progress or not and we can wait on it
to complete. It doesn't matter how long it takes to complete - if it
is instantaneous, then aggregation does not occur simply due to the
very short wait time.  If the IO takes a long time to complete, then
lots of aggregation of transaction commits will occur before we
submit the next IO.

Smarter people than me designed this stuff - I've just learnt from
what they've done and built on top of it....

> I am assuming that in the above you refer to:
> 
> https://lwn.net/Articles/476267/

Documentation/filesystems/xfs-delayed-logging-design.txt is a better
reference to use.

> the XFS transaction subsystem is
> that most transactions are asynchronous. That is, they don't
> commit to disk until either a log buffer is filled (a log buffer
> can hold multiple transactions) or a synchronous operation forces
> the log buffers holding the transactions to disk. This means that
> XFS is doing aggregation of transactions in memory - batching
> them, if you like - to minimise the impact of the log IO on
> transaction throughput.

That's part of it. This describes the pre-delaylog method of
aggregation, but even delaylog relies on this mechanism because
checkpoints are a journalled transaction just like all transactions
were pre-delaylog.

The point about fsync is that it is just an asynchronous transaction
as well. It is made synchronous by then pushing the log buffer to
disk. But it will only do that immediately if the previous log
buffer is idle. If the previous log buffer is under IO, then it will
wait to start the IO on the current log buffer, allowing further
aggregation to occur.
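As a rough model of that behaviour (a simplification in plain Python, not the actual kernel code - the function and its parameters are made up for illustration): fsyncs that arrive while a log write is in flight all ride the next write, so the number of log IOs falls automatically as device latency rises.

```python
def count_log_writes(fsync_times, io_latency):
    """Toy model of log-force aggregation (not the real XFS code).

    A log write starts for the earliest waiting fsync as soon as the
    device is free; every fsync that has arrived by that start time
    is carried by that single write. Returns the number of log IOs.
    """
    times = sorted(fsync_times)
    writes = 0
    busy_until = float("-inf")  # completion time of the in-flight write
    i = 0
    while i < len(times):
        start = max(times[i], busy_until)  # wait for the device if busy
        # everything that arrived by the start time rides this write
        while i < len(times) and times[i] <= start:
            i += 1
        writes += 1
        busy_until = start + io_latency
    return writes
```

Ten fsyncs issued at the same instant always take one log write; four fsyncs spread over a few milliseconds take four writes on a very fast device, but only two on a slow one - no prediction of latency is needed anywhere, the batching falls out of the wait.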

> BTW curious note in the latter:
> 
>   However, under fsync-heavy workloads, small log buffers can be
>   noticeably faster than large buffers with a large stripe unit
>   alignment.

Because setting a log stripe unit (LSU) means the size of the log IO
is padded. A 32k LSU means the minimum log IO size is 32k, while an
fsync transaction is usually only a couple of hundred bytes. Without
an LSU, that means a solitary fsync transaction being written to disk
will be 512 bytes vs 32kB with an LSU, and that means the non-LSU log
will complete its IO faster. Same goes for LSU=32k vs LSU=256k.
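In back-of-the-envelope numbers (a toy sketch; the round-up-to-a-multiple padding rule and the 512 byte sector default are assumptions of this model, not a description of the actual log code):

```python
def log_io_size(record_bytes, lsu_bytes=0, sector=512):
    """Illustrative size of the log IO for one transaction record.

    Without an LSU the write is rounded up to the sector size; with
    an LSU it is padded up to a multiple of the log stripe unit.
    """
    unit = lsu_bytes if lsu_bytes else sector
    return -(-record_bytes // unit) * unit  # ceiling to a multiple of unit

# A ~200-byte fsync transaction:
print(log_io_size(200))                    # no LSU   -> 512
print(log_io_size(200, lsu_bytes=32768))   # 32k LSU  -> 32768
print(log_io_size(200, lsu_bytes=262144))  # 256k LSU -> 262144
```

So the same couple-hundred-byte fsync transaction costs a 512 byte write, a 32kB write or a 256kB write depending on the LSU, which is why small log buffers can win for fsync-heavy loads.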

> > On a platform where the IO subsystem is going to give you
> unpredictable IO latencies, that's exactly what you want.
> 
> This is then the argument that on platforms with bad latency that
> decision still works well, because then you might as well go for
> throughput.

If one fsync takes X, and you can make 10 concurrent fsyncs take X,
why wouldn't you optimise to enable the latter case? It doesn't
matter if X is 10us, 1ms or even 1s - having an algorithm that works
independently of the magnitude of the storage latency will result in
good throughput no matter the storage characteristics. That's what
users want - something that just works without needing to tweak it
differently to perform optimally on all their different systems...

> But if someone really aims to run some kind of «fsync heavy»
> workload on a high-latency and highly-variable latency VM, usually
> their aim is to *minimize* the additional latency the filesystem
> imposes, because «fsync heavy» workloads tend to be transactional,
> and persisting data without delay is part of their goal.

I still don't understand what part of "use XFS for this workload"
you are saying is wrong?

> > Sure, it was designed to optimise spinning rust performance, but
> > that same design is also optimal for virtual devices with
> > unpredictable IO latency...
> 
> Ahhhh, now the «bad example» has become a worse one :-).
> 
> The argument you are making here is one for crass layering
> violation: that the filesystem code should embed storage-layer
> specific optimizations within it, and then one might get lucky
> with other storage layers of similar profile. Tsk tsk :-). At
> least it is not as breathtakingly inane as putting plug/unplug in
> the block IO subsystem.

Filesystems are nothing but a dense concentration of algorithms that
are optimal for as wide a range of known storage behaviours as
possible.

> XFS comes close, like JFS and OCFS2, but it does have, as you have
> pointed out above, workload-specific (which can turn into
> storage-friendly) tradeoffs. And since Red Hat's acquisition of
> GlusterFS I guess (or at least I hope) that XFS will be even more
> central to their strategy.

http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html-single/User_Guide/index.html#sect-User_Guide-gssa_prepare-chec_min_req

"File System Requirements

Red Hat recommends XFS when formatting the disk sub-system. ..."


-- 
Dave Chinner
david@fromorbit.com



* Re: File system remain unresponsive until the system is rebooted.
  2012-02-02 22:54             ` Peter Grandi
  2012-02-03  1:32               ` Dave Chinner
@ 2012-02-15 11:38               ` Michael Monnerie
  1 sibling, 0 replies; 18+ messages in thread
From: Michael Monnerie @ 2012-02-15 11:38 UTC (permalink / raw)
  To: xfs


Am Donnerstag, 2. Februar 2012, 22:54:09 schrieb Peter Grandi:
> This is then the argument that on platforms with bad latency that
> decision still works well, because then you might as well go for
> throughput.

Hi, I just took these lines to reply to your whole mail. I guess that 
the advantage of XFS will grow on a shared storage type like you 
typically have in a VM environment. The aggregation XFS does can result 
in a more bursty type of I/O, with larger I/Os happening at once. That 
is always better for RAID storage - which you normally have in a VM 
environment. Also, all better RAID controllers, and especially 
enterprise RAIDs, have large write buffers, so even more aggregation 
occurs at the storage itself, helping throughput maximisation.

I don't know of any scientific investigation of "which filesystem is 
better in a VM environment" that could be referenced in a generic way, 
mostly because there are so many variables there that it doesn't 
necessarily fit your own use case. Maybe someone can point me to such 
research material.
My hope is - and that is what Dave is arguing - that minimising I/O 
"disturbances" from metadata work like log handling helps keep 
overall throughput high on the shared storage of a VM environment. 
And that seems very reasonable. 
I don't really understand your argument about the delay for a 
single-thread fsync. First, XFS should do this quicker by "batching" 
transactions, and second, overall storage throughput is usually much 
more important than the performance of a single server - at least in 
a VM environment. I need to run 50 servers on a storage with 
acceptable performance, and if one server needs more performance than 
is available, you need to do something else - there are lots of 
options then.

-- 
mit freundlichen Grüssen,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [gesprochen: Prot-e-schee]
Tel: +43 660 / 415 6531





Thread overview: 18+ messages
2012-01-30  9:18 File system remain unresponsive until the system is rebooted Supratik Goswami
2012-01-30 17:23 ` Peter Grandi
2012-01-31  1:31 ` Dave Chinner
2012-01-31  5:04   ` Supratik Goswami
2012-01-31  7:19     ` Emmanuel Florac
2012-01-31  9:04     ` Stan Hoeppner
2012-01-31 11:08       ` Emmanuel Florac
2012-02-01 12:31         ` Peter Grandi
2012-02-01 14:31           ` Emmanuel Florac
2012-02-01 22:20             ` Peter Grandi
2012-01-31 20:50       ` Dave Chinner
2012-02-01  0:20         ` Stan Hoeppner
2012-02-01 11:40           ` Peter Grandi
2012-02-01 23:55             ` Dave Chinner
2012-02-02 22:54             ` Peter Grandi
2012-02-03  1:32               ` Dave Chinner
2012-02-15 11:38               ` Michael Monnerie
2012-01-31 19:44     ` Dave Chinner
