public inbox for linux-xfs@vger.kernel.org
* Sudden File System Corruption
@ 2013-12-05  2:55 Mike Dacre
  2013-12-05  3:40 ` Dave Chinner
                   ` (3 more replies)
  0 siblings, 4 replies; 25+ messages in thread
From: Mike Dacre @ 2013-12-05  2:55 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 8257 bytes --]

Hi Folks,

Apologies if this is the wrong place to post or if this has been answered
already.

I have a RAID6 array of 16 2TB drives powered by an LSI 9240-4i.  It has an
XFS filesystem and has been online for over a year.  It is accessed over NFS
v3 by 23 different machines connected via InfiniBand.  I haven't had any
major problems yet; one drive failed, but it was easily replaced.

However, today the array suddenly stopped responding and started returning
IO errors whenever any requests were made.  This happened while it was being
accessed by 5 different users, one of whom was running a very large rm
operation (rm *sh on thousands of files in a directory).  Also, about 30
minutes earlier we had connected the Globus Connect endpoint to allow easy
file transfers to SDSC.

I rebooted the machine that hosts the array and checked the RAID controller
logs: no physical problems with the drives at all.  I then tried to mount
the filesystem and got the following error:

XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
mount: Structure needs cleaning

I ran xfs_check and got the following message:
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
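
For reference, the order that message asks for (replay the log first, repair
only as a last resort) can be sketched as a small shell function.  The device
is from my system and the mount point is hypothetical, so adjust for yours;
this is a sketch, not a tested recovery tool:

```shell
#!/bin/sh
# Sketch of the recovery order the xfs_check message recommends.
# DEV matches my system; MNT is a hypothetical mount point.
DEV=/dev/sda1
MNT=/mnt/array

replay_log_then_check() {
    # Step 1: mount so XFS replays its journal, then unmount cleanly
    # and re-run the check.
    if mount "$DEV" "$MNT"; then
        umount "$MNT"
        xfs_check "$DEV"
    else
        # Step 2: only if the mount itself fails, zero the log and repair.
        # -L discards uncommitted metadata changes still in the log, so
        # it can lose data; it is the last resort the message describes.
        xfs_repair -L "$DEV"
    fi
}
# Call replay_log_then_check manually once you are ready.
```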


I checked the log and found the following message:

Dec  4 18:26:33 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:26:33 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:26:36 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:26:36 fruster kernel:
Dec  4 18:26:36 fruster kernel: Pid: 5491, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:26:36 fruster kernel: Call Trace:
Dec  4 18:26:36 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:26:36 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:26:36 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:26:36 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:26:36 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:26:36 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:26:36 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:26:36 fruster kernel: XFS (sda1): log mount finish failed


I went back and looked at the log from around the time the array went down
and found this message:
Dec  4 17:58:16 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 17:58:16 fruster kernel:
Dec  4 17:58:16 fruster kernel: Pid: 4548, comm: nfsd Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 17:58:16 fruster kernel: Call Trace:
Dec  4 17:58:16 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa043c89d>] ? xfs_bmap_finish+0x15d/0x1a0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa04626ff>] ? xfs_itruncate_finish+0x15f/0x320 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa047e370>] ? xfs_inactive+0x330/0x480 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa04793f4>] ? _xfs_trans_commit+0x214/0x2a0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa048b9a0>] ? xfs_fs_clear_inode+0xa0/0xd0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffff8119d31c>] ? clear_inode+0xac/0x140
Dec  4 17:58:16 fruster kernel: [<ffffffff8119dad6>] ? generic_delete_inode+0x196/0x1d0
Dec  4 17:58:16 fruster kernel: [<ffffffff8119db75>] ? generic_drop_inode+0x65/0x80
Dec  4 17:58:16 fruster kernel: [<ffffffff8119c9c2>] ? iput+0x62/0x70
Dec  4 17:58:16 fruster kernel: [<ffffffff81199610>] ? dentry_iput+0x90/0x100
Dec  4 17:58:16 fruster kernel: [<ffffffff8119c278>] ? d_delete+0xe8/0xf0
Dec  4 17:58:16 fruster kernel: [<ffffffff8118fe99>] ? vfs_unlink+0xd9/0xf0
Dec  4 17:58:16 fruster kernel: [<ffffffffa071cf4f>] ? nfsd_unlink+0x1af/0x250 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0723f03>] ? nfsd3_proc_remove+0x83/0x120 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa071543e>] ? nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa068e624>] ? svc_process_common+0x344/0x640 [sunrpc]
Dec  4 17:58:16 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 17:58:16 fruster kernel: [<ffffffffa068ec60>] ? svc_process+0x110/0x160 [sunrpc]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0715b62>] ? nfsd+0xc2/0x160 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffff81096a36>] ? kthread+0x96/0xa0
Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0ca>] ? child_rip+0xa/0x20
Dec  4 17:58:16 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 17:58:16 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x8) called from line 3863 of file fs/xfs/xfs_bmap.c.  Return address = 0xffffffffa043c8d6
Dec  4 17:58:16 fruster kernel: XFS (sda1): Corruption of in-memory data detected.  Shutting down filesystem
Dec  4 17:58:16 fruster kernel: XFS (sda1): Please umount the filesystem and rectify the problem(s)
Dec  4 17:58:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:58:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:59:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:59:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:00:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:00:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:01:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:01:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x1) called from line 1061 of file fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa04856e3
Dec  4 18:02:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.


I have attached the complete log from the time the array went down until now.

In the end, I successfully repaired the filesystem with `xfs_repair -L
/dev/sda1`.  However, I am nervous that some files may have been corrupted.
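
In case it helps others in the same situation, this is roughly the survey I
am running to gauge the damage.  xfs_repair reconnects any orphaned files
under lost+found, so that directory is worth checking first; the mount point
below is hypothetical and the time window is guessed from the logs, so treat
this as a sketch only:

```shell
#!/bin/sh
# Rough post-xfs_repair survey; a sketch, not proof of file integrity.
# MNT is a hypothetical mount point for the repaired filesystem.
MNT=/mnt/array

survey_repair() {
    # xfs_repair moves disconnected inodes into lost+found, if it found any.
    ls -la "$MNT/lost+found" 2>/dev/null

    # Files modified around the shutdown window are the ones most worth
    # re-checking against backups or checksums (GNU find's -newermt test).
    find "$MNT" -xdev -type f \
        -newermt "2013-12-04 15:00" ! -newermt "2013-12-04 19:00" -print
}
# Call survey_repair manually after mounting the repaired filesystem.
```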

Do any of you have any idea what could have caused this problem?

Thanks,

Mike


[-- Attachment #2: server_log.txt --]
[-- Type: text/plain, Size: 168474 bytes --]

Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4493 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000001     0  4493      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff88070432fae0 0000000000000046 0000000000000000 000000000000000e
Dec  4 15:55:49 fruster kernel: ffff88070432faa0 ffffffff811198c0 0000000000020000 0000000000000000
Dec  4 15:55:49 fruster kernel: ffff88077bd4e638 ffff88070432ffd8 000000000000fb88 ffff88077bd4e638
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff811198c0>] ? find_get_pages_tag+0x40/0x130
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e10>] ? sync_page+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e4d>] sync_page+0x3d/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f30f>] __wait_on_bit+0x5f/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a083>] wait_on_page_bit+0x73/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81096de0>] ? wake_bit_function+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8112f115>] ? pagevec_lookup_tag+0x25/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a4ab>] wait_on_page_writeback_range+0xfb/0x190
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a678>] filemap_write_and_wait_range+0x78/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1d5e>] vfs_fsync_range+0x7e/0xe0
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1e2d>] vfs_fsync+0x1d/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa071bffb>] nfsd_commit+0x6b/0xa0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0722fdd>] nfsd3_proc_commit+0x9d/0x100 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4497 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000001     0  4497      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff8808181d9ae0 0000000000000046 ffff8808181d9aa8 ffff8808181d9aa4
Dec  4 15:55:49 fruster kernel: ffff8808181d9aa0 ffff88082ec24300 ffff880028216700 0000000000000400
Dec  4 15:55:49 fruster kernel: ffff88071249a5f8 ffff8808181d9fd8 000000000000fb88 ffff88071249a5f8
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e10>] ? sync_page+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e4d>] sync_page+0x3d/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f30f>] __wait_on_bit+0x5f/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a083>] wait_on_page_bit+0x73/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81096de0>] ? wake_bit_function+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8112f115>] ? pagevec_lookup_tag+0x25/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a4ab>] wait_on_page_writeback_range+0xfb/0x190
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a678>] filemap_write_and_wait_range+0x78/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1d5e>] vfs_fsync_range+0x7e/0xe0
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1e2d>] vfs_fsync+0x1d/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa071bffb>] nfsd_commit+0x6b/0xa0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0722fdd>] nfsd3_proc_commit+0x9d/0x100 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4508 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000007     0  4508      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff880777919790 0000000000000046 0000000000000000 0000000000014005
Dec  4 15:55:49 fruster kernel: ffff880777919730 ffffffffa0483515 ffff880777919720 0000000000017340
Dec  4 15:55:49 fruster kernel: ffff88081cc88638 ffff880777919fd8 000000000000fb88 ffff88081cc88638
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffffa0483515>] ? xfs_buf_cond_lock+0x25/0x80 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f035>] schedule_timeout+0x215/0x2e0
Dec  4 15:55:49 fruster kernel: [<ffffffffa047fdf7>] ? kmem_zone_alloc+0x77/0xf0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8150ff52>] __down+0x72/0xb0
Dec  4 15:55:49 fruster kernel: [<ffffffffa04848e5>] ? _xfs_buf_find+0xe5/0x230 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8109cb61>] down+0x41/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffffa0484751>] xfs_buf_lock+0x51/0x100 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa04848e5>] _xfs_buf_find+0xe5/0x230 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0484a64>] xfs_buf_get+0x34/0x1b0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa04571c3>] ? xfs_dir2_leafn_lookup_for_entry+0x113/0x360 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa04850ec>] xfs_buf_read+0x2c/0x100 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa047a947>] xfs_trans_read_buf+0x197/0x410 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0463274>] xfs_imap_to_bp+0x54/0x130 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa046533b>] xfs_iread+0x7b/0x1b0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8119d12e>] ? inode_init_always+0x11e/0x1c0
Dec  4 15:55:49 fruster kernel: [<ffffffffa045ff2e>] xfs_iget+0x27e/0x6e0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa045f720>] ? xfs_iunlock+0x20/0xd0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa047d8d6>] xfs_lookup+0xc6/0x110 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa048a584>] xfs_vn_lookup+0x54/0xa0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8118e4f2>] __lookup_hash+0x102/0x160
Dec  4 15:55:49 fruster kernel: [<ffffffff8118ee84>] lookup_one_len+0xb4/0x110
Dec  4 15:55:49 fruster kernel: [<ffffffffa071b18d>] nfsd_lookup_dentry+0x10d/0x500 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071b5b3>] nfsd_lookup+0x33/0xd0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0723c42>] nfsd3_proc_lookup+0x92/0xf0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4513 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000000     0  4513      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff88075ac21ae0 0000000000000046 000000000000000e 000000000000000e
Dec  4 15:55:49 fruster kernel: ffff88075ac21aa0 ffffffff811198c0 ffff880028212f80 0000000000000002
Dec  4 15:55:49 fruster kernel: ffff88075a7de638 ffff88075ac21fd8 000000000000fb88 ffff88075a7de638
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff811198c0>] ? find_get_pages_tag+0x40/0x130
Dec  4 15:55:49 fruster kernel: [<ffffffff810a2431>] ? ktime_get_ts+0xb1/0xf0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e10>] ? sync_page+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e4d>] sync_page+0x3d/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f30f>] __wait_on_bit+0x5f/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a083>] wait_on_page_bit+0x73/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81096de0>] ? wake_bit_function+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8112f115>] ? pagevec_lookup_tag+0x25/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a4ab>] wait_on_page_writeback_range+0xfb/0x190
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a678>] filemap_write_and_wait_range+0x78/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1d5e>] vfs_fsync_range+0x7e/0xe0
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1e2d>] vfs_fsync+0x1d/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa071bffb>] nfsd_commit+0x6b/0xa0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0722fdd>] nfsd3_proc_commit+0x9d/0x100 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4615 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000003     0  4615      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff88081752b9e0 0000000000000046 0000000000000000 ffff88081752b960
Dec  4 15:55:49 fruster kernel: ffff88081752b9b0 ffffffff8149a0be ffff8808001ddcc8 ffffffff81441b25
Dec  4 15:55:49 fruster kernel: ffff88078f1edab8 ffff88081752bfd8 000000000000fb88 ffff88078f1edab8
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff8149a0be>] ? tcp_transmit_skb+0x40e/0x7b0
Dec  4 15:55:49 fruster kernel: [<ffffffff81441b25>] ? memcpy_toiovec+0x55/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e10>] ? sync_page+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e4d>] sync_page+0x3d/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f30f>] __wait_on_bit+0x5f/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a083>] wait_on_page_bit+0x73/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81096de0>] ? wake_bit_function+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8112f115>] ? pagevec_lookup_tag+0x25/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8112e035>] write_cache_pages+0x395/0x4c0
Dec  4 15:55:49 fruster kernel: [<ffffffff8112cbd0>] ? __writepage+0x0/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8112e184>] generic_writepages+0x24/0x30
Dec  4 15:55:49 fruster kernel: [<ffffffffa04816dd>] xfs_vm_writepages+0x5d/0x80 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8112e1b1>] do_writepages+0x21/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a5fb>] __filemap_fdatawrite_range+0x5b/0x60
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a65a>] filemap_write_and_wait_range+0x5a/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1d5e>] vfs_fsync_range+0x7e/0xe0
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1e2d>] vfs_fsync+0x1d/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa071bffb>] nfsd_commit+0x6b/0xa0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0722fdd>] nfsd3_proc_commit+0x9d/0x100 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task nfsd:4689 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: nfsd          D 0000000000000006     0  4689      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff880101d41ae0 0000000000000046 ffff880101d41aa8 ffff880101d41aa4
Dec  4 15:55:49 fruster kernel: ffff880101d41aa0 ffff88082ec24d00 ffff880028216700 00000000000002ff
Dec  4 15:55:49 fruster kernel: ffff880792496638 ffff880101d41fd8 000000000000fb88 ffff880792496638
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e10>] ? sync_page+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff81119e4d>] sync_page+0x3d/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8150f30f>] __wait_on_bit+0x5f/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a083>] wait_on_page_bit+0x73/0x80
Dec  4 15:55:49 fruster kernel: [<ffffffff81096de0>] ? wake_bit_function+0x0/0x50
Dec  4 15:55:49 fruster kernel: [<ffffffff8112f115>] ? pagevec_lookup_tag+0x25/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a4ab>] wait_on_page_writeback_range+0xfb/0x190
Dec  4 15:55:49 fruster kernel: [<ffffffff8111a678>] filemap_write_and_wait_range+0x78/0x90
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1d5e>] vfs_fsync_range+0x7e/0xe0
Dec  4 15:55:49 fruster kernel: [<ffffffff811b1e2d>] vfs_fsync+0x1d/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa071bffb>] nfsd_commit+0x6b/0xa0 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0722fdd>] nfsd3_proc_commit+0x9d/0x100 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa071543e>] nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa068e624>] svc_process_common+0x344/0x640 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffffa068ec60>] svc_process+0x110/0x160 [sunrpc]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715b62>] nfsd+0xc2/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:55:49 fruster kernel: INFO: task flush-8:0:19778 blocked for more than 120 seconds.
Dec  4 15:55:49 fruster kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec  4 15:55:49 fruster kernel: flush-8:0     D 0000000000000000     0 19778      2 0x00000080
Dec  4 15:55:49 fruster kernel: ffff8807869876a0 0000000000000046 0000000000000000 ffff88079032e080
Dec  4 15:55:49 fruster kernel: 0000000000000001 ffff8807040af140 ffff88079032e080 ffff88081b8dad90
Dec  4 15:55:49 fruster kernel: ffff88079032e638 ffff880786987fd8 000000000000fb88 ffff88079032e638
Dec  4 15:55:49 fruster kernel: Call Trace:
Dec  4 15:55:49 fruster kernel: [<ffffffff810a2431>] ? ktime_get_ts+0xb1/0xf0
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e953>] io_schedule+0x73/0xc0
Dec  4 15:55:49 fruster kernel: [<ffffffff8125e8c8>] get_request_wait+0x108/0x1d0
Dec  4 15:55:49 fruster kernel: [<ffffffff81096da0>] ? autoremove_wake_function+0x0/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff81255d3d>] ? elv_merge+0x14d/0x200
Dec  4 15:55:49 fruster kernel: [<ffffffff8125ea2b>] blk_queue_bio+0x9b/0x5d0
Dec  4 15:55:49 fruster kernel: [<ffffffff8125d0ee>] generic_make_request+0x24e/0x500
Dec  4 15:55:49 fruster kernel: [<ffffffff811bb1b2>] ? bvec_alloc_bs+0x62/0x110
Dec  4 15:55:49 fruster kernel: [<ffffffff8125d42d>] submit_bio+0x8d/0x120
Dec  4 15:55:49 fruster kernel: [<ffffffffa0481a83>] xfs_submit_ioend_bio+0x33/0x40 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa0481b86>] xfs_submit_ioend+0xf6/0x140 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffffa048250f>] xfs_vm_writepage+0x36f/0x5a0 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8112cbe7>] __writepage+0x17/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8112de9d>] write_cache_pages+0x1fd/0x4c0
Dec  4 15:55:49 fruster kernel: [<ffffffff8112cbd0>] ? __writepage+0x0/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff8112e184>] generic_writepages+0x24/0x30
Dec  4 15:55:49 fruster kernel: [<ffffffffa04816dd>] xfs_vm_writepages+0x5d/0x80 [xfs]
Dec  4 15:55:49 fruster kernel: [<ffffffff8112e1b1>] do_writepages+0x21/0x40
Dec  4 15:55:49 fruster kernel: [<ffffffff811aca3d>] writeback_single_inode+0xdd/0x290
Dec  4 15:55:49 fruster kernel: [<ffffffff811ace4e>] writeback_sb_inodes+0xce/0x180
Dec  4 15:55:49 fruster kernel: [<ffffffff811acfab>] writeback_inodes_wb+0xab/0x1b0
Dec  4 15:55:49 fruster kernel: [<ffffffff811ad34b>] wb_writeback+0x29b/0x3f0
Dec  4 15:55:49 fruster kernel: [<ffffffff8150e1c0>] ? thread_return+0x4e/0x76e
Dec  4 15:55:49 fruster kernel: [<ffffffff811ad55b>] wb_do_writeback+0xbb/0x240
Dec  4 15:55:49 fruster kernel: [<ffffffff811ad743>] bdi_writeback_task+0x63/0x1b0
Dec  4 15:55:49 fruster kernel: [<ffffffff81096c67>] ? bit_waitqueue+0x17/0xd0
Dec  4 15:55:49 fruster kernel: [<ffffffff8113cc50>] ? bdi_start_fn+0x0/0x100
Dec  4 15:55:49 fruster kernel: [<ffffffff8113ccd6>] bdi_start_fn+0x86/0x100
Dec  4 15:55:49 fruster kernel: [<ffffffff8113cc50>] ? bdi_start_fn+0x0/0x100
Dec  4 15:55:49 fruster kernel: [<ffffffff81096a36>] kthread+0x96/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0ca>] child_rip+0xa/0x20
Dec  4 15:55:49 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 15:55:49 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 15:56:21 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 15:58:22 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:00:01 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:02:13 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:02:35 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:02:57 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:06:04 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:07:54 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:16:20 fruster pbs_server: LOG_ERROR::Job not found (15086) in svr_dequejob, Job has no queue
Dec  4 16:31:58 fruster dhcpd: DHCPREQUEST for 192.168.0.33 from 00:21:5a:46:ff:0a via eth1
Dec  4 16:31:58 fruster dhcpd: DHCPACK on 192.168.0.33 to 00:21:5a:46:ff:0a via eth1
Dec  4 16:43:22 fruster dhcpd: DHCPREQUEST for 192.168.0.26 from 00:1b:78:30:db:8a via eth1
Dec  4 16:43:22 fruster dhcpd: DHCPACK on 192.168.0.26 to 00:1b:78:30:db:8a via eth1
Dec  4 16:53:58 fruster dhcpd: DHCPREQUEST for 192.168.0.39 from 00:1e:0b:1e:55:34 via eth1
Dec  4 16:53:58 fruster dhcpd: DHCPACK on 192.168.0.39 to 00:1e:0b:1e:55:34 via eth1
Dec  4 17:30:04 fruster dhcpd: DHCPREQUEST for 192.168.0.29 from 00:1c:c4:bc:c9:e6 via eth1
Dec  4 17:30:04 fruster dhcpd: DHCPACK on 192.168.0.29 to 00:1c:c4:bc:c9:e6 via eth1
Dec  4 17:49:05 fruster dhcpd: DHCPREQUEST for 192.168.0.34 from 00:1b:78:2f:6b:ac via eth1
Dec  4 17:49:05 fruster dhcpd: DHCPACK on 192.168.0.34 to 00:1b:78:2f:6b:ac via eth1
Dec  4 17:58:15 fruster dhcpd: DHCPREQUEST for 192.168.0.32 from 00:1b:78:ca:bb:50 via eth1
Dec  4 17:58:15 fruster dhcpd: DHCPACK on 192.168.0.32 to 00:1b:78:ca:bb:50 via eth1
Dec  4 17:58:16 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 17:58:16 fruster kernel: 
Dec  4 17:58:16 fruster kernel: Pid: 4548, comm: nfsd Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 17:58:16 fruster kernel: Call Trace:
Dec  4 17:58:16 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa043c89d>] ? xfs_bmap_finish+0x15d/0x1a0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa04626ff>] ? xfs_itruncate_finish+0x15f/0x320 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa047e370>] ? xfs_inactive+0x330/0x480 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa04793f4>] ? _xfs_trans_commit+0x214/0x2a0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffffa048b9a0>] ? xfs_fs_clear_inode+0xa0/0xd0 [xfs]
Dec  4 17:58:16 fruster kernel: [<ffffffff8119d31c>] ? clear_inode+0xac/0x140
Dec  4 17:58:16 fruster kernel: [<ffffffff8119dad6>] ? generic_delete_inode+0x196/0x1d0
Dec  4 17:58:16 fruster kernel: [<ffffffff8119db75>] ? generic_drop_inode+0x65/0x80
Dec  4 17:58:16 fruster kernel: [<ffffffff8119c9c2>] ? iput+0x62/0x70
Dec  4 17:58:16 fruster kernel: [<ffffffff81199610>] ? dentry_iput+0x90/0x100
Dec  4 17:58:16 fruster kernel: [<ffffffff8119c278>] ? d_delete+0xe8/0xf0
Dec  4 17:58:16 fruster kernel: [<ffffffff8118fe99>] ? vfs_unlink+0xd9/0xf0
Dec  4 17:58:16 fruster kernel: [<ffffffffa071cf4f>] ? nfsd_unlink+0x1af/0x250 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0723f03>] ? nfsd3_proc_remove+0x83/0x120 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa071543e>] ? nfsd_dispatch+0xfe/0x240 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa068e624>] ? svc_process_common+0x344/0x640 [sunrpc]
Dec  4 17:58:16 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
Dec  4 17:58:16 fruster kernel: [<ffffffffa068ec60>] ? svc_process+0x110/0x160 [sunrpc]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0715b62>] ? nfsd+0xc2/0x160 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
Dec  4 17:58:16 fruster kernel: [<ffffffff81096a36>] ? kthread+0x96/0xa0
Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0ca>] ? child_rip+0xa/0x20
Dec  4 17:58:16 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
Dec  4 17:58:16 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x8) called from line 3863 of file fs/xfs/xfs_bmap.c.  Return address = 0xffffffffa043c8d6
Dec  4 17:58:16 fruster kernel: XFS (sda1): Corruption of in-memory data detected.  Shutting down filesystem
Dec  4 17:58:16 fruster kernel: XFS (sda1): Please umount the filesystem and rectify the problem(s)
Dec  4 17:58:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:58:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:59:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 17:59:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:00:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:00:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:01:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:01:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x1) called from line 1061 of file fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa04856e3
Dec  4 18:02:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
Dec  4 18:02:19 fruster init: tty (/dev/tty2) main process (3592) killed by TERM signal
Dec  4 18:02:19 fruster init: tty (/dev/tty3) main process (3594) killed by TERM signal
Dec  4 18:02:19 fruster init: tty (/dev/tty4) main process (3596) killed by TERM signal
Dec  4 18:02:19 fruster init: tty (/dev/tty5) main process (3599) killed by TERM signal
Dec  4 18:02:19 fruster init: tty (/dev/tty6) main process (3601) killed by TERM signal
Dec  4 18:02:27 fruster snmpd[2654]: Received TERM or STOP signal...  shutting down...
Dec  4 18:02:27 fruster xinetd[2686]: Exiting...
Dec  4 18:02:27 fruster acpid: exiting
Dec  4 18:02:28 fruster ntpd[2694]: ntpd exiting on signal 15
Dec  4 18:02:43 fruster named[1956]: received control channel command 'stop'
Dec  4 18:02:43 fruster named[1956]: shutting down: flushing changes
Dec  4 18:02:43 fruster named[1956]: stopping command channel on 127.0.0.1#953
Dec  4 18:02:43 fruster named[1956]: stopping command channel on ::1#953
Dec  4 18:02:43 fruster named[1956]: no longer listening on 127.0.0.1#53
Dec  4 18:02:43 fruster named[1956]: no longer listening on ::1#53
Dec  4 18:02:43 fruster init: Disconnected from system bus
Dec  4 18:02:43 fruster console-kit-daemon[6245]: WARNING: no sender#012
Dec  4 18:02:43 fruster named[1956]: exiting
Dec  4 18:02:45 fruster rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Dec  4 18:02:45 fruster auditd[3697]: The audit daemon is exiting.
Dec  4 18:02:45 fruster kernel: __ratelimit: 48 callbacks suppressed
Dec  4 18:02:45 fruster kernel: type=1305 audit(1386208965.626:105861): audit_pid=0 old=3697 auid=4294967295 ses=4294967295 res=1
Dec  4 18:02:45 fruster kernel: type=1305 audit(1386208965.725:105862): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 res=1
Dec  4 18:02:45 fruster nslcd[1879]: caught signal SIGTERM (15), shutting down
Dec  4 18:02:45 fruster nslcd[1879]: version 0.7.5 bailing out
Dec  4 18:02:45 fruster kernel: Kernel logging (proc) stopped.
Dec  4 18:02:45 fruster rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1892" x-info="http://www.rsyslog.com"] exiting on signal 15.
Dec  4 18:15:28 fruster kernel: imklog 5.8.10, log source = /proc/kmsg started.
Dec  4 18:15:28 fruster rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="2180" x-info="http://www.rsyslog.com"] start
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys cpuset
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys cpu
Dec  4 18:15:28 fruster kernel: Linux version 2.6.32-358.23.2.el6.x86_64 (mockbuild@sl6.fnal.gov) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Wed Oct 16 11:13:47 CDT 2013
Dec  4 18:15:28 fruster kernel: Command line: ro root=/dev/mapper/vg_fruster-lv_root rd_NO_LUKS rd_LVM_LV=vg_fruster/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_fruster/lv_swap rhgb quiet
Dec  4 18:15:28 fruster kernel: KERNEL supported cpus:
Dec  4 18:15:28 fruster kernel:  Intel GenuineIntel
Dec  4 18:15:28 fruster kernel:  AMD AuthenticAMD
Dec  4 18:15:28 fruster kernel:  Centaur CentaurHauls
Dec  4 18:15:28 fruster kernel: BIOS-provided physical RAM map:
Dec  4 18:15:28 fruster kernel: BIOS-e820: 0000000000000000 - 000000000009c800 (usable)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 000000000009c800 - 00000000000a0000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 0000000000100000 - 00000000cd9f7000 (usable)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000cd9f7000 - 00000000cdbf8000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000cdbf8000 - 00000000cdc09000 (ACPI data)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000cdc09000 - 00000000cdd30000 (ACPI NVS)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000cdd30000 - 00000000ce808000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000ce808000 - 00000000ce809000 (usable)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000ce809000 - 00000000ce84c000 (ACPI NVS)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000ce84c000 - 00000000cec74000 (usable)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000cec74000 - 00000000ceff4000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000ceff4000 - 00000000cf000000 (usable)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000f8000000 - 00000000fc000000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000fed00000 - 00000000fed04000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000fed1c000 - 00000000fed20000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000fee00000 - 00000000fee01000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 00000000ff000000 - 0000000100000000 (reserved)
Dec  4 18:15:28 fruster kernel: BIOS-e820: 0000000100000000 - 000000082f000000 (usable)
Dec  4 18:15:28 fruster kernel: DMI 2.7 present.
Dec  4 18:15:28 fruster kernel: SMBIOS version 2.7 @ 0xF04C0
Dec  4 18:15:28 fruster kernel: AMI BIOS detected: BIOS may corrupt low RAM, working around it.
Dec  4 18:15:28 fruster kernel: last_pfn = 0x82f000 max_arch_pfn = 0x400000000
Dec  4 18:15:28 fruster kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
Dec  4 18:15:28 fruster kernel: total RAM covered: 32752M
Dec  4 18:15:28 fruster kernel: Found optimal setting for mtrr clean up
Dec  4 18:15:28 fruster kernel: gran_size: 64K 	chunk_size: 32M 	num_reg: 9  	lose cover RAM: 0G
Dec  4 18:15:28 fruster kernel: last_pfn = 0xcf000 max_arch_pfn = 0x400000000
Dec  4 18:15:28 fruster kernel: init_memory_mapping: 0000000000000000-00000000cf000000
Dec  4 18:15:28 fruster kernel: init_memory_mapping: 0000000100000000-000000082f000000
Dec  4 18:15:28 fruster kernel: RAMDISK: 36f5d000 - 37fef63b
Dec  4 18:15:28 fruster kernel: ACPI: RSDP 00000000000f0490 00024 (v02 ALASKA)
Dec  4 18:15:28 fruster kernel: ACPI: XSDT 00000000cdbfb078 0006C (v01 ALASKA    A M I 01072009 AMI  00010013)
Dec  4 18:15:28 fruster kernel: ACPI: FACP 00000000cdc066b0 0010C (v05 ALASKA    A M I 01072009 AMI  00010013)
Dec  4 18:15:28 fruster kernel: ACPI Warning: FADT (revision 5) is longer than ACPI 2.0 version, truncating length 0x10C to 0xF4 (20090903/tbfadt-288)
Dec  4 18:15:28 fruster kernel: ACPI: DSDT 00000000cdbfb180 0B529 (v02 ALASKA    A M I 00000022 INTL 20051117)
Dec  4 18:15:28 fruster kernel: ACPI: FACS 00000000cdd2e080 00040
Dec  4 18:15:28 fruster kernel: ACPI: APIC 00000000cdc067c0 00092 (v03 ALASKA    A M I 01072009 AMI  00010013)
Dec  4 18:15:28 fruster kernel: ACPI: FPDT 00000000cdc06858 00044 (v01 ALASKA    A M I 01072009 AMI  00010013)
Dec  4 18:15:28 fruster kernel: ACPI: MCFG 00000000cdc068a0 0003C (v01 ALASKA    A M I 01072009 MSFT 00000097)
Dec  4 18:15:28 fruster kernel: ACPI: HPET 00000000cdc068e0 00038 (v01 ALASKA    A M I 01072009 AMI. 00000005)
Dec  4 18:15:28 fruster kernel: ACPI: SSDT 00000000cdc06918 0036D (v01 SataRe SataTabl 00001000 INTL 20091112)
Dec  4 18:15:28 fruster kernel: ACPI: DMAR 00000000cdc08128 00080 (v01 INTEL      SNB  00000001 INTL 00000001)
Dec  4 18:15:28 fruster kernel: ACPI: SSDT 00000000cdc06ce0 009AA (v01  PmRef  Cpu0Ist 00003000 INTL 20051117)
Dec  4 18:15:28 fruster kernel: ACPI: SSDT 00000000cdc07690 00A92 (v01  PmRef    CpuPm 00003000 INTL 20051117)
Dec  4 18:15:28 fruster kernel: Setting APIC routing to flat.
Dec  4 18:15:28 fruster kernel: No NUMA configuration found
Dec  4 18:15:28 fruster kernel: Faking a node at 0000000000000000-000000082f000000
Dec  4 18:15:28 fruster kernel: Bootmem setup node 0 0000000000000000-000000082f000000
Dec  4 18:15:28 fruster kernel:  NODE_DATA [0000000000030000 - 0000000000063fff]
Dec  4 18:15:28 fruster kernel:  bootmap [0000000000100000 -  0000000000205dff] pages 106
Dec  4 18:15:28 fruster kernel: (8 early reservations) ==> bootmem [0000000000 - 082f000000]
Dec  4 18:15:28 fruster kernel:  #0 [0000000000 - 0000001000]   BIOS data page ==> [0000000000 - 0000001000]
Dec  4 18:15:28 fruster kernel:  #1 [0000006000 - 0000008000]       TRAMPOLINE ==> [0000006000 - 0000008000]
Dec  4 18:15:28 fruster kernel:  #2 [0001000000 - 000201b0e4]    TEXT DATA BSS ==> [0001000000 - 000201b0e4]
Dec  4 18:15:28 fruster kernel:  #3 [0036f5d000 - 0037fef63b]          RAMDISK ==> [0036f5d000 - 0037fef63b]
Dec  4 18:15:28 fruster kernel:  #4 [000009c800 - 0000100000]    BIOS reserved ==> [000009c800 - 0000100000]
Dec  4 18:15:28 fruster kernel:  #5 [000201c000 - 000201c40e]              BRK ==> [000201c000 - 000201c40e]
Dec  4 18:15:28 fruster kernel:  #6 [0000010000 - 0000013000]          PGTABLE ==> [0000010000 - 0000013000]
Dec  4 18:15:28 fruster kernel:  #7 [0000013000 - 0000030000]          PGTABLE ==> [0000013000 - 0000030000]
Dec  4 18:15:28 fruster kernel: found SMP MP-table at [ffff8800000fd8c0] fd8c0
Dec  4 18:15:28 fruster kernel: Reserving 131MB of memory at 48MB for crashkernel (System RAM: 33520MB)
Dec  4 18:15:28 fruster kernel: Zone PFN ranges:
Dec  4 18:15:28 fruster kernel:  DMA      0x00000010 -> 0x00001000
Dec  4 18:15:28 fruster kernel:  DMA32    0x00001000 -> 0x00100000
Dec  4 18:15:28 fruster kernel:  Normal   0x00100000 -> 0x0082f000
Dec  4 18:15:28 fruster kernel: Movable zone start PFN for each node
Dec  4 18:15:28 fruster kernel: early_node_map[6] active PFN ranges
Dec  4 18:15:28 fruster kernel:    0: 0x00000010 -> 0x0000009c
Dec  4 18:15:28 fruster kernel:    0: 0x00000100 -> 0x000cd9f7
Dec  4 18:15:28 fruster kernel:    0: 0x000ce808 -> 0x000ce809
Dec  4 18:15:28 fruster kernel:    0: 0x000ce84c -> 0x000cec74
Dec  4 18:15:28 fruster kernel:    0: 0x000ceff4 -> 0x000cf000
Dec  4 18:15:28 fruster kernel:    0: 0x00100000 -> 0x0082f000
Dec  4 18:15:28 fruster kernel: ACPI: PM-Timer IO Port: 0x408
Dec  4 18:15:28 fruster kernel: Setting APIC routing to flat.
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
Dec  4 18:15:28 fruster kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Dec  4 18:15:28 fruster kernel: ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
Dec  4 18:15:28 fruster kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-23
Dec  4 18:15:28 fruster kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  4 18:15:28 fruster kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  4 18:15:28 fruster kernel: Using ACPI (MADT) for SMP configuration information
Dec  4 18:15:28 fruster kernel: ACPI: HPET id: 0x8086a701 base: 0xfed00000
Dec  4 18:15:28 fruster kernel: SMP: Allowing 8 CPUs, 0 hotplug CPUs
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 000000000009c000 - 000000000009d000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 000000000009d000 - 00000000000a0000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000000e0000 - 0000000000100000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cd9f7000 - 00000000cdbf8000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cdbf8000 - 00000000cdc09000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cdc09000 - 00000000cdd30000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cdd30000 - 00000000ce808000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000ce809000 - 00000000ce84c000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cec74000 - 00000000ceff4000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000cf000000 - 00000000f8000000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000f8000000 - 00000000fc000000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fc000000 - 00000000fec00000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fec00000 - 00000000fec01000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fec01000 - 00000000fed00000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fed00000 - 00000000fed04000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fed04000 - 00000000fed1c000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fed1c000 - 00000000fed20000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fed20000 - 00000000fee00000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fee00000 - 00000000fee01000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000fee01000 - 00000000ff000000
Dec  4 18:15:28 fruster kernel: PM: Registered nosave memory: 00000000ff000000 - 0000000100000000
Dec  4 18:15:28 fruster kernel: Allocating PCI resources starting at cf000000 (gap: cf000000:29000000)
Dec  4 18:15:28 fruster kernel: Booting paravirtualized kernel on bare hardware
Dec  4 18:15:28 fruster kernel: NR_CPUS:4096 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  4 18:15:28 fruster kernel: PERCPU: Embedded 31 pages/cpu @ffff880028200000 s94552 r8192 d24232 u262144
Dec  4 18:15:28 fruster kernel: pcpu-alloc: s94552 r8192 d24232 u262144 alloc=1*2097152
Dec  4 18:15:28 fruster kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec  4 18:15:28 fruster kernel: Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 8258282
Dec  4 18:15:28 fruster kernel: Policy zone: Normal
Dec  4 18:15:28 fruster kernel: Kernel command line: ro root=/dev/mapper/vg_fruster-lv_root rd_NO_LUKS rd_LVM_LV=vg_fruster/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=131M@0M rd_NO_DM  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_fruster/lv_swap rhgb quiet
Dec  4 18:15:28 fruster kernel: PID hash table entries: 4096 (order: 3, 32768 bytes)
Dec  4 18:15:28 fruster kernel: xsave/xrstor: enabled xstate_bv 0x7, cntxt size 0x340
Dec  4 18:15:28 fruster kernel: Checking aperture...
Dec  4 18:15:28 fruster kernel: No AGP bridge found
Dec  4 18:15:28 fruster kernel: dmar: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec  4 18:15:28 fruster kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  4 18:15:28 fruster kernel: Placing 64MB software IO TLB between ffff880020000000 - ffff880024000000
Dec  4 18:15:28 fruster kernel: software IO TLB at phys 0x20000000 - 0x24000000
Dec  4 18:15:28 fruster kernel: Memory: 32806848k/34324480k available (5223k kernel code, 821536k absent, 696096k reserved, 7119k data, 1264k init)
Dec  4 18:15:28 fruster kernel: Hierarchical RCU implementation.
Dec  4 18:15:28 fruster kernel: NR_IRQS:33024 nr_irqs:472
Dec  4 18:15:28 fruster kernel: Extended CMOS year: 2000
Dec  4 18:15:28 fruster kernel: Console: colour VGA+ 80x25
Dec  4 18:15:28 fruster kernel: console [tty0] enabled
Dec  4 18:15:28 fruster kernel: allocated 134217728 bytes of page_cgroup
Dec  4 18:15:28 fruster kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
Dec  4 18:15:28 fruster kernel: Fast TSC calibration using PIT
Dec  4 18:15:28 fruster kernel: Detected 3292.665 MHz processor.
Dec  4 18:15:28 fruster kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6585.33 BogoMIPS (lpj=3292665)
Dec  4 18:15:28 fruster kernel: pid_max: default: 32768 minimum: 301
Dec  4 18:15:28 fruster kernel: Security Framework initialized
Dec  4 18:15:28 fruster kernel: SELinux:  Initializing.
Dec  4 18:15:28 fruster kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes)
Dec  4 18:15:28 fruster kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Dec  4 18:15:28 fruster kernel: Mount-cache hash table entries: 256
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys ns
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys cpuacct
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys memory
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys devices
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys freezer
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys net_cls
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys blkio
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys perf_event
Dec  4 18:15:28 fruster kernel: Initializing cgroup subsys net_prio
Dec  4 18:15:28 fruster kernel: CPU: Physical Processor ID: 0
Dec  4 18:15:28 fruster kernel: CPU: Processor Core ID: 0
Dec  4 18:15:28 fruster kernel: mce: CPU supports 9 MCE banks
Dec  4 18:15:28 fruster kernel: CPU0: Thermal monitoring enabled (TM1)
Dec  4 18:15:28 fruster kernel: using mwait in idle threads.
Dec  4 18:15:28 fruster kernel: ACPI: Core revision 20090903
Dec  4 18:15:28 fruster kernel: ftrace: converting mcount calls to 0f 1f 44 00 00
Dec  4 18:15:28 fruster kernel: ftrace: allocating 21439 entries in 85 pages
Dec  4 18:15:28 fruster kernel: dmar: Host address width 36
Dec  4 18:15:28 fruster kernel: dmar: DRHD base: 0x000000fed90000 flags: 0x1
Dec  4 18:15:28 fruster kernel: dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c9008020660262 ecap f010da
Dec  4 18:15:28 fruster kernel: dmar: RMRR base: 0x000000cdb7a000 end: 0x000000cdb8bfff
Dec  4 18:15:28 fruster kernel: dmar: No ATSR found
Dec  4 18:15:28 fruster kernel: IOAPIC id 2 under DRHD base 0xfed90000
Dec  4 18:15:28 fruster kernel: HPET id 0 under DRHD base 0xfed90000
Dec  4 18:15:28 fruster kernel: Enabled IRQ remapping in x2apic mode
Dec  4 18:15:28 fruster kernel: Enabling x2apic
Dec  4 18:15:28 fruster kernel: Enabled x2apic
Dec  4 18:15:28 fruster kernel: APIC routing finalized to cluster x2apic.
Dec  4 18:15:28 fruster kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec  4 18:15:28 fruster kernel: CPU0: Intel(R) Xeon(R) CPU E3-1230 V2 @ 3.30GHz stepping 09
Dec  4 18:15:28 fruster kernel: Performance Events: PEBS fmt1+, SandyBridge events, Intel PMU driver.
Dec  4 18:15:28 fruster kernel: ... version:                3
Dec  4 18:15:28 fruster kernel: ... bit width:              48
Dec  4 18:15:28 fruster kernel: ... generic registers:      4
Dec  4 18:15:28 fruster kernel: ... value mask:             0000ffffffffffff
Dec  4 18:15:28 fruster kernel: ... max period:             000000007fffffff
Dec  4 18:15:28 fruster kernel: ... fixed-purpose events:   3
Dec  4 18:15:28 fruster kernel: ... event mask:             000000070000000f
Dec  4 18:15:28 fruster kernel: NMI watchdog enabled, takes one hw-pmu counter.
Dec  4 18:15:28 fruster kernel: Booting Node   0, Processors  #1 #2 #3 #4 #5 #6 #7 Ok.
Dec  4 18:15:28 fruster kernel: Brought up 8 CPUs
Dec  4 18:15:28 fruster kernel: Total of 8 processors activated (52682.64 BogoMIPS).
Dec  4 18:15:28 fruster kernel: devtmpfs: initialized
Dec  4 18:15:28 fruster kernel: PM: Registering ACPI NVS region at cdc09000 (1208320 bytes)
Dec  4 18:15:28 fruster kernel: PM: Registering ACPI NVS region at ce809000 (274432 bytes)
Dec  4 18:15:28 fruster kernel: regulator: core version 0.5
Dec  4 18:15:28 fruster kernel: NET: Registered protocol family 16
Dec  4 18:15:28 fruster kernel: ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
Dec  4 18:15:28 fruster kernel: ACPI: bus type pci registered
Dec  4 18:15:28 fruster kernel: PCI: MCFG configuration 0: base f8000000 segment 0 buses 0 - 63
Dec  4 18:15:28 fruster kernel: PCI: MCFG area at f8000000 reserved in E820
Dec  4 18:15:28 fruster kernel: PCI: Using MMCONFIG at f8000000 - fbffffff
Dec  4 18:15:28 fruster kernel: PCI: Using configuration type 1 for base access
Dec  4 18:15:28 fruster kernel: bio: create slab <bio-0> at 0
Dec  4 18:15:28 fruster kernel: ACPI: Executed 1 blocks of module-level executable AML code
Dec  4 18:15:28 fruster kernel: ACPI Error (psargs-0359): [RAMB] Namespace lookup failure, AE_NOT_FOUND
Dec  4 18:15:28 fruster kernel: ACPI Exception: AE_NOT_FOUND, Could not execute arguments for [RAMW] (Region) (20090903/nsinit-347)
Dec  4 18:15:28 fruster kernel: ACPI: Interpreter enabled
Dec  4 18:15:28 fruster kernel: ACPI: (supports S0 S3 S4 S5)
Dec  4 18:15:28 fruster kernel: ACPI: Using IOAPIC for interrupt routing
Dec  4 18:15:28 fruster kernel: ACPI: Power Resource [FN00] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Power Resource [FN01] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Power Resource [FN02] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Power Resource [FN03] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Power Resource [FN04] (off)
Dec  4 18:15:28 fruster kernel: ACPI: No dock devices found.
Dec  4 18:15:28 fruster kernel: HEST: Table not found.
Dec  4 18:15:28 fruster kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  4 18:15:28 fruster kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-3e])
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [io  0x0000-0x0cf7]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [io  0x0d00-0xffff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0x000a0000-0x000bffff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0x000d8000-0x000dbfff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0x000dc000-0x000dffff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0x000e0000-0x000e3fff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0x000e4000-0x000e7fff]
Dec  4 18:15:28 fruster kernel: pci_root PNP0A08:00: host bridge window [mem 0xd0000000-0xfeafffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:14.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:16.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1a.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1b.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:00:1d.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1f.2: PME# supported from D3hot
Dec  4 18:15:28 fruster kernel: pci 0000:00:1f.2: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PCI bridge to [bus 01-01]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PCI bridge to [bus 02-02]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PCI bridge to [bus 03-03]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PCI bridge to [bus 04-04]
Dec  4 18:15:28 fruster kernel: pci 0000:05:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:05:00.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PCI bridge to [bus 05-05]
Dec  4 18:15:28 fruster kernel: pci 0000:06:00.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:06:00.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PCI bridge to [bus 06-06]
Dec  4 18:15:28 fruster kernel: pci 0000:07:00.0: PME# supported from D0 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:07:00.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PCI bridge to [bus 07-07]
Dec  4 18:15:28 fruster kernel: pci 0000:08:03.0: PME# supported from D2 D3hot D3cold
Dec  4 18:15:28 fruster kernel: pci 0000:08:03.0: PME# disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08] (subtractive decode)
Dec  4 18:15:28 fruster kernel: pci0000:00: Requesting ACPI _OSC control (0x1d)
Dec  4 18:15:28 fruster kernel: pci0000:00: ACPI _OSC control (0x18) granted
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 10 *11 12 14 15)
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 *10 11 12 14 15)
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 6 *10 11 12 14 15)
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKD] (IRQs *3 4 5 6 10 11 12 14 15)
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKE] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 10 11 12 14 15) *0, disabled.
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 10 *11 12 14 15)
Dec  4 18:15:28 fruster kernel: ACPI: PCI Interrupt Link [LNKH] (IRQs *3 4 5 6 10 11 12 14 15)
Dec  4 18:15:28 fruster kernel: vgaarb: device added: PCI:0000:01:00.0,decodes=io+mem,owns=io+mem,locks=none
Dec  4 18:15:28 fruster kernel: vgaarb: loaded
Dec  4 18:15:28 fruster kernel: vgaarb: bridge control possible 0000:01:00.0
Dec  4 18:15:28 fruster kernel: SCSI subsystem initialized
Dec  4 18:15:28 fruster kernel: usbcore: registered new interface driver usbfs
Dec  4 18:15:28 fruster kernel: usbcore: registered new interface driver hub
Dec  4 18:15:28 fruster kernel: usbcore: registered new device driver usb
Dec  4 18:15:28 fruster kernel: PCI: Using ACPI for IRQ routing
Dec  4 18:15:28 fruster kernel: NetLabel: Initializing
Dec  4 18:15:28 fruster kernel: NetLabel:  domain hash size = 128
Dec  4 18:15:28 fruster kernel: NetLabel:  protocols = UNLABELED CIPSOv4
Dec  4 18:15:28 fruster kernel: NetLabel:  unlabeled traffic allowed by default
Dec  4 18:15:28 fruster kernel: HPET: 8 timers in total, 5 timers will be used for per-cpu timer
Dec  4 18:15:28 fruster kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 25, 26, 27, 28, 29, 0
Dec  4 18:15:28 fruster kernel: hpet0: 8 comparators, 64-bit 14.318180 MHz counter
Dec  4 18:15:28 fruster kernel: Switching to clocksource hpet
Dec  4 18:15:28 fruster kernel: pnp: PnP ACPI init
Dec  4 18:15:28 fruster kernel: ACPI: bus type pnp registered
Dec  4 18:15:28 fruster kernel: pnp: PnP ACPI: found 15 devices
Dec  4 18:15:28 fruster kernel: ACPI: ACPI bus type pnp unregistered
Dec  4 18:15:28 fruster kernel: system 00:01: [mem 0xfed40000-0xfed44fff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x0680-0x069f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x1000-0x100f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0xffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0xffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x0400-0x0453] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x0458-0x047f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x0500-0x057f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:05: [io  0x164e-0x164f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:07: [io  0x0454-0x0457] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:08: [io  0x0290-0x029f] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0a: [io  0x04d0-0x04d1] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed1c000-0xfed1ffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed10000-0xfed17fff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed18000-0xfed18fff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed19000-0xfed19fff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xf8000000-0xfbffffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xff000000-0xffffffff] has been reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec  4 18:15:28 fruster kernel: system 00:0e: [mem 0xd0000000-0xd0000fff] has been reserved
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PCI bridge to [bus 01-01]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PCI bridge to [bus 01-01]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0:   bridge window [io  0xe000-0xefff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0:   bridge window [mem 0xe0000000-0xf00fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PCI bridge to [bus 02-02]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PCI bridge to [bus 02-02]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1:   bridge window [io  0xd000-0xdfff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1:   bridge window [mem 0xf0600000-0xf06fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PCI bridge to [bus 03-03]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PCI bridge to [bus 03-03]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0:   bridge window [io  disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0:   bridge window [mem 0xf0500000-0xf05fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0:   bridge window [mem 0xf7800000-0xf7ffffff 64bit pref]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PCI bridge to [bus 04-04]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PCI bridge to [bus 04-04]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0:   bridge window [io  disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0:   bridge window [mem disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PCI bridge to [bus 05-05]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PCI bridge to [bus 05-05]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4:   bridge window [io  disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4:   bridge window [mem 0xf0400000-0xf04fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PCI bridge to [bus 06-06]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PCI bridge to [bus 06-06]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5:   bridge window [io  0xc000-0xcfff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5:   bridge window [mem 0xf0300000-0xf03fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PCI bridge to [bus 07-07]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PCI bridge to [bus 07-07]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6:   bridge window [io  0xb000-0xbfff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6:   bridge window [mem 0xf0200000-0xf02fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0: PCI bridge to [bus 08-08]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0:   bridge window [io  0xa000-0xafff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0:   bridge window [mem 0xf0100000-0xf01fffff]
Dec  4 18:15:28 fruster kernel: pci 0000:00:1e.0:   bridge window [mem pref disabled]
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:01.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:06.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.4: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.5: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Dec  4 18:15:28 fruster kernel: pci 0000:00:1c.6: PCI INT C -> GSI 18 (level, low) -> IRQ 18
Dec  4 18:15:28 fruster kernel: NET: Registered protocol family 2
Dec  4 18:15:28 fruster kernel: IP route cache hash table entries: 524288 (order: 10, 4194304 bytes)
Dec  4 18:15:28 fruster kernel: TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
Dec  4 18:15:28 fruster kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
Dec  4 18:15:28 fruster kernel: TCP: Hash tables configured (established 524288 bind 65536)
Dec  4 18:15:28 fruster kernel: TCP reno registered
Dec  4 18:15:28 fruster kernel: NET: Registered protocol family 1
Dec  4 18:15:28 fruster kernel: pci 0000:00:14.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:14.0: PCI INT A disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:00:1a.0: PCI INT A disabled
Dec  4 18:15:28 fruster kernel: pci 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
Dec  4 18:15:28 fruster kernel: pci 0000:00:1d.0: PCI INT A disabled
Dec  4 18:15:28 fruster kernel: pci 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: pci 0000:05:00.0: PCI INT A disabled
Dec  4 18:15:28 fruster kernel: Trying to unpack rootfs image as initramfs...
Dec  4 18:15:28 fruster kernel: Freeing initrd memory: 16969k freed
Dec  4 18:15:28 fruster kernel: audit: initializing netlink socket (disabled)
Dec  4 18:15:28 fruster kernel: type=2000 audit(1386209654.350:1): initialized
Dec  4 18:15:28 fruster kernel: HugeTLB registered 2 MB page size, pre-allocated 0 pages
Dec  4 18:15:28 fruster kernel: VFS: Disk quotas dquot_6.5.2
Dec  4 18:15:28 fruster kernel: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  4 18:15:28 fruster kernel: msgmni has been set to 32768
Dec  4 18:15:28 fruster kernel: alg: No test for stdrng (krng)
Dec  4 18:15:28 fruster kernel: ksign: Installing public key data
Dec  4 18:15:28 fruster kernel: Loading keyring
Dec  4 18:15:28 fruster kernel: - Added public key 1EC70CB89755E23
Dec  4 18:15:28 fruster kernel: - User ID: Red Hat, Inc. (Kernel Module GPG key)
Dec  4 18:15:28 fruster kernel: - Added public key D4A26C9CCD09BEDA
Dec  4 18:15:28 fruster kernel: - User ID: Red Hat Enterprise Linux Driver Update Program <secalert@redhat.com>
Dec  4 18:15:28 fruster kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec  4 18:15:28 fruster kernel: io scheduler noop registered
Dec  4 18:15:28 fruster kernel: io scheduler anticipatory registered
Dec  4 18:15:28 fruster kernel: io scheduler deadline registered
Dec  4 18:15:28 fruster kernel: io scheduler cfq registered (default)
Dec  4 18:15:28 fruster kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
Dec  4 18:15:28 fruster kernel: pciehp: PCI Express Hot Plug Controller Driver version: 0.4
Dec  4 18:15:28 fruster kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  4 18:15:28 fruster kernel: ipmi message handler version 39.2
Dec  4 18:15:28 fruster kernel: IPMI System Interface driver.
Dec  4 18:15:28 fruster kernel: ipmi_si: Adding default-specified kcs state machine
Dec  4 18:15:28 fruster kernel: ipmi_si: Trying default-specified kcs state machine at i/o address 0xca2, slave address 0x0, irq 0
Dec  4 18:15:28 fruster kernel: ipmi_si: Interface detection failed
Dec  4 18:15:28 fruster kernel: ipmi_si: Adding default-specified smic state machine
Dec  4 18:15:28 fruster kernel: ipmi_si: Trying default-specified smic state machine at i/o address 0xca9, slave address 0x0, irq 0
Dec  4 18:15:28 fruster kernel: ipmi_si: Interface detection failed
Dec  4 18:15:28 fruster kernel: ipmi_si: Adding default-specified bt state machine
Dec  4 18:15:28 fruster kernel: ipmi_si: Trying default-specified bt state machine at i/o address 0xe4, slave address 0x0, irq 0
Dec  4 18:15:28 fruster kernel: ipmi_si: Interface detection failed
Dec  4 18:15:28 fruster kernel: ipmi_si: Unable to find any System Interface(s)
Dec  4 18:15:28 fruster kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec  4 18:15:28 fruster kernel: ACPI: Power Button [PWRB]
Dec  4 18:15:28 fruster kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input1
Dec  4 18:15:28 fruster kernel: ACPI: Power Button [PWRF]
Dec  4 18:15:28 fruster kernel: ACPI: Fan [FAN0] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Fan [FAN1] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Fan [FAN2] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Fan [FAN3] (off)
Dec  4 18:15:28 fruster kernel: ACPI: Fan [FAN4] (off)
Dec  4 18:15:28 fruster kernel: ACPI: SSDT 00000000cdba6a98 00303 (v01  PmRef    ApIst 00003000 INTL 20051117)
Dec  4 18:15:28 fruster kernel: ACPI: SSDT 00000000cdba7c18 00119 (v01  PmRef    ApCst 00003000 INTL 20051117)
Dec  4 18:15:28 fruster kernel: thermal LNXTHERM:01: registered as thermal_zone0
Dec  4 18:15:28 fruster kernel: ACPI: Thermal Zone [TZ00] (28 C)
Dec  4 18:15:28 fruster kernel: thermal LNXTHERM:02: registered as thermal_zone1
Dec  4 18:15:28 fruster kernel: ACPI: Thermal Zone [TZ01] (30 C)
Dec  4 18:15:28 fruster kernel: ERST: Table is not found!
Dec  4 18:15:28 fruster kernel: GHES: HEST is not enabled!
Dec  4 18:15:28 fruster kernel: Non-volatile memory driver v1.3
Dec  4 18:15:28 fruster kernel: Linux agpgart interface v0.103
Dec  4 18:15:28 fruster kernel: crash memory driver: version 1.1
Dec  4 18:15:28 fruster kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  4 18:15:28 fruster kernel: serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Dec  4 18:15:28 fruster kernel: 00:0d: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
Dec  4 18:15:28 fruster kernel: brd: module loaded
Dec  4 18:15:28 fruster kernel: loop: module loaded
Dec  4 18:15:28 fruster kernel: input: Macintosh mouse button emulation as /devices/virtual/input/input2
Dec  4 18:15:28 fruster kernel: Fixed MDIO Bus: probed
Dec  4 18:15:28 fruster kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: EHCI Host Controller
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 1
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: debug port 2
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: irq 16, io mem 0xf0718000
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1a.0: USB 2.0 started, EHCI 1.00
Dec  4 18:15:28 fruster kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
Dec  4 18:15:28 fruster kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb1: Product: EHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb1: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 ehci_hcd
Dec  4 18:15:28 fruster kernel: usb usb1: SerialNumber: 0000:00:1a.0
Dec  4 18:15:28 fruster kernel: usb usb1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 1-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 1-0:1.0: 2 ports detected
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: EHCI Host Controller
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: debug port 2
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: irq 23, io mem 0xf0717000
Dec  4 18:15:28 fruster kernel: ehci_hcd 0000:00:1d.0: USB 2.0 started, EHCI 1.00
Dec  4 18:15:28 fruster kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0002
Dec  4 18:15:28 fruster kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb2: Product: EHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb2: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 ehci_hcd
Dec  4 18:15:28 fruster kernel: usb usb2: SerialNumber: 0000:00:1d.0
Dec  4 18:15:28 fruster kernel: usb usb2: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 2-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 2-0:1.0: 2 ports detected
Dec  4 18:15:28 fruster kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec  4 18:15:28 fruster kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec  4 18:15:28 fruster kernel: PNP: PS/2 Controller [PNP0303:PS2K] at 0x60,0x64 irq 1
Dec  4 18:15:28 fruster kernel: PNP: PS/2 appears to have AUX port disabled, if this is incorrect please boot with i8042.nopnp
Dec  4 18:15:28 fruster kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  4 18:15:28 fruster kernel: mice: PS/2 mouse device common for all mice
Dec  4 18:15:28 fruster kernel: rtc_cmos 00:06: RTC can wake from S4
Dec  4 18:15:28 fruster kernel: rtc_cmos 00:06: rtc core: registered rtc_cmos as rtc0
Dec  4 18:15:28 fruster kernel: rtc0: alarms up to one month, y3k, 242 bytes nvram, hpet irqs
Dec  4 18:15:28 fruster kernel: cpuidle: using governor ladder
Dec  4 18:15:28 fruster kernel: cpuidle: using governor menu
Dec  4 18:15:28 fruster kernel: EFI Variables Facility v0.08 2004-May-17
Dec  4 18:15:28 fruster kernel: usbcore: registered new interface driver hiddev
Dec  4 18:15:28 fruster kernel: usbcore: registered new interface driver usbhid
Dec  4 18:15:28 fruster kernel: usbhid: v2.6:USB HID core driver
Dec  4 18:15:28 fruster kernel: TCP cubic registered
Dec  4 18:15:28 fruster kernel: Initializing XFRM netlink socket
Dec  4 18:15:28 fruster kernel: NET: Registered protocol family 17
Dec  4 18:15:28 fruster kernel: registered taskstats version 1
Dec  4 18:15:28 fruster kernel: rtc_cmos 00:06: setting system clock to 2013-12-05 02:14:15 UTC (1386209655)
Dec  4 18:15:28 fruster kernel: Initalizing network drop monitor service
Dec  4 18:15:28 fruster kernel: Freeing unused kernel memory: 1264k freed
Dec  4 18:15:28 fruster kernel: Write protecting the kernel read-only data: 10240k
Dec  4 18:15:28 fruster kernel: Freeing unused kernel memory: 900k freed
Dec  4 18:15:28 fruster kernel: Freeing unused kernel memory: 1672k freed
Dec  4 18:15:28 fruster kernel: dracut: dracut-004-283.el6
Dec  4 18:15:28 fruster kernel: dracut: rd_NO_LUKS: removing cryptoluks activation
Dec  4 18:15:28 fruster kernel: device-mapper: uevent: version 1.0.3
Dec  4 18:15:28 fruster kernel: device-mapper: ioctl: 4.23.6-ioctl (2012-07-25) initialised: dm-devel@redhat.com
Dec  4 18:15:28 fruster kernel: udev: starting version 147
Dec  4 18:15:28 fruster kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input3
Dec  4 18:15:28 fruster kernel: [drm] Initialized drm 1.1.0 20060810
Dec  4 18:15:28 fruster kernel: [drm] radeon defaulting to kernel modesetting.
Dec  4 18:15:28 fruster kernel: [drm] radeon kernel modesetting enabled.
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: [drm] initializing kernel modesetting (TURKS 0x1002:0x6758 0x1682:0x3181).
Dec  4 18:15:28 fruster kernel: [drm] register mmio base: 0xF0020000
Dec  4 18:15:28 fruster kernel: [drm] register mmio size: 131072
Dec  4 18:15:28 fruster kernel: ATOM BIOS: TURKS
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: GTT: 512M 0x0000000040000000 - 0x000000005FFFFFFF
Dec  4 18:15:28 fruster kernel: [drm] Detected VRAM RAM=1024M, BAR=256M
Dec  4 18:15:28 fruster kernel: [drm] RAM width 128bits DDR
Dec  4 18:15:28 fruster kernel: [TTM] Zone  kernel: Available graphics memory: 16413828 kiB
Dec  4 18:15:28 fruster kernel: [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
Dec  4 18:15:28 fruster kernel: [TTM] Initializing pool allocator
Dec  4 18:15:28 fruster kernel: [TTM] Initializing DMA pool allocator
Dec  4 18:15:28 fruster kernel: [drm] radeon: 1024M of VRAM memory ready
Dec  4 18:15:28 fruster kernel: [drm] radeon: 512M of GTT memory ready.
Dec  4 18:15:28 fruster kernel: [drm] Supports vblank timestamp caching Rev 1 (10.10.2010).
Dec  4 18:15:28 fruster kernel: [drm] Driver supports precise vblank timestamp query.
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: radeon: using MSI.
Dec  4 18:15:28 fruster kernel: [drm] radeon: irq initialized.
Dec  4 18:15:28 fruster kernel: [drm] GART: num cpu pages 131072, num gpu pages 131072
Dec  4 18:15:28 fruster kernel: [drm] probing gen 2 caps for device 8086:151 = 3/e
Dec  4 18:15:28 fruster kernel: [drm] enabling PCIE gen 2 link speeds, disable with radeon.pcie_gen2=0
Dec  4 18:15:28 fruster kernel: [drm] Loading TURKS Microcode
Dec  4 18:15:28 fruster kernel: platform radeon_cp.0: firmware: requesting radeon/TURKS_pfp.bin
Dec  4 18:15:28 fruster kernel: platform radeon_cp.0: firmware: requesting radeon/TURKS_me.bin
Dec  4 18:15:28 fruster kernel: platform radeon_cp.0: firmware: requesting radeon/BTC_rlc.bin
Dec  4 18:15:28 fruster kernel: platform radeon_cp.0: firmware: requesting radeon/TURKS_mc.bin
Dec  4 18:15:28 fruster kernel: [drm] PCIE GART of 512M enabled (table at 0x0000000000040000).
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: WB enabled
Dec  4 18:15:28 fruster kernel: radeon 0000:01:00.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff88081da48c00
Dec  4 18:15:28 fruster kernel: [drm] ring test on 0 succeeded in 3 usecs
Dec  4 18:15:28 fruster kernel: [drm] ib test on ring 0 succeeded in 0 usecs
Dec  4 18:15:28 fruster kernel: [drm] Radeon Display Connectors
Dec  4 18:15:28 fruster kernel: [drm] Connector 0:
Dec  4 18:15:28 fruster kernel: [drm]   HDMI-A-1
Dec  4 18:15:28 fruster kernel: [drm]   HPD2
Dec  4 18:15:28 fruster kernel: [drm]   DDC: 0x6470 0x6470 0x6474 0x6474 0x6478 0x6478 0x647c 0x647c
Dec  4 18:15:28 fruster kernel: [drm]   Encoders:
Dec  4 18:15:28 fruster kernel: [drm]     DFP1: INTERNAL_UNIPHY2
Dec  4 18:15:28 fruster kernel: [drm] Connector 1:
Dec  4 18:15:28 fruster kernel: [drm]   DVI-I-1
Dec  4 18:15:28 fruster kernel: [drm]   HPD1
Dec  4 18:15:28 fruster kernel: [drm]   DDC: 0x6460 0x6460 0x6464 0x6464 0x6468 0x6468 0x646c 0x646c
Dec  4 18:15:28 fruster kernel: [drm]   Encoders:
Dec  4 18:15:28 fruster kernel: [drm]     DFP2: INTERNAL_UNIPHY
Dec  4 18:15:28 fruster kernel: [drm] Connector 2:
Dec  4 18:15:28 fruster kernel: [drm]   VGA-1
Dec  4 18:15:28 fruster kernel: [drm]   DDC: 0x6430 0x6430 0x6434 0x6434 0x6438 0x6438 0x643c 0x643c
Dec  4 18:15:28 fruster kernel: [drm]   Encoders:
Dec  4 18:15:28 fruster kernel: [drm]     CRT1: INTERNAL_KLDSCP_DAC1
Dec  4 18:15:28 fruster kernel: [drm] Internal thermal controller with fan control
Dec  4 18:15:28 fruster kernel: [drm] radeon: power management initialized
Dec  4 18:15:28 fruster kernel: [drm] fb mappable at 0xE0142000
Dec  4 18:15:28 fruster kernel: [drm] vram apper at 0xE0000000
Dec  4 18:15:28 fruster kernel: [drm] size 5242880
Dec  4 18:15:28 fruster kernel: [drm] fb depth is 24
Dec  4 18:15:28 fruster kernel: [drm]    pitch is 5120
Dec  4 18:15:28 fruster kernel: fbcon: radeondrmfb (fb0) is primary device
Dec  4 18:15:28 fruster kernel: usb 1-1: new high speed USB device number 2 using ehci_hcd
Dec  4 18:15:28 fruster kernel: Console: switching to colour frame buffer device 160x64
Dec  4 18:15:28 fruster kernel: fb0: radeondrmfb frame buffer device
Dec  4 18:15:28 fruster kernel: drm: registered panic notifier
Dec  4 18:15:28 fruster kernel: Slow work thread pool: Starting up
Dec  4 18:15:28 fruster kernel: Slow work thread pool: Ready
Dec  4 18:15:28 fruster kernel: [drm] Initialized radeon 2.22.0 20080528 for 0000:01:00.0 on minor 0
Dec  4 18:15:28 fruster kernel: dracut: Starting plymouth daemon
Dec  4 18:15:28 fruster kernel: usb 1-1: New USB device found, idVendor=8087, idProduct=0024
Dec  4 18:15:28 fruster kernel: usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 1-1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 1-1:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 1-1:1.0: 6 ports detected
Dec  4 18:15:28 fruster kernel: dracut: rd_NO_DM: removing DM RAID activation
Dec  4 18:15:28 fruster kernel: dracut: rd_NO_MD: removing MD RAID activation
Dec  4 18:15:28 fruster kernel: megasas: 06.504.01.00-rh1 Mon. Oct. 8 17:00:00 PDT 2012
Dec  4 18:15:28 fruster kernel: megasas: 0x1000:0x0079:0x1000:0x9260: bus 2:slot 0:func 0
Dec  4 18:15:28 fruster kernel: megaraid_sas 0000:02:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
Dec  4 18:15:28 fruster kernel: megasas: FW now in Ready state
Dec  4 18:15:28 fruster kernel: megasas_init_mfi: fw_support_ieee=0
Dec  4 18:15:28 fruster kernel: megasas: INIT adapter done
Dec  4 18:15:28 fruster kernel: usb 2-1: new high speed USB device number 2 using ehci_hcd
Dec  4 18:15:28 fruster kernel: scsi0 : LSI SAS based MegaRAID driver
Dec  4 18:15:28 fruster kernel: scsi 0:0:8:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:9:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:10:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:11:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:12:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:13:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:14:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:15:0: Enclosure         LSI CORP SAS2X28          0717 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: usb 2-1: New USB device found, idVendor=8087, idProduct=0024
Dec  4 18:15:28 fruster kernel: usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 2-1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 2-1:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 2-1:1.0: 8 ports detected
Dec  4 18:15:28 fruster kernel: scsi 0:0:16:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:17:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:18:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:19:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: Refined TSC clocksource calibration: 3292.520 MHz.
Dec  4 18:15:28 fruster kernel: Switching to clocksource tsc
Dec  4 18:15:28 fruster kernel: scsi 0:0:20:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:21:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:22:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:23:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: scsi 0:0:24:0: Direct-Access     ATA      WDC WD2002FAEX-0 1D05 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: usb 1-1.1: new high speed USB device number 3 using ehci_hcd
Dec  4 18:15:28 fruster kernel: scsi 0:2:0:0: Direct-Access     LSI      MR9260-4i        2.13 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19
Dec  4 18:15:28 fruster kernel: ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x1 impl SATA mode
Dec  4 18:15:28 fruster kernel: ahci 0000:00:1f.2: flags: 64bit ncq led clo pio slum part ems apst 
Dec  4 18:15:28 fruster kernel: scsi1 : ahci
Dec  4 18:15:28 fruster kernel: scsi2 : ahci
Dec  4 18:15:28 fruster kernel: scsi3 : ahci
Dec  4 18:15:28 fruster kernel: scsi4 : ahci
Dec  4 18:15:28 fruster kernel: scsi5 : ahci
Dec  4 18:15:28 fruster kernel: scsi6 : ahci
Dec  4 18:15:28 fruster kernel: ata1: SATA max UDMA/133 abar m2048@0xf0716000 port 0xf0716100 irq 35
Dec  4 18:15:28 fruster kernel: ata2: DUMMY
Dec  4 18:15:28 fruster kernel: ata3: DUMMY
Dec  4 18:15:28 fruster kernel: ata4: DUMMY
Dec  4 18:15:28 fruster kernel: ata5: DUMMY
Dec  4 18:15:28 fruster kernel: ata6: DUMMY
Dec  4 18:15:28 fruster kernel: usb 1-1.1: New USB device found, idVendor=1058, idProduct=1021
Dec  4 18:15:28 fruster kernel: usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Dec  4 18:15:28 fruster kernel: usb 1-1.1: Product: Ext HDD 1021
Dec  4 18:15:28 fruster kernel: usb 1-1.1: Manufacturer: Western Digital
Dec  4 18:15:28 fruster kernel: usb 1-1.1: SerialNumber: 574D41565533393230313434
Dec  4 18:15:28 fruster kernel: usb 1-1.1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec  4 18:15:28 fruster kernel: ata1.00: ACPI _SDD failed (AE 0x5)
Dec  4 18:15:28 fruster kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec  4 18:15:28 fruster kernel: ata1.00: ACPI _SDD failed (AE 0x5)
Dec  4 18:15:28 fruster kernel: ata1.00: ACPI: failed the second time, disabled
Dec  4 18:15:28 fruster kernel: ata1.00: ATA-8: OCZ-AGILITY3, 2.22, max UDMA/133
Dec  4 18:15:28 fruster kernel: ata1.00: 175836528 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Dec  4 18:15:28 fruster kernel: ata1.00: configured for UDMA/133
Dec  4 18:15:28 fruster kernel: scsi 1:0:0:0: Direct-Access     ATA      OCZ-AGILITY3     2.22 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: sd 1:0:0:0: [sdb] 175836528 512-byte logical blocks: (90.0 GB/83.8 GiB)
Dec  4 18:15:28 fruster kernel: sd 0:2:0:0: [sda] 54683238400 512-byte logical blocks: (27.9 TB/25.4 TiB)
Dec  4 18:15:28 fruster kernel: sd 1:0:0:0: [sdb] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 18:15:28 fruster kernel: sd 0:2:0:0: [sda] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 0:2:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 18:15:28 fruster kernel: sdb:
Dec  4 18:15:28 fruster kernel: sda: sdb1 sdb2
Dec  4 18:15:28 fruster kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sda1
Dec  4 18:15:28 fruster kernel: sd 0:2:0:0: [sda] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: firewire_ohci 0000:08:03.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
Dec  4 18:15:28 fruster kernel: firewire_ohci: Added fw-ohci device 0000:08:03.0, OHCI version 1.10
Dec  4 18:15:28 fruster kernel: Initializing USB Mass Storage driver...
Dec  4 18:15:28 fruster kernel: scsi7 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: usbcore: registered new interface driver usb-storage
Dec  4 18:15:28 fruster kernel: USB Mass Storage support registered.
Dec  4 18:15:28 fruster kernel: dracut: Scanning devices sdb2  for LVM logical volumes vg_fruster/lv_root vg_fruster/lv_swap 
Dec  4 18:15:28 fruster kernel: dracut: inactive '/dev/vg_fruster/lv_root' [52.10 GiB] inherit
Dec  4 18:15:28 fruster kernel: dracut: inactive '/dev/vg_fruster/lv_swap' [31.25 GiB] inherit
Dec  4 18:15:28 fruster kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: 
Dec  4 18:15:28 fruster kernel: dracut: Mounted root filesystem /dev/mapper/vg_fruster-lv_root
Dec  4 18:15:28 fruster kernel: SELinux:  Disabled at runtime.
Dec  4 18:15:28 fruster kernel: type=1404 audit(1386209662.280:2): selinux=0 auid=4294967295 ses=4294967295
Dec  4 18:15:28 fruster kernel: dracut: 
Dec  4 18:15:28 fruster kernel: dracut: Switching root
Dec  4 18:15:28 fruster kernel: readahead-collector: starting
Dec  4 18:15:28 fruster kernel: udev: starting version 147
Dec  4 18:15:28 fruster kernel: firewire_core: created device fw0: GUID 001e8c0000585c9e, S400
Dec  4 18:15:28 fruster kernel: snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Line as /devices/pci0000:00/0000:00:1b.0/sound/card0/input4
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Front Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input5
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Rear Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Front Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Line Out Side as /devices/pci0000:00/0000:00:1b.0/sound/card0/input8
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Line Out CLFE as /devices/pci0000:00/0000:00:1b.0/sound/card0/input9
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Line Out Surround as /devices/pci0000:00/0000:00:1b.0/sound/card0/input10
Dec  4 18:15:28 fruster kernel: input: HDA Intel PCH Line Out Front as /devices/pci0000:00/0000:00:1b.0/sound/card0/input11
Dec  4 18:15:28 fruster kernel: snd_hda_intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
Dec  4 18:15:28 fruster kernel: input: HD-Audio Generic HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input12
Dec  4 18:15:28 fruster kernel: ses 0:0:15:0: Attached Enclosure device
Dec  4 18:15:28 fruster kernel: ses 0:0:15:0: Attached scsi generic sg0 type 13
Dec  4 18:15:28 fruster kernel: sd 0:2:0:0: Attached scsi generic sg1 type 0
Dec  4 18:15:28 fruster kernel: sd 1:0:0:0: Attached scsi generic sg2 type 0
Dec  4 18:15:28 fruster kernel: mlx4_core: Mellanox ConnectX core driver v1.0-mlnx_ofed1.5.3 (November 3, 2011)
Dec  4 18:15:28 fruster kernel: mlx4_core: Initializing 0000:03:00.0
Dec  4 18:15:28 fruster kernel: mlx4_core 0000:03:00.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
Dec  4 18:15:28 fruster kernel: scsi 7:0:0:0: Direct-Access     WD       Ext HDD 1021     2002 PQ: 0 ANSI: 4
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] 2930272256 512-byte logical blocks: (1.50 TB/1.36 TiB)
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Test WP failed, assume Write Enabled
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Test WP failed, assume Write Enabled
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdc: sdc1
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Test WP failed, assume Write Enabled
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 7:0:0:0: [sdc] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: mlx4_core 0000:03:00.0: command 0x34 failed: fw status = 0x2
Dec  4 18:15:28 fruster kernel: mlx4_core 0000:03:00.0: Failed to qurey multi/single function mode: -1
Dec  4 18:15:28 fruster kernel: mlx4_core 0000:03:00.0: command 0x34 failed: fw status = 0x2
Dec  4 18:15:28 fruster kernel: mlx4_core 0000:03:00.0: failed to retrieve clp version : -1
Dec  4 18:15:28 fruster kernel: mlx4_en: Mellanox ConnectX HCA Ethernet driver v1.5.8.3 (June 2012)
Dec  4 18:15:28 fruster kernel: mlx4_en 0000:03:00.0: UDP RSS is not supported on this device.
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 3
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: irq 16, io mem 0xf0700000
Dec  4 18:15:28 fruster kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0002
Dec  4 18:15:28 fruster kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb3: Product: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb3: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 xhci_hcd
Dec  4 18:15:28 fruster kernel: usb usb3: SerialNumber: 0000:00:14.0
Dec  4 18:15:28 fruster kernel: usb usb3: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 3-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 3-0:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 4
Dec  4 18:15:28 fruster kernel: usb usb4: config 1 interface 0 altsetting 0 endpoint 0x81 has no SuperSpeed companion descriptor
Dec  4 18:15:28 fruster kernel: usb usb4: New USB device found, idVendor=1d6b, idProduct=0003
Dec  4 18:15:28 fruster kernel: usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb4: Product: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb4: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 xhci_hcd
Dec  4 18:15:28 fruster kernel: usb usb4: SerialNumber: 0000:00:14.0
Dec  4 18:15:28 fruster kernel: usb usb4: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 4-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 4-0:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: new USB bus registered, assigned bus number 5
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: irq 16, io mem 0xf0400000
Dec  4 18:15:28 fruster kernel: usb usb5: New USB device found, idVendor=1d6b, idProduct=0002
Dec  4 18:15:28 fruster kernel: usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb5: Product: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb5: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 xhci_hcd
Dec  4 18:15:28 fruster kernel: usb usb5: SerialNumber: 0000:05:00.0
Dec  4 18:15:28 fruster kernel: usb usb5: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 5-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 5-0:1.0: 1 port detected
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: new USB bus registered, assigned bus number 6
Dec  4 18:15:28 fruster kernel: usb usb6: config 1 interface 0 altsetting 0 endpoint 0x81 has no SuperSpeed companion descriptor
Dec  4 18:15:28 fruster kernel: usb usb6: New USB device found, idVendor=1d6b, idProduct=0003
Dec  4 18:15:28 fruster kernel: usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb usb6: Product: xHCI Host Controller
Dec  4 18:15:28 fruster kernel: usb usb6: Manufacturer: Linux 2.6.32-358.23.2.el6.x86_64 xhci_hcd
Dec  4 18:15:28 fruster kernel: usb usb6: SerialNumber: 0000:05:00.0
Dec  4 18:15:28 fruster kernel: usb usb6: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 6-0:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 6-0:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  4 18:15:28 fruster kernel: iTCO_vendor_support: vendor-support=0
Dec  4 18:15:28 fruster kernel: iTCO_wdt: Intel TCO WatchDog Timer Driver v1.07rh
Dec  4 18:15:28 fruster kernel: iTCO_wdt: Found a Panther Point TCO device (Version=2, TCOBASE=0x0460)
Dec  4 18:15:28 fruster kernel: iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Dec  4 18:15:28 fruster kernel: i801_smbus 0000:00:1f.3: PCI INT C -> GSI 18 (level, low) -> IRQ 18
Dec  4 18:15:28 fruster kernel: ACPI: resource 0000:00:1f.3 [io  0xf000-0xf01f] conflicts with ACPI region SMBI [io 0xf000-0xf00f]
Dec  4 18:15:28 fruster kernel: ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver
Dec  4 18:15:28 fruster kernel: microcode: CPU0 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU1 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU2 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU3 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU4 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU5 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU6 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: microcode: CPU7 sig=0x306a9, pf=0x2, revision=0x16
Dec  4 18:15:28 fruster kernel: platform microcode: firmware: requesting intel-ucode/06-3a-09
Dec  4 18:15:28 fruster kernel: Microcode Update Driver: v2.00 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
Dec  4 18:15:28 fruster kernel: e1000e: Intel(R) PRO/1000 Network Driver - 2.1.4-k
Dec  4 18:15:28 fruster kernel: e1000e: Copyright(c) 1999 - 2012 Intel Corporation.
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: Disabling ASPM L0s L1
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: eth0: (PCI Express:2.5GT/s:Width x1) 30:85:a9:a4:22:5a
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: eth0: Intel(R) PRO/1000 Network Connection
Dec  4 18:15:28 fruster kernel: e1000e 0000:06:00.0: eth0: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: Disabling ASPM L0s L1
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
Dec  4 18:15:28 fruster kernel: usb 5-1: new high speed USB device number 2 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 5-1: New USB device found, idVendor=2109, idProduct=0811
Dec  4 18:15:28 fruster kernel: usb 5-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 5-1: Product: USB2.0 Hub
Dec  4 18:15:28 fruster kernel: usb 5-1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: eth1: (PCI Express:2.5GT/s:Width x1) 30:85:a9:a4:22:5b
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: eth1: Intel(R) PRO/1000 Network Connection
Dec  4 18:15:28 fruster kernel: e1000e 0000:07:00.0: eth1: MAC: 3, PHY: 8, PBA No: FFFFFF-0FF
Dec  4 18:15:28 fruster kernel: hub 5-1:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 5-1:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: parport_pc 00:09: reported by Plug and Play ACPI
Dec  4 18:15:28 fruster kernel: parport0: PC-style at 0x378, irq 5 [PCSPP]
Dec  4 18:15:28 fruster kernel: ppdev: user-space parallel port driver
Dec  4 18:15:28 fruster kernel: usb 6-2: new SuperSpeed USB device number 2 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-2: New USB device found, idVendor=2109, idProduct=0812
Dec  4 18:15:28 fruster kernel: usb 6-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 6-2: Product: USB3.0 Hub        
Dec  4 18:15:28 fruster kernel: usb 6-2: Manufacturer: VIA Labs, Inc. 
Dec  4 18:15:28 fruster kernel: usb 6-2: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 6-2:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 6-2:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: usb 6-3: new SuperSpeed USB device number 3 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-3: New USB device found, idVendor=2109, idProduct=0810
Dec  4 18:15:28 fruster kernel: usb 6-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 6-3: Product: 4-Port USB 3.0 Hub
Dec  4 18:15:28 fruster kernel: usb 6-3: Manufacturer: VIA Labs, Inc.
Dec  4 18:15:28 fruster kernel: usb 6-3: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 6-3:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 6-3:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: usb 6-4: new SuperSpeed USB device number 4 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-4: New USB device found, idVendor=152d, idProduct=0539
Dec  4 18:15:28 fruster kernel: usb 6-4: New USB device strings: Mfr=10, Product=11, SerialNumber=5
Dec  4 18:15:28 fruster kernel: usb 6-4: Product: USB to ATA/ATAPI Bridge
Dec  4 18:15:28 fruster kernel: usb 6-4: Manufacturer: JMicron
Dec  4 18:15:28 fruster kernel: usb 6-4: SerialNumber: 000000000000
Dec  4 18:15:28 fruster kernel: usb 6-4: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: scsi8 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: usb 5-1.2: new high speed USB device number 3 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 5-1.2: New USB device found, idVendor=2109, idProduct=2812
Dec  4 18:15:28 fruster kernel: usb 5-1.2: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 5-1.2: Product: USB2.0 Hub        
Dec  4 18:15:28 fruster kernel: usb 5-1.2: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 5-1.2:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 5-1.2:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: usb 5-1.3: new high speed USB device number 4 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 5-1.3: New USB device found, idVendor=2109, idProduct=3431
Dec  4 18:15:28 fruster kernel: usb 5-1.3: New USB device strings: Mfr=0, Product=1, SerialNumber=0
Dec  4 18:15:28 fruster kernel: usb 5-1.3: Product: USB2.0 Hub
Dec  4 18:15:28 fruster kernel: usb 5-1.3: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: hub 5-1.3:1.0: USB hub found
Dec  4 18:15:28 fruster kernel: hub 5-1.3:1.0: 4 ports detected
Dec  4 18:15:28 fruster kernel: EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: 
Dec  4 18:15:28 fruster kernel: usb 6-3.1: new SuperSpeed USB device number 5 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-3.1: New USB device found, idVendor=174c, idProduct=5106
Dec  4 18:15:28 fruster kernel: usb 6-3.1: New USB device strings: Mfr=2, Product=3, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb 6-3.1: Product: AS2105
Dec  4 18:15:28 fruster kernel: usb 6-3.1: Manufacturer: ASMedia
Dec  4 18:15:28 fruster kernel: usb 6-3.1: SerialNumber:      WD-WMAZA5166120
Dec  4 18:15:28 fruster kernel: usb 6-3.1: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: scsi9 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
Dec  4 18:15:28 fruster kernel: SGI XFS Quota Management subsystem
Dec  4 18:15:28 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:15:28 fruster kernel: usb 6-3.2: new SuperSpeed USB device number 6 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-3.2: New USB device found, idVendor=174c, idProduct=5106
Dec  4 18:15:28 fruster kernel: usb 6-3.2: New USB device strings: Mfr=2, Product=3, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb 6-3.2: Product: AS2105
Dec  4 18:15:28 fruster kernel: usb 6-3.2: Manufacturer: ASMedia
Dec  4 18:15:28 fruster kernel: usb 6-3.2: SerialNumber:      WD-WMC302169428
Dec  4 18:15:28 fruster kernel: usb 6-3.2: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: scsi10 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: usb 6-3.3: new SuperSpeed USB device number 7 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-3.3: New USB device found, idVendor=174c, idProduct=5106
Dec  4 18:15:28 fruster kernel: usb 6-3.3: New USB device strings: Mfr=2, Product=3, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb 6-3.3: Product: AS2105
Dec  4 18:15:28 fruster kernel: usb 6-3.3: Manufacturer: ASMedia
Dec  4 18:15:28 fruster kernel: usb 6-3.3: SerialNumber:      WD-WMC1T1323221
Dec  4 18:15:28 fruster kernel: usb 6-3.3: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: scsi11 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:15:28 fruster kernel: usb 6-3.4: new SuperSpeed USB device number 8 using xhci_hcd
Dec  4 18:15:28 fruster kernel: usb 6-3.4: New USB device found, idVendor=174c, idProduct=5106
Dec  4 18:15:28 fruster kernel: usb 6-3.4: New USB device strings: Mfr=2, Product=3, SerialNumber=1
Dec  4 18:15:28 fruster kernel: usb 6-3.4: Product: AS2105
Dec  4 18:15:28 fruster kernel: usb 6-3.4: Manufacturer: ASMedia
Dec  4 18:15:28 fruster kernel: usb 6-3.4: SerialNumber:      WD-WMAZA5535631
Dec  4 18:15:28 fruster kernel: usb 6-3.4: configuration #1 chosen from 1 choice
Dec  4 18:15:28 fruster kernel: scsi12 : SCSI emulation for USB Mass Storage devices
Dec  4 18:15:28 fruster kernel: scsi 8:0:0:0: Direct-Access     WDC WD40 EFRX-68WT0N0     0X03 PQ: 0 ANSI: 6
Dec  4 18:15:28 fruster kernel: scsi 8:0:0:1: Direct-Access     WDC WD40 EFRX-68WT0N0     0X03 PQ: 0 ANSI: 6
Dec  4 18:15:28 fruster kernel: scsi 8:0:0:2: Direct-Access     WDC WD40 EFRX-68WT0N0     0X03 PQ: 0 ANSI: 6
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: Attached scsi generic sg4 type 0
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: Attached scsi generic sg5 type 0
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: Attached scsi generic sg6 type 0
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] 4096-byte physical blocks
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] 4096-byte physical blocks
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] 4096-byte physical blocks
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sde:
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdd:
Dec  4 18:15:28 fruster kernel: scsi 9:0:0:0: Direct-Access     WDC WD20 EARX-00PASB0     51.0 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: Attached scsi generic sg7 type 0
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdg:
Dec  4 18:15:28 fruster kernel: scsi 10:0:0:0: Direct-Access     WDC WD20 EZRX-00DC0B0     80.0 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: Attached scsi generic sg8 type 0
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdh:
Dec  4 18:15:28 fruster kernel: scsi 11:0:0:0: Direct-Access     WDC WD20 EZRX-00DC0B0     80.0 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: Attached scsi generic sg9 type 0
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdi:
Dec  4 18:15:28 fruster kernel: scsi 12:0:0:0: Direct-Access     WDC WD20 EARX-00PASB0     51.0 PQ: 0 ANSI: 5
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: Attached scsi generic sg10 type 0
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] Write Protect is off
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdj:
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sdf: sdg1
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 9:0:0:0: [sdg] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sdh1
Dec  4 18:15:28 fruster kernel: sdj1
Dec  4 18:15:28 fruster kernel: sdi1
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 10:0:0:0: [sdh] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 12:0:0:0: [sdj] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 11:0:0:0: [sdi] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: md: bind<sdj1>
Dec  4 18:15:28 fruster kernel: md: bind<sdh1>
Dec  4 18:15:28 fruster kernel: unknown partition table
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:0: [sdd] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sde1
Dec  4 18:15:28 fruster kernel: sdf1
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Very big device. Trying to use READ CAPACITY(16).
Dec  4 18:15:28 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:15:28 fruster kernel: 
Dec  4 18:15:28 fruster kernel: Pid: 1292, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:15:28 fruster kernel: Call Trace:
Dec  4 18:15:28 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:15:28 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:15:28 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:15:28 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:15:28 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:15:28 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:15:28 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:15:28 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:15:28 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:15:28 fruster kernel: EXT4-fs (sdc1): mounted filesystem with ordered data mode. Opts: 
Dec  4 18:15:28 fruster kernel: Adding 32767992k swap on /dev/mapper/vg_fruster-lv_swap.  Priority:-1 extents:1 across:32767992k SSD
Dec  4 18:15:28 fruster kernel: readahead-disable-service: delaying service auditd
Dec  4 18:15:28 fruster kernel: usb 6-3.3: reset SuperSpeed USB device number 7 using xhci_hcd
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88081c151cc0
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88081c151d08
Dec  4 18:15:28 fruster kernel: usb 6-3.1: reset SuperSpeed USB device number 5 using xhci_hcd
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88081f2fb500
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88081f2fb548
Dec  4 18:15:28 fruster kernel: md: bind<sdg1>
Dec  4 18:15:28 fruster kernel: md: bind<sdi1>
Dec  4 18:15:28 fruster kernel: usb 6-4: reset SuperSpeed USB device number 4 using xhci_hcd
Dec  4 18:15:28 fruster kernel: async_tx: api initialized (async)
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88079726f080
Dec  4 18:15:28 fruster kernel: xhci_hcd 0000:05:00.0: xHCI xhci_drop_endpoint called with disabled ep ffff88079726f0c8
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:2: [sdf] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Assuming drive cache: write through
Dec  4 18:15:28 fruster kernel: sd 8:0:0:1: [sde] Attached SCSI disk
Dec  4 18:15:28 fruster kernel: xor: automatically using best checksumming function: generic_sse
Dec  4 18:15:28 fruster kernel:   generic_sse: 14400.000 MB/sec
Dec  4 18:15:28 fruster kernel: xor: using function: generic_sse (14400.000 MB/sec)
Dec  4 18:15:28 fruster kernel: raid6: int64x1   3878 MB/s
Dec  4 18:15:28 fruster kernel: raid6: int64x2   4195 MB/s
Dec  4 18:15:28 fruster kernel: raid6: int64x4   3468 MB/s
Dec  4 18:15:28 fruster kernel: raid6: int64x8   2828 MB/s
Dec  4 18:15:28 fruster kernel: raid6: sse2x1    8906 MB/s
Dec  4 18:15:28 fruster kernel: raid6: sse2x2   10984 MB/s
Dec  4 18:15:28 fruster kernel: raid6: sse2x4   13015 MB/s
Dec  4 18:15:28 fruster kernel: raid6: using algorithm sse2x4 (13015 MB/s)
Dec  4 18:15:28 fruster kernel: raid6: using ssse3x2 recovery algorithm
Dec  4 18:15:28 fruster kernel: md: raid6 personality registered for level 6
Dec  4 18:15:28 fruster kernel: md: raid5 personality registered for level 5
Dec  4 18:15:28 fruster kernel: md: raid4 personality registered for level 4
Dec  4 18:15:28 fruster kernel: bio: create slab <bio-1> at 1
Dec  4 18:15:28 fruster kernel: md/raid:md0: device sdi1 operational as raid disk 1
Dec  4 18:15:28 fruster kernel: md/raid:md0: device sdg1 operational as raid disk 0
Dec  4 18:15:28 fruster kernel: md/raid:md0: device sdh1 operational as raid disk 3
Dec  4 18:15:28 fruster kernel: md/raid:md0: device sdj1 operational as raid disk 2
Dec  4 18:15:28 fruster kernel: md/raid:md0: allocated 4314kB
Dec  4 18:15:28 fruster kernel: md/raid:md0: raid level 5 active with 4 out of 4 devices, algorithm 2
Dec  4 18:15:28 fruster kernel: md0: detected capacity change from 0 to 6001163501568
Dec  4 18:15:28 fruster kernel: md0: unknown partition table
Dec  4 18:15:28 fruster kernel: md: bind<sdf1>
Dec  4 18:15:28 fruster kernel: md: bind<sde1>
Dec  4 18:15:28 fruster kernel: md/raid:md1: device sde1 operational as raid disk 1
Dec  4 18:15:28 fruster kernel: md/raid:md1: device sdf1 operational as raid disk 0
Dec  4 18:15:28 fruster kernel: md/raid:md1: allocated 2210kB
Dec  4 18:15:28 fruster kernel: md/raid:md1: raid level 5 active with 2 out of 2 devices, algorithm 2
Dec  4 18:15:28 fruster kernel: md1: detected capacity change from 0 to 4000776192000
Dec  4 18:15:28 fruster kernel: md1: unknown partition table
Dec  4 18:15:28 fruster kernel: mlx4_ib: Mellanox ConnectX InfiniBand driver v1.0-mlnx_ofed1.5.3 (November 3, 2011)
Dec  4 18:15:28 fruster kernel: NET: Registered protocol family 10
Dec  4 18:15:28 fruster kernel: lo: Disabled Privacy Extensions
Dec  4 18:15:28 fruster kernel: Default coalesing params for mtu:2044 - rx_frames:88 rx_usecs:16
Dec  4 18:15:28 fruster kernel: Default coalesing params for mtu:2044 - rx_frames:88 rx_usecs:16
Dec  4 18:15:28 fruster kernel: ib0: multicast join failed for ff12:401b:ffff:0000:0000:0000:ffff:ffff, status -22
Dec  4 18:15:28 fruster kernel: ib0: multicast join failed for ff12:401b:ffff:0000:0000:0000:ffff:ffff, status -22
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_UP): ib0: link is not ready
Dec  4 18:15:28 fruster kernel: ib0: multicast join failed for ff12:401b:ffff:0000:0000:0000:ffff:ffff, status -22
Dec  4 18:15:28 fruster kernel: ib0: enabling connected mode will cause multicast packet drops
Dec  4 18:15:28 fruster kernel: ib0: mtu > 2044 will cause multicast packet drops.
Dec  4 18:15:28 fruster kernel: ib0: mtu > 2044 will cause multicast packet drops.
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready
Dec  4 18:15:28 fruster kernel: ib1: enabling connected mode will cause multicast packet drops
Dec  4 18:15:28 fruster kernel: ib1: mtu > 2044 will cause multicast packet drops.
Dec  4 18:15:28 fruster kernel: ib1: mtu > 2044 will cause multicast packet drops.
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 18:15:28 fruster kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 18:15:28 fruster kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:15:28 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec  4 18:15:28 fruster cpuspeed: Enabling ondemand cpu frequency scaling governor
Dec  4 18:15:28 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:28 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:29 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:29 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:30 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:30 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:31 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:31 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:32 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:32 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:33 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:33 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:34 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:34 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:35 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:35 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:36 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:36 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:37 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:37 fruster nslcd[2167]: [8b4567] no available LDAP server found
Dec  4 18:15:37 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:37 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:38 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:38 fruster nslcd[2167]: [8b4567] no available LDAP server found, sleeping 1 seconds
Dec  4 18:15:39 fruster nslcd[2167]: [8b4567] failed to bind to LDAP server ldap://127.0.0.1/: Can't contact LDAP server: Transport endpoint is not connected
Dec  4 18:15:39 fruster nslcd[2167]: [8b4567] no available LDAP server found
Dec  4 18:15:39 fruster named[2244]: starting BIND 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.5 -u named
Dec  4 18:15:39 fruster named[2244]: built with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-libtool' '--localstatedir=/var' '--enable-threads' '--enable-ipv6' '--with-pic' '--disable-static' '--disable-openssl-version-check' '--with-dlz-ldap=yes' '--with-dlz-postgres=yes' '--with-dlz-mysql=yes' '--with-dlz-filesystem=yes' '--with-gssapi=yes' '--disable-isc-spnego' '--with-docbook-xsl=/usr/share/sgml/docbook/xsl-stylesheets' '--enable-fixed-rrset' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS= -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' 'CPPFLAGS= -DDIG_SIGCHASE'
Dec  4 18:15:39 fruster named[2244]: ----------------------------------------------------
Dec  4 18:15:39 fruster named[2244]: BIND 9 is maintained by Internet Systems Consortium,
Dec  4 18:15:39 fruster named[2244]: Inc. (ISC), a non-profit 501(c)(3) public-benefit 
Dec  4 18:15:39 fruster named[2244]: corporation.  Support and training for BIND 9 are 
Dec  4 18:15:39 fruster named[2244]: available at https://www.isc.org/support
Dec  4 18:15:39 fruster named[2244]: ----------------------------------------------------
Dec  4 18:15:39 fruster named[2244]: adjusted limit on open files from 4096 to 1048576
Dec  4 18:15:39 fruster named[2244]: found 8 CPUs, using 8 worker threads
Dec  4 18:15:39 fruster named[2244]: using up to 4096 sockets
Dec  4 18:15:39 fruster named[2244]: loading configuration from '/etc/named.conf'
Dec  4 18:15:39 fruster named[2244]: reading built-in trusted keys from file '/etc/named.iscdlv.key'
Dec  4 18:15:39 fruster named[2244]: using default UDP/IPv4 port range: [1024, 65535]
Dec  4 18:15:39 fruster named[2244]: using default UDP/IPv6 port range: [1024, 65535]
Dec  4 18:15:39 fruster named[2244]: listening on IPv4 interface lo, 127.0.0.1#53
Dec  4 18:15:39 fruster named[2244]: listening on IPv6 interface lo, ::1#53
Dec  4 18:15:39 fruster named[2244]: generating session key for dynamic DNS
Dec  4 18:15:39 fruster named[2244]: sizing zone task pool based on 10 zones
Dec  4 18:15:39 fruster named[2244]: using built-in DLV key for view _default
Dec  4 18:15:39 fruster named[2244]: set up managed keys zone for view _default, file '/var/named/dynamic/managed-keys.bind'
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 10.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 16.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 17.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 18.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 19.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 20.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 21.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 22.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 23.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 24.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 25.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 26.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 27.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 28.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 29.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 30.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 31.172.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 168.192.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 127.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 254.169.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 2.0.192.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 100.51.198.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 113.0.203.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 255.255.255.255.IN-ADDR.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: D.F.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 8.E.F.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 9.E.F.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: A.E.F.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: B.E.F.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: automatic empty zone: 8.B.D.0.1.0.0.2.IP6.ARPA
Dec  4 18:15:39 fruster named[2244]: command channel listening on 127.0.0.1#953
Dec  4 18:15:39 fruster named[2244]: command channel listening on ::1#953
Dec  4 18:15:39 fruster named[2244]: zone 0.in-addr.arpa/IN: loaded serial 0
Dec  4 18:15:39 fruster named[2244]: zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
Dec  4 18:15:39 fruster named[2244]: zone 0.168.192.IN-ADDR.ARPA/IN: loaded serial 2013041800
Dec  4 18:15:39 fruster named[2244]: zone 1.168.192.IN-ADDR.ARPA/IN: loaded serial 2013041800
Dec  4 18:15:39 fruster named[2244]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
Dec  4 18:15:39 fruster named[2244]: zone Stanford.EDU/IN: loaded serial 2013041800
Dec  4 18:15:39 fruster named[2244]: zone fruster/IN: loaded serial 2013071700
Dec  4 18:15:39 fruster named[2244]: zone localhost.localdomain/IN: loaded serial 0
Dec  4 18:15:39 fruster named[2244]: zone localhost/IN: loaded serial 0
Dec  4 18:15:39 fruster named[2244]: managed-keys-zone ./IN: loaded serial 11007
Dec  4 18:15:39 fruster named[2244]: running
Dec  4 18:15:39 fruster rpc.statd[2289]: Version 1.2.3 starting
Dec  4 18:15:39 fruster sm-notify[2290]: Version 1.2.3 starting
Dec  4 18:15:39 fruster OpenSM[2304]: 
Dec  4 18:15:39 fruster OpenSM[2304]:  Loading Cached Option:guid = 0x001635ffffbf9b62#012
Dec  4 18:15:39 fruster OpenSM[2304]:  Loading Cached Option:subnet_prefix = 0xfe80808000000062#012
Dec  4 18:15:39 fruster OpenSM[2306]: /var/log/opensm.log log file opened
Dec  4 18:15:39 fruster OpenSM[2306]: OpenSM 3.3.9.MLNX_20111006_e52d5fc#012
Dec  4 18:15:39 fruster OpenSM[2306]: Entering DISCOVERING state#012
Dec  4 18:15:39 fruster OpenSM[2306]: Entering MASTER state#012
Dec  4 18:15:39 fruster OpenSM[2306]: SUBNET UP#012
Dec  4 18:15:40 fruster kernel: RPC: Registered named UNIX socket transport module.
Dec  4 18:15:40 fruster kernel: RPC: Registered udp transport module.
Dec  4 18:15:40 fruster kernel: RPC: Registered tcp transport module.
Dec  4 18:15:40 fruster kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  4 18:15:41 fruster kdump: kexec: loaded kdump kernel
Dec  4 18:15:41 fruster kdump: started up
Dec  4 18:15:41 fruster nslcd[2167]: [7b23c6] no available LDAP server found
Dec  4 18:15:41 fruster nslcd[2167]: [7b23c6] no available LDAP server found
Dec  4 18:15:41 fruster nslcd[2167]: [3c9869] no available LDAP server found
Dec  4 18:15:41 fruster nslcd[2167]: [3c9869] no available LDAP server found
Dec  4 18:15:41 fruster nslcd[2167]: [334873] no available LDAP server found
Dec  4 18:15:41 fruster nslcd[2167]: [334873] no available LDAP server found
Dec  4 18:15:41 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:15:41 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:15:42 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:15:42 fruster kernel: 
Dec  4 18:15:42 fruster kernel: Pid: 2461, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:15:42 fruster kernel: Call Trace:
Dec  4 18:15:42 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:15:42 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:15:42 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:15:42 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:15:42 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:15:42 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:15:42 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:15:42 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:15:42 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:15:42 fruster acpid: starting up
Dec  4 18:15:42 fruster acpid: 1 rule loaded
Dec  4 18:15:42 fruster acpid: waiting for events: event logging is off
Dec  4 18:15:42 fruster nslcd[2167]: [b0dc51] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [b0dc51] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [495cff] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [495cff] no available LDAP server found
Dec  4 18:15:42 fruster acpid: client connected from 2536[68:68]
Dec  4 18:15:42 fruster acpid: 1 client rule loaded
Dec  4 18:15:42 fruster nslcd[2167]: [e8944a] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [e8944a] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [5558ec] no available LDAP server found
Dec  4 18:15:42 fruster nslcd[2167]: [5558ec] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [8e1f29] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [8e1f29] no available LDAP server found
Dec  4 18:15:43 fruster kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Dec  4 18:15:43 fruster rpc.mountd[2625]: Version 1.2.3 starting
Dec  4 18:15:43 fruster kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Dec  4 18:15:43 fruster kernel: NFSD: starting 90-second grace period
Dec  4 18:15:43 fruster nslcd[2167]: [e87ccd] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [e87ccd] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [1b58ba] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [1b58ba] no available LDAP server found
Dec  4 18:15:43 fruster nscd: 2944 cannot stat() file `/etc/netgroup': No such file or directory
Dec  4 18:15:43 fruster kernel: netlink: 12 bytes leftover after parsing attributes.
Dec  4 18:15:43 fruster snmpd[2963]: NET-SNMP version 5.5
Dec  4 18:15:43 fruster LSI MegaRAID SNMP Agent: Agent Ver 3.18.0.2 (Jan 21st, 2013) Started
Dec  4 18:15:43 fruster xinetd[2995]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
Dec  4 18:15:43 fruster xinetd[2995]: Started working: 0 available services
Dec  4 18:15:43 fruster ntpd[3002]: ntpd 4.2.4p8@1.1612-o Wed Nov 24 19:02:17 UTC 2010 (1)
Dec  4 18:15:43 fruster ntpd[3003]: precision = 0.103 usec
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #1 wildcard, ::#123 Disabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #2 lo, ::1#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #3 ib0, fe80::216:35ff:ffbf:9b61#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #4 eth1, fe80::3285:a9ff:fea4:225b#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #5 eth0, fe80::3285:a9ff:fea4:225a#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #6 lo, 127.0.0.1#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #7 eth0, 171.64.63.152#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #8 eth1, 192.168.0.3#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on interface #9 ib0, 192.168.1.1#123 Enabled
Dec  4 18:15:43 fruster ntpd[3003]: Listening on routing socket on fd #26 for interface updates
Dec  4 18:15:43 fruster ntpd[3003]: kernel time sync status 2040
Dec  4 18:15:43 fruster ntpd[3003]: frequency initialized -2.702 PPM from /var/lib/ntp/drift
Dec  4 18:15:43 fruster nslcd[2167]: [7ed7ab] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [7ed7ab] no available LDAP server found
Dec  4 18:15:43 fruster kernel: Fusion MPT base driver 3.04.20
Dec  4 18:15:43 fruster kernel: Copyright (c) 1999-2008 LSI Corporation
Dec  4 18:15:43 fruster kernel: Fusion MPT misc device (ioctl) driver 3.04.20
Dec  4 18:15:43 fruster kernel: mptctl: Registered with Fusion MPT base driver
Dec  4 18:15:43 fruster kernel: mptctl: /dev/mptctl @ (major,minor=10,220)
Dec  4 18:15:43 fruster kernel: mpt2sas version 13.101.00.00 loaded
Dec  4 18:15:43 fruster nslcd[2167]: [b141f2] no available LDAP server found
Dec  4 18:15:43 fruster nslcd[2167]: [b141f2] no available LDAP server found
Dec  4 18:15:44 fruster dhcpd: Internet Systems Consortium DHCP Server 4.1.1-P1
Dec  4 18:15:44 fruster dhcpd: Copyright 2004-2010 Internet Systems Consortium.
Dec  4 18:15:44 fruster dhcpd: All rights reserved.
Dec  4 18:15:44 fruster dhcpd: For info, please visit https://www.isc.org/software/dhcp/
Dec  4 18:15:44 fruster dhcpd: Not searching LDAP since ldap-server, ldap-port and ldap-base-dn were not specified in the config file
Dec  4 18:15:44 fruster dhcpd: Wrote 0 class decls to leases file.
Dec  4 18:15:44 fruster dhcpd: Wrote 0 deleted host decls to leases file.
Dec  4 18:15:44 fruster dhcpd: Wrote 0 new dynamic host decls to leases file.
Dec  4 18:15:44 fruster dhcpd: Wrote 10 leases to leases file.
Dec  4 18:15:44 fruster dhcpd: Listening on LPF/eth1/30:85:a9:a4:22:5b/eth1
Dec  4 18:15:44 fruster dhcpd: Sending on   LPF/eth1/30:85:a9:a4:22:5b/eth1
Dec  4 18:15:44 fruster dhcpd: Listening on LPF/ib0//ib0
Dec  4 18:15:44 fruster dhcpd: Sending on   LPF/ib0//ib0
Dec  4 18:15:44 fruster dhcpd: Sending on   Socket/fallback/fallback-net
Dec  4 18:15:45 fruster nslcd[2167]: [b71efb] no available LDAP server found
Dec  4 18:15:45 fruster nslcd[2167]: [e2a9e3] no available LDAP server found
Dec  4 18:15:45 fruster nslcd[2167]: [45e146] no available LDAP server found
Dec  4 18:15:45 fruster nslcd[2167]: [5f007c] no available LDAP server found
Dec  4 18:15:46 fruster nslcd[2167]: [d062c2] no available LDAP server found
Dec  4 18:15:46 fruster nslcd[2167]: [200854] no available LDAP server found
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/blade.pm  ...skipping
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/bmcconfig.pm  ...skipping
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/ipmi.pm  ...skipping
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/lsslp.pm  ...skipping
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/remoteimmsetup.pm  ...skipping
Dec  4 18:15:47 fruster xCAT[3387]: Error loading module /opt/xcat/lib/perl/xCAT_plugin/slpdiscover.pm  ...skipping
Dec  4 18:16:59 fruster nslcd[2167]: [b127f8] ldap_result() timed out
Dec  4 18:17:04 fruster nslcd[2167]: [16231b] ldap_result() timed out
Dec  4 18:17:09 fruster nslcd[2167]: [b127f8] ldap_result() timed out
Dec  4 18:17:14 fruster nslcd[2167]: [16231b] ldap_result() timed out
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON042> Controller ID:  0   Shutdown command received from host 
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON000> Controller ID:  0   Firmware initialization started:  #012    ( PCI ID   0x79/ 0x1000/ 0x9260    / 0x1000)
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON001> Controller ID:  0   Image version:   2.130.393-2551
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON141> Controller ID:  0   Battery Present
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON261> Controller ID:  0  Package version  #012    12.14.0-0167
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON266> Controller ID:  0  Board Revision:   61A
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON164> Controller ID:  0   SES enclosure discovered:  #012    1
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON167> Controller ID:  0   Communication restored on enclosure:  #012    1
Dec  4 18:17:15 fruster MR_MONITOR[3503]: <MRMON170> Controller ID:  0   Fan removed on enclosure:   1  Fan#012      1
Dec  4 18:17:25 fruster kernel: readahead-collector: starting delayed service auditd
Dec  4 18:17:25 fruster auditd[3575]: Started dispatcher: /sbin/audispd pid: 3577
Dec  4 18:17:25 fruster audispd: No plugins found, exiting
Dec  4 18:17:25 fruster auditd[3575]: Init complete, auditd 2.2 listening for events (startup state enable)
Dec  4 18:17:25 fruster kernel: readahead-collector: sorting
Dec  4 18:17:25 fruster kernel: readahead-collector: finished
Dec  4 18:18:01 fruster kernel: coretemp coretemp.0: TjMax is 105 C.
Dec  4 18:18:01 fruster kernel: coretemp coretemp.0: TjMax is 105 C.
Dec  4 18:18:01 fruster kernel: coretemp coretemp.0: TjMax is 105 C.
Dec  4 18:18:01 fruster kernel: coretemp coretemp.0: TjMax is 105 C.
Dec  4 18:18:07 fruster kernel: e1000e: eth0 NIC Link is Down
Dec  4 18:18:07 fruster kernel: e1000e 0000:06:00.0: eth0: Reset adapter
Dec  4 18:18:07 fruster kernel: e1000e: eth1 NIC Link is Down
Dec  4 18:18:07 fruster kernel: e1000e 0000:07:00.0: eth1: Reset adapter
Dec  4 18:18:10 fruster kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:18:11 fruster kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:18:13 fruster slapd[2588]: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Credentials cache file '/var/run/openldap/slapd-proxy.tgt' not found)
Dec  4 18:18:13 fruster slapd[2588]: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Credentials cache file '/var/run/openldap/slapd-proxy.tgt' not found)
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON168> Controller ID:  0   Fan failed on enclosure:   1  Fan#012      1#012Event ID:168
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON170> Controller ID:  0   Fan removed on enclosure:   1  Fan#012      2
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON168> Controller ID:  0   Fan failed on enclosure:   1  Fan#012      2#012Event ID:168
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON170> Controller ID:  0   Fan removed on enclosure:   1  Fan#012      3
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON168> Controller ID:  0   Fan failed on enclosure:   1  Fan#012      3#012Event ID:168
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON173> Controller ID:  0   Power supply removed on enclosure:   1 #012    Power Supply   1
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON173> Controller ID:  0   Power supply removed on enclosure:   1 #012    Power Supply   2
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    15
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Enclosure  Device Id:   15
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:0
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   8
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:1
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   9
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:2
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   10
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:3
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   11
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:5
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   12
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:6
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   13
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:7
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   14
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:4
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   16
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:11
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   17
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:10
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   18
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:9
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   19
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:8
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   20
Dec  4 18:18:16 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.40:15003]
Dec  4 18:18:16 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node20:15003
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:15
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   21
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:14
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   22
Dec  4 18:18:16 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.37:15003]
Dec  4 18:18:16 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node17:15003
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:13
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   23
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON091> Controller ID:  0   PD inserted:  #012    Port 0 - 3:1:12
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON247> Controller ID:  0  Device inserted   Device Type:#012      Disk  Device Id:   24
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON044> Controller ID:  0   Time established since power on:   Time   2013-12-05, 02:13:31      39  Seconds
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON149> Controller ID:  0   Battery temperature is normal
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON450> Controller ID:  0  Periodic Battery Relearn was missed and Rescheduled   to :   2013-12-05, 06:55:32      -1  Seconds
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON160> Controller ID:  0   Battery relearn will start in 5 hours
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON389> Controller ID:  0  Host driver is loaded and operational
Dec  4 18:18:16 fruster MR_MONITOR[3503]: <MRMON044> Controller ID:  0   Time established since power on:   Time   2013-12-04, 18:17:45      292  Seconds
Dec  4 18:18:19 fruster dhcpd: receive_packet failed on eth1: Network is down
Dec  4 18:18:19 fruster dhcpd: receive_packet failed on ib0: Network is down
Dec  4 18:18:19 fruster kernel: lo: Disabled Privacy Extensions
Dec  4 18:18:19 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.25:15003]
Dec  4 18:18:19 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node05:15003
Dec  4 18:18:19 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.30:15003]
Dec  4 18:18:19 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node10:15003
Dec  4 18:18:19 fruster kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #3 ib0, fe80::216:35ff:ffbf:9b61#123, interface stats: received=0, sent=0, dropped=0, active_time=157 secs
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #4 eth1, fe80::3285:a9ff:fea4:225b#123, interface stats: received=0, sent=0, dropped=0, active_time=157 secs
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #5 eth0, fe80::3285:a9ff:fea4:225a#123, interface stats: received=0, sent=0, dropped=0, active_time=157 secs
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #7 eth0, 171.64.63.152#123, interface stats: received=0, sent=0, dropped=0, active_time=157 secs
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #8 eth1, 192.168.0.3#123, interface stats: received=1, sent=1, dropped=0, active_time=157 secs
Dec  4 18:18:20 fruster ntpd[3003]: Deleting interface #9 ib0, 192.168.1.1#123, interface stats: received=0, sent=0, dropped=0, active_time=157 secs
Dec  4 18:18:22 fruster kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:18:22 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 18:18:23 fruster kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 18:18:25 fruster ntpd[3003]: Listening on interface #10 eth0, fe80::3285:a9ff:fea4:225a#123 Enabled
Dec  4 18:18:25 fruster ntpd[3003]: Listening on interface #11 eth0, 171.64.63.152#123 Enabled
Dec  4 18:18:28 fruster kernel: ADDRCONF(NETDEV_UP): ib0: link is not ready
Dec  4 18:18:28 fruster kernel: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready
Dec  4 18:18:28 fruster kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:18:28 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec  4 18:18:31 fruster ntpd[3003]: Listening on interface #12 ib0, fe80::216:35ff:ffbf:9b61#123 Enabled
Dec  4 18:18:31 fruster ntpd[3003]: Listening on interface #13 eth1, fe80::3285:a9ff:fea4:225b#123 Enabled
Dec  4 18:18:31 fruster ntpd[3003]: Listening on interface #14 eth1, 192.168.0.3#123 Enabled
Dec  4 18:18:32 fruster /etc/sysconfig/network-scripts/ifup-eth: Device ib1 has different MAC address than expected, ignoring.
Dec  4 18:18:33 fruster ntpd[3003]: Listening on interface #15 ib0, 192.168.1.1#123 Enabled
Dec  4 18:18:59 fruster ntpd[3003]: synchronized to LOCAL(0), stratum 10
Dec  4 18:18:59 fruster ntpd[3003]: kernel time sync status change 2001
Dec  4 18:19:06 fruster dhcpd: receive_packet failed on eth1: Network is down
Dec  4 18:19:06 fruster dhcpd: receive_packet failed on ib0: Network is down
Dec  4 18:19:06 fruster kernel: lo: Disabled Privacy Extensions
Dec  4 18:19:06 fruster kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #10 eth0, fe80::3285:a9ff:fea4:225a#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #11 eth0, 171.64.63.152#123, interface stats: received=0, sent=0, dropped=0, active_time=42 secs
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #12 ib0, fe80::216:35ff:ffbf:9b61#123, interface stats: received=0, sent=0, dropped=0, active_time=36 secs
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #13 eth1, fe80::3285:a9ff:fea4:225b#123, interface stats: received=0, sent=0, dropped=0, active_time=36 secs
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #14 eth1, 192.168.0.3#123, interface stats: received=1, sent=1, dropped=0, active_time=36 secs
Dec  4 18:19:07 fruster ntpd[3003]: Deleting interface #15 ib0, 192.168.1.1#123, interface stats: received=0, sent=0, dropped=0, active_time=34 secs
Dec  4 18:19:09 fruster kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:19:09 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.25:15003]
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Inappropriate ioctl for device (25) in tcp_connect_sockaddr, cannot connect to port -1 in socket_connect_addr - errno:9 Bad file descriptor
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node05:15003
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.37:15003]
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Inappropriate ioctl for device (25) in tcp_connect_sockaddr, cannot connect to port -1 in socket_connect_addr - errno:9 Bad file descriptor
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node17:15003
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.40:15003]
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Inappropriate ioctl for device (25) in tcp_connect_sockaddr, cannot connect to port -1 in socket_connect_addr - errno:9 Bad file descriptor
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node20:15003
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.30:15003]
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::Inappropriate ioctl for device (25) in tcp_connect_sockaddr, cannot connect to port -1 in socket_connect_addr - errno:9 Bad file descriptor
Dec  4 18:19:10 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node10:15003
Dec  4 18:19:11 fruster kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 18:19:12 fruster ntpd[3003]: Listening on interface #16 eth0, fe80::3285:a9ff:fea4:225a#123 Enabled
Dec  4 18:19:12 fruster ntpd[3003]: Listening on interface #17 eth0, 171.64.63.152#123 Enabled
Dec  4 18:19:14 fruster kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Dec  4 18:19:14 fruster kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec  4 18:19:15 fruster kernel: ADDRCONF(NETDEV_UP): ib0: link is not ready
Dec  4 18:19:15 fruster kernel: ADDRCONF(NETDEV_CHANGE): ib0: link becomes ready
Dec  4 18:19:18 fruster ntpd[3003]: Listening on interface #18 ib0, fe80::216:35ff:ffbf:9b61#123 Enabled
Dec  4 18:19:18 fruster ntpd[3003]: Listening on interface #19 eth1, fe80::3285:a9ff:fea4:225b#123 Enabled
Dec  4 18:19:18 fruster ntpd[3003]: Listening on interface #20 eth1, 192.168.0.3#123 Enabled
Dec  4 18:19:19 fruster kernel: ADDRCONF(NETDEV_UP): ib1: link is not ready
Dec  4 18:19:19 fruster kernel: ADDRCONF(NETDEV_CHANGE): ib1: link becomes ready
Dec  4 18:19:21 fruster ntpd[3003]: Listening on interface #21 ib1, fe80::216:35ff:ffbf:9b62#123 Enabled
Dec  4 18:19:21 fruster ntpd[3003]: Listening on interface #22 ib0, 192.168.1.1#123 Enabled
Dec  4 18:19:24 fruster ntpd[3003]: Listening on interface #23 ib1, 192.168.2.1#123 Enabled
Dec  4 18:20:14 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:20:14 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:20:15 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:20:15 fruster kernel: 
Dec  4 18:20:15 fruster kernel: Pid: 5131, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:20:15 fruster kernel: Call Trace:
Dec  4 18:20:15 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:20:15 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:20:15 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:20:15 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:20:15 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:20:15 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:20:15 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:20:15 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:20:15 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.30:15003]
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node10:15003
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.25:15003]
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node05:15003
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.37:15003]
Dec  4 18:22:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node17:15003
Dec  4 18:22:09 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.40:15003]
Dec  4 18:22:09 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node20:15003
Dec  4 18:23:52 fruster MR_MONITOR[3503]: <MRMON066> Controller ID:  0   Consistency Check started on VD:  #012    0
Dec  4 18:25:01 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:25:01 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.25:15003]
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node05:15003
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.30:15003]
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node10:15003
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.37:15003]
Dec  4 18:25:06 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node17:15003
Dec  4 18:25:09 fruster PBS_Server: LOG_ERROR::Bad file descriptor (9) in tcp_connect_sockaddr, Failed when trying to open tcp connection - connect() failed [rc = -1] [addr = 192.168.0.40:15003]
Dec  4 18:25:09 fruster PBS_Server: LOG_ERROR::send_hierarchy, Could not send mom hierarchy to host node20:15003
Dec  4 18:25:11 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:25:11 fruster kernel: 
Dec  4 18:25:11 fruster kernel: Pid: 5381, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:25:11 fruster kernel: Call Trace:
Dec  4 18:25:11 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:25:11 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:25:11 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:25:11 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:25:11 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:25:11 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:25:11 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:25:11 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:25:11 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:25:41 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:25:42 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:25:45 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:25:45 fruster kernel: 
Dec  4 18:25:45 fruster kernel: Pid: 5435, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:25:45 fruster kernel: Call Trace:
Dec  4 18:25:45 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:25:45 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:25:45 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:25:45 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:25:45 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:25:45 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:25:45 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:25:45 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:25:45 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:26:21 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:26:21 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:26:25 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:26:25 fruster kernel: 
Dec  4 18:26:25 fruster kernel: Pid: 5459, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:26:25 fruster kernel: Call Trace:
Dec  4 18:26:25 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:26:25 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:26:25 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:26:25 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:26:25 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:26:25 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:26:25 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:26:25 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:26:25 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:26:33 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:26:33 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
Dec  4 18:26:36 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
Dec  4 18:26:36 fruster kernel: 
Dec  4 18:26:36 fruster kernel: Pid: 5491, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
Dec  4 18:26:36 fruster kernel: Call Trace:
Dec  4 18:26:36 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
Dec  4 18:26:36 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
Dec  4 18:26:36 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
Dec  4 18:26:36 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
Dec  4 18:26:36 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
Dec  4 18:26:36 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
Dec  4 18:26:36 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Dec  4 18:26:36 fruster kernel: XFS (sda1): Failed to recover EFIs
Dec  4 18:26:36 fruster kernel: XFS (sda1): log mount finish failed
Dec  4 18:30:19 fruster kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: 
Dec  4 18:31:30 fruster dhcpd: DHCPREQUEST for 192.168.0.27 from 00:1b:78:30:c9:5e via eth1
Dec  4 18:31:30 fruster dhcpd: DHCPACK on 192.168.0.27 to 00:1b:78:30:c9:5e via eth1
Dec  4 18:41:05 fruster dhcpd: DHCPREQUEST for 192.168.0.35 from 00:1c:c4:c2:24:b4 via eth1
Dec  4 18:41:05 fruster dhcpd: DHCPACK on 192.168.0.35 to 00:1c:c4:c2:24:b4 via eth1
Dec  4 18:42:24 fruster kernel: XFS (sda1): Mounting Filesystem
Dec  4 18:42:24 fruster kernel: XFS (sda1): Ending clean mount
Dec  4 18:46:41 fruster dhcpd: DHCPREQUEST for 192.168.0.31 from 00:1b:78:31:79:7a via eth1
Dec  4 18:46:41 fruster dhcpd: DHCPACK on 192.168.0.31 to 00:1b:78:31:79:7a via eth1
Dec  4 18:48:54 fruster dhcpd: DHCPREQUEST for 192.168.0.36 from 00:1b:78:e1:2c:18 via eth1
Dec  4 18:48:54 fruster dhcpd: DHCPACK on 192.168.0.36 to 00:1b:78:e1:2c:18 via eth1 

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-05  2:55 Sudden File System Corruption Mike Dacre
@ 2013-12-05  3:40 ` Dave Chinner
  2013-12-05  3:46   ` Mike Dacre
  2013-12-05  8:10 ` Stan Hoeppner
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 25+ messages in thread
From: Dave Chinner @ 2013-12-05  3:40 UTC (permalink / raw)
  To: Mike Dacre; +Cc: xfs

On Wed, Dec 04, 2013 at 06:55:05PM -0800, Mike Dacre wrote:
> Hi Folks,
> 
> Apologies if this is the wrong place to post or if this has been answered
> already.
> 
> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an XFS
> filesystem and has been online for over a year.  It is accessed by 23
> different machines connected via Infiniband over NFS v3.  I haven't had any
> major problems yet, one drive failed but it was easily replaced.
> 
> However, today the drive suddenly stopped responding and started returning
> IO errors when any requests were made.  This happened while it was being
accessed by 5 different users, one was doing a very large rm operation (rm
*sh on thousands of files in a directory).  Also, about 30 minutes before
> we had connected the globus connect endpoint to allow easy file transfers
> to SDSC.

So, you had a drive die and at roughly the same time XFS started
reporting corruption problems and shut down? Chances are that the
drive returned garbage to XFS before it died completely, and that's what
XFS detected and shut down on. If you are unlucky in this situation,
the corruption can get propagated into the log by changes that are
adjacent to the corrupted region, and then you have problems with log
recovery failing because the corruption gets replayed....

> I have attached the complete log from the time it died until now.
> 
> In the end, I successfully repaired the filesystem with `xfs_repair -L
> /dev/sda1`.  However, I am nervous that some files may have been corrupted.
> 
> Do any of you have any idea what could have caused this problem?

When corruption appears at roughly the same time a drive dies, it's
almost always caused by the drive that failed. RAID doesn't prevent
disks from returning crap to the OS because nobody configures the
arrays to do read-verify cycles that would catch such a condition.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: Sudden File System Corruption
  2013-12-05  3:40 ` Dave Chinner
@ 2013-12-05  3:46   ` Mike Dacre
  2013-12-05  3:59     ` Dave Chinner
  0 siblings, 1 reply; 25+ messages in thread
From: Mike Dacre @ 2013-12-05  3:46 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs


[-- Attachment #1.1: Type: text/plain, Size: 2510 bytes --]

Hi Dave, 

My apologies, I completely miscommunicated.  The drive dying was unrelated, it happened two months ago. I mentioned it only as background info, but I realize now that was stupid. There were no drive or RAID problems at all at the time the XFS mount died today. The drives are all fine and the RAID log shows nothing significant. 

Thanks, 

Mike 


-------- Original Message --------
From: Dave Chinner <david@fromorbit.com>
Sent: Wed Dec 04 19:40:34 PST 2013
To: Mike Dacre <mike.dacre@gmail.com>
Cc: xfs@oss.sgi.com
Subject: Re: Sudden File System Corruption

On Wed, Dec 04, 2013 at 06:55:05PM -0800, Mike Dacre wrote:
> Hi Folks,
> 
> Apologies if this is the wrong place to post or if this has been answered
> already.
> 
> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an XFS
> filesystem and has been online for over a year.  It is accessed by 23
> different machines connected via Infiniband over NFS v3.  I haven't had any
> major problems yet, one drive failed but it was easily replaced.
> 
> However, today the drive suddenly stopped responding and started returning
> IO errors when any requests were made.  This happened while it was being
> accessed by  5 different users, one was doing a very large rm operation (rm
> *sh on thousands of files in a directory).  Also, about 30 minutes before
> we had connected the globus connect endpoint to allow easy file transfers
> to SDSC.

So, you had a drive die and at roughly the same time XFS started
reporting corruption problems and shut down? Chances are that the
drive returned garbage to XFS before it died completely, and that's what
XFS detected and shut down on. If you are unlucky in this situation,
the corruption can get propagated into the log by changes that are
adjacent to the corrupted region, and then you have problems with log
recovery failing because the corruption gets replayed....

> I have attached the complete log from the time it died until now.
> 
> In the end, I successfully repaired the filesystem with `xfs_repair -L
> /dev/sda1`.  However, I am nervous that some files may have been corrupted.
> 
> Do any of you have any idea what could have caused this problem?

When corruption appears at roughly the same time a drive dies, it's
almost always caused by the drive that failed. RAID doesn't prevent
disks from returning crap to the OS because nobody configures the
arrays to do read-verify cycles that would catch such a condition.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

[-- Attachment #1.2: Type: text/html, Size: 3200 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]



* Re: Sudden File System Corruption
  2013-12-05  3:46   ` Mike Dacre
@ 2013-12-05  3:59     ` Dave Chinner
  0 siblings, 0 replies; 25+ messages in thread
From: Dave Chinner @ 2013-12-05  3:59 UTC (permalink / raw)
  To: Mike Dacre; +Cc: xfs

On Wed, Dec 04, 2013 at 07:46:06PM -0800, Mike Dacre wrote:
> Hi Dave, 
> 
> My apologies, I completely miscommunicated.  The drive dying was
> unrelated, it happened two months ago. I mentioned it only as
> background info, but I realize now that was stupid. There were no
> drive or RAID problems at all at the time the XFS mount died
> today. The drives are all fine and the RAID log shows nothing
> significant. 

Still could be significant. Do you run periodic media scrubs on that
raid array?
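(For reference, a minimal sketch of kicking off such a scrub on an LSI MegaRAID controller. The MegaCLI install path is an assumption for the host in question, and the command is only printed here rather than executed:)

```shell
# Assumption: MegaCLI lives at the usual install path -- adjust for the
# actual host.  -LDCC starts a consistency check ("scrub") on all logical
# drives of all adapters.  We only print the command in this sketch.
megacli=/opt/MegaRAID/MegaCli/MegaCli64
echo "$megacli -LDCC -Start -LALL -aALL"
```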

And if it's not significant, then there's nothing that I can suggest
that would have caused the problem. For all we know about the state
of the system at the time the problem occurred, it could have been
caused by a cosmic ray flipping a bit somewhere in the IO path. i.e.
trying to diagnose a failure like this without any other errors
showing and no corrupt filesystem image we can examine is no better
than trying to guess where the needle is in a haystack...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com



* Re: Sudden File System Corruption
  2013-12-05  2:55 Sudden File System Corruption Mike Dacre
  2013-12-05  3:40 ` Dave Chinner
@ 2013-12-05  8:10 ` Stan Hoeppner
       [not found]   ` <CAPd9ww9hsOFK6pxqRY-YtLLAkkJHCuSi1BaM4n9=2XTjNVAn2Q@mail.gmail.com>
  2013-12-05 17:40 ` Ben Myers
  2013-12-09 19:04 ` Eric Sandeen
  3 siblings, 1 reply; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-05  8:10 UTC (permalink / raw)
  To: Mike Dacre, xfs

On 12/4/2013 8:55 PM, Mike Dacre wrote:
...
> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an XFS.

It's a 9260-4i, not a 9240, a huge difference.  I went digging through
your dmesg output because I knew the 9240 doesn't support RAID6.  A few
questions.  What is the LSI RAID configuration?

1.  Level -- confirm RAID6
2.  Strip size?  (eg 512KB)
3.  Stripe size? (eg 7168KB, 14*512)
4.  BBU module?
5.  Is write cache enabled?

What is the XFS geometry?

6.  xfs_info /dev/sda

A combination of these being wrong could very well be part of your
problems.

...
> IO errors when any requests were made.  This happened while it was being

I didn't see any IO errors in your dmesg output.  None.

> accessed by  5 different users, one was doing a very large rm operation (rm
> *sh on thousands of files in a directory).  Also, about 30 minutes before
> we had connected the globus connect endpoint to allow easy file transfers
> to SDSC.

With delaylog enabled, which I believe it is in RHEL/CentOS 6, a single
big rm shouldn't kill the disks.  But with the combination of other
workloads it seems you may have been seeking the disks to death.

...
> In the end, I successfully repaired the filesystem with `xfs_repair -L
> /dev/sda1`.  However, I am nervous that some files may have been corrupted.

I'm sure your users will let you know.  I'd definitely have a look in
the directory that was targeted by the big rm operation which apparently
didn't finish when XFS shutdown.

> Do any of you have any idea what could have caused this problem?

Yes.  A few things.  The first is this, and it's a big one:

Dec  4 18:15:28 fruster kernel: io scheduler noop registered
Dec  4 18:15:28 fruster kernel: io scheduler anticipatory registered
Dec  4 18:15:28 fruster kernel: io scheduler deadline registered
Dec  4 18:15:28 fruster kernel: io scheduler cfq registered (default)

http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E

"As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
of the parallelization in XFS."

*Never* use the CFQ elevator with XFS, and never with a high performance
storage system.  In fact, IMHO, never use CFQ period.  It was horrible
even before 3.2.12.  It is certain that CFQ is playing a big part in
your 120s timeouts, though it may not be solely responsible for your IO
bottleneck.  Switch to deadline or noop immediately, deadline if LSI
write cache is disabled, noop if it is enabled.  Execute this manually
now, and add it to a startup script and verify it is being set at
startup, as it's not permanent:

echo deadline > /sys/block/sda/queue/scheduler

This one simple command line may help pretty dramatically, immediately,
assuming your hardware array parameters aren't horribly wrong for your
workloads, and your XFS alignment correctly matches the hardware geometry.
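(For reference, the currently active elevator is the bracketed entry in that
sysfs file. A minimal parse, using a sample string so the sketch does not
depend on this machine's /sys/block/sda actually existing:)

```shell
# The scheduler file lists every registered elevator; the active one is
# shown in brackets.  A sample string stands in for the sysfs contents.
line='noop anticipatory deadline [cfq]'
active=$(printf '%s\n' "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "$active"    # cfq
```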

-- 
Stan







* Fwd: Sudden File System Corruption
       [not found]   ` <CAPd9ww9hsOFK6pxqRY-YtLLAkkJHCuSi1BaM4n9=2XTjNVAn2Q@mail.gmail.com>
@ 2013-12-05 15:58     ` Mike Dacre
  2013-12-06  8:58       ` Stan Hoeppner
  0 siblings, 1 reply; 25+ messages in thread
From: Mike Dacre @ 2013-12-05 15:58 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 5330 bytes --]

Hi Stan,


On Thu, Dec 5, 2013 at 12:10 AM, Stan Hoeppner <stan@hardwarefreak.com>wrote:

> On 12/4/2013 8:55 PM, Mike Dacre wrote:
> ...
> > I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an
> XFS.
>
> It's a 9260-4i, not a 9240, a huge difference.  I went digging through
> your dmesg output because I knew the 9240 doesn't support RAID6.  A few
> questions.  What is the LSI RAID configuration?
>

You are right, sorry.  9260-4i

1.  Level -- confirm RAID6
>
Definitely RAID6

2.  Strip size?  (eg 512KB)
>
64KB

3.  Stripe size? (eg 7168KB, 14*512)
>
Not sure how to get this
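(For what it's worth, the stripe size can be derived rather than queried: it
is the strip size times the number of data disks, and RAID6 reserves two
disks' worth of capacity for parity. A minimal sketch with this array's
numbers from the thread:)

```shell
# Stripe size = strip size x data disks.  RAID6 uses 2 parity disks,
# so a 16-drive array has 14 data disks.
strip_kb=64
drives=16
data_disks=$((drives - 2))
stripe_kb=$((strip_kb * data_disks))
echo "${stripe_kb}KB"    # 896KB
```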

4.  BBU module?
>
Yes. iBBU, state optimal, 97% charged.

5.  Is write cache enabled?
>
Yes: Cached IO and Write Back with BBU are enabled.

I have also attached an adapter summary (megaraid_adp_info.txt) and a
virtual and physical drive summary (megaraid_drive_info.txt).


> What is the XFS geometry?
>
> 6.  xfs_info /dev/sda
>

`xfs_info /dev/sda1`
meta-data=/dev/sda1              isize=256    agcount=26, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=6835404288, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
This is also attached as xfs_info.txt
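(Note the sunit=0 / swidth=0 above: the filesystem was created without
stripe-alignment hints. Re-creating with alignment would be destructive, so
the sketch below only prints the command; the su/sw values assume the 64KB
strip and 14 data disks discussed in this thread:)

```shell
# DESTRUCTIVE if actually run on a live array -- this sketch only prints
# the command.  su = controller strip size, sw = data disks (16 - 2 for
# RAID6).
su_kb=64
sw=$((16 - 2))
echo "mkfs.xfs -d su=${su_kb}k,sw=${sw} /dev/sda1"
```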


> A combination of these being wrong could very well be part of your
> problems.
>
> ...
> > IO errors when any requests were made.  This happened while it was being
>
> I didn't see any IO errors in your dmesg output.  None.
>
Good point.  These happened while trying to ls.  I am not sure why I can't
find them in the log; they printed out to the console as 'Input/Output'
errors, simply stating that the ls command failed.


> > accessed by  5 different users, one was doing a very large rm operation
> (rm
> > *sh on thousands of files in a directory).  Also, about 30 minutes before
> > we had connected the globus connect endpoint to allow easy file transfers
> > to SDSC.
>
> With delaylog enabled, which I believe it is in RHEL/CentOS 6, a single
> big rm shouldn't kill the disks.  But with the combination of other
> workloads it seems you may have been seeking the disks to death.
>
That is possible; workloads can get really high sometimes.  I am not sure
how to control that without significantly impacting performance - I want a
single user to be able to use 98% IO capacity sometimes... but other times
I want the load to be split amongst many users.  Also, each user can
execute jobs simultaneously on 23 different computers, each accessing the
same drive via NFS.  This is a great system most of the time, but sometimes
the workloads on the drive get really high.

...
> > In the end, I successfully repaired the filesystem with `xfs_repair -L
> > /dev/sda1`.  However, I am nervous that some files may have been
> corrupted.
>
> I'm sure your users will let you know.  I'd definitely have a look in
> the directory that was targeted by the big rm operation which apparently
> didn't finish when XFS shutdown.
>
> > Do any of you have any idea what could have caused this problem?
>
> Yes.  A few things.  The first is this, and it's a big one:
>
> Dec  4 18:15:28 fruster kernel: io scheduler noop registered
> Dec  4 18:15:28 fruster kernel: io scheduler anticipatory registered
> Dec  4 18:15:28 fruster kernel: io scheduler deadline registered
> Dec  4 18:15:28 fruster kernel: io scheduler cfq registered (default)
>
>
> http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
>
> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
> of the parallelization in XFS."
>
> *Never* use the CFQ elevator with XFS, and never with a high performance
> storage system.  In fact, IMHO, never use CFQ period.  It was horrible
> even before 3.2.12.  It is certain that CFQ is playing a big part in
> your 120s timeouts, though it may not be solely responsible for your IO
> bottleneck.  Switch to deadline or noop immediately, deadline if LSI
> write cache is disabled, noop if it is enabled.  Execute this manually
> now, and add it to a startup script and verify it is being set at
> startup, as it's not permanent:
>
> echo deadline > /sys/block/sda/queue/scheduler
>
Wow, this is huge; I can't believe I missed that.  I have switched it to
noop now as we use write caching.  I have been trying to figure out for a
while why I would keep getting timeouts when the NFS load was high.  If you
have any other suggestions for how I can improve performance, I would
greatly appreciate it.
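For reference, a sketch of making the switch stick across reboots on EL6
(assuming the array really is sda - the device name and paths here should
be checked against the local system before use):

```shell
# Runtime change -- takes effect immediately but is lost on reboot:
echo noop > /sys/block/sda/queue/scheduler

# Verify; the active scheduler is shown in brackets, e.g. "[noop] deadline cfq":
cat /sys/block/sda/queue/scheduler

# To persist on EL6, either re-run the echo line above from /etc/rc.local,
# or set it for all block devices on the kernel line in /boot/grub/grub.conf:
#   kernel /vmlinuz-2.6.32-... ro root=... elevator=noop
```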


> This one simple command line may help pretty dramatically, immediately,
> assuming your hardware array parameters aren't horribly wrong for your
> workloads, and your XFS alignment correctly matches the hardware geometry.
>
Great, thanks.  Our workloads vary considerably as we are a biology
research lab; sometimes we do lots of seeks, other times we are almost
maxing out read or write speed with massively parallel processes all
accessing the disk at the same time.


> --
> Stan
>
>
>
-Mike

[-- Attachment #1.2: Type: text/html, Size: 9252 bytes --]

[-- Attachment #2: megaraid_adp_info.txt --]
[-- Type: text/plain, Size: 9926 bytes --]

Adapter #0

==============================================================================
                    Versions
                ================
Product Name    : LSI MegaRAID SAS 9260-4i
Serial No       : SV14821972
FW Package Build: 12.14.0-0167

                    Mfg. Data
                ================
Mfg. Date       : 11/24/11
Rework Date     : 00/00/00
Revision No     : 61A
Battery FRU     : N/A

                Image Versions in Flash:
                ================
FW Version         : 2.130.393-2551
BIOS Version       : 3.28.00_4.14.05.00_0x05270000
Preboot CLI Version: 04.04-020:#%00009
WebBIOS Version    : 6.0-52-e_48-Rel
NVDATA Version     : 2.09.03-0045
Boot Block Version : 2.02.00.00-0000
BOOT Version       : 09.250.01.219

                Pending Images in Flash
                ================
None

                PCI Info
                ================
Controller Id	: 0000
Vendor Id       : 1000
Device Id       : 0079
SubVendorId     : 1000
SubDeviceId     : 9260

Host Interface  : PCIE

ChipRevision    : B4

Link Speed 	     : 0 
Number of Frontend Port: 0 
Device Interface  : PCIE

Number of Backend Port: 4 
Port  :  Address
0        500304800129497f 
1        0000000000000000 
2        0000000000000000 
3        0000000000000000 

                HW Configuration
                ================
SAS Address      : 500605b004137820
BBU              : Present
Alarm            : Present
NVRAM            : Present
Serial Debugger  : Present
Memory           : Present
Flash            : Present
Memory Size      : 512MB
TPM              : Absent
On board Expander: Absent
Upgrade Key      : Absent
Temperature sensor for ROC    : Absent
Temperature sensor for controller    : Absent


                Settings
                ================
Current Time                     : 7:21:54 12/5, 2013
Predictive Fail Poll Interval    : 300sec
Interrupt Throttle Active Count  : 16
Interrupt Throttle Completion    : 50us
Rebuild Rate                     : 30%
PR Rate                          : 30%
BGI Rate                         : 30%
Check Consistency Rate           : 30%
Reconstruction Rate              : 30%
Cache Flush Interval             : 4s
Max Drives to Spinup at One Time : 4
Delay Among Spinup Groups        : 2s
Physical Drive Coercion Mode     : Disabled
Cluster Mode                     : Disabled
Alarm                            : Enabled
Auto Rebuild                     : Enabled
Battery Warning                  : Enabled
Ecc Bucket Size                  : 15
Ecc Bucket Leak Rate             : 1440 Minutes
Restore HotSpare on Insertion    : Disabled
Expose Enclosure Devices         : Enabled
Maintain PD Fail History         : Enabled
Host Request Reordering          : Enabled
Auto Detect BackPlane Enabled    : SGPIO/i2c SEP
Load Balance Mode                : Auto
Use FDE Only                     : No
Security Key Assigned            : No
Security Key Failed              : No
Security Key Not Backedup        : No
Default LD PowerSave Policy      : Controller Defined
Maximum number of direct attached drives to spin up in 1 min : 120 
Auto Enhanced Import             : Yes
Any Offline VD Cache Preserved   : No
Allow Boot with Preserved Cache  : No
Disable Online Controller Reset  : No
PFK in NVRAM                     : No
Use disk activity for locate     : No
POST delay			 : 90 seconds
BIOS Error Handling          	 : Stop On Errors
Current Boot Mode 		  :Normal
                Capabilities
                ================
RAID Level Supported             : RAID0, RAID1, RAID5, RAID6, RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span
Supported Drives                 : SAS, SATA

Allowed Mixing:

Mix in Enclosure Allowed
Mix of SAS/SATA of HDD type in VD Allowed
Mix of SAS/SATA of SSD type in VD Allowed
Mix of SSD/HDD in VD Allowed

                Status
                ================
ECC Bucket Count                 : 0

                Limitations
                ================
Max Arms Per VD          : 32 
Max Spans Per VD         : 8 
Max Arrays               : 128 
Max Number of VDs        : 64 
Max Parallel Commands    : 1008 
Max SGE Count            : 80 
Max Data Transfer Size   : 8192 sectors 
Max Strips PerIO         : 42 
Max LD per array         : 16 
Min Strip Size           : 8 KB
Max Strip Size           : 1.0 MB
Max Configurable CacheCade Size: 0 GB
Current Size of CacheCade      : 0 GB
Current Size of FW Cache       : 346 MB

                Device Present
                ================
Virtual Drives    : 1 
  Degraded        : 0 
  Offline         : 0 
Physical Devices  : 18 
  Disks           : 16 
  Critical Disks  : 0 
  Failed Disks    : 0 

                Supported Adapter Operations
                ================
Rebuild Rate                    : Yes
CC Rate                         : Yes
BGI Rate                        : Yes
Reconstruct Rate                : Yes
Patrol Read Rate                : Yes
Alarm Control                   : Yes
Cluster Support                 : No
BBU                             : Yes
Spanning                        : Yes
Dedicated Hot Spare             : Yes
Revertible Hot Spares           : Yes
Foreign Config Import           : Yes
Self Diagnostic                 : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares               : Yes
Deny SCSI Passthrough           : No
Deny SMP Passthrough            : No
Deny STP Passthrough            : No
Support Security                : No
Snapshot Enabled                : No
Support the OCE without adding drives : Yes
Support PFK                     : Yes
Support PI                      : No
Support Boot Time PFK Change    : No
Disable Online PFK Change       : No
PFK TrailTime Remaining         : 0 days 0 hours
Support Shield State            : No
Block SSD Write Disk Cache Change: No
Support Online FW Update	: Yes

                Supported VD Operations
                ================
Read Policy          : Yes
Write Policy         : Yes
IO Policy            : Yes
Access Policy        : Yes
Disk Cache Policy    : Yes
Reconstruction       : Yes
Deny Locate          : No
Deny CC              : No
Allow Ctrl Encryption: No
Enable LDBBM         : Yes
Support Breakmirror  : No
Power Savings        : No

                Supported PD Operations
                ================
Force Online                            : Yes
Force Offline                           : Yes
Force Rebuild                           : Yes
Deny Force Failed                       : No
Deny Force Good/Bad                     : No
Deny Missing Replace                    : No
Deny Clear                              : No
Deny Locate                             : No
Support Temperature                     : Yes
Disable Copyback                        : No
Enable JBOD                             : No
Enable Copyback on SMART                : No
Enable Copyback to SSD on SMART Error   : Yes
Enable SSD Patrol Read                  : No
PR Correct Unconfigured Areas           : Yes
Enable Spin Down of UnConfigured Drives : Yes
Disable Spin Down of hot spares         : No
Spin Down time                          : 30 
T10 Power State                         : No
                Error Counters
                ================
Memory Correctable Errors   : 0 
Memory Uncorrectable Errors : 0 

                Cluster Information
                ================
Cluster Permitted     : No
Cluster Active        : No

                Default Settings
                ================
Phy Polarity                     : 0 
Phy PolaritySplit                : 0 
Background Rate                  : 30 
Strip Size                       : 256kB
Flush Time                       : 4 seconds
Write Policy                     : WB
Read Policy                      : RA
Cache When BBU Bad               : Disabled
Cached IO                        : No
SMART Mode                       : Mode 6
Alarm Disable                    : Yes
Coercion Mode                    : None
ZCR Config                       : Unknown
Dirty LED Shows Drive Activity   : No
BIOS Continue on Error           : 3 
Spin Down Mode                   : None
Allowed Device Type              : SAS/SATA Mix
Allow Mix in Enclosure           : Yes
Allow HDD SAS/SATA Mix in VD     : Yes
Allow SSD SAS/SATA Mix in VD     : Yes
Allow HDD/SSD Mix in VD          : Yes
Allow SATA in Cluster            : No
Max Chained Enclosures           : 16 
Disable Ctrl-R                   : Yes
Enable Web BIOS                  : Yes
Direct PD Mapping                : No
BIOS Enumerate VDs               : Yes
Restore Hot Spare on Insertion   : No
Expose Enclosure Devices         : Yes
Maintain PD Fail History         : Yes
Disable Puncturing               : No
Zero Based Enclosure Enumeration : No
PreBoot CLI Enabled              : Yes
LED Show Drive Activity          : Yes
Cluster Disable                  : Yes
SAS Disable                      : No
Auto Detect BackPlane Enable     : SGPIO/i2c SEP
Use FDE Only                     : No
Enable Led Header                : Yes
Delay during POST                : 0 
EnableCrashDump                  : No
Disable Online Controller Reset  : No
EnableLDBBM                      : Yes
Un-Certified Hard Disk Drives    : Allow
Treat Single span R1E as R10     : No
Max LD per array                 : 16
Power Saving option              : Don't Auto spin down Configured Drives
Max power savings option is  not allowed for LDs. Only T10 power conditions are to be used.
Default spin down time in minutes: 30 
Enable JBOD                      : No
TTY Log In Flash                 : No
Auto Enhanced Import             : Yes
BreakMirror RAID Support         : Yes
Disable Join Mirror              : No
Enable Shield State              : No
Time taken to detect CME         : 60s

Exit Code: 0x00

[-- Attachment #3: megaraid_drive_info.txt --]
[-- Type: text/plain, Size: 7735 bytes --]

System
	Operating System:  Linux version 2.6.32-358.23.2.el6.x86_64 
	Driver Version: 06.504.01.00-rh1
	CLI Version: 8.07.07

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-6, Secondary-0, RAID Level Qualifier-3
Size                : 25.463 TB
Sector Size         : 512
Is VD emulated      : No
Parity Size         : 3.637 TB
State               : Optimal
Strip Size          : 64 KB
Number Of Drives    : 16
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAhead, Cached, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy   : Enabled
Encryption Type     : None
Bad Blocks Exist: No
Is VD Cached: No



Exit Code: 0x00

Hardware
        Controller
                 ProductName       : LSI MegaRAID SAS 9260-4i(Bus 0, Dev 0)
                 SAS Address       : 500605b004137820
                 FW Package Version: 12.14.0-0167
                 Status            : Optimal
        BBU
                 BBU Type          : iBBU
                 Status            : Healthy
        Enclosure
                 Product Id        : SAS2X28         
                 Type              : SES
                 Status            : OK

                 Product Id        : SGPIO           
                 Type              : SGPIO
                 Status            : OK

        PD 
                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 0 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 1 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 2 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 3 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 5 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 6 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 7 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 4 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 11 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 10 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 9 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 8 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 15 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 14 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 13 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

                Connector          : Port 0 - 3<Internal><Encl Pos 1 >: Slot 12 
                Vendor Id          : ATA     
                Product Id         : WDC WD2002FAEX-0
                State              : Online
                Disk Type          : SATA,Hard Disk Device
                Capacity           : 1.818 TB
                Power State        : Active

Storage

       Virtual Drives
                Virtual drive      : Target Id 0 ,VD name 
                Size               : 25.463 TB
                State              : Optimal
                RAID Level         : 6 


Exit Code: 0x00

[-- Attachment #4: xfs_info.txt --]
[-- Type: text/plain, Size: 537 bytes --]

meta-data=/dev/sda1              isize=256    agcount=26, agsize=268435455 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=6835404288, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[-- Attachment #5: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-05  2:55 Sudden File System Corruption Mike Dacre
  2013-12-05  3:40 ` Dave Chinner
  2013-12-05  8:10 ` Stan Hoeppner
@ 2013-12-05 17:40 ` Ben Myers
       [not found]   ` <20131205175053.GG1935@sgi.com>
  2013-12-09 19:04 ` Eric Sandeen
  3 siblings, 1 reply; 25+ messages in thread
From: Ben Myers @ 2013-12-05 17:40 UTC (permalink / raw)
  To: Mike Dacre; +Cc: xfs

Hi Mike,

On Wed, Dec 04, 2013 at 06:55:05PM -0800, Mike Dacre wrote:
> Apologies if this is the wrong place to post or if this has been answered
> already.
> 
> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an XFS
> filesystem and has been online for over a year.  It is accessed by 23
> different machines connected via Infiniband over NFS v3.  I haven't had any
> major problems yet, one drive failed but it was easily replaced.
> 
> However, today the drive suddenly stopped responding and started returning
> IO errors when any requests were made.  This happened while it was being
> accessed by  5 different users, one was doing a very large rm operation (rm
> *sh on thousands of files in a directory).  Also, about 30 minutes before
> we had connected the globus connect endpoint to allow easy file transfers
> to SDSC.
> 
> I rebooted the machine which hosts it and checked the RAID6 logs, no
> physical problems with the drives at all.  I tried to mount and got the
> following error:
> 
> XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file
> fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
> mount: Structure needs cleaning
> 
> I ran xfs_check and got the following message:
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
> 
> 
> I checked the log and found the following message:
> 
> Dec  4 18:26:33 fruster kernel: XFS (sda1): Mounting Filesystem
> Dec  4 18:26:33 fruster kernel: XFS (sda1): Starting recovery (logdev:
> internal)
> Dec  4 18:26:36 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO
> at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
> Dec  4 18:26:36 fruster kernel:
> Dec  4 18:26:36 fruster kernel: Pid: 5491, comm: mount Not tainted
> 2.6.32-358.23.2.el6.x86_64 #1
> Dec  4 18:26:36 fruster kernel: Call Trace:
> Dec  4 18:26:36 fruster kernel: [<ffffffffa045b0ef>] ?
> xfs_error_report+0x3f/0x50 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ?
> xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0430c2b>] ?
> xfs_free_ag_extent+0x58b/0x750 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ?
> xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046de2d>] ?
> xlog_recover_process_efi+0x1bd/0x200 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa04796ea>] ?
> xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046ded2>] ?
> xlog_recover_process_efis+0x62/0xc0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0471f34>] ?
> xlog_recover_finish+0x24/0xd0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046a3ac>] ?
> xfs_log_mount_finish+0x2c/0x30 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0475a61>] ?
> xfs_mountfs+0x421/0x6a0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048d6f4>] ?
> xfs_fs_fill_super+0x224/0x2e0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffff811847ce>] ?
> get_sb_bdev+0x18e/0x1d0
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048d4d0>] ?
> xfs_fs_fill_super+0x0/0x2e0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048b5b8>] ?
> xfs_fs_get_sb+0x18/0x20 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffff81183c1b>] ?
> vfs_kern_mount+0x7b/0x1b0
> Dec  4 18:26:36 fruster kernel: [<ffffffff81183dc2>] ?
> do_kern_mount+0x52/0x130
> Dec  4 18:26:36 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
> Dec  4 18:26:36 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
> Dec  4 18:26:36 fruster kernel: [<ffffffff8100b072>] ?
> system_call_fastpath+0x16/0x1b
> Dec  4 18:26:36 fruster kernel: XFS (sda1): Failed to recover EFIs
> Dec  4 18:26:36 fruster kernel: XFS (sda1): log mount finish failed
> 
> 
> I went back and looked at the log from around the time the drive died and
> found this message:
> Dec  4 17:58:16 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO
> at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1


> Dec  4 17:58:16 fruster kernel:
> Dec  4 17:58:16 fruster kernel: Pid: 4548, comm: nfsd Not tainted
> 2.6.32-358.23.2.el6.x86_64 #1
> Dec  4 17:58:16 fruster kernel: Call Trace:
> Dec  4 17:58:16 fruster kernel: [<ffffffffa045b0ef>] ?
> xfs_error_report+0x3f/0x50 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ?
> xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0430c2b>] ?
> xfs_free_ag_extent+0x58b/0x750 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ?
> xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa043c89d>] ?
> xfs_bmap_finish+0x15d/0x1a0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa04626ff>] ?
> xfs_itruncate_finish+0x15f/0x320 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa047e370>] ?
> xfs_inactive+0x330/0x480 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa04793f4>] ?
> _xfs_trans_commit+0x214/0x2a0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa048b9a0>] ?
> xfs_fs_clear_inode+0xa0/0xd0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119d31c>] ?
> clear_inode+0xac/0x140
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119dad6>] ?
> generic_delete_inode+0x196/0x1d0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119db75>] ?
> generic_drop_inode+0x65/0x80
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119c9c2>] ? iput+0x62/0x70
> Dec  4 17:58:16 fruster kernel: [<ffffffff81199610>] ?
> dentry_iput+0x90/0x100
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119c278>] ? d_delete+0xe8/0xf0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8118fe99>] ? vfs_unlink+0xd9/0xf0
> Dec  4 17:58:16 fruster kernel: [<ffffffffa071cf4f>] ?
> nfsd_unlink+0x1af/0x250 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0723f03>] ?
> nfsd3_proc_remove+0x83/0x120 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa071543e>] ?
> nfsd_dispatch+0xfe/0x240 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa068e624>] ?
> svc_process_common+0x344/0x640 [sunrpc]
> Dec  4 17:58:16 fruster kernel: [<ffffffff81063990>] ?
> default_wake_function+0x0/0x20
> Dec  4 17:58:16 fruster kernel: [<ffffffffa068ec60>] ?
> svc_process+0x110/0x160 [sunrpc]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0715b62>] ? nfsd+0xc2/0x160
> [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffff81096a36>] ? kthread+0x96/0xa0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0ca>] ? child_rip+0xa/0x20
> Dec  4 17:58:16 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
> Dec  4 17:58:16 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x8)
> called from line 3863 of file fs/xfs/xfs_bmap.c.  Return address =
> 0xffffffffa043c8d6
> Dec  4 17:58:16 fruster kernel: XFS (sda1): Corruption of in-memory data
> detected.  Shutting down filesystem
> Dec  4 17:58:16 fruster kernel: XFS (sda1): Please umount the filesystem
> and rectify the problem(s)
> Dec  4 17:58:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:58:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:59:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:59:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:00:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:00:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:01:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:01:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x1)
> called from line 1061 of file fs/xfs/linux-2.6/xfs_buf.c.  Return address =
> 0xffffffffa04856e3
> Dec  4 18:02:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> 
> 
> I have attached the complete log from the time it died until now.
> 
> In the end, I successfully repaired the filesystem with `xfs_repair -L
> /dev/sda1`.  However, I am nervous that some files may have been corrupted.
> 
> Do any of you have any idea what could have caused this problem?

1456 STATIC int                      /* error */
1457 xfs_free_ag_extent(
1458         xfs_trans_t     *tp,    /* transaction pointer */
1459         xfs_buf_t       *agbp,  /* buffer for a.g. freelist header */
1460         xfs_agnumber_t  agno,   /* allocation group number */
1461         xfs_agblock_t   bno,    /* starting block number */
1462         xfs_extlen_t    len,    /* length of extent */
1463         int             isfl)   /* set if is freelist blocks - no sb acctg */
1464 {
1465         xfs_btree_cur_t *bno_cur;       /* cursor for by-block btree */
1466         xfs_btree_cur_t *cnt_cur;       /* cursor for by-size btree */
1467         int             error;          /* error return value */
1468         xfs_agblock_t   gtbno;          /* start of right neighbor block */
1469         xfs_extlen_t    gtlen;          /* length of right neighbor block */
1470         int             haveleft;       /* have a left neighbor block */
1471         int             haveright;      /* have a right neighbor block */
1472         int             i;              /* temp, result code */
1473         xfs_agblock_t   ltbno;          /* start of left neighbor block */
1474         xfs_extlen_t    ltlen;          /* length of left neighbor block */
1475         xfs_mount_t     *mp;            /* mount point struct for filesystem */
1476         xfs_agblock_t   nbno;           /* new starting block of freespace */
1477         xfs_extlen_t    nlen;           /* new length of freespace */
1478         xfs_perag_t     *pag;           /* per allocation group data */
1479 
1480         mp = tp->t_mountp;
1481         /*
1482          * Allocate and initialize a cursor for the by-block btree.
1483          */
1484         bno_cur = xfs_allocbt_init_cursor(mp, tp, agbp, agno, XFS_BTNUM_BNO);
1485         cnt_cur = NULL;
1486         /*
1487          * Look for a neighboring block on the left (lower block numbers)
1488          * that is contiguous with this space.
1489          */
1490         if ((error = xfs_alloc_lookup_le(bno_cur, bno, len, &haveleft)))
1491                 goto error0;
1492         if (haveleft) {
1493                 /*
1494                  * There is a block to our left.
1495                  */
1496                 if ((error = xfs_alloc_get_rec(bno_cur, &ltbno, &ltlen, &i)))
1497                         goto error0;
1498                 XFS_WANT_CORRUPTED_GOTO(i == 1, error0);
1499                 /*
1500                  * It's not contiguous, though.
1501                  */
1502                 if (ltbno + ltlen < bno)
1503                         haveleft = 0;
1504                 else {
1505                         /*
1506                          * If this failure happens the request to free this
1507                          * space was invalid, it's (partly) already free.
1508                          * Very bad.
1509                          */
1510                         XFS_WANT_CORRUPTED_GOTO(ltbno + ltlen <= bno, error0);
1511                 }
1512         }

@ 1510 the extent list in one of the files that was being deleted included a
block that was already in the by-block-number freespace btree.  Unfortunately
repair may have removed all of the evidence.  It's one of those deals where the
corruption would have actually happened a while ago and we don't find out until
later.

Recently we found a bug in repair where it doesn't fix certain kinds of
corruption.  Here are the strings to look for in your xfs_repair output:

"fork in ino ... claims dup extent"
"fork in ino ... claims free block"
"fork in inode ... claims used block"

If you run repair again and see those messages you still have the corruption.

If you do still have the corruption it would be very helpful to grab a
metadump.  Then if you restart rm and get 'lucky' and hit it again, a logprint
would be useful too.
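For reference, a minimal sketch of capturing that metadump (device name and
mount point taken from this thread; the filesystem has to be unmounted, or at
least mounted read-only, for a consistent image):

```shell
# Metadata-only image for the developers -- it contains no file data,
# and filenames are obfuscated by default (pass -o to keep them readable).
umount /science
xfs_metadump -g /dev/sda1 /tmp/science.metadump   # -g: show dump progress
mount /science
```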

The fix for repair is here if you need it:
http://oss.sgi.com/archives/xfs/2013-12/msg00109.html

This is the same symptom that we're currently discussing in another thread:
http://oss.sgi.com/archives/xfs/2013-12/msg00108.html

It's too early to assert that this is what you have, but it might make some
interesting reading.  Kind of a crazy coincidence.

Thanks,
Ben

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Fwd: Sudden File System Corruption
  2013-12-05 15:58     ` Fwd: " Mike Dacre
@ 2013-12-06  8:58       ` Stan Hoeppner
       [not found]         ` <CAPd9ww8+W2VX2HAfxEkVN5mL1a_+=HDAStf1126WSE33Vb=VsQ@mail.gmail.com>
  0 siblings, 1 reply; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-06  8:58 UTC (permalink / raw)
  To: Mike Dacre, xfs

On 12/5/2013 9:58 AM, Mike Dacre wrote:

> On Thu, Dec 5, 2013 at 12:10 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>> On 12/4/2013 8:55 PM, Mike Dacre wrote:
>> ...
>
> Definitely RAID6
> 
> 2.  Strip size?  (eg 512KB)
>>
> 64KB

Ok, so 64*14 = 896KB stripe.  This seems pretty sane for a 14 spindle
parity array and mixed workloads.

> 4.  BBU module?
>>
> Yes. iBBU, state optimal, 97% charged.
> 
> 5.  Is write cache enabled?
>>
>> Yes: Cached IO and Write Back with BBU are enabled.

I should have pointed you to this earlier:
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

but we've got most of it already.  We don't have your fstab mount
options.  Please provide that.

...
> This is also attached as xfs_info.txt

You're not aligning XFS to the RAID geometry (unless you're overriding
in fstab).  Not aligning is fine for small (<896KB) file allocations but
less than optimal for large streaming writes.  But it isn't a factor in
the problems you reported.
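For what it's worth, a sketch of how alignment would be expressed for this
geometry (64KB strip x 14 data spindles); the device and mount point are the
ones from this thread, and the filesystem would normally be created this way
rather than retrofitted:

```shell
# At mkfs time on a new filesystem: stripe unit and width in data disks.
mkfs.xfs -d su=64k,sw=14 /dev/sdX1

# Or as a mount-time override on an existing filesystem, in units of
# 512-byte sectors: sunit = 64KB / 512 = 128, swidth = 14 * 128 = 1792.
mount -o sunit=128,swidth=1792 /dev/sda1 /science
```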

...
>> Good point.  These happened while trying to ls.  I am not sure why I can't
> find them in the log, they printed out to the console as 'Input/Output'
> errors, simply stating that the ls command failed.

We look for SCSI IO errors preceding an XFS error as a causal indicator.
 I didn't see that here.  You could have run into the bug Ben described
earlier.  I can't really speak to the console errors.

>> With delaylog enabled, which I believe it is in RHEL/CentOS 6, a single
>> big rm shouldn't kill the disks.  But with the combination of other
>> workloads it seems you may have been seeking the disks to death.
>>
> That is possible, workloads can get really high sometimes.  I am not sure
> how to control that without significantly impacting performance - I want a
> single user to be able to use 98% IO capacity sometimes... but other times
> I want the load to be split amongst many users.  

You can't control the seeking at the disks.  You can only schedule
workloads together that don't compete for seeks.  And if you have one
metadata or random read/write heavy workload, with this SATA RAID6
array, it will need exclusive access for the duration of execution, or
the portion that does all the random IO.  Otherwise other workloads
running concurrently will crawl while competing for seek bandwidth.

> Also, each user can
> execute jobs simultaneously on 23 different computers, each accessing the
> same drive via NFS.  This is a great system most of the time, but sometimes
> the workloads on the drive get really high.

So it's a small compute cluster using NFS over Infiniband for shared
file access to a low performance RAID6 array.  The IO resource sharing
is automatic.  But AFAIK there's no easy way to enforce IO quotas on
users or processes, if at all.  You may simply not have sufficient IO to
go around.  Let's ponder that.

Looking at the math, you currently have approximately 14*150=2100
seeks/sec capability with 14x 7.2k RPM data spindles.  That's less than
100 seeks/sec per compute node, i.e. each node is getting about two-thirds of
the performance of a single SATA disk from this array.  This simply
isn't sufficient for servicing a 23 node cluster, unless all workloads
are compute bound, and none IO/seek bound.  Given the overload/crash
that brought you to our attention, I'd say some of your workloads are
obviously IO/seek bound.  I'd say you probably need more/faster disks.
Or you need to identify which jobs are IO/seek heavy and schedule them
so they're not running concurrently.
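The arithmetic above, spelled out (the per-disk figure is the rough estimate
from this thread for a 7.2k RPM SATA drive, not a measurement):

```shell
SPINDLES=14        # data disks in the RAID6 (16 minus 2 parity)
SEEKS_PER_DISK=150 # rough random-seek capability of one 7.2k SATA disk
NODES=23           # compute nodes sharing the array over NFS

TOTAL=$((SPINDLES * SEEKS_PER_DISK))   # 2100 seeks/sec for the whole array
PER_NODE=$((TOTAL / NODES))            # ~91 seeks/sec fair share per node
echo "array: ${TOTAL} seeks/s  per-node share: ${PER_NODE} seeks/s"
```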

...
>> http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
>>
>> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
>> of the parallelization in XFS."
...
>> echo deadline > /sys/block/sda/queue/scheduler
>>
> Wow, this is huge, I can't believe I missed that.  I have switched it to
> noop now as we use write caching.  I have been trying to figure out for a
> while why I would keep getting timeouts when the NFS load was high.  If you
> have any other suggestions for how I can improve performance, I would
> greatly appreciate it.

This may not fix NFS timeouts entirely but it should help.  If the NFS
operations are seeking the disks to death you may still see timeouts.
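One caveat with the echo approach: it doesn't survive a reboot.  A sketch of
making it stick on RHEL/CentOS 6 (device name from this thread):

```shell
# Either set it globally on the kernel command line in
# /boot/grub/grub.conf (append to the "kernel" line):
#     elevator=deadline
# ...or re-apply per device at boot, e.g. from /etc/rc.local:
echo noop > /sys/block/sda/queue/scheduler

# Verify -- the active scheduler is shown in square brackets:
cat /sys/block/sda/queue/scheduler
```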

>> This one simple command line may help pretty dramatically, immediately,
>> assuming your hardware array parameters aren't horribly wrong for your
>> workloads, and your XFS alignment correctly matches the hardware geometry.
>>
> Great, thanks.  Our workloads vary considerably as we are a biology
> research lab, sometimes we do lots of seeks, other times we are almost
> maxing out read or write speed with massively parallel processes all
> accessing the disk at the same time.

Do you use munin or something similar?  Sample output:
http://demo.munin-monitoring.org/munin-monitoring.org/demo.munin-monitoring.org/index.html#disk

Project page:
http://munin-monitoring.org/

It also has an NFS module and many others.  The storage oriented metrics
may be very helpful to you.  You would install munin-node on the NFS
server and all compute nodes, and munin on a collector/web server.  This
will allow you to cross reference client and server NFS loads.  You can
then cross reference the time in your PBS logs to see which users were
running which jobs when IO spikes occur on the NFS server.  You'll know
exactly which workloads, or combination thereof, are causing IO spikes.

-- 
Stan


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Fwd: Fwd: Sudden File System Corruption
       [not found]         ` <CAPd9ww8+W2VX2HAfxEkVN5mL1a_+=HDAStf1126WSE33Vb=VsQ@mail.gmail.com>
@ 2013-12-06 23:15           ` Mike Dacre
  2013-12-07 11:12           ` Stan Hoeppner
  1 sibling, 0 replies; 25+ messages in thread
From: Mike Dacre @ 2013-12-06 23:15 UTC (permalink / raw)
  To: xfs


[-- Attachment #1.1: Type: text/plain, Size: 8442 bytes --]

---------- Forwarded message ----------
From: Mike Dacre <mike.dacre@gmail.com>
Date: Fri, Dec 6, 2013 at 2:14 PM
Subject: Re: Fwd: Sudden File System Corruption
To: stan@hardwarefreak.com





On Fri, Dec 6, 2013 at 12:58 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:

> On 12/5/2013 9:58 AM, Mike Dacre wrote:
>
> > On Thu, Dec 5, 2013 at 12:10 AM, Stan Hoeppner <stan@hardwarefreak.com>
> > wrote:
> >> On 12/4/2013 8:55 PM, Mike Dacre wrote:
> >> ...
> >
> > Definitely RAID6
> >
> > 2.  Strip size?  (eg 512KB)
> >>
> > 64KB
>
> Ok, so 64*14 = 896KB stripe.  This seems pretty sane for a 14 spindle
> parity array and mixed workloads.
>
> > 4.  BBU module?
> >>
> > Yes. iBBU, state optimal, 97% charged.
> >
> > 5.  Is write cache enabled?
> >>
> >> Yes: Cached IO and Write Back with BBU are enabled.
>
> I should have pointed you to this earlier:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
> but we've got most of it already.  We don't have your fstab mount
> options.  Please provide that.
>

UUID=a58bf1db-0d64-4a2d-8e03-aad78dbebcbe /science                xfs
defaults,inode64          1 0

On the slave nodes, I managed to reduce the demand on the disks by adding
the actimeo=60 mount option.  Prior to doing this I would sometimes see the
disk being negatively affected by enormous numbers of getattr requests.
 Here is the fstab mount on the nodes:

192.168.2.1:/science                      /science                nfs
defaults,vers=3,nofail,actimeo=60,bg,hard,intr,rw  0 0
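If it helps to confirm the getattr storms really are gone after the actimeo
change, the server-side NFSv3 operation counters can be sampled and diffed (a
rough sketch; nfsstat ships in the nfs-utils package):

```shell
# Two samples a minute apart; the change in the getattr counter
# gives the per-minute rate arriving at the server.
nfsstat -s -3 > /tmp/nfs.before
sleep 60
nfsstat -s -3 > /tmp/nfs.after
diff /tmp/nfs.before /tmp/nfs.after
```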

...
> > This is also attached as xfs_info.txt
>
> You're not aligning XFS to the RAID geometry (unless you're overriding
> in fstab).  Not aligning is fine for small (<896KB) file allocations but
> less than optimal for large streaming writes.  But it isn't a factor in
> the problems you reported.
>
>
Correct, I am not consciously aligning the XFS to the RAID geometry, I
actually didn't know that was possible.


> ...
> >> Good point.  These happened while trying to ls.  I am not sure why I
> can't
> > find them in the log, they printed out to the console as 'Input/Output'
> > errors, simply stating that the ls command failed.
>
> We look for SCSI IO errors preceding an XFS error as a causal indicator.
>  I didn't see that here.  You could have run into the bug Ben described
> earlier.  I can't really speak to the console errors.
>
> >> With delaylog enabled, which I believe it is in RHEL/CentOS 6, a single
> >> big rm shouldn't kill the disks.  But with the combination of other
> >> workloads it seems you may have been seeking the disks to death.
> >>
> > That is possible, workloads can get really high sometimes.  I am not sure
> > how to control that without significantly impacting performance - I want
> > a single user to be able to use 98% IO capacity sometimes... but other
> > times I want the load to be split amongst many users.
>
> You can't control the seeking at the disks.  You can only schedule
> workloads together that don't compete for seeks.  And if you have one
> metadata or random read/write heavy workload, with this SATA RAID6
> array, it will need exclusive access for the duration of execution, or
> the portion that does all the random IO.  Otherwise other workloads
> running concurrently will crawl while competing for seek bandwidth.
>
> > Also, each user can execute jobs simultaneously on 23 different
> > computers, each accessing the same drive via NFS.  This is a great
> > system most of the time, but sometimes the workloads on the drive get
> > really high.
>
> So it's a small compute cluster using NFS over Infiniband for shared
> file access to a low performance RAID6 array.  The IO resource sharing
> is automatic.  But AFAIK there's no easy way to enforce IO quotas on
> users or processes, if at all.  You may simply not have sufficient IO to
> go around.  Let's ponder that.
>

I have tried a few things to improve IO allocation.  BetterLinux have a
cgroup control suite that allow on-the-fly user-level IO adjustments,
however I found them to be quite cumbersome.

I considered an ugly hack in which I would run two NFS servers, one on the
network to the login node, and one on the network to the other nodes, so
that I could use cgroups to limit IO by process, effectively guaranteeing a
5% IO capacity window to the login node, even if the compute nodes were all
going crazy.  I quickly came to the conclusion that I don't know enough
about filesystems, nfs, or the linux kernel to do this effectively: I would
almost certainly just make an ugly mess that accomplished little more than
breaking a lot of things, while not solving the problem.  I still think it
is a good idea in principle though, I just recognize that it would need to
be implemented by someone with a lot more experience than me, and that it
would probably be a major undertaking.
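For the record, the cgroup mechanism alluded to above looks roughly like this
on CentOS 6 (cgroup v1; the group name, mount path, and 50MB/s limit are
illustrative, and note that v1 blkio throttling only reliably governs
direct/synchronous IO -- buffered writeback largely escapes it):

```shell
# Assumes the blkio controller is mounted at /cgroup/blkio, the
# CentOS 6 default via cgconfig.  8:0 is the major:minor of /dev/sda.
mkdir -p /cgroup/blkio/lowprio
echo "8:0 52428800" > /cgroup/blkio/lowprio/blkio.throttle.read_bps_device
echo "8:0 52428800" > /cgroup/blkio/lowprio/blkio.throttle.write_bps_device
echo "$PID" > /cgroup/blkio/lowprio/tasks   # move a process into the group
```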


> Looking at the math, you currently have approximately 14*150=2100
> seeks/sec capability with 14x 7.2k RPM data spindles.  That's less than
> 100 seeks/sec per compute node, i.e. each node is getting about 2/3rd of
> the performance of a single SATA disk from this array.  This simply
> isn't sufficient for servicing a 23 node cluster, unless all workloads
> are compute bound, and none IO/seek bound.  Given the overload/crash
> that brought you to our attention, I'd say some of your workloads are
> obviously IO/seek bound.  I'd say you probably need more/faster disks.
> Or you need to identify which jobs are IO/seek heavy and schedule them
> so they're not running concurrently.
>

Yes, this is a problem.  Sadly we lack the resources to do much better than
this; we have recently been adding extra storage by chaining together USB3
drives with RAID and LVM... which is cumbersome and slow, but cheaper.

My current solution is to be on the alert for high IO jobs, and to move
them to a specific torque queue that limits the number of concurrent jobs.
 This works, but I have not found a way to do it automatically.
 Thankfully, with a 12-member lab, it is actually not terribly complex to
handle, but I would definitely prefer a more comprehensive solution.  I
don't doubt that the huge IO and seek demands we put on these disks will
cause more problems in the future.


> ...
> >>
> http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
> >>
> >> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much
> >> of the parallelization in XFS."
> ...
> >> echo deadline > /sys/block/sda/queue/scheduler
> >>
> > Wow, this is huge, I can't believe I missed that.  I have switched it to
> > noop now as we use write caching.  I have been trying to figure out for a
> > while why I would keep getting timeouts when the NFS load was high.  If
> you
> > have any other suggestions for how I can improve performance, I would
> > greatly appreciate it.
>
> This may not fix NFS timeouts entirely but it should help.  If the NFS
> operations are seeking the disks to death you may still see timeouts.
>
> >> This one simple command line may help pretty dramatically, immediately,
> >> assuming your hardware array parameters aren't horribly wrong for your
> >> workloads, and your XFS alignment correctly matches the hardware
> geometry.
> >>
> > Great, thanks.  Our workloads vary considerably as we are a biology
> > research lab, sometimes we do lots of seeks, other times we are almost
> > maxing out read or write speed with massively parallel processes all
> > accessing the disk at the same time.
>
> Do you use munin or something similar?  Sample output:
>
> http://demo.munin-monitoring.org/munin-monitoring.org/demo.munin-monitoring.org/index.html#disk
>
> Project page:
> http://munin-monitoring.org/


I have been using Ganglia, but it doesn't have good NFS monitoring as far
as I can tell.  I will check out Munin, thanks for the advice.


> It also has an NFS module and many others.  The storage oriented metrics
> may be very helpful to you.  You would install munin-node on the NFS
> server and all compute nodes, and munin on a collector/web server.  This
> will allow you to cross reference client and server NFS loads.  You can
> then cross reference the time in your PBS logs to see which users were
> running which jobs when IO spikes occur on the NFS server.  You'll know
> exactly which workloads, or combination thereof, are causing IO spikes.
>
> --
> Stan
>

-Mike

[-- Attachment #1.2: Type: text/html, Size: 11945 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
       [not found]           ` <20131206225612.GU10553@sgi.com>
@ 2013-12-06 23:15             ` Mike Dacre
  2013-12-08 22:20               ` Dave Chinner
  0 siblings, 1 reply; 25+ messages in thread
From: Mike Dacre @ 2013-12-06 23:15 UTC (permalink / raw)
  To: Ben Myers; +Cc: xfs


[-- Attachment #1.1: Type: text/plain, Size: 1920 bytes --]

Hey Guys,

Here is the repair log from right after the corruption happened.  The
repair was successful.

-Mike


On Fri, Dec 6, 2013 at 2:56 PM, Ben Myers <bpm@sgi.com> wrote:

> Hey Mike,
>
> On Thu, Dec 05, 2013 at 04:40:46PM -0800, Mike Dacre wrote:
> > On Thu, Dec 5, 2013 at 4:23 PM, Ben Myers <bpm@sgi.com> wrote:
> > > On Thu, Dec 05, 2013 at 04:06:27PM -0800, Mike Dacre wrote:
> > > > I sent you the output of the xfs_repair command, and also the
> > > > logprint, but I couldn't get a metadump as the filesystem is
> > > > mounted.  I can't unmount the file system, it is too important.
> > > > Sorry.
> > > >
> > > > The logprint is in a tar archive at
> > > > ftp://shell.sgi.com/receive/mike_dacre/mike-xfs-files.tar.bz2
> > >
> > > Thanks for the info.  Could you clarify a couple things for me so
> > > that I know what I'm looking at?
> > >
> > > 1) How did you create the logprint file?  Was the filesystem mounted at
> > > the time?
> > >
> >
> > The filesystem was mounted, I created it with this command: `xfs_logprint
> > -C xfs_logdump.txt /dev/sda1`
>
> Oh, ok.  The logprint I'm looking for would have to be taken immediately
> after the forced shutdown.  Sorry for the confusion.
>
> > > 2) Is the xfs_repair.log you sent the output of the very first run of
> > > xfs_repair?  Or, is it the output from a second incident?  Was the
> > > filesystem mounted at the time?
> > >
> >
> > That is from the first run of xfs_repair, after the filesystem corrupted.
> >  I ran it with the filesystem unmounted.
>
> It's great that you have this.  And an interesting repair log.  The good
> news is that it doesn't look like the corruption that xfs_repair doesn't
> fix; the bad news is that I don't recognise it.  If you wouldn't mind
> posting the repair log to the list, I think that would help.  At least it
> would get some more eyes on it.
>
> Thanks much,
>        Ben
>

[-- Attachment #1.2: Type: text/html, Size: 2800 bytes --]

[-- Attachment #2: xfs_repair.log --]
[-- Type: text/x-log, Size: 128473 bytes --]

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
agi unlinked bucket 0 is 1168000 in ag 0 (inode=1168000)
agi unlinked bucket 1 is 1168001 in ag 0 (inode=1168001)
agi unlinked bucket 2 is 1168002 in ag 0 (inode=1168002)
agi unlinked bucket 3 is 1168835 in ag 0 (inode=1168835)
agi unlinked bucket 4 is 2640388 in ag 0 (inode=2640388)
agi unlinked bucket 5 is 2640389 in ag 0 (inode=2640389)
agi unlinked bucket 6 is 402118 in ag 0 (inode=402118)
agi unlinked bucket 7 is 2640391 in ag 0 (inode=2640391)
agi unlinked bucket 8 is 402120 in ag 0 (inode=402120)
agi unlinked bucket 9 is 2388873 in ag 0 (inode=2388873)
agi unlinked bucket 10 is 2388874 in ag 0 (inode=2388874)
agi unlinked bucket 11 is 2373771 in ag 0 (inode=2373771)
agi unlinked bucket 12 is 2388876 in ag 0 (inode=2388876)
agi unlinked bucket 13 is 2388877 in ag 0 (inode=2388877)
agi unlinked bucket 14 is 2373774 in ag 0 (inode=2373774)
agi unlinked bucket 15 is 2746767 in ag 0 (inode=2746767)
agi unlinked bucket 16 is 2388880 in ag 0 (inode=2388880)
agi unlinked bucket 17 is 2388881 in ag 0 (inode=2388881)
agi unlinked bucket 18 is 2373778 in ag 0 (inode=2373778)
agi unlinked bucket 19 is 2641555 in ag 0 (inode=2641555)
agi unlinked bucket 20 is 1172884 in ag 0 (inode=1172884)
agi unlinked bucket 21 is 2388885 in ag 0 (inode=2388885)
agi unlinked bucket 22 is 786582 in ag 0 (inode=786582)
agi unlinked bucket 23 is 786583 in ag 0 (inode=786583)
agi unlinked bucket 24 is 786584 in ag 0 (inode=786584)
agi unlinked bucket 25 is 2373721 in ag 0 (inode=2373721)
agi unlinked bucket 26 is 2518106 in ag 0 (inode=2518106)
agi unlinked bucket 27 is 2518107 in ag 0 (inode=2518107)
agi unlinked bucket 28 is 2518108 in ag 0 (inode=2518108)
agi unlinked bucket 29 is 2373789 in ag 0 (inode=2373789)
agi unlinked bucket 30 is 2746526 in ag 0 (inode=2746526)
agi unlinked bucket 31 is 2531807 in ag 0 (inode=2531807)
agi unlinked bucket 32 is 2531808 in ag 0 (inode=2531808)
agi unlinked bucket 33 is 2531809 in ag 0 (inode=2531809)
agi unlinked bucket 34 is 2531810 in ag 0 (inode=2531810)
agi unlinked bucket 35 is 2531811 in ag 0 (inode=2531811)
agi unlinked bucket 36 is 2531812 in ag 0 (inode=2531812)
agi unlinked bucket 37 is 2531813 in ag 0 (inode=2531813)
agi unlinked bucket 38 is 860902 in ag 0 (inode=860902)
agi unlinked bucket 39 is 2531815 in ag 0 (inode=2531815)
agi unlinked bucket 40 is 2531816 in ag 0 (inode=2531816)
agi unlinked bucket 41 is 2531817 in ag 0 (inode=2531817)
agi unlinked bucket 42 is 2531818 in ag 0 (inode=2531818)
agi unlinked bucket 43 is 2531819 in ag 0 (inode=2531819)
agi unlinked bucket 44 is 2531820 in ag 0 (inode=2531820)
agi unlinked bucket 45 is 2531821 in ag 0 (inode=2531821)
agi unlinked bucket 46 is 2531822 in ag 0 (inode=2531822)
agi unlinked bucket 47 is 2531823 in ag 0 (inode=2531823)
agi unlinked bucket 48 is 2422384 in ag 0 (inode=2422384)
agi unlinked bucket 49 is 2373745 in ag 0 (inode=2373745)
agi unlinked bucket 50 is 2373618 in ag 0 (inode=2373618)
agi unlinked bucket 51 is 2373747 in ag 0 (inode=2373747)
agi unlinked bucket 52 is 2531828 in ag 0 (inode=2531828)
agi unlinked bucket 53 is 2531829 in ag 0 (inode=2531829)
agi unlinked bucket 54 is 2531830 in ag 0 (inode=2531830)
agi unlinked bucket 55 is 2386103 in ag 0 (inode=2386103)
agi unlinked bucket 56 is 2531832 in ag 0 (inode=2531832)
agi unlinked bucket 57 is 2531833 in ag 0 (inode=2531833)
agi unlinked bucket 58 is 2531834 in ag 0 (inode=2531834)
agi unlinked bucket 59 is 2531835 in ag 0 (inode=2531835)
agi unlinked bucket 60 is 2417980 in ag 0 (inode=2417980)
agi unlinked bucket 61 is 2531837 in ag 0 (inode=2531837)
agi unlinked bucket 62 is 2531838 in ag 0 (inode=2531838)
agi unlinked bucket 63 is 2531839 in ag 0 (inode=2531839)
sb_icount 56095104, counted 56097344
sb_ifree 35184822, counted 35287630
sb_fdblocks 770579643, counted 795688499
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
7f6d7f893700: Badness in key lookup (length)
bp=(bno 576, len 16384 bytes) key=(bno 576, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1264, len 16384 bytes) key=(bno 1264, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1392, len 16384 bytes) key=(bno 1392, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1440, len 16384 bytes) key=(bno 1440, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1648, len 16384 bytes) key=(bno 1648, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1760, len 16384 bytes) key=(bno 1760, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 17088, len 16384 bytes) key=(bno 17088, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 17760, len 16384 bytes) key=(bno 17760, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 17792, len 16384 bytes) key=(bno 17792, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 154496, len 16384 bytes) key=(bno 154496, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 154528, len 16384 bytes) key=(bno 154528, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 156624, len 16384 bytes) key=(bno 156624, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 182752, len 16384 bytes) key=(bno 182752, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 182880, len 16384 bytes) key=(bno 182880, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 183328, len 16384 bytes) key=(bno 183328, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 184432, len 16384 bytes) key=(bno 184432, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 184464, len 16384 bytes) key=(bno 184464, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 184512, len 16384 bytes) key=(bno 184512, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 184544, len 16384 bytes) key=(bno 184544, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 184592, len 16384 bytes) key=(bno 184592, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 185632, len 16384 bytes) key=(bno 185632, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 185680, len 16384 bytes) key=(bno 185680, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 185712, len 16384 bytes) key=(bno 185712, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 185760, len 16384 bytes) key=(bno 185760, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 185792, len 16384 bytes) key=(bno 185792, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 186288, len 16384 bytes) key=(bno 186288, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 186416, len 16384 bytes) key=(bno 186416, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 201040, len 16384 bytes) key=(bno 201040, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 393280, len 16384 bytes) key=(bno 393280, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 430448, len 16384 bytes) key=(bno 430448, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 583536, len 16384 bytes) key=(bno 583536, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 584000, len 16384 bytes) key=(bno 584000, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 584320, len 16384 bytes) key=(bno 584320, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 584416, len 16384 bytes) key=(bno 584416, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 584528, len 16384 bytes) key=(bno 584528, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 586384, len 16384 bytes) key=(bno 586384, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 586416, len 16384 bytes) key=(bno 586416, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590160, len 16384 bytes) key=(bno 590160, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590240, len 16384 bytes) key=(bno 590240, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590320, len 16384 bytes) key=(bno 590320, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590368, len 16384 bytes) key=(bno 590368, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590432, len 16384 bytes) key=(bno 590432, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590496, len 16384 bytes) key=(bno 590496, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590544, len 16384 bytes) key=(bno 590544, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590608, len 16384 bytes) key=(bno 590608, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 590720, len 16384 bytes) key=(bno 590720, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 591424, len 16384 bytes) key=(bno 591424, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 591456, len 16384 bytes) key=(bno 591456, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 591520, len 16384 bytes) key=(bno 591520, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 591616, len 16384 bytes) key=(bno 591616, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592128, len 16384 bytes) key=(bno 592128, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592176, len 16384 bytes) key=(bno 592176, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592224, len 16384 bytes) key=(bno 592224, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592272, len 16384 bytes) key=(bno 592272, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592320, len 16384 bytes) key=(bno 592320, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592352, len 16384 bytes) key=(bno 592352, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592416, len 16384 bytes) key=(bno 592416, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592464, len 16384 bytes) key=(bno 592464, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592512, len 16384 bytes) key=(bno 592512, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592560, len 16384 bytes) key=(bno 592560, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592608, len 16384 bytes) key=(bno 592608, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592656, len 16384 bytes) key=(bno 592656, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592704, len 16384 bytes) key=(bno 592704, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 592752, len 16384 bytes) key=(bno 592752, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 597568, len 16384 bytes) key=(bno 597568, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 608128, len 16384 bytes) key=(bno 608128, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 608192, len 16384 bytes) key=(bno 608192, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182768, len 16384 bytes) key=(bno 1182768, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182800, len 16384 bytes) key=(bno 1182800, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182832, len 16384 bytes) key=(bno 1182832, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182864, len 16384 bytes) key=(bno 1182864, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182896, len 16384 bytes) key=(bno 1182896, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182928, len 16384 bytes) key=(bno 1182928, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182960, len 16384 bytes) key=(bno 1182960, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1182992, len 16384 bytes) key=(bno 1182992, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183024, len 16384 bytes) key=(bno 1183024, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183056, len 16384 bytes) key=(bno 1183056, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183088, len 16384 bytes) key=(bno 1183088, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183120, len 16384 bytes) key=(bno 1183120, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183152, len 16384 bytes) key=(bno 1183152, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183184, len 16384 bytes) key=(bno 1183184, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183216, len 16384 bytes) key=(bno 1183216, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183248, len 16384 bytes) key=(bno 1183248, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183280, len 16384 bytes) key=(bno 1183280, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183312, len 16384 bytes) key=(bno 1183312, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183344, len 16384 bytes) key=(bno 1183344, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183376, len 16384 bytes) key=(bno 1183376, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183408, len 16384 bytes) key=(bno 1183408, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183440, len 16384 bytes) key=(bno 1183440, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183472, len 16384 bytes) key=(bno 1183472, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183504, len 16384 bytes) key=(bno 1183504, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183536, len 16384 bytes) key=(bno 1183536, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183568, len 16384 bytes) key=(bno 1183568, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183600, len 16384 bytes) key=(bno 1183600, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183632, len 16384 bytes) key=(bno 1183632, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183664, len 16384 bytes) key=(bno 1183664, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183696, len 16384 bytes) key=(bno 1183696, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183728, len 16384 bytes) key=(bno 1183728, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183760, len 16384 bytes) key=(bno 1183760, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183824, len 16384 bytes) key=(bno 1183824, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183856, len 16384 bytes) key=(bno 1183856, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183888, len 16384 bytes) key=(bno 1183888, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183920, len 16384 bytes) key=(bno 1183920, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183952, len 16384 bytes) key=(bno 1183952, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1183984, len 16384 bytes) key=(bno 1183984, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184016, len 16384 bytes) key=(bno 1184016, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184048, len 16384 bytes) key=(bno 1184048, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184112, len 16384 bytes) key=(bno 1184112, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184144, len 16384 bytes) key=(bno 1184144, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184176, len 16384 bytes) key=(bno 1184176, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184208, len 16384 bytes) key=(bno 1184208, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184240, len 16384 bytes) key=(bno 1184240, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184272, len 16384 bytes) key=(bno 1184272, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184304, len 16384 bytes) key=(bno 1184304, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184336, len 16384 bytes) key=(bno 1184336, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184368, len 16384 bytes) key=(bno 1184368, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184400, len 16384 bytes) key=(bno 1184400, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184432, len 16384 bytes) key=(bno 1184432, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184464, len 16384 bytes) key=(bno 1184464, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184496, len 16384 bytes) key=(bno 1184496, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184528, len 16384 bytes) key=(bno 1184528, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184560, len 16384 bytes) key=(bno 1184560, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184592, len 16384 bytes) key=(bno 1184592, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184624, len 16384 bytes) key=(bno 1184624, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184656, len 16384 bytes) key=(bno 1184656, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184688, len 16384 bytes) key=(bno 1184688, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184720, len 16384 bytes) key=(bno 1184720, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184752, len 16384 bytes) key=(bno 1184752, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184784, len 16384 bytes) key=(bno 1184784, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184816, len 16384 bytes) key=(bno 1184816, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184848, len 16384 bytes) key=(bno 1184848, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184880, len 16384 bytes) key=(bno 1184880, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184912, len 16384 bytes) key=(bno 1184912, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184944, len 16384 bytes) key=(bno 1184944, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1184976, len 16384 bytes) key=(bno 1184976, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1185008, len 16384 bytes) key=(bno 1185008, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1185040, len 16384 bytes) key=(bno 1185040, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1185072, len 16384 bytes) key=(bno 1185072, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1185136, len 16384 bytes) key=(bno 1185136, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186608, len 16384 bytes) key=(bno 1186608, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186672, len 16384 bytes) key=(bno 1186672, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186704, len 16384 bytes) key=(bno 1186704, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186736, len 16384 bytes) key=(bno 1186736, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186768, len 16384 bytes) key=(bno 1186768, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186800, len 16384 bytes) key=(bno 1186800, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186832, len 16384 bytes) key=(bno 1186832, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186864, len 16384 bytes) key=(bno 1186864, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1186896, len 16384 bytes) key=(bno 1186896, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1193024, len 16384 bytes) key=(bno 1193024, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1194400, len 16384 bytes) key=(bno 1194400, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1194432, len 16384 bytes) key=(bno 1194432, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1208960, len 16384 bytes) key=(bno 1208960, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1211168, len 16384 bytes) key=(bno 1211168, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1259040, len 16384 bytes) key=(bno 1259040, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1259168, len 16384 bytes) key=(bno 1259168, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1265888, len 16384 bytes) key=(bno 1265888, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1266016, len 16384 bytes) key=(bno 1266016, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1319904, len 16384 bytes) key=(bno 1319904, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1320160, len 16384 bytes) key=(bno 1320160, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1320192, len 16384 bytes) key=(bno 1320192, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1320768, len 16384 bytes) key=(bno 1320768, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1362176, len 16384 bytes) key=(bno 1362176, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1368464, len 16384 bytes) key=(bno 1368464, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1369200, len 16384 bytes) key=(bno 1369200, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1369648, len 16384 bytes) key=(bno 1369648, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1372016, len 16384 bytes) key=(bno 1372016, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1373232, len 16384 bytes) key=(bno 1373232, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1373360, len 16384 bytes) key=(bno 1373360, len 8192 bytes)
7f6d7f893700: Badness in key lookup (length)
bp=(bno 1373584, len 16384 bytes) key=(bno 1373584, len 8192 bytes)
data fork in ino 860903 claims free block 451482
data fork in ino 860904 claims free block 451489
data fork in ino 860905 claims free block 451514
data fork in ino 860906 claims free block 451545
data fork in ino 860907 claims free block 451560
data fork in ino 860908 claims free block 451581
data fork in ino 860909 claims free block 451606
data fork in ino 860910 claims free block 451627
data fork in ino 860911 claims free block 451636
data fork in ino 860912 claims free block 451655
data fork in ino 860914 claims free block 451674
data fork in ino 860915 claims free block 451705
data fork in ino 860916 claims free block 451740
data fork in ino 860917 claims free block 451757
data fork in ino 860918 claims free block 451762
data fork in ino 860919 claims free block 451769
data fork in ino 860920 claims free block 451869
data fork in ino 860921 claims free block 451870
data fork in ino 860922 claims free block 451873
data fork in ino 860923 claims free block 451876
data fork in ino 860924 claims free block 451906
data fork in ino 860925 claims free block 451907
data fork in ino 860926 claims free block 451921
data fork in ino 860927 claims free block 451922
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 1
        - agno = 6
        - agno = 7
        - agno = 5
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 1186, moving to lost+found
disconnected inode 1195, moving to lost+found
disconnected inode 1196, moving to lost+found
disconnected inode 1197, moving to lost+found
disconnected inode 1198, moving to lost+found
disconnected inode 1199, moving to lost+found
disconnected inode 1202, moving to lost+found
disconnected inode 1203, moving to lost+found
disconnected inode 1204, moving to lost+found
disconnected inode 1214, moving to lost+found
disconnected inode 1215, moving to lost+found
disconnected inode 2571, moving to lost+found
disconnected inode 2827, moving to lost+found
disconnected inode 2838, moving to lost+found
disconnected inode 2843, moving to lost+found
disconnected inode 2882, moving to lost+found
disconnected inode 2884, moving to lost+found
disconnected inode 2888, moving to lost+found
disconnected inode 2902, moving to lost+found
disconnected inode 2919, moving to lost+found
disconnected inode 3296, moving to lost+found
disconnected inode 3297, moving to lost+found
disconnected inode 3317, moving to lost+found
disconnected inode 3325, moving to lost+found
disconnected inode 3338, moving to lost+found
disconnected inode 3352, moving to lost+found
disconnected inode 3353, moving to lost+found
disconnected inode 3354, moving to lost+found
disconnected inode 3355, moving to lost+found
disconnected inode 3356, moving to lost+found
disconnected inode 3357, moving to lost+found
disconnected inode 3521, moving to lost+found
disconnected inode 3522, moving to lost+found
disconnected inode 3525, moving to lost+found
disconnected inode 3542, moving to lost+found
disconnected inode 3543, moving to lost+found
disconnected inode 3571, moving to lost+found
disconnected inode 34202, moving to lost+found
disconnected inode 34204, moving to lost+found
disconnected inode 34206, moving to lost+found
disconnected inode 34208, moving to lost+found
disconnected inode 34210, moving to lost+found
disconnected inode 34212, moving to lost+found
disconnected inode 34214, moving to lost+found
disconnected inode 34216, moving to lost+found
disconnected inode 34218, moving to lost+found
disconnected inode 34220, moving to lost+found
disconnected inode 35562, moving to lost+found
disconnected inode 35566, moving to lost+found
disconnected inode 35570, moving to lost+found
disconnected inode 35572, moving to lost+found
disconnected inode 35574, moving to lost+found
disconnected inode 35576, moving to lost+found
disconnected inode 35578, moving to lost+found
disconnected inode 35580, moving to lost+found
disconnected inode 35582, moving to lost+found
disconnected inode 35584, moving to lost+found
disconnected inode 35586, moving to lost+found
disconnected inode 309042, moving to lost+found
disconnected inode 309044, moving to lost+found
disconnected inode 309046, moving to lost+found
disconnected inode 309048, moving to lost+found
disconnected inode 309050, moving to lost+found
disconnected inode 309052, moving to lost+found
disconnected inode 309054, moving to lost+found
disconnected inode 309056, moving to lost+found
disconnected inode 309058, moving to lost+found
disconnected inode 309060, moving to lost+found
disconnected inode 309062, moving to lost+found
disconnected inode 309064, moving to lost+found
disconnected inode 309066, moving to lost+found
disconnected inode 309068, moving to lost+found
disconnected inode 309070, moving to lost+found
disconnected inode 309072, moving to lost+found
disconnected inode 313262, moving to lost+found
disconnected inode 313264, moving to lost+found
disconnected inode 313266, moving to lost+found
disconnected inode 313268, moving to lost+found
disconnected inode 313270, moving to lost+found
disconnected inode 313272, moving to lost+found
disconnected inode 313274, moving to lost+found
disconnected inode 313276, moving to lost+found
disconnected inode 313278, moving to lost+found
disconnected inode 313280, moving to lost+found
disconnected inode 365564, moving to lost+found
disconnected inode 365806, moving to lost+found
disconnected inode 366671, moving to lost+found
disconnected inode 368924, moving to lost+found
disconnected inode 368925, moving to lost+found
disconnected inode 368926, moving to lost+found
disconnected inode 368927, moving to lost+found
disconnected inode 368928, moving to lost+found
disconnected inode 368929, moving to lost+found
disconnected inode 368930, moving to lost+found
disconnected inode 368931, moving to lost+found
disconnected inode 368932, moving to lost+found
disconnected inode 368933, moving to lost+found
disconnected inode 368934, moving to lost+found
disconnected inode 368935, moving to lost+found
disconnected inode 368936, moving to lost+found
disconnected inode 368937, moving to lost+found
disconnected inode 368938, moving to lost+found
disconnected inode 368939, moving to lost+found
disconnected inode 368940, moving to lost+found
disconnected inode 368941, moving to lost+found
disconnected inode 368942, moving to lost+found
disconnected inode 368943, moving to lost+found
disconnected inode 368944, moving to lost+found
disconnected inode 368945, moving to lost+found
disconnected inode 368946, moving to lost+found
disconnected inode 368947, moving to lost+found
disconnected inode 368948, moving to lost+found
disconnected inode 368949, moving to lost+found
disconnected inode 368950, moving to lost+found
disconnected inode 368951, moving to lost+found
disconnected inode 368952, moving to lost+found
disconnected inode 368953, moving to lost+found
disconnected inode 368954, moving to lost+found
disconnected inode 368955, moving to lost+found
disconnected inode 368956, moving to lost+found
disconnected inode 368957, moving to lost+found
disconnected inode 368958, moving to lost+found
disconnected inode 368959, moving to lost+found
disconnected inode 368960, moving to lost+found
disconnected inode 368962, moving to lost+found
disconnected inode 368963, moving to lost+found
disconnected inode 368964, moving to lost+found
disconnected inode 368965, moving to lost+found
disconnected inode 368966, moving to lost+found
disconnected inode 368967, moving to lost+found
disconnected inode 368968, moving to lost+found
disconnected inode 368970, moving to lost+found
disconnected inode 368972, moving to lost+found
disconnected inode 368973, moving to lost+found
disconnected inode 368974, moving to lost+found
disconnected inode 368975, moving to lost+found
disconnected inode 368976, moving to lost+found
disconnected inode 368977, moving to lost+found
disconnected inode 368978, moving to lost+found
disconnected inode 368979, moving to lost+found
disconnected inode 368980, moving to lost+found
disconnected inode 368990, moving to lost+found
disconnected inode 368991, moving to lost+found
disconnected inode 369024, moving to lost+found
disconnected inode 369025, moving to lost+found
disconnected inode 369026, moving to lost+found
disconnected inode 369027, moving to lost+found
disconnected inode 369028, moving to lost+found
disconnected inode 369029, moving to lost+found
disconnected inode 369030, moving to lost+found
disconnected inode 369031, moving to lost+found
disconnected inode 369032, moving to lost+found
disconnected inode 369033, moving to lost+found
disconnected inode 369034, moving to lost+found
disconnected inode 369035, moving to lost+found
disconnected inode 369036, moving to lost+found
disconnected inode 369037, moving to lost+found
disconnected inode 369038, moving to lost+found
disconnected inode 369039, moving to lost+found
disconnected inode 369040, moving to lost+found
disconnected inode 369041, moving to lost+found
disconnected inode 369042, moving to lost+found
disconnected inode 369043, moving to lost+found
disconnected inode 369044, moving to lost+found
disconnected inode 369045, moving to lost+found
disconnected inode 369046, moving to lost+found
disconnected inode 369047, moving to lost+found
disconnected inode 369048, moving to lost+found
disconnected inode 369050, moving to lost+found
disconnected inode 369051, moving to lost+found
disconnected inode 369052, moving to lost+found
disconnected inode 369053, moving to lost+found
disconnected inode 369054, moving to lost+found
disconnected inode 369056, moving to lost+found
disconnected inode 369057, moving to lost+found
disconnected inode 369058, moving to lost+found
disconnected inode 369059, moving to lost+found
disconnected inode 369060, moving to lost+found
disconnected inode 369061, moving to lost+found
disconnected inode 369062, moving to lost+found
disconnected inode 369063, moving to lost+found
disconnected inode 369064, moving to lost+found
disconnected inode 369065, moving to lost+found
disconnected inode 369066, moving to lost+found
disconnected inode 369067, moving to lost+found
disconnected inode 369068, moving to lost+found
disconnected inode 369069, moving to lost+found
disconnected inode 369070, moving to lost+found
disconnected inode 369071, moving to lost+found
disconnected inode 369072, moving to lost+found
disconnected inode 369073, moving to lost+found
disconnected inode 369074, moving to lost+found
disconnected inode 369076, moving to lost+found
disconnected inode 369077, moving to lost+found
disconnected inode 369079, moving to lost+found
disconnected inode 369080, moving to lost+found
disconnected inode 369081, moving to lost+found
disconnected inode 369082, moving to lost+found
disconnected inode 369083, moving to lost+found
disconnected inode 369084, moving to lost+found
disconnected inode 369085, moving to lost+found
disconnected inode 369086, moving to lost+found
disconnected inode 369087, moving to lost+found
disconnected inode 369088, moving to lost+found
disconnected inode 369089, moving to lost+found
disconnected inode 369090, moving to lost+found
disconnected inode 369091, moving to lost+found
disconnected inode 369092, moving to lost+found
disconnected inode 369093, moving to lost+found
disconnected inode 369094, moving to lost+found
disconnected inode 369095, moving to lost+found
disconnected inode 369096, moving to lost+found
disconnected inode 369097, moving to lost+found
disconnected inode 369098, moving to lost+found
disconnected inode 369100, moving to lost+found
disconnected inode 369101, moving to lost+found
disconnected inode 369102, moving to lost+found
disconnected inode 369103, moving to lost+found
disconnected inode 369104, moving to lost+found
disconnected inode 369105, moving to lost+found
disconnected inode 369106, moving to lost+found
disconnected inode 369107, moving to lost+found
disconnected inode 369112, moving to lost+found
disconnected inode 369113, moving to lost+found
disconnected inode 369114, moving to lost+found
disconnected inode 369115, moving to lost+found
disconnected inode 369116, moving to lost+found
disconnected inode 369117, moving to lost+found
disconnected inode 369118, moving to lost+found
disconnected inode 369119, moving to lost+found
disconnected inode 369120, moving to lost+found
disconnected inode 369122, moving to lost+found
disconnected inode 369123, moving to lost+found
disconnected inode 369124, moving to lost+found
disconnected inode 369125, moving to lost+found
disconnected inode 369126, moving to lost+found
disconnected inode 369127, moving to lost+found
disconnected inode 369128, moving to lost+found
disconnected inode 369132, moving to lost+found
disconnected inode 369133, moving to lost+found
disconnected inode 369134, moving to lost+found
disconnected inode 369135, moving to lost+found
disconnected inode 369138, moving to lost+found
disconnected inode 369139, moving to lost+found
disconnected inode 369142, moving to lost+found
disconnected inode 369143, moving to lost+found
disconnected inode 369144, moving to lost+found
disconnected inode 369145, moving to lost+found
disconnected inode 369146, moving to lost+found
disconnected inode 369147, moving to lost+found
disconnected inode 369150, moving to lost+found
disconnected inode 369151, moving to lost+found
disconnected inode 369186, moving to lost+found
disconnected inode 369187, moving to lost+found
disconnected inode 369188, moving to lost+found
disconnected inode 369189, moving to lost+found
disconnected inode 369190, moving to lost+found
disconnected inode 369191, moving to lost+found
disconnected inode 369192, moving to lost+found
disconnected inode 369193, moving to lost+found
disconnected inode 369194, moving to lost+found
disconnected inode 369195, moving to lost+found
disconnected inode 369196, moving to lost+found
disconnected inode 369197, moving to lost+found
disconnected inode 369205, moving to lost+found
disconnected inode 369207, moving to lost+found
disconnected inode 369209, moving to lost+found
disconnected inode 369210, moving to lost+found
disconnected inode 369212, moving to lost+found
disconnected inode 371319, moving to lost+found
disconnected inode 371327, moving to lost+found
disconnected inode 371360, moving to lost+found
disconnected inode 371361, moving to lost+found
disconnected inode 371362, moving to lost+found
disconnected inode 371363, moving to lost+found
disconnected inode 371364, moving to lost+found
disconnected inode 371365, moving to lost+found
disconnected inode 371366, moving to lost+found
disconnected inode 371367, moving to lost+found
disconnected inode 371368, moving to lost+found
disconnected inode 371369, moving to lost+found
disconnected inode 371370, moving to lost+found
disconnected inode 371371, moving to lost+found
disconnected inode 371372, moving to lost+found
disconnected inode 371373, moving to lost+found
disconnected inode 371374, moving to lost+found
disconnected inode 371375, moving to lost+found
disconnected inode 371376, moving to lost+found
disconnected inode 371377, moving to lost+found
disconnected inode 371378, moving to lost+found
disconnected inode 371379, moving to lost+found
disconnected inode 371380, moving to lost+found
disconnected inode 371381, moving to lost+found
disconnected inode 371382, moving to lost+found
disconnected inode 371383, moving to lost+found
disconnected inode 371384, moving to lost+found
disconnected inode 371385, moving to lost+found
disconnected inode 371386, moving to lost+found
disconnected inode 371387, moving to lost+found
disconnected inode 371388, moving to lost+found
disconnected inode 371389, moving to lost+found
disconnected inode 371391, moving to lost+found
disconnected inode 371392, moving to lost+found
disconnected inode 371393, moving to lost+found
disconnected inode 371394, moving to lost+found
disconnected inode 371396, moving to lost+found
disconnected inode 371397, moving to lost+found
disconnected inode 371398, moving to lost+found
disconnected inode 371399, moving to lost+found
disconnected inode 371400, moving to lost+found
disconnected inode 371401, moving to lost+found
disconnected inode 371402, moving to lost+found
disconnected inode 371403, moving to lost+found
disconnected inode 371404, moving to lost+found
disconnected inode 371405, moving to lost+found
disconnected inode 371406, moving to lost+found
disconnected inode 371407, moving to lost+found
disconnected inode 371408, moving to lost+found
disconnected inode 371409, moving to lost+found
disconnected inode 371410, moving to lost+found
disconnected inode 371411, moving to lost+found
disconnected inode 371412, moving to lost+found
disconnected inode 371413, moving to lost+found
disconnected inode 371414, moving to lost+found
disconnected inode 371415, moving to lost+found
disconnected inode 371416, moving to lost+found
disconnected inode 371417, moving to lost+found
disconnected inode 371418, moving to lost+found
disconnected inode 371419, moving to lost+found
disconnected inode 371420, moving to lost+found
disconnected inode 371421, moving to lost+found
disconnected inode 371422, moving to lost+found
disconnected inode 371423, moving to lost+found
disconnected inode 371424, moving to lost+found
disconnected inode 371425, moving to lost+found
disconnected inode 371426, moving to lost+found
disconnected inode 371427, moving to lost+found
disconnected inode 371428, moving to lost+found
disconnected inode 371429, moving to lost+found
disconnected inode 371430, moving to lost+found
disconnected inode 371431, moving to lost+found
disconnected inode 371433, moving to lost+found
disconnected inode 371434, moving to lost+found
disconnected inode 371435, moving to lost+found
disconnected inode 371436, moving to lost+found
disconnected inode 371437, moving to lost+found
disconnected inode 371438, moving to lost+found
disconnected inode 371439, moving to lost+found
disconnected inode 371440, moving to lost+found
disconnected inode 371441, moving to lost+found
disconnected inode 371442, moving to lost+found
disconnected inode 371443, moving to lost+found
disconnected inode 371444, moving to lost+found
disconnected inode 371445, moving to lost+found
disconnected inode 371446, moving to lost+found
disconnected inode 371447, moving to lost+found
disconnected inode 371448, moving to lost+found
disconnected inode 371449, moving to lost+found
disconnected inode 371450, moving to lost+found
disconnected inode 371451, moving to lost+found
disconnected inode 371452, moving to lost+found
disconnected inode 371453, moving to lost+found
disconnected inode 371454, moving to lost+found
disconnected inode 371455, moving to lost+found
disconnected inode 371456, moving to lost+found
disconnected inode 371457, moving to lost+found
disconnected inode 371458, moving to lost+found
disconnected inode 371459, moving to lost+found
disconnected inode 371460, moving to lost+found
disconnected inode 371461, moving to lost+found
disconnected inode 371462, moving to lost+found
disconnected inode 371463, moving to lost+found
disconnected inode 371464, moving to lost+found
disconnected inode 371465, moving to lost+found
disconnected inode 371466, moving to lost+found
disconnected inode 371467, moving to lost+found
disconnected inode 371468, moving to lost+found
disconnected inode 371469, moving to lost+found
disconnected inode 371470, moving to lost+found
disconnected inode 371471, moving to lost+found
disconnected inode 371472, moving to lost+found
disconnected inode 371473, moving to lost+found
disconnected inode 371474, moving to lost+found
disconnected inode 371475, moving to lost+found
disconnected inode 371476, moving to lost+found
disconnected inode 371477, moving to lost+found
disconnected inode 371478, moving to lost+found
disconnected inode 371479, moving to lost+found
disconnected inode 371480, moving to lost+found
disconnected inode 371481, moving to lost+found
disconnected inode 371482, moving to lost+found
disconnected inode 371483, moving to lost+found
disconnected inode 371484, moving to lost+found
disconnected inode 371485, moving to lost+found
disconnected inode 371486, moving to lost+found
disconnected inode 371487, moving to lost+found
disconnected inode 371520, moving to lost+found
disconnected inode 371521, moving to lost+found
disconnected inode 371522, moving to lost+found
disconnected inode 371523, moving to lost+found
disconnected inode 371536, moving to lost+found
disconnected inode 371537, moving to lost+found
disconnected inode 371538, moving to lost+found
disconnected inode 371539, moving to lost+found
disconnected inode 371540, moving to lost+found
disconnected inode 371541, moving to lost+found
disconnected inode 371542, moving to lost+found
disconnected inode 371543, moving to lost+found
disconnected inode 371544, moving to lost+found
disconnected inode 371545, moving to lost+found
disconnected inode 371546, moving to lost+found
disconnected inode 371547, moving to lost+found
disconnected inode 371548, moving to lost+found
disconnected inode 371549, moving to lost+found
disconnected inode 371550, moving to lost+found
disconnected inode 371551, moving to lost+found
disconnected inode 371552, moving to lost+found
disconnected inode 371553, moving to lost+found
disconnected inode 371554, moving to lost+found
disconnected inode 371555, moving to lost+found
disconnected inode 371556, moving to lost+found
disconnected inode 371557, moving to lost+found
disconnected inode 371558, moving to lost+found
disconnected inode 371559, moving to lost+found
disconnected inode 371560, moving to lost+found
disconnected inode 371561, moving to lost+found
disconnected inode 371562, moving to lost+found
disconnected inode 371563, moving to lost+found
disconnected inode 371564, moving to lost+found
disconnected inode 371565, moving to lost+found
disconnected inode 371566, moving to lost+found
disconnected inode 371567, moving to lost+found
disconnected inode 371568, moving to lost+found
disconnected inode 371569, moving to lost+found
disconnected inode 371570, moving to lost+found
disconnected inode 371571, moving to lost+found
disconnected inode 371572, moving to lost+found
disconnected inode 371573, moving to lost+found
disconnected inode 371576, moving to lost+found
disconnected inode 371577, moving to lost+found
disconnected inode 371578, moving to lost+found
disconnected inode 371579, moving to lost+found
disconnected inode 371580, moving to lost+found
disconnected inode 371581, moving to lost+found
disconnected inode 371582, moving to lost+found
disconnected inode 371583, moving to lost+found
disconnected inode 371584, moving to lost+found
disconnected inode 371585, moving to lost+found
disconnected inode 371586, moving to lost+found
disconnected inode 371587, moving to lost+found
disconnected inode 371588, moving to lost+found
disconnected inode 371589, moving to lost+found
disconnected inode 371590, moving to lost+found
disconnected inode 371591, moving to lost+found
disconnected inode 371592, moving to lost+found
disconnected inode 371593, moving to lost+found
disconnected inode 371594, moving to lost+found
disconnected inode 371595, moving to lost+found
disconnected inode 371596, moving to lost+found
disconnected inode 371597, moving to lost+found
disconnected inode 371598, moving to lost+found
disconnected inode 371599, moving to lost+found
disconnected inode 372601, moving to lost+found
disconnected inode 372615, moving to lost+found
disconnected inode 372616, moving to lost+found
disconnected inode 372621, moving to lost+found
disconnected inode 372636, moving to lost+found
disconnected inode 372832, moving to lost+found
disconnected inode 372833, moving to lost+found
disconnected inode 372834, moving to lost+found
disconnected inode 372871, moving to lost+found
disconnected inode 372877, moving to lost+found
disconnected inode 402118, moving to lost+found
disconnected inode 402119, moving to lost+found
disconnected inode 402120, moving to lost+found
disconnected inode 786582, moving to lost+found
disconnected inode 786583, moving to lost+found
disconnected inode 786584, moving to lost+found
disconnected inode 860902, moving to lost+found
disconnected inode 1167125, moving to lost+found
disconnected inode 1167127, moving to lost+found
disconnected inode 1167128, moving to lost+found
disconnected inode 1168000, moving to lost+found
disconnected inode 1168001, moving to lost+found
disconnected inode 1168002, moving to lost+found
disconnected inode 1168050, moving to lost+found
disconnected inode 1168051, moving to lost+found
disconnected inode 1168052, moving to lost+found
disconnected inode 1168645, moving to lost+found
disconnected inode 1168646, moving to lost+found
disconnected inode 1168647, moving to lost+found
disconnected inode 1168832, moving to lost+found
disconnected inode 1168834, moving to lost+found
disconnected inode 1168835, moving to lost+found
disconnected inode 1168838, moving to lost+found
disconnected inode 1168852, moving to lost+found
disconnected inode 1168853, moving to lost+found
disconnected inode 1169091, moving to lost+found
disconnected inode 1169092, moving to lost+found
disconnected inode 1169093, moving to lost+found
disconnected inode 1169095, moving to lost+found
disconnected inode 1169096, moving to lost+found
disconnected inode 1169099, moving to lost+found
disconnected inode 1169100, moving to lost+found
disconnected inode 1169101, moving to lost+found
disconnected inode 1172788, moving to lost+found
disconnected inode 1172792, moving to lost+found
disconnected inode 1172794, moving to lost+found
disconnected inode 1172795, moving to lost+found
disconnected inode 1172797, moving to lost+found
disconnected inode 1172799, moving to lost+found
disconnected inode 1172800, moving to lost+found
disconnected inode 1172801, moving to lost+found
disconnected inode 1172803, moving to lost+found
disconnected inode 1172804, moving to lost+found
disconnected inode 1172805, moving to lost+found
disconnected inode 1172806, moving to lost+found
disconnected inode 1172807, moving to lost+found
disconnected inode 1172808, moving to lost+found
disconnected inode 1172809, moving to lost+found
disconnected inode 1172810, moving to lost+found
disconnected inode 1172811, moving to lost+found
disconnected inode 1172812, moving to lost+found
disconnected inode 1172813, moving to lost+found
disconnected inode 1172814, moving to lost+found
disconnected inode 1172815, moving to lost+found
disconnected inode 1172816, moving to lost+found
disconnected inode 1172817, moving to lost+found
disconnected inode 1172819, moving to lost+found
disconnected inode 1172820, moving to lost+found
disconnected inode 1172821, moving to lost+found
disconnected inode 1172822, moving to lost+found
disconnected inode 1172823, moving to lost+found
disconnected inode 1172824, moving to lost+found
disconnected inode 1172825, moving to lost+found
disconnected inode 1172827, moving to lost+found
disconnected inode 1172828, moving to lost+found
disconnected inode 1172829, moving to lost+found
disconnected inode 1172831, moving to lost+found
disconnected inode 1172833, moving to lost+found
disconnected inode 1172834, moving to lost+found
disconnected inode 1172835, moving to lost+found
disconnected inode 1172836, moving to lost+found
disconnected inode 1172837, moving to lost+found
disconnected inode 1172838, moving to lost+found
disconnected inode 1172839, moving to lost+found
disconnected inode 1172840, moving to lost+found
disconnected inode 1172841, moving to lost+found
disconnected inode 1172842, moving to lost+found
disconnected inode 1172843, moving to lost+found
disconnected inode 1172844, moving to lost+found
disconnected inode 1172845, moving to lost+found
disconnected inode 1172846, moving to lost+found
disconnected inode 1172847, moving to lost+found
disconnected inode 1172850, moving to lost+found
disconnected inode 1172852, moving to lost+found
disconnected inode 1172854, moving to lost+found
disconnected inode 1172856, moving to lost+found
disconnected inode 1172857, moving to lost+found
disconnected inode 1172858, moving to lost+found
disconnected inode 1172861, moving to lost+found
disconnected inode 1172862, moving to lost+found
disconnected inode 1172864, moving to lost+found
disconnected inode 1172867, moving to lost+found
disconnected inode 1172868, moving to lost+found
disconnected inode 1172870, moving to lost+found
disconnected inode 1172871, moving to lost+found
disconnected inode 1172874, moving to lost+found
disconnected inode 1172875, moving to lost+found
disconnected inode 1172877, moving to lost+found
disconnected inode 1172878, moving to lost+found
disconnected inode 1172880, moving to lost+found
disconnected inode 1172883, moving to lost+found
disconnected inode 1172884, moving to lost+found
disconnected inode 1180362, moving to lost+found
disconnected inode 1180363, moving to lost+found
disconnected inode 1180529, moving to lost+found
disconnected inode 1180530, moving to lost+found
disconnected inode 1180534, moving to lost+found
disconnected inode 1180535, moving to lost+found
disconnected inode 1180536, moving to lost+found
disconnected inode 1180537, moving to lost+found
disconnected inode 1180538, moving to lost+found
disconnected inode 1180539, moving to lost+found
disconnected inode 1180540, moving to lost+found
disconnected inode 1180541, moving to lost+found
disconnected inode 1180543, moving to lost+found
disconnected inode 1180640, moving to lost+found
disconnected inode 1180641, moving to lost+found
disconnected inode 1180642, moving to lost+found
disconnected inode 1180643, moving to lost+found
disconnected inode 1180644, moving to lost+found
disconnected inode 1180645, moving to lost+found
disconnected inode 1180646, moving to lost+found
disconnected inode 1180647, moving to lost+found
disconnected inode 1180648, moving to lost+found
disconnected inode 1180649, moving to lost+found
disconnected inode 1180651, moving to lost+found
disconnected inode 1180654, moving to lost+found
disconnected inode 1180655, moving to lost+found
disconnected inode 1180656, moving to lost+found
disconnected inode 1180657, moving to lost+found
disconnected inode 1180658, moving to lost+found
disconnected inode 1180659, moving to lost+found
disconnected inode 1180660, moving to lost+found
disconnected inode 1180661, moving to lost+found
disconnected inode 1180662, moving to lost+found
disconnected inode 1180663, moving to lost+found
disconnected inode 1180664, moving to lost+found
disconnected inode 1180665, moving to lost+found
disconnected inode 1180666, moving to lost+found
disconnected inode 1180667, moving to lost+found
disconnected inode 1180668, moving to lost+found
disconnected inode 1180672, moving to lost+found
disconnected inode 1180674, moving to lost+found
disconnected inode 1180675, moving to lost+found
disconnected inode 1180676, moving to lost+found
disconnected inode 1180677, moving to lost+found
disconnected inode 1180678, moving to lost+found
disconnected inode 1180680, moving to lost+found
disconnected inode 1180682, moving to lost+found
disconnected inode 1180683, moving to lost+found
disconnected inode 1180684, moving to lost+found
disconnected inode 1180685, moving to lost+found
disconnected inode 1180687, moving to lost+found
disconnected inode 1180688, moving to lost+found
disconnected inode 1180689, moving to lost+found
disconnected inode 1180690, moving to lost+found
disconnected inode 1180691, moving to lost+found
disconnected inode 1180692, moving to lost+found
disconnected inode 1180693, moving to lost+found
disconnected inode 1180694, moving to lost+found
disconnected inode 1180695, moving to lost+found
disconnected inode 1180696, moving to lost+found
disconnected inode 1180697, moving to lost+found
disconnected inode 1180698, moving to lost+found
disconnected inode 1180699, moving to lost+found
disconnected inode 1180700, moving to lost+found
disconnected inode 1180701, moving to lost+found
disconnected inode 1180702, moving to lost+found
disconnected inode 1180703, moving to lost+found
disconnected inode 1180736, moving to lost+found
disconnected inode 1180737, moving to lost+found
disconnected inode 1180738, moving to lost+found
disconnected inode 1180739, moving to lost+found
disconnected inode 1180740, moving to lost+found
disconnected inode 1180741, moving to lost+found
disconnected inode 1180742, moving to lost+found
disconnected inode 1180743, moving to lost+found
disconnected inode 1180744, moving to lost+found
disconnected inode 1180745, moving to lost+found
disconnected inode 1180746, moving to lost+found
disconnected inode 1180749, moving to lost+found
disconnected inode 1180750, moving to lost+found
disconnected inode 1180751, moving to lost+found
disconnected inode 1180752, moving to lost+found
disconnected inode 1180754, moving to lost+found
disconnected inode 1180755, moving to lost+found
disconnected inode 1180756, moving to lost+found
disconnected inode 1180757, moving to lost+found
disconnected inode 1180758, moving to lost+found
disconnected inode 1180759, moving to lost+found
disconnected inode 1180760, moving to lost+found
disconnected inode 1180761, moving to lost+found
disconnected inode 1180762, moving to lost+found
disconnected inode 1180763, moving to lost+found
disconnected inode 1180764, moving to lost+found
disconnected inode 1180765, moving to lost+found
disconnected inode 1180766, moving to lost+found
disconnected inode 1180767, moving to lost+found
disconnected inode 1180768, moving to lost+found
disconnected inode 1180769, moving to lost+found
disconnected inode 1180770, moving to lost+found
disconnected inode 1180771, moving to lost+found
disconnected inode 1180772, moving to lost+found
disconnected inode 1180773, moving to lost+found
disconnected inode 1180774, moving to lost+found
disconnected inode 1180775, moving to lost+found
disconnected inode 1180776, moving to lost+found
disconnected inode 1180777, moving to lost+found
disconnected inode 1180778, moving to lost+found
disconnected inode 1180779, moving to lost+found
disconnected inode 1180780, moving to lost+found
disconnected inode 1180781, moving to lost+found
disconnected inode 1180783, moving to lost+found
disconnected inode 1180784, moving to lost+found
disconnected inode 1180785, moving to lost+found
disconnected inode 1180786, moving to lost+found
disconnected inode 1180787, moving to lost+found
disconnected inode 1180788, moving to lost+found
disconnected inode 1180789, moving to lost+found
disconnected inode 1180790, moving to lost+found
disconnected inode 1180791, moving to lost+found
disconnected inode 1180792, moving to lost+found
disconnected inode 1180793, moving to lost+found
disconnected inode 1180794, moving to lost+found
disconnected inode 1180795, moving to lost+found
disconnected inode 1180796, moving to lost+found
disconnected inode 1180797, moving to lost+found
disconnected inode 1180799, moving to lost+found
disconnected inode 1180864, moving to lost+found
disconnected inode 1180865, moving to lost+found
disconnected inode 1180866, moving to lost+found
disconnected inode 1180867, moving to lost+found
disconnected inode 1180868, moving to lost+found
disconnected inode 1180869, moving to lost+found
disconnected inode 1180870, moving to lost+found
disconnected inode 1180871, moving to lost+found
disconnected inode 1180872, moving to lost+found
disconnected inode 1180873, moving to lost+found
disconnected inode 1180874, moving to lost+found
disconnected inode 1180875, moving to lost+found
disconnected inode 1180876, moving to lost+found
disconnected inode 1180877, moving to lost+found
disconnected inode 1180879, moving to lost+found
disconnected inode 1180880, moving to lost+found
disconnected inode 1180881, moving to lost+found
disconnected inode 1180882, moving to lost+found
disconnected inode 1180883, moving to lost+found
disconnected inode 1180885, moving to lost+found
disconnected inode 1180886, moving to lost+found
disconnected inode 1180887, moving to lost+found
disconnected inode 1180888, moving to lost+found
disconnected inode 1180889, moving to lost+found
disconnected inode 1180890, moving to lost+found
disconnected inode 1180891, moving to lost+found
disconnected inode 1180892, moving to lost+found
disconnected inode 1180893, moving to lost+found
disconnected inode 1180894, moving to lost+found
disconnected inode 1180895, moving to lost+found
disconnected inode 1180897, moving to lost+found
disconnected inode 1180898, moving to lost+found
disconnected inode 1180899, moving to lost+found
disconnected inode 1180901, moving to lost+found
disconnected inode 1180902, moving to lost+found
disconnected inode 1180903, moving to lost+found
disconnected inode 1180904, moving to lost+found
disconnected inode 1180905, moving to lost+found
disconnected inode 1180906, moving to lost+found
disconnected inode 1180912, moving to lost+found
disconnected inode 1180913, moving to lost+found
disconnected inode 1180915, moving to lost+found
disconnected inode 1180916, moving to lost+found
disconnected inode 1180918, moving to lost+found
disconnected inode 1180919, moving to lost+found
disconnected inode 1180920, moving to lost+found
disconnected inode 1180921, moving to lost+found
disconnected inode 1180922, moving to lost+found
disconnected inode 1180923, moving to lost+found
disconnected inode 1180924, moving to lost+found
disconnected inode 1180925, moving to lost+found
disconnected inode 1180927, moving to lost+found
disconnected inode 1180992, moving to lost+found
disconnected inode 1180993, moving to lost+found
disconnected inode 1180994, moving to lost+found
disconnected inode 1180995, moving to lost+found
disconnected inode 1180997, moving to lost+found
disconnected inode 1180998, moving to lost+found
disconnected inode 1181002, moving to lost+found
disconnected inode 1181003, moving to lost+found
disconnected inode 1181005, moving to lost+found
disconnected inode 1181006, moving to lost+found
disconnected inode 1181007, moving to lost+found
disconnected inode 1181008, moving to lost+found
disconnected inode 1181009, moving to lost+found
disconnected inode 1181010, moving to lost+found
disconnected inode 1181011, moving to lost+found
disconnected inode 1181012, moving to lost+found
disconnected inode 1181013, moving to lost+found
disconnected inode 1181014, moving to lost+found
disconnected inode 1181015, moving to lost+found
disconnected inode 1181016, moving to lost+found
disconnected inode 1181017, moving to lost+found
disconnected inode 1181018, moving to lost+found
disconnected inode 1181019, moving to lost+found
disconnected inode 1181020, moving to lost+found
disconnected inode 1181021, moving to lost+found
disconnected inode 1181022, moving to lost+found
disconnected inode 1181023, moving to lost+found
disconnected inode 1181024, moving to lost+found
disconnected inode 1181025, moving to lost+found
disconnected inode 1181026, moving to lost+found
disconnected inode 1181027, moving to lost+found
disconnected inode 1181028, moving to lost+found
disconnected inode 1181029, moving to lost+found
disconnected inode 1181030, moving to lost+found
disconnected inode 1181031, moving to lost+found
disconnected inode 1181032, moving to lost+found
disconnected inode 1181033, moving to lost+found
disconnected inode 1181034, moving to lost+found
disconnected inode 1181035, moving to lost+found
disconnected inode 1181036, moving to lost+found
disconnected inode 1181037, moving to lost+found
disconnected inode 1181038, moving to lost+found
disconnected inode 1181039, moving to lost+found
disconnected inode 1181040, moving to lost+found
disconnected inode 1181041, moving to lost+found
disconnected inode 1181042, moving to lost+found
disconnected inode 1181043, moving to lost+found
disconnected inode 1181046, moving to lost+found
disconnected inode 1181047, moving to lost+found
disconnected inode 1181048, moving to lost+found
disconnected inode 1181049, moving to lost+found
disconnected inode 1181050, moving to lost+found
disconnected inode 1181051, moving to lost+found
disconnected inode 1181052, moving to lost+found
disconnected inode 1181053, moving to lost+found
disconnected inode 1181054, moving to lost+found
disconnected inode 1181055, moving to lost+found
disconnected inode 1181088, moving to lost+found
disconnected inode 1181089, moving to lost+found
disconnected inode 1181090, moving to lost+found
disconnected inode 1181091, moving to lost+found
disconnected inode 1181092, moving to lost+found
disconnected inode 1181093, moving to lost+found
disconnected inode 1181094, moving to lost+found
disconnected inode 1181095, moving to lost+found
disconnected inode 1181096, moving to lost+found
disconnected inode 1181097, moving to lost+found
disconnected inode 1181098, moving to lost+found
disconnected inode 1181099, moving to lost+found
disconnected inode 1181100, moving to lost+found
disconnected inode 1181101, moving to lost+found
disconnected inode 1181102, moving to lost+found
disconnected inode 1181103, moving to lost+found
disconnected inode 1181104, moving to lost+found
disconnected inode 1181105, moving to lost+found
disconnected inode 1181106, moving to lost+found
disconnected inode 1181107, moving to lost+found
disconnected inode 1181108, moving to lost+found
disconnected inode 1181109, moving to lost+found
disconnected inode 1181114, moving to lost+found
disconnected inode 1181116, moving to lost+found
disconnected inode 1181117, moving to lost+found
disconnected inode 1181119, moving to lost+found
disconnected inode 1181120, moving to lost+found
disconnected inode 1181122, moving to lost+found
disconnected inode 1181123, moving to lost+found
disconnected inode 1181124, moving to lost+found
disconnected inode 1181125, moving to lost+found
disconnected inode 1181126, moving to lost+found
disconnected inode 1181127, moving to lost+found
disconnected inode 1181128, moving to lost+found
disconnected inode 1181129, moving to lost+found
disconnected inode 1181131, moving to lost+found
disconnected inode 1181132, moving to lost+found
disconnected inode 1181133, moving to lost+found
disconnected inode 1181134, moving to lost+found
disconnected inode 1181135, moving to lost+found
disconnected inode 1181136, moving to lost+found
disconnected inode 1181137, moving to lost+found
disconnected inode 1181138, moving to lost+found
disconnected inode 1181139, moving to lost+found
disconnected inode 1181140, moving to lost+found
disconnected inode 1181141, moving to lost+found
disconnected inode 1181142, moving to lost+found
disconnected inode 1181143, moving to lost+found
disconnected inode 1181144, moving to lost+found
disconnected inode 1181146, moving to lost+found
disconnected inode 1181150, moving to lost+found
disconnected inode 1181151, moving to lost+found
disconnected inode 1181216, moving to lost+found
disconnected inode 1181218, moving to lost+found
disconnected inode 1181219, moving to lost+found
disconnected inode 1181222, moving to lost+found
disconnected inode 1181223, moving to lost+found
disconnected inode 1181224, moving to lost+found
disconnected inode 1181225, moving to lost+found
disconnected inode 1181226, moving to lost+found
disconnected inode 1181227, moving to lost+found
disconnected inode 1181228, moving to lost+found
disconnected inode 1181229, moving to lost+found
disconnected inode 1181230, moving to lost+found
disconnected inode 1181231, moving to lost+found
disconnected inode 1181232, moving to lost+found
disconnected inode 1181233, moving to lost+found
disconnected inode 1181235, moving to lost+found
disconnected inode 1181237, moving to lost+found
disconnected inode 1181239, moving to lost+found
disconnected inode 1181241, moving to lost+found
disconnected inode 1181243, moving to lost+found
disconnected inode 1181244, moving to lost+found
disconnected inode 1181245, moving to lost+found
disconnected inode 1181246, moving to lost+found
disconnected inode 1181247, moving to lost+found
disconnected inode 1181248, moving to lost+found
disconnected inode 1181249, moving to lost+found
disconnected inode 1181250, moving to lost+found
disconnected inode 1181251, moving to lost+found
disconnected inode 1181252, moving to lost+found
disconnected inode 1181253, moving to lost+found
disconnected inode 1181254, moving to lost+found
disconnected inode 1181255, moving to lost+found
disconnected inode 1181256, moving to lost+found
disconnected inode 1181257, moving to lost+found
disconnected inode 1181258, moving to lost+found
disconnected inode 1181259, moving to lost+found
disconnected inode 1181260, moving to lost+found
disconnected inode 1181261, moving to lost+found
disconnected inode 1181262, moving to lost+found
disconnected inode 1181263, moving to lost+found
disconnected inode 1181264, moving to lost+found
disconnected inode 1181265, moving to lost+found
disconnected inode 1181266, moving to lost+found
disconnected inode 1181267, moving to lost+found
disconnected inode 1181268, moving to lost+found
disconnected inode 1181269, moving to lost+found
disconnected inode 1181270, moving to lost+found
disconnected inode 1181271, moving to lost+found
disconnected inode 1181272, moving to lost+found
disconnected inode 1181273, moving to lost+found
disconnected inode 1181274, moving to lost+found
disconnected inode 1181275, moving to lost+found
disconnected inode 1181276, moving to lost+found
disconnected inode 1181277, moving to lost+found
disconnected inode 1181278, moving to lost+found
disconnected inode 1181279, moving to lost+found
disconnected inode 1181440, moving to lost+found
disconnected inode 1181441, moving to lost+found
disconnected inode 1181442, moving to lost+found
disconnected inode 1181443, moving to lost+found
disconnected inode 1181444, moving to lost+found
disconnected inode 1181445, moving to lost+found
disconnected inode 1181446, moving to lost+found
disconnected inode 1181447, moving to lost+found
disconnected inode 1181448, moving to lost+found
disconnected inode 1181449, moving to lost+found
disconnected inode 1181450, moving to lost+found
disconnected inode 1181452, moving to lost+found
disconnected inode 1181453, moving to lost+found
disconnected inode 1181454, moving to lost+found
disconnected inode 1181455, moving to lost+found
disconnected inode 1181456, moving to lost+found
disconnected inode 1181457, moving to lost+found
disconnected inode 1181460, moving to lost+found
disconnected inode 1181461, moving to lost+found
disconnected inode 1181462, moving to lost+found
disconnected inode 1181463, moving to lost+found
disconnected inode 1181464, moving to lost+found
disconnected inode 1181465, moving to lost+found
disconnected inode 1181466, moving to lost+found
disconnected inode 1181467, moving to lost+found
disconnected inode 1181468, moving to lost+found
disconnected inode 1181469, moving to lost+found
disconnected inode 1181470, moving to lost+found
disconnected inode 1181471, moving to lost+found
disconnected inode 1181472, moving to lost+found
disconnected inode 1181473, moving to lost+found
disconnected inode 1181474, moving to lost+found
disconnected inode 1181475, moving to lost+found
disconnected inode 1181476, moving to lost+found
disconnected inode 1181477, moving to lost+found
disconnected inode 1181478, moving to lost+found
disconnected inode 1181479, moving to lost+found
disconnected inode 1181480, moving to lost+found
disconnected inode 1181481, moving to lost+found
disconnected inode 1181482, moving to lost+found
disconnected inode 1181483, moving to lost+found
disconnected inode 1182908, moving to lost+found
disconnected inode 1182911, moving to lost+found
disconnected inode 1182952, moving to lost+found
disconnected inode 1182953, moving to lost+found
disconnected inode 1182954, moving to lost+found
disconnected inode 1182955, moving to lost+found
disconnected inode 1182956, moving to lost+found
disconnected inode 1182957, moving to lost+found
disconnected inode 1182958, moving to lost+found
disconnected inode 1182959, moving to lost+found
disconnected inode 1182962, moving to lost+found
disconnected inode 1182963, moving to lost+found
disconnected inode 1182964, moving to lost+found
disconnected inode 1182965, moving to lost+found
disconnected inode 1182966, moving to lost+found
disconnected inode 1182967, moving to lost+found
disconnected inode 1182968, moving to lost+found
disconnected inode 1182969, moving to lost+found
disconnected inode 1182970, moving to lost+found
disconnected inode 1182971, moving to lost+found
disconnected inode 1182972, moving to lost+found
disconnected inode 1182973, moving to lost+found
disconnected inode 1182974, moving to lost+found
disconnected inode 1182975, moving to lost+found
disconnected inode 1183043, moving to lost+found
disconnected inode 1183044, moving to lost+found
disconnected inode 1183052, moving to lost+found
disconnected inode 1183062, moving to lost+found
disconnected inode 1183076, moving to lost+found
disconnected inode 1183095, moving to lost+found
disconnected inode 1183096, moving to lost+found
disconnected inode 1183098, moving to lost+found
disconnected inode 1183099, moving to lost+found
disconnected inode 1183101, moving to lost+found
disconnected inode 1183234, moving to lost+found
disconnected inode 1183241, moving to lost+found
disconnected inode 1183259, moving to lost+found
disconnected inode 1183260, moving to lost+found
disconnected inode 1183277, moving to lost+found
disconnected inode 1184292, moving to lost+found
disconnected inode 1184293, moving to lost+found
disconnected inode 1184295, moving to lost+found
disconnected inode 1184296, moving to lost+found
disconnected inode 1184297, moving to lost+found
disconnected inode 1184298, moving to lost+found
disconnected inode 1184300, moving to lost+found
disconnected inode 1184301, moving to lost+found
disconnected inode 1184302, moving to lost+found
disconnected inode 1184303, moving to lost+found
disconnected inode 1184304, moving to lost+found
disconnected inode 1184305, moving to lost+found
disconnected inode 1184306, moving to lost+found
disconnected inode 1184307, moving to lost+found
disconnected inode 1184308, moving to lost+found
disconnected inode 1184309, moving to lost+found
disconnected inode 1184310, moving to lost+found
disconnected inode 1184311, moving to lost+found
disconnected inode 1184312, moving to lost+found
disconnected inode 1184316, moving to lost+found
disconnected inode 1184318, moving to lost+found
disconnected inode 1184319, moving to lost+found
disconnected inode 1184385, moving to lost+found
disconnected inode 1184386, moving to lost+found
disconnected inode 1184387, moving to lost+found
disconnected inode 1184388, moving to lost+found
disconnected inode 1184390, moving to lost+found
disconnected inode 1184391, moving to lost+found
disconnected inode 1184392, moving to lost+found
disconnected inode 1184393, moving to lost+found
disconnected inode 1184394, moving to lost+found
disconnected inode 1184395, moving to lost+found
disconnected inode 1184396, moving to lost+found
disconnected inode 1184397, moving to lost+found
disconnected inode 1184398, moving to lost+found
disconnected inode 1184399, moving to lost+found
disconnected inode 1184400, moving to lost+found
disconnected inode 1184401, moving to lost+found
disconnected inode 1184402, moving to lost+found
disconnected inode 1184406, moving to lost+found
disconnected inode 1184408, moving to lost+found
disconnected inode 1184409, moving to lost+found
disconnected inode 1184411, moving to lost+found
disconnected inode 1184412, moving to lost+found
disconnected inode 1184413, moving to lost+found
disconnected inode 1184414, moving to lost+found
disconnected inode 1184415, moving to lost+found
disconnected inode 1184481, moving to lost+found
disconnected inode 1184482, moving to lost+found
disconnected inode 1184483, moving to lost+found
disconnected inode 1184484, moving to lost+found
disconnected inode 1184485, moving to lost+found
disconnected inode 1184486, moving to lost+found
disconnected inode 1184487, moving to lost+found
disconnected inode 1184488, moving to lost+found
disconnected inode 1184489, moving to lost+found
disconnected inode 1184490, moving to lost+found
disconnected inode 1184491, moving to lost+found
disconnected inode 1184492, moving to lost+found
disconnected inode 1184502, moving to lost+found
disconnected inode 1184503, moving to lost+found
disconnected inode 1184504, moving to lost+found
disconnected inode 1184507, moving to lost+found
disconnected inode 1184509, moving to lost+found
disconnected inode 1184510, moving to lost+found
disconnected inode 1184511, moving to lost+found
disconnected inode 1184576, moving to lost+found
disconnected inode 1184577, moving to lost+found
disconnected inode 1184578, moving to lost+found
disconnected inode 1184579, moving to lost+found
disconnected inode 1184580, moving to lost+found
disconnected inode 1184581, moving to lost+found
disconnected inode 1184582, moving to lost+found
disconnected inode 1184586, moving to lost+found
disconnected inode 1184588, moving to lost+found
disconnected inode 1184589, moving to lost+found
disconnected inode 1184591, moving to lost+found
disconnected inode 1184592, moving to lost+found
disconnected inode 1184593, moving to lost+found
disconnected inode 1184594, moving to lost+found
disconnected inode 1184595, moving to lost+found
disconnected inode 1184596, moving to lost+found
disconnected inode 1184597, moving to lost+found
disconnected inode 1184599, moving to lost+found
disconnected inode 1184600, moving to lost+found
disconnected inode 1184601, moving to lost+found
disconnected inode 1184602, moving to lost+found
disconnected inode 1184603, moving to lost+found
disconnected inode 1184604, moving to lost+found
disconnected inode 1184605, moving to lost+found
disconnected inode 1184606, moving to lost+found
disconnected inode 1184607, moving to lost+found
disconnected inode 1184691, moving to lost+found
disconnected inode 1184692, moving to lost+found
disconnected inode 1184695, moving to lost+found
disconnected inode 1184696, moving to lost+found
disconnected inode 1184697, moving to lost+found
disconnected inode 1184698, moving to lost+found
disconnected inode 1184699, moving to lost+found
disconnected inode 1184700, moving to lost+found
disconnected inode 1184701, moving to lost+found
disconnected inode 1184704, moving to lost+found
disconnected inode 1184705, moving to lost+found
disconnected inode 1184706, moving to lost+found
disconnected inode 1184707, moving to lost+found
disconnected inode 1184708, moving to lost+found
disconnected inode 1184709, moving to lost+found
disconnected inode 1184711, moving to lost+found
disconnected inode 1184712, moving to lost+found
disconnected inode 1184713, moving to lost+found
disconnected inode 1184714, moving to lost+found
disconnected inode 1184715, moving to lost+found
disconnected inode 1184716, moving to lost+found
disconnected inode 1184717, moving to lost+found
disconnected inode 1184718, moving to lost+found
disconnected inode 1184719, moving to lost+found
disconnected inode 1184720, moving to lost+found
disconnected inode 1184721, moving to lost+found
disconnected inode 1184722, moving to lost+found
disconnected inode 1184723, moving to lost+found
disconnected inode 1184724, moving to lost+found
disconnected inode 1184725, moving to lost+found
disconnected inode 1184726, moving to lost+found
disconnected inode 1184727, moving to lost+found
disconnected inode 1184728, moving to lost+found
disconnected inode 1184729, moving to lost+found
disconnected inode 1184730, moving to lost+found
disconnected inode 1184731, moving to lost+found
disconnected inode 1184732, moving to lost+found
disconnected inode 1184733, moving to lost+found
disconnected inode 1184734, moving to lost+found
disconnected inode 1184735, moving to lost+found
disconnected inode 1184737, moving to lost+found
disconnected inode 1184738, moving to lost+found
disconnected inode 1184739, moving to lost+found
disconnected inode 1184740, moving to lost+found
disconnected inode 1184741, moving to lost+found
disconnected inode 1184742, moving to lost+found
disconnected inode 1184743, moving to lost+found
disconnected inode 1184744, moving to lost+found
disconnected inode 1184745, moving to lost+found
disconnected inode 1184746, moving to lost+found
disconnected inode 1184747, moving to lost+found
disconnected inode 1184748, moving to lost+found
disconnected inode 1184749, moving to lost+found
disconnected inode 1184750, moving to lost+found
disconnected inode 1184751, moving to lost+found
disconnected inode 1184752, moving to lost+found
disconnected inode 1184753, moving to lost+found
disconnected inode 1184754, moving to lost+found
disconnected inode 1184755, moving to lost+found
disconnected inode 1184757, moving to lost+found
disconnected inode 1184759, moving to lost+found
disconnected inode 1184760, moving to lost+found
disconnected inode 1184761, moving to lost+found
disconnected inode 1184762, moving to lost+found
disconnected inode 1184763, moving to lost+found
disconnected inode 1184764, moving to lost+found
disconnected inode 1184765, moving to lost+found
disconnected inode 1184767, moving to lost+found
disconnected inode 1184832, moving to lost+found
disconnected inode 1184834, moving to lost+found
disconnected inode 1184835, moving to lost+found
disconnected inode 1184836, moving to lost+found
disconnected inode 1184837, moving to lost+found
disconnected inode 1184838, moving to lost+found
disconnected inode 1184839, moving to lost+found
disconnected inode 1184840, moving to lost+found
disconnected inode 1184842, moving to lost+found
disconnected inode 1184843, moving to lost+found
disconnected inode 1184844, moving to lost+found
disconnected inode 1184845, moving to lost+found
disconnected inode 1184846, moving to lost+found
disconnected inode 1184847, moving to lost+found
disconnected inode 1184848, moving to lost+found
disconnected inode 1184849, moving to lost+found
disconnected inode 1184850, moving to lost+found
disconnected inode 1184851, moving to lost+found
disconnected inode 1184852, moving to lost+found
disconnected inode 1184854, moving to lost+found
disconnected inode 1184855, moving to lost+found
disconnected inode 1184856, moving to lost+found
disconnected inode 1184857, moving to lost+found
disconnected inode 1184858, moving to lost+found
disconnected inode 1184859, moving to lost+found
disconnected inode 1184860, moving to lost+found
disconnected inode 1184861, moving to lost+found
disconnected inode 1184862, moving to lost+found
disconnected inode 1184863, moving to lost+found
disconnected inode 1184864, moving to lost+found
disconnected inode 1184867, moving to lost+found
disconnected inode 1184870, moving to lost+found
disconnected inode 1184871, moving to lost+found
disconnected inode 1184874, moving to lost+found
disconnected inode 1184875, moving to lost+found
disconnected inode 1184876, moving to lost+found
disconnected inode 1184877, moving to lost+found
disconnected inode 1184878, moving to lost+found
disconnected inode 1184879, moving to lost+found
disconnected inode 1184880, moving to lost+found
disconnected inode 1184881, moving to lost+found
disconnected inode 1184882, moving to lost+found
disconnected inode 1184883, moving to lost+found
disconnected inode 1184884, moving to lost+found
disconnected inode 1184885, moving to lost+found
disconnected inode 1184886, moving to lost+found
disconnected inode 1184887, moving to lost+found
disconnected inode 1184888, moving to lost+found
disconnected inode 1184889, moving to lost+found
disconnected inode 1184893, moving to lost+found
disconnected inode 1184894, moving to lost+found
disconnected inode 1184963, moving to lost+found
disconnected inode 1184964, moving to lost+found
disconnected inode 1184965, moving to lost+found
disconnected inode 1184966, moving to lost+found
disconnected inode 1184967, moving to lost+found
disconnected inode 1184968, moving to lost+found
disconnected inode 1184969, moving to lost+found
disconnected inode 1184971, moving to lost+found
disconnected inode 1184972, moving to lost+found
disconnected inode 1184973, moving to lost+found
disconnected inode 1184974, moving to lost+found
disconnected inode 1184975, moving to lost+found
disconnected inode 1184976, moving to lost+found
disconnected inode 1184977, moving to lost+found
disconnected inode 1184978, moving to lost+found
disconnected inode 1184979, moving to lost+found
disconnected inode 1184980, moving to lost+found
disconnected inode 1184981, moving to lost+found
disconnected inode 1184982, moving to lost+found
disconnected inode 1184983, moving to lost+found
disconnected inode 1184987, moving to lost+found
disconnected inode 1184989, moving to lost+found
disconnected inode 1184990, moving to lost+found
disconnected inode 1185056, moving to lost+found
disconnected inode 1185057, moving to lost+found
disconnected inode 1185058, moving to lost+found
disconnected inode 1185059, moving to lost+found
disconnected inode 1185061, moving to lost+found
disconnected inode 1185062, moving to lost+found
disconnected inode 1185063, moving to lost+found
disconnected inode 1185070, moving to lost+found
disconnected inode 1185071, moving to lost+found
disconnected inode 1185072, moving to lost+found
disconnected inode 1185073, moving to lost+found
disconnected inode 1185077, moving to lost+found
disconnected inode 1185079, moving to lost+found
disconnected inode 1185080, moving to lost+found
disconnected inode 1185082, moving to lost+found
disconnected inode 1185083, moving to lost+found
disconnected inode 1185084, moving to lost+found
disconnected inode 1185085, moving to lost+found
disconnected inode 1185086, moving to lost+found
disconnected inode 1185087, moving to lost+found
disconnected inode 1185153, moving to lost+found
disconnected inode 1185154, moving to lost+found
disconnected inode 1185155, moving to lost+found
disconnected inode 1185156, moving to lost+found
disconnected inode 1185157, moving to lost+found
disconnected inode 1185158, moving to lost+found
disconnected inode 1185159, moving to lost+found
disconnected inode 1185160, moving to lost+found
disconnected inode 1185161, moving to lost+found
disconnected inode 1185162, moving to lost+found
disconnected inode 1185163, moving to lost+found
disconnected inode 1185167, moving to lost+found
disconnected inode 1185169, moving to lost+found
disconnected inode 1185170, moving to lost+found
disconnected inode 1185172, moving to lost+found
disconnected inode 1185173, moving to lost+found
disconnected inode 1185174, moving to lost+found
disconnected inode 1185175, moving to lost+found
disconnected inode 1185176, moving to lost+found
disconnected inode 1185177, moving to lost+found
disconnected inode 1185178, moving to lost+found
disconnected inode 1185180, moving to lost+found
disconnected inode 1185181, moving to lost+found
disconnected inode 1185182, moving to lost+found
disconnected inode 1185183, moving to lost+found
disconnected inode 1185248, moving to lost+found
disconnected inode 1185249, moving to lost+found
disconnected inode 1185250, moving to lost+found
disconnected inode 1185251, moving to lost+found
disconnected inode 1185252, moving to lost+found
disconnected inode 1185253, moving to lost+found
disconnected inode 1185257, moving to lost+found
disconnected inode 1185259, moving to lost+found
disconnected inode 1185260, moving to lost+found
disconnected inode 1185262, moving to lost+found
disconnected inode 1185263, moving to lost+found
disconnected inode 1185264, moving to lost+found
disconnected inode 1185265, moving to lost+found
disconnected inode 1185266, moving to lost+found
disconnected inode 1185267, moving to lost+found
disconnected inode 1185268, moving to lost+found
disconnected inode 1185270, moving to lost+found
disconnected inode 1185271, moving to lost+found
disconnected inode 1185272, moving to lost+found
disconnected inode 1185273, moving to lost+found
disconnected inode 1185274, moving to lost+found
disconnected inode 1185275, moving to lost+found
disconnected inode 1185276, moving to lost+found
disconnected inode 1185277, moving to lost+found
disconnected inode 1185278, moving to lost+found
disconnected inode 1185279, moving to lost+found
disconnected inode 1185363, moving to lost+found
disconnected inode 1185364, moving to lost+found
disconnected inode 1185365, moving to lost+found
disconnected inode 1185366, moving to lost+found
disconnected inode 1185367, moving to lost+found
disconnected inode 1185368, moving to lost+found
disconnected inode 1185369, moving to lost+found
disconnected inode 1185370, moving to lost+found
disconnected inode 1185442, moving to lost+found
disconnected inode 1185443, moving to lost+found
disconnected inode 1185450, moving to lost+found
disconnected inode 1185451, moving to lost+found
disconnected inode 1185453, moving to lost+found
disconnected inode 1185454, moving to lost+found
disconnected inode 1185455, moving to lost+found
disconnected inode 1185456, moving to lost+found
disconnected inode 1185457, moving to lost+found
disconnected inode 1185458, moving to lost+found
disconnected inode 1185459, moving to lost+found
disconnected inode 1185460, moving to lost+found
disconnected inode 1185461, moving to lost+found
disconnected inode 1185462, moving to lost+found
disconnected inode 1185466, moving to lost+found
disconnected inode 1185468, moving to lost+found
disconnected inode 1185469, moving to lost+found
disconnected inode 1185471, moving to lost+found
disconnected inode 1185536, moving to lost+found
disconnected inode 1185537, moving to lost+found
disconnected inode 1185538, moving to lost+found
disconnected inode 1185540, moving to lost+found
disconnected inode 1185541, moving to lost+found
disconnected inode 1185542, moving to lost+found
disconnected inode 1185543, moving to lost+found
disconnected inode 1185544, moving to lost+found
disconnected inode 1185545, moving to lost+found
disconnected inode 1185546, moving to lost+found
disconnected inode 1185547, moving to lost+found
disconnected inode 1185548, moving to lost+found
disconnected inode 1185549, moving to lost+found
disconnected inode 1185550, moving to lost+found
disconnected inode 1195171, moving to lost+found
disconnected inode 1195172, moving to lost+found
disconnected inode 1216319, moving to lost+found
disconnected inode 1216422, moving to lost+found
disconnected inode 2365570, moving to lost+found
disconnected inode 2365572, moving to lost+found
disconnected inode 2365592, moving to lost+found
disconnected inode 2365628, moving to lost+found
disconnected inode 2365634, moving to lost+found
disconnected inode 2365641, moving to lost+found
disconnected inode 2365643, moving to lost+found
disconnected inode 2365689, moving to lost+found
disconnected inode 2365694, moving to lost+found
disconnected inode 2365696, moving to lost+found
disconnected inode 2365704, moving to lost+found
disconnected inode 2365765, moving to lost+found
disconnected inode 2365793, moving to lost+found
disconnected inode 2365797, moving to lost+found
disconnected inode 2365809, moving to lost+found
disconnected inode 2365814, moving to lost+found
disconnected inode 2365825, moving to lost+found
disconnected inode 2365836, moving to lost+found
disconnected inode 2365864, moving to lost+found
disconnected inode 2365865, moving to lost+found
disconnected inode 2365874, moving to lost+found
disconnected inode 2365880, moving to lost+found
disconnected inode 2365881, moving to lost+found
disconnected inode 2365885, moving to lost+found
disconnected inode 2365888, moving to lost+found
disconnected inode 2365918, moving to lost+found
disconnected inode 2365930, moving to lost+found
disconnected inode 2365932, moving to lost+found
disconnected inode 2365959, moving to lost+found
disconnected inode 2365968, moving to lost+found
disconnected inode 2365972, moving to lost+found
disconnected inode 2365976, moving to lost+found
disconnected inode 2365987, moving to lost+found
disconnected inode 2365990, moving to lost+found
disconnected inode 2366011, moving to lost+found
disconnected inode 2366052, moving to lost+found
disconnected inode 2366075, moving to lost+found
disconnected inode 2366091, moving to lost+found
disconnected inode 2366115, moving to lost+found
disconnected inode 2366116, moving to lost+found
disconnected inode 2366119, moving to lost+found
disconnected inode 2366121, moving to lost+found
disconnected inode 2366127, moving to lost+found
disconnected inode 2366131, moving to lost+found
disconnected inode 2366214, moving to lost+found
disconnected inode 2366226, moving to lost+found
disconnected inode 2366232, moving to lost+found
disconnected inode 2366240, moving to lost+found
disconnected inode 2366242, moving to lost+found
disconnected inode 2366250, moving to lost+found
disconnected inode 2366256, moving to lost+found
disconnected inode 2366259, moving to lost+found
disconnected inode 2366261, moving to lost+found
disconnected inode 2366270, moving to lost+found
disconnected inode 2366280, moving to lost+found
disconnected inode 2366289, moving to lost+found
disconnected inode 2366358, moving to lost+found
disconnected inode 2366369, moving to lost+found
disconnected inode 2366370, moving to lost+found
disconnected inode 2366433, moving to lost+found
disconnected inode 2366440, moving to lost+found
disconnected inode 2366482, moving to lost+found
disconnected inode 2366484, moving to lost+found
disconnected inode 2366499, moving to lost+found
disconnected inode 2366510, moving to lost+found
disconnected inode 2366523, moving to lost+found
disconnected inode 2366541, moving to lost+found
disconnected inode 2366548, moving to lost+found
disconnected inode 2366549, moving to lost+found
disconnected inode 2366570, moving to lost+found
disconnected inode 2366574, moving to lost+found
disconnected inode 2366577, moving to lost+found
disconnected inode 2366583, moving to lost+found
disconnected inode 2366586, moving to lost+found
disconnected inode 2366588, moving to lost+found
disconnected inode 2366609, moving to lost+found
disconnected inode 2366614, moving to lost+found
disconnected inode 2366647, moving to lost+found
disconnected inode 2366663, moving to lost+found
disconnected inode 2366705, moving to lost+found
disconnected inode 2366709, moving to lost+found
disconnected inode 2366720, moving to lost+found
disconnected inode 2366723, moving to lost+found
disconnected inode 2366731, moving to lost+found
disconnected inode 2366735, moving to lost+found
disconnected inode 2366765, moving to lost+found
disconnected inode 2366816, moving to lost+found
disconnected inode 2366822, moving to lost+found
disconnected inode 2366826, moving to lost+found
disconnected inode 2366830, moving to lost+found
disconnected inode 2366833, moving to lost+found
disconnected inode 2366844, moving to lost+found
disconnected inode 2366874, moving to lost+found
disconnected inode 2366909, moving to lost+found
disconnected inode 2366949, moving to lost+found
disconnected inode 2366951, moving to lost+found
disconnected inode 2366963, moving to lost+found
disconnected inode 2366977, moving to lost+found
disconnected inode 2366986, moving to lost+found
disconnected inode 2366995, moving to lost+found
disconnected inode 2367006, moving to lost+found
disconnected inode 2367063, moving to lost+found
disconnected inode 2367065, moving to lost+found
disconnected inode 2367066, moving to lost+found
disconnected inode 2367078, moving to lost+found
disconnected inode 2367131, moving to lost+found
disconnected inode 2367170, moving to lost+found
disconnected inode 2367208, moving to lost+found
disconnected inode 2367211, moving to lost+found
disconnected inode 2367257, moving to lost+found
disconnected inode 2367318, moving to lost+found
disconnected inode 2367340, moving to lost+found
disconnected inode 2367345, moving to lost+found
disconnected inode 2367351, moving to lost+found
disconnected inode 2367355, moving to lost+found
disconnected inode 2367364, moving to lost+found
disconnected inode 2367373, moving to lost+found
disconnected inode 2367374, moving to lost+found
disconnected inode 2367379, moving to lost+found
disconnected inode 2367382, moving to lost+found
disconnected inode 2367386, moving to lost+found
disconnected inode 2367388, moving to lost+found
disconnected inode 2367393, moving to lost+found
disconnected inode 2367398, moving to lost+found
disconnected inode 2367400, moving to lost+found
disconnected inode 2367402, moving to lost+found
disconnected inode 2367407, moving to lost+found
disconnected inode 2367416, moving to lost+found
disconnected inode 2367496, moving to lost+found
disconnected inode 2367498, moving to lost+found
disconnected inode 2367499, moving to lost+found
disconnected inode 2367501, moving to lost+found
disconnected inode 2367502, moving to lost+found
disconnected inode 2367503, moving to lost+found
disconnected inode 2367522, moving to lost+found
disconnected inode 2367580, moving to lost+found
disconnected inode 2367649, moving to lost+found
disconnected inode 2367659, moving to lost+found
disconnected inode 2367673, moving to lost+found
disconnected inode 2367683, moving to lost+found
disconnected inode 2367688, moving to lost+found
disconnected inode 2367708, moving to lost+found
disconnected inode 2367719, moving to lost+found
disconnected inode 2367725, moving to lost+found
disconnected inode 2367733, moving to lost+found
disconnected inode 2367746, moving to lost+found
disconnected inode 2367747, moving to lost+found
disconnected inode 2367755, moving to lost+found
disconnected inode 2367761, moving to lost+found
disconnected inode 2367785, moving to lost+found
disconnected inode 2367790, moving to lost+found
disconnected inode 2367828, moving to lost+found
disconnected inode 2367829, moving to lost+found
disconnected inode 2367831, moving to lost+found
disconnected inode 2367833, moving to lost+found
disconnected inode 2367834, moving to lost+found
disconnected inode 2367855, moving to lost+found
disconnected inode 2367859, moving to lost+found
disconnected inode 2367862, moving to lost+found
disconnected inode 2367866, moving to lost+found
disconnected inode 2367869, moving to lost+found
disconnected inode 2367906, moving to lost+found
disconnected inode 2367912, moving to lost+found
disconnected inode 2367918, moving to lost+found
disconnected inode 2367923, moving to lost+found
disconnected inode 2367925, moving to lost+found
disconnected inode 2367944, moving to lost+found
disconnected inode 2367953, moving to lost+found
disconnected inode 2367960, moving to lost+found
disconnected inode 2367977, moving to lost+found
disconnected inode 2367984, moving to lost+found
disconnected inode 2368035, moving to lost+found
disconnected inode 2368057, moving to lost+found
disconnected inode 2368063, moving to lost+found
disconnected inode 2368072, moving to lost+found
disconnected inode 2368077, moving to lost+found
disconnected inode 2368082, moving to lost+found
disconnected inode 2368084, moving to lost+found
disconnected inode 2368087, moving to lost+found
disconnected inode 2368090, moving to lost+found
disconnected inode 2368091, moving to lost+found
disconnected inode 2368093, moving to lost+found
disconnected inode 2368106, moving to lost+found
disconnected inode 2368108, moving to lost+found
disconnected inode 2368112, moving to lost+found
disconnected inode 2368122, moving to lost+found
disconnected inode 2368152, moving to lost+found
disconnected inode 2368230, moving to lost+found
disconnected inode 2368262, moving to lost+found
disconnected inode 2368266, moving to lost+found
disconnected inode 2368285, moving to lost+found
disconnected inode 2368292, moving to lost+found
disconnected inode 2368324, moving to lost+found
disconnected inode 2368346, moving to lost+found
disconnected inode 2368347, moving to lost+found
disconnected inode 2368387, moving to lost+found
disconnected inode 2368395, moving to lost+found
disconnected inode 2368398, moving to lost+found
disconnected inode 2368402, moving to lost+found
disconnected inode 2368405, moving to lost+found
disconnected inode 2368407, moving to lost+found
disconnected inode 2368411, moving to lost+found
disconnected inode 2368421, moving to lost+found
disconnected inode 2368422, moving to lost+found
disconnected inode 2368429, moving to lost+found
disconnected inode 2368432, moving to lost+found
disconnected inode 2368434, moving to lost+found
disconnected inode 2368442, moving to lost+found
disconnected inode 2368459, moving to lost+found
disconnected inode 2368472, moving to lost+found
disconnected inode 2368486, moving to lost+found
disconnected inode 2368493, moving to lost+found
disconnected inode 2368514, moving to lost+found
disconnected inode 2368541, moving to lost+found
disconnected inode 2368546, moving to lost+found
disconnected inode 2368576, moving to lost+found
disconnected inode 2368580, moving to lost+found
disconnected inode 2368596, moving to lost+found
disconnected inode 2368602, moving to lost+found
disconnected inode 2368604, moving to lost+found
disconnected inode 2368609, moving to lost+found
disconnected inode 2368610, moving to lost+found
disconnected inode 2368614, moving to lost+found
disconnected inode 2368617, moving to lost+found
disconnected inode 2368623, moving to lost+found
disconnected inode 2368624, moving to lost+found
disconnected inode 2368625, moving to lost+found
disconnected inode 2368644, moving to lost+found
disconnected inode 2368659, moving to lost+found
disconnected inode 2368673, moving to lost+found
disconnected inode 2368676, moving to lost+found
disconnected inode 2368677, moving to lost+found
disconnected inode 2368694, moving to lost+found
disconnected inode 2368699, moving to lost+found
disconnected inode 2368722, moving to lost+found
disconnected inode 2368736, moving to lost+found
disconnected inode 2368740, moving to lost+found
disconnected inode 2368745, moving to lost+found
disconnected inode 2368763, moving to lost+found
disconnected inode 2368766, moving to lost+found
disconnected inode 2368782, moving to lost+found
disconnected inode 2368783, moving to lost+found
disconnected inode 2368789, moving to lost+found
disconnected inode 2368796, moving to lost+found
disconnected inode 2368801, moving to lost+found
disconnected inode 2368822, moving to lost+found
disconnected inode 2368825, moving to lost+found
disconnected inode 2368830, moving to lost+found
disconnected inode 2368834, moving to lost+found
disconnected inode 2368847, moving to lost+found
disconnected inode 2368853, moving to lost+found
disconnected inode 2368864, moving to lost+found
disconnected inode 2368878, moving to lost+found
disconnected inode 2368888, moving to lost+found
disconnected inode 2368890, moving to lost+found
disconnected inode 2368901, moving to lost+found
disconnected inode 2368905, moving to lost+found
disconnected inode 2368910, moving to lost+found
disconnected inode 2368933, moving to lost+found
disconnected inode 2368938, moving to lost+found
disconnected inode 2368940, moving to lost+found
disconnected inode 2368943, moving to lost+found
disconnected inode 2368944, moving to lost+found
disconnected inode 2368946, moving to lost+found
disconnected inode 2368953, moving to lost+found
disconnected inode 2368960, moving to lost+found
disconnected inode 2368961, moving to lost+found
disconnected inode 2368963, moving to lost+found
disconnected inode 2368972, moving to lost+found
disconnected inode 2368973, moving to lost+found
disconnected inode 2369002, moving to lost+found
disconnected inode 2369015, moving to lost+found
disconnected inode 2369056, moving to lost+found
disconnected inode 2369077, moving to lost+found
disconnected inode 2369085, moving to lost+found
disconnected inode 2369099, moving to lost+found
disconnected inode 2369104, moving to lost+found
disconnected inode 2369119, moving to lost+found
disconnected inode 2369123, moving to lost+found
disconnected inode 2369126, moving to lost+found
disconnected inode 2369141, moving to lost+found
disconnected inode 2369150, moving to lost+found
disconnected inode 2369156, moving to lost+found
disconnected inode 2369171, moving to lost+found
disconnected inode 2369172, moving to lost+found
disconnected inode 2369184, moving to lost+found
disconnected inode 2369189, moving to lost+found
disconnected inode 2369222, moving to lost+found
disconnected inode 2369227, moving to lost+found
disconnected inode 2369229, moving to lost+found
disconnected inode 2369233, moving to lost+found
disconnected inode 2369234, moving to lost+found
disconnected inode 2369236, moving to lost+found
disconnected inode 2369259, moving to lost+found
disconnected inode 2369264, moving to lost+found
disconnected inode 2369268, moving to lost+found
disconnected inode 2369269, moving to lost+found
disconnected inode 2369276, moving to lost+found
disconnected inode 2369286, moving to lost+found
disconnected inode 2369290, moving to lost+found
disconnected inode 2369327, moving to lost+found
disconnected inode 2369374, moving to lost+found
disconnected inode 2369380, moving to lost+found
disconnected inode 2369428, moving to lost+found
disconnected inode 2369441, moving to lost+found
disconnected inode 2369443, moving to lost+found
disconnected inode 2369448, moving to lost+found
disconnected inode 2369449, moving to lost+found
disconnected inode 2369451, moving to lost+found
disconnected inode 2369461, moving to lost+found
disconnected inode 2369466, moving to lost+found
disconnected inode 2369472, moving to lost+found
disconnected inode 2369482, moving to lost+found
disconnected inode 2369533, moving to lost+found
disconnected inode 2369537, moving to lost+found
disconnected inode 2369539, moving to lost+found
disconnected inode 2369542, moving to lost+found
disconnected inode 2369543, moving to lost+found
disconnected inode 2369548, moving to lost+found
disconnected inode 2369559, moving to lost+found
disconnected inode 2369565, moving to lost+found
disconnected inode 2369567, moving to lost+found
disconnected inode 2369570, moving to lost+found
disconnected inode 2369593, moving to lost+found
disconnected inode 2369597, moving to lost+found
disconnected inode 2369614, moving to lost+found
disconnected inode 2369618, moving to lost+found
disconnected inode 2369621, moving to lost+found
disconnected inode 2369640, moving to lost+found
disconnected inode 2369647, moving to lost+found
disconnected inode 2369671, moving to lost+found
disconnected inode 2369675, moving to lost+found
disconnected inode 2369677, moving to lost+found
disconnected inode 2369686, moving to lost+found
disconnected inode 2369691, moving to lost+found
disconnected inode 2369702, moving to lost+found
disconnected inode 2369704, moving to lost+found
disconnected inode 2369710, moving to lost+found
disconnected inode 2369738, moving to lost+found
disconnected inode 2369745, moving to lost+found
disconnected inode 2369767, moving to lost+found
disconnected inode 2369775, moving to lost+found
disconnected inode 2369776, moving to lost+found
disconnected inode 2369786, moving to lost+found
disconnected inode 2369793, moving to lost+found
disconnected inode 2369807, moving to lost+found
disconnected inode 2369814, moving to lost+found
disconnected inode 2369817, moving to lost+found
disconnected inode 2369838, moving to lost+found
disconnected inode 2369878, moving to lost+found
disconnected inode 2369889, moving to lost+found
disconnected inode 2369908, moving to lost+found
disconnected inode 2369940, moving to lost+found
disconnected inode 2369945, moving to lost+found
disconnected inode 2369950, moving to lost+found
disconnected inode 2369959, moving to lost+found
disconnected inode 2369969, moving to lost+found
disconnected inode 2369975, moving to lost+found
disconnected inode 2369996, moving to lost+found
disconnected inode 2369999, moving to lost+found
disconnected inode 2370005, moving to lost+found
disconnected inode 2370012, moving to lost+found
disconnected inode 2370052, moving to lost+found
disconnected inode 2370054, moving to lost+found
disconnected inode 2370057, moving to lost+found
disconnected inode 2370066, moving to lost+found
disconnected inode 2370075, moving to lost+found
disconnected inode 2370077, moving to lost+found
disconnected inode 2370117, moving to lost+found
disconnected inode 2370118, moving to lost+found
disconnected inode 2370129, moving to lost+found
disconnected inode 2370182, moving to lost+found
disconnected inode 2370185, moving to lost+found
disconnected inode 2370274, moving to lost+found
disconnected inode 2370277, moving to lost+found
disconnected inode 2370288, moving to lost+found
disconnected inode 2370293, moving to lost+found
disconnected inode 2370294, moving to lost+found
disconnected inode 2370295, moving to lost+found
disconnected inode 2370299, moving to lost+found
disconnected inode 2370309, moving to lost+found
disconnected inode 2370316, moving to lost+found
disconnected inode 2370324, moving to lost+found
disconnected inode 2373250, moving to lost+found
disconnected inode 2373271, moving to lost+found
disconnected inode 2373388, moving to lost+found
disconnected inode 2373389, moving to lost+found
disconnected inode 2373390, moving to lost+found
disconnected inode 2373391, moving to lost+found
disconnected inode 2373393, moving to lost+found
disconnected inode 2373394, moving to lost+found
disconnected inode 2373396, moving to lost+found
disconnected inode 2373398, moving to lost+found
disconnected inode 2373400, moving to lost+found
disconnected inode 2373403, moving to lost+found
disconnected inode 2373404, moving to lost+found
disconnected inode 2373405, moving to lost+found
disconnected inode 2373406, moving to lost+found
disconnected inode 2373407, moving to lost+found
disconnected inode 2373410, moving to lost+found
disconnected inode 2373411, moving to lost+found
disconnected inode 2373413, moving to lost+found
disconnected inode 2373415, moving to lost+found
disconnected inode 2373421, moving to lost+found
disconnected inode 2373422, moving to lost+found
disconnected inode 2373430, moving to lost+found
disconnected inode 2373431, moving to lost+found
disconnected inode 2373432, moving to lost+found
disconnected inode 2373433, moving to lost+found
disconnected inode 2373434, moving to lost+found
disconnected inode 2373435, moving to lost+found
disconnected inode 2373436, moving to lost+found
disconnected inode 2373438, moving to lost+found
disconnected inode 2373440, moving to lost+found
disconnected inode 2373441, moving to lost+found
disconnected inode 2373442, moving to lost+found
disconnected inode 2373443, moving to lost+found
disconnected inode 2373445, moving to lost+found
disconnected inode 2373450, moving to lost+found
disconnected inode 2373451, moving to lost+found
disconnected inode 2373452, moving to lost+found
disconnected inode 2373454, moving to lost+found
disconnected inode 2373457, moving to lost+found
disconnected inode 2373459, moving to lost+found
disconnected inode 2373460, moving to lost+found
disconnected inode 2373461, moving to lost+found
disconnected inode 2373462, moving to lost+found
disconnected inode 2373464, moving to lost+found
disconnected inode 2373465, moving to lost+found
disconnected inode 2373466, moving to lost+found
disconnected inode 2373467, moving to lost+found
disconnected inode 2373470, moving to lost+found
disconnected inode 2373473, moving to lost+found
disconnected inode 2373482, moving to lost+found
disconnected inode 2373483, moving to lost+found
disconnected inode 2373484, moving to lost+found
disconnected inode 2373485, moving to lost+found
disconnected inode 2373486, moving to lost+found
disconnected inode 2373489, moving to lost+found
disconnected inode 2373490, moving to lost+found
disconnected inode 2373491, moving to lost+found
disconnected inode 2373492, moving to lost+found
disconnected inode 2373493, moving to lost+found
disconnected inode 2373494, moving to lost+found
disconnected inode 2373495, moving to lost+found
disconnected inode 2373496, moving to lost+found
disconnected inode 2373497, moving to lost+found
disconnected inode 2373498, moving to lost+found
disconnected inode 2373499, moving to lost+found
disconnected inode 2373500, moving to lost+found
disconnected inode 2373502, moving to lost+found
disconnected inode 2373504, moving to lost+found
disconnected inode 2373505, moving to lost+found
disconnected inode 2373508, moving to lost+found
disconnected inode 2373509, moving to lost+found
disconnected inode 2373510, moving to lost+found
disconnected inode 2373511, moving to lost+found
disconnected inode 2373512, moving to lost+found
disconnected inode 2373513, moving to lost+found
disconnected inode 2373514, moving to lost+found
disconnected inode 2373515, moving to lost+found
disconnected inode 2373516, moving to lost+found
disconnected inode 2373517, moving to lost+found
disconnected inode 2373518, moving to lost+found
disconnected inode 2373519, moving to lost+found
disconnected inode 2373520, moving to lost+found
disconnected inode 2373521, moving to lost+found
disconnected inode 2373522, moving to lost+found
disconnected inode 2373524, moving to lost+found
disconnected inode 2373526, moving to lost+found
disconnected inode 2373527, moving to lost+found
disconnected inode 2373528, moving to lost+found
disconnected inode 2373530, moving to lost+found
disconnected inode 2373531, moving to lost+found
disconnected inode 2373533, moving to lost+found
disconnected inode 2373534, moving to lost+found
disconnected inode 2373536, moving to lost+found
disconnected inode 2373537, moving to lost+found
disconnected inode 2373538, moving to lost+found
disconnected inode 2373539, moving to lost+found
disconnected inode 2373540, moving to lost+found
disconnected inode 2373542, moving to lost+found
disconnected inode 2373543, moving to lost+found
disconnected inode 2373544, moving to lost+found
disconnected inode 2373545, moving to lost+found
disconnected inode 2373546, moving to lost+found
disconnected inode 2373548, moving to lost+found
disconnected inode 2373549, moving to lost+found
disconnected inode 2373550, moving to lost+found
disconnected inode 2373551, moving to lost+found
disconnected inode 2373552, moving to lost+found
disconnected inode 2373553, moving to lost+found
disconnected inode 2373555, moving to lost+found
disconnected inode 2373556, moving to lost+found
disconnected inode 2373558, moving to lost+found
disconnected inode 2373559, moving to lost+found
disconnected inode 2373560, moving to lost+found
disconnected inode 2373561, moving to lost+found
disconnected inode 2373563, moving to lost+found
disconnected inode 2373564, moving to lost+found
disconnected inode 2373566, moving to lost+found
disconnected inode 2373569, moving to lost+found
disconnected inode 2373571, moving to lost+found
disconnected inode 2373572, moving to lost+found
disconnected inode 2373573, moving to lost+found
disconnected inode 2373575, moving to lost+found
disconnected inode 2373576, moving to lost+found
disconnected inode 2373578, moving to lost+found
disconnected inode 2373579, moving to lost+found
disconnected inode 2373580, moving to lost+found
disconnected inode 2373582, moving to lost+found
disconnected inode 2373583, moving to lost+found
disconnected inode 2373584, moving to lost+found
disconnected inode 2373585, moving to lost+found
disconnected inode 2373586, moving to lost+found
disconnected inode 2373587, moving to lost+found
disconnected inode 2373588, moving to lost+found
disconnected inode 2373589, moving to lost+found
disconnected inode 2373590, moving to lost+found
disconnected inode 2373591, moving to lost+found
disconnected inode 2373592, moving to lost+found
disconnected inode 2373593, moving to lost+found
disconnected inode 2373594, moving to lost+found
disconnected inode 2373595, moving to lost+found
disconnected inode 2373596, moving to lost+found
disconnected inode 2373597, moving to lost+found
disconnected inode 2373598, moving to lost+found
disconnected inode 2373599, moving to lost+found
disconnected inode 2373601, moving to lost+found
disconnected inode 2373602, moving to lost+found
disconnected inode 2373603, moving to lost+found
disconnected inode 2373607, moving to lost+found
disconnected inode 2373609, moving to lost+found
disconnected inode 2373610, moving to lost+found
disconnected inode 2373611, moving to lost+found
disconnected inode 2373612, moving to lost+found
disconnected inode 2373613, moving to lost+found
disconnected inode 2373614, moving to lost+found
disconnected inode 2373615, moving to lost+found
disconnected inode 2373616, moving to lost+found
disconnected inode 2373617, moving to lost+found
disconnected inode 2373618, moving to lost+found
disconnected inode 2373620, moving to lost+found
disconnected inode 2373623, moving to lost+found
disconnected inode 2373624, moving to lost+found
disconnected inode 2373625, moving to lost+found
disconnected inode 2373626, moving to lost+found
disconnected inode 2373628, moving to lost+found
disconnected inode 2373629, moving to lost+found
disconnected inode 2373631, moving to lost+found
disconnected inode 2373632, moving to lost+found
disconnected inode 2373633, moving to lost+found
disconnected inode 2373634, moving to lost+found
disconnected inode 2373635, moving to lost+found
disconnected inode 2373636, moving to lost+found
disconnected inode 2373639, moving to lost+found
disconnected inode 2373641, moving to lost+found
disconnected inode 2373642, moving to lost+found
disconnected inode 2373643, moving to lost+found
disconnected inode 2373644, moving to lost+found
disconnected inode 2373645, moving to lost+found
disconnected inode 2373646, moving to lost+found
disconnected inode 2373647, moving to lost+found
disconnected inode 2373648, moving to lost+found
disconnected inode 2373649, moving to lost+found
disconnected inode 2373651, moving to lost+found
disconnected inode 2373652, moving to lost+found
disconnected inode 2373653, moving to lost+found
disconnected inode 2373654, moving to lost+found
disconnected inode 2373656, moving to lost+found
disconnected inode 2373657, moving to lost+found
disconnected inode 2373658, moving to lost+found
disconnected inode 2373660, moving to lost+found
disconnected inode 2373661, moving to lost+found
disconnected inode 2373663, moving to lost+found
disconnected inode 2373664, moving to lost+found
disconnected inode 2373665, moving to lost+found
disconnected inode 2373666, moving to lost+found
disconnected inode 2373667, moving to lost+found
disconnected inode 2373668, moving to lost+found
disconnected inode 2373669, moving to lost+found
disconnected inode 2373670, moving to lost+found
disconnected inode 2373671, moving to lost+found
disconnected inode 2373673, moving to lost+found
disconnected inode 2373674, moving to lost+found
disconnected inode 2373675, moving to lost+found
disconnected inode 2373676, moving to lost+found
disconnected inode 2373677, moving to lost+found
disconnected inode 2373678, moving to lost+found
disconnected inode 2373679, moving to lost+found
disconnected inode 2373680, moving to lost+found
disconnected inode 2373681, moving to lost+found
disconnected inode 2373683, moving to lost+found
disconnected inode 2373684, moving to lost+found
disconnected inode 2373685, moving to lost+found
disconnected inode 2373686, moving to lost+found
disconnected inode 2373687, moving to lost+found
disconnected inode 2373688, moving to lost+found
disconnected inode 2373691, moving to lost+found
disconnected inode 2373692, moving to lost+found
disconnected inode 2373693, moving to lost+found
disconnected inode 2373695, moving to lost+found
disconnected inode 2373696, moving to lost+found
disconnected inode 2373699, moving to lost+found
disconnected inode 2373700, moving to lost+found
disconnected inode 2373703, moving to lost+found
disconnected inode 2373706, moving to lost+found
disconnected inode 2373707, moving to lost+found
disconnected inode 2373708, moving to lost+found
disconnected inode 2373709, moving to lost+found
disconnected inode 2373710, moving to lost+found
disconnected inode 2373711, moving to lost+found
disconnected inode 2373715, moving to lost+found
disconnected inode 2373716, moving to lost+found
disconnected inode 2373718, moving to lost+found
disconnected inode 2373719, moving to lost+found
disconnected inode 2373720, moving to lost+found
disconnected inode 2373721, moving to lost+found
disconnected inode 2373725, moving to lost+found
disconnected inode 2373726, moving to lost+found
disconnected inode 2373727, moving to lost+found
disconnected inode 2373728, moving to lost+found
disconnected inode 2373729, moving to lost+found
disconnected inode 2373730, moving to lost+found
disconnected inode 2373731, moving to lost+found
disconnected inode 2373733, moving to lost+found
disconnected inode 2373734, moving to lost+found
disconnected inode 2373737, moving to lost+found
disconnected inode 2373738, moving to lost+found
disconnected inode 2373739, moving to lost+found
disconnected inode 2373741, moving to lost+found
disconnected inode 2373742, moving to lost+found
disconnected inode 2373743, moving to lost+found
disconnected inode 2373744, moving to lost+found
disconnected inode 2373745, moving to lost+found
disconnected inode 2373747, moving to lost+found
disconnected inode 2373748, moving to lost+found
disconnected inode 2373749, moving to lost+found
disconnected inode 2373750, moving to lost+found
disconnected inode 2373751, moving to lost+found
disconnected inode 2373753, moving to lost+found
disconnected inode 2373754, moving to lost+found
disconnected inode 2373755, moving to lost+found
disconnected inode 2373756, moving to lost+found
disconnected inode 2373757, moving to lost+found
disconnected inode 2373758, moving to lost+found
disconnected inode 2373759, moving to lost+found
disconnected inode 2373760, moving to lost+found
disconnected inode 2373761, moving to lost+found
disconnected inode 2373763, moving to lost+found
disconnected inode 2373764, moving to lost+found
disconnected inode 2373765, moving to lost+found
disconnected inode 2373766, moving to lost+found
disconnected inode 2373767, moving to lost+found
disconnected inode 2373768, moving to lost+found
disconnected inode 2373769, moving to lost+found
disconnected inode 2373771, moving to lost+found
disconnected inode 2373772, moving to lost+found
disconnected inode 2373773, moving to lost+found
disconnected inode 2373774, moving to lost+found
disconnected inode 2373776, moving to lost+found
disconnected inode 2373777, moving to lost+found
disconnected inode 2373778, moving to lost+found
disconnected inode 2373779, moving to lost+found
disconnected inode 2373780, moving to lost+found
disconnected inode 2373782, moving to lost+found
disconnected inode 2373784, moving to lost+found
disconnected inode 2373788, moving to lost+found
disconnected inode 2373789, moving to lost+found
disconnected inode 2373790, moving to lost+found
disconnected inode 2373791, moving to lost+found
disconnected inode 2373792, moving to lost+found
disconnected inode 2373794, moving to lost+found
disconnected inode 2373795, moving to lost+found
disconnected inode 2386102, moving to lost+found
disconnected inode 2386103, moving to lost+found
disconnected inode 2388855, moving to lost+found
disconnected inode 2388856, moving to lost+found
disconnected inode 2388859, moving to lost+found
disconnected inode 2388862, moving to lost+found
disconnected inode 2388864, moving to lost+found
disconnected inode 2388865, moving to lost+found
disconnected inode 2388866, moving to lost+found
disconnected inode 2388869, moving to lost+found
disconnected inode 2388870, moving to lost+found
disconnected inode 2388871, moving to lost+found
disconnected inode 2388873, moving to lost+found
disconnected inode 2388874, moving to lost+found
disconnected inode 2388876, moving to lost+found
disconnected inode 2388877, moving to lost+found
disconnected inode 2388879, moving to lost+found
disconnected inode 2388880, moving to lost+found
disconnected inode 2388881, moving to lost+found
disconnected inode 2388883, moving to lost+found
disconnected inode 2388885, moving to lost+found
disconnected inode 2388886, moving to lost+found
disconnected inode 2388887, moving to lost+found
disconnected inode 2388888, moving to lost+found
disconnected inode 2417954, moving to lost+found
disconnected inode 2417977, moving to lost+found
disconnected inode 2417978, moving to lost+found
disconnected inode 2417980, moving to lost+found
disconnected inode 2417981, moving to lost+found
disconnected inode 2422382, moving to lost+found
disconnected inode 2422383, moving to lost+found
disconnected inode 2422384, moving to lost+found
disconnected inode 2518106, moving to lost+found
disconnected inode 2518107, moving to lost+found
disconnected inode 2518108, moving to lost+found
disconnected inode 2518377, moving to lost+found
disconnected inode 2531806, moving to lost+found
disconnected inode 2531807, moving to lost+found
disconnected inode 2531808, moving to lost+found
disconnected inode 2531809, moving to lost+found
disconnected inode 2531810, moving to lost+found
disconnected inode 2531811, moving to lost+found
disconnected inode 2531812, moving to lost+found
disconnected inode 2531813, moving to lost+found
disconnected inode 2531814, moving to lost+found
disconnected inode 2531815, moving to lost+found
disconnected inode 2531816, moving to lost+found
disconnected inode 2531817, moving to lost+found
disconnected inode 2531818, moving to lost+found
disconnected inode 2531819, moving to lost+found
disconnected inode 2531820, moving to lost+found
disconnected inode 2531821, moving to lost+found
disconnected inode 2531822, moving to lost+found
disconnected inode 2531823, moving to lost+found
disconnected inode 2531828, moving to lost+found
disconnected inode 2531829, moving to lost+found
disconnected inode 2531830, moving to lost+found
disconnected inode 2531832, moving to lost+found
disconnected inode 2531833, moving to lost+found
disconnected inode 2531834, moving to lost+found
disconnected inode 2531835, moving to lost+found
disconnected inode 2531837, moving to lost+found
disconnected inode 2531838, moving to lost+found
disconnected inode 2531839, moving to lost+found
disconnected inode 2532071, moving to lost+found
disconnected inode 2639853, moving to lost+found
disconnected inode 2640372, moving to lost+found
disconnected inode 2640388, moving to lost+found
disconnected inode 2640389, moving to lost+found
disconnected inode 2640391, moving to lost+found
disconnected inode 2641555, moving to lost+found
disconnected inode 2724412, moving to lost+found
disconnected inode 2736961, moving to lost+found
disconnected inode 2736962, moving to lost+found
disconnected inode 2738428, moving to lost+found
disconnected inode 2738437, moving to lost+found
disconnected inode 2738460, moving to lost+found
disconnected inode 2738463, moving to lost+found
disconnected inode 2739325, moving to lost+found
disconnected inode 2744083, moving to lost+found
disconnected inode 2746526, moving to lost+found
disconnected inode 2746767, moving to lost+found
disconnected inode 2747181, moving to lost+found
disconnected inode 2747183, moving to lost+found
Phase 7 - verify and correct link counts...
cache_purge: shake on cache 0x147f030 left 1 nodes!?
done


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Fwd: Sudden File System Corruption
       [not found]         ` <CAPd9ww8+W2VX2HAfxEkVN5mL1a_+=HDAStf1126WSE33Vb=VsQ@mail.gmail.com>
  2013-12-06 23:15           ` Fwd: " Mike Dacre
@ 2013-12-07 11:12           ` Stan Hoeppner
  2013-12-07 18:36             ` Mike Dacre
  1 sibling, 1 reply; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-07 11:12 UTC (permalink / raw)
  To: Mike Dacre; +Cc: xfs@oss.sgi.com

On 12/6/2013 4:14 PM, Mike Dacre wrote:
> On Fri, Dec 6, 2013 at 12:58 AM, Stan Hoeppner <stan@hardwarefreak.com>wrote:
...
> UUID=a58bf1db-0d64-4a2d-8e03-aad78dbebcbe /science                xfs
> defaults,inode64          1 0

Your RAID card has persistent write cache (BBWC), and we know from your
tool output that it's enabled.  By default XFS assumes BBWC is not
present and uses write barriers to ensure ordering/consistency.  Using
barriers on top of BBWC will be detrimental to write performance, for a
couple of reasons:

1.  Prevents the controller from optimizing writeback patterns
2.  A portion, or all of, the write cache is frequently flushed

Add 'nobarrier' to your mount options to avoid this problem.  It should
speed up many, if not all, write operations considerably, which will in
turn decrease seek contention amongst jobs.  Currently your write cache
isn't working nearly as well as it should, and in fact could be
operating horribly.

> On the slave nodes, I managed to reduce the demand on the disks by adding
> the actimeo=60 mount option.  Prior to doing this I would sometimes see the
> disk being negatively affected by enormous numbers of getattr requests.
>  Here is the fstab mount on the nodes:
> 
> 192.168.2.1:/science                      /science                nfs
> defaults,vers=3,nofail,actimeo=60,bg,hard,intr,rw  0 0

A one-minute attribute cache lifetime is on the high side for a
compute cluster.  But if you've had no ill effects and it squelched the
getattr flood, this is good.

...
> Correct, I am not consciously aligning the XFS to the RAID geometry, I
> actually didn't know that was possible.

XFS alignment is not something to worry about in this case.

...
>> So it's a small compute cluster using NFS over Infiniband for shared
>> file access to a low performance RAID6 array.  The IO resource sharing
>> is automatic.  But AFAIK there's no easy way to enforce IO quotas on
>> users or processes, if at all.  You may simply not have sufficient IO to
>> go around.  Let's ponder that.
> 
> I have tried a few things to improve IO allocation.  BetterLinux have a
> cgroup control suite that allows on-the-fly user-level IO adjustments,
> however I found them to be quite cumbersome.

This isn't going to work well because a tiny IO stream can seek the
disks to death, such as a complex find command, ls -R, etc.  A single
command such as these can generate thousands of seeks.  Shaping/limiting
user IO won't affect this.

...
>> Looking at the math, you currently have approximately 14*150=2100
>> seeks/sec capability with 14x 7.2k RPM data spindles.  That's less than
>> 100 seeks/sec per compute node, i.e. each node is getting about 2/3rd of
>> the performance of a single SATA disk from this array.  This simply
>> isn't sufficient for servicing a 23 node cluster, unless all workloads
>> are compute bound, and none IO/seek bound.  Given the overload/crash
>> that brought you to our attention, I'd say some of your workloads are
>> obviously IO/seek bound.  I'd say you probably need more/faster disks.
>> Or you need to identify which jobs are IO/seek heavy and schedule them
>> so they're not running concurrently.
> 
> Yes, this is a problem.  We sadly lack the resources to do much better than
> this, we have recently been adding extra storage by just chaining together
> USB3 drives with RAID and LVM... which is cumbersome and slow, but cheaper.

USB disk is generally a recipe for disaster.  Plenty of horror stories
on both this list and linux-raid regarding USB connected drives,
enclosures, etc.  I pray you don't run into those problems.

> My current solution is to be on the alert for high IO jobs, and to move
> them to a specific torque queue that limits the number of concurrent jobs.
>  This works, but I have not found a way to do it automatically.
>  Thankfully, with a 12 member lab, it is actually not terribly complex to
> handle, but I would definitely prefer a more comprehensive solution.  I
> don't doubt that the huge IO and seek demands we put on these disks will
> cause more problems in the future.

Your LSI 9260 controller supports using SSDs for read/write flash cache.
 LSI charges $279 for it.  It's called CacheCade Pro:

http://www.lsi.com/products/raid-controllers/pages/megaraid-cachecade-pro-software.aspx


Connect two good quality fast SSDs to the controller, such as:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820147192

Two SSDs, mirrored, to prevent cached writes from being lost if a single
SSD fails.  You now have a ~90K IOPS, 128GB, 500MB/s low latency
read/write cache in front of your RAID6 array.  This should go a long
way toward eliminating your bottlenecks.  You can accomplish this for
~$550 assuming you have two backplane drive slots free for the SSDs.  If
not, you add one of these for $279:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

This is an Intel 24 port SAS expander, the same device as in your drive
backplane.  SAS expanders can be daisy chained many deep.  You can drop
it into a PCIe x4 or greater slot from which it only draws power--no
data pins are connected.  Or if no slots are available you can mount it
to the side wall of your rack server chassis and power it via the 4 pin
Molex plug.  This requires a drill, brass or plastic standoffs, and DIY
skills.  I use this option as it provides a solid mount for un/plugging
the SAS cables, and being side mounted neither it nor the cables
interfere with airflow.

You'll plug the 9260-4i into one port of the Intel expander.  You'll
need another SFF-8087 cable for this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16812652015

You will plug your drive backplane cable into another of the 6 SFF-8087
ports on the Intel.  Into a 3rd port you will plug an SFF-8087 breakout
cable to give you 4 individual drive connections.  You will plug two of
these into your two SSDs.

http://www.newegg.com/Product/Product.aspx?Item=N82E16816116097

If you have no internal 2.5/3.5" drive brackets free for the SSDs and
you'd prefer not to drill (more) holes in the chassis to directly mount
them or a new cage for them, simply use some heavy duty Velcro squares,
2" is fine.

Worst case scenario you're looking at less than $1000 to cure your IO
bottlenecks, or at the very least mitigate them to a minor annoyance
instead of a show stopper.  And if you free up some money for some
external JBODs and drives in the future, you can route 2 of the unused
SFF-8087 connectors of the Intel Expander out the back panel to attach
expander JBOD enclosures, using one of these and 2 more of the 8087
cables up above:

http://www.ebay.com/itm/8-Port-SAS-SATA-6G-Dual-SFF-8088-mini-SAS-to-SFF-8087-PCIe-Adapter-w-LP-Bracket-/390508767029

I'm sure someone makes a 3 port model but 10 minutes of searching didn't
turn one up.  These panel adapters are application specific.  Most are
made to be mounted in a disk enclosure where the HBA/RAID card is on the
outside of the chassis, on the other end of the 8088 cable.  This two
port model is designed to be inside a server chassis, where the HBA
connects to the internal 8087 ports.  Think Ethernet x-over cable.

The 9260-4i supports up to 128 drives.  This Intel expander and a panel
connector allow you to get there with external JBODs.  The only caveat
being that you're limited to "only" 4.8 GB/s to/from all the disks.

-- 
Stan



* Re: Fwd: Sudden File System Corruption
  2013-12-07 11:12           ` Stan Hoeppner
@ 2013-12-07 18:36             ` Mike Dacre
  2013-12-08  5:22               ` Stan Hoeppner
  0 siblings, 1 reply; 25+ messages in thread
From: Mike Dacre @ 2013-12-07 18:36 UTC (permalink / raw)
  To: stan; +Cc: xfs@oss.sgi.com



On Sat, Dec 7, 2013 at 3:12 AM, Stan Hoeppner <stan@hardwarefreak.com>wrote:

> On 12/6/2013 4:14 PM, Mike Dacre wrote:
> > On Fri, Dec 6, 2013 at 12:58 AM, Stan Hoeppner <stan@hardwarefreak.com
> >wrote:
> ...
> > UUID=a58bf1db-0d64-4a2d-8e03-aad78dbebcbe /science                xfs
> > defaults,inode64          1 0
>
> Your RAID card has persistent write cache (BBWC), and we know from your
> tool output that it's enabled.  By default XFS assumes BBWC is not
> present and uses write barriers to ensure ordering/consistency.  Using
> barriers on top of BBWC will be detrimental to write performance, for a
> couple of reasons:
>
> 1.  Prevents the controller from optimizing writeback patterns
> 2.  A portion, or all of, the write cache is frequently flushed
>
> Add 'nobarrier' to your mount options to avoid this problem.  It should
> speed up many, if not all, write operations considerably, which will in
> turn decrease seek contention amongst jobs.  Currently your write cache
> isn't working nearly as well as it should, and in fact could be
> operating horribly.
>
> > On the slave nodes, I managed to reduce the demand on the disks by adding
> > the actimeo=60 mount option.  Prior to doing this I would sometimes see
> the
> > disk being negatively affected by enormous numbers of getattr requests.
> >  Here is the fstab mount on the nodes:
> >
> > 192.168.2.1:/science                      /science                nfs
> > defaults,vers=3,nofail,actimeo=60,bg,hard,intr,rw  0 0
>
> One minute attribute cache lifetime seems maybe a little high for a
> compute cluster.  But if you've had no ill effects and it squelched the
> getattr flood this is good.
>
> ...
> > Correct, I am not consciously aligning the XFS to the RAID geometry, I
> > actually didn't know that was possible.
>
> XFS alignment is not something to worry about in this case.
>
> ...
> >> So it's a small compute cluster using NFS over Infiniband for shared
> >> file access to a low performance RAID6 array.  The IO resource sharing
> >> is automatic.  But AFAIK there's no easy way to enforce IO quotas on
> >> users or processes, if at all.  You may simply not have sufficient IO to
> >> go around.  Let's ponder that.
> >
> > I have tried a few things to improve IO allocation.  BetterLinux have a
> > cgroup control suite that allows on-the-fly user-level IO adjustments,
> > however I found them to be quite cumbersome.
>
> This isn't going to work well because a tiny IO stream can seek the
> disks to death, such as a complex find command, ls -R, etc.  A single
> command such as these can generate thousands of seeks.  Shaping/limiting
> user IO won't affect this.
>
> ...
> >> Looking at the math, you currently have approximately 14*150=2100
> >> seeks/sec capability with 14x 7.2k RPM data spindles.  That's less than
> >> 100 seeks/sec per compute node, i.e. each node is getting about 2/3rd of
> >> the performance of a single SATA disk from this array.  This simply
> >> isn't sufficient for servicing a 23 node cluster, unless all workloads
> >> are compute bound, and none IO/seek bound.  Given the overload/crash
> >> that brought you to our attention, I'd say some of your workloads are
> >> obviously IO/seek bound.  I'd say you probably need more/faster disks.
> >> Or you need to identify which jobs are IO/seek heavy and schedule them
> >> so they're not running concurrently.
> >
> > Yes, this is a problem.  We sadly lack the resources to do much better
> than
> > this, we have recently been adding extra storage by just chaining
> together
> > USB3 drives with RAID and LVM... which is cumbersome and slow, but
> cheaper.
>
> USB disk is generally a recipe for disaster.  Plenty of horror stories
> on both this list and linux-raid regarding USB connected drives,
> enclosures, etc.  I pray you don't run into those problems.
>
> > My current solution is to be on the alert for high IO jobs, and to move
> > them to a specific torque queue that limits the number of concurrent
> jobs.
> >  This works, but I have not found a way to do it automatically.
> >  Thankfully, with a 12 member lab, it is actually not terribly complex to
> > handle, but I would definitely prefer a more comprehensive solution.  I
> > don't doubt that the huge IO and seek demands we put on these disks will
> > cause more problems in the future.
>
> Your LSI 9260 controller supports using SSDs for read/write flash cache.
>  LSI charges $279 for it.  It's called CacheCade Pro:
>
>
> http://www.lsi.com/products/raid-controllers/pages/megaraid-cachecade-pro-software.aspx
> .
>
>
> Connect two good quality fast SSDs to the controller, such as:
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820147192
>
> Two SSDs, mirrored, to prevent cached writes from being lost if a single
> SSD fails.  You now have a ~90K IOPS, 128GB, 500MB/s low latency
> read/write cache in front of your RAID6 array.  This should go a long
> way toward eliminating your bottlenecks.  You can accomplish this for
> ~$550 assuming you have two backplane drive slots free for the SSDs.  If
> not, you add one of these for $279:
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207
>
> This is an Intel 24 port SAS expander, the same device as in your drive
> backplane.  SAS expanders can be daisy chained many deep.  You can drop
> it into a PCIe x4 or greater slot from which it only draws power--no
> data pins are connected.  Or if no slots are available you can mount it
> to the side wall of your rack server chassis and power it via the 4 pin
> Molex plug.  This requires a drill, brass or plastic standoffs, and DIY
> skills.  I use this option as it provides a solid mount for un/plugging
> the SAS cables, and being side mounted neither it nor the cables
> interfere with airflow.
>
> You'll plug the 9260-4i into one port of the Intel expander.  You'll
> need another SFF-8087 cable for this:
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16812652015
>
> You will plug your drive backplane cable into another of the 6 SFF-8087
> ports on the Intel.  Into a 3rd port you will plug an SFF-8087 breakout
> cable to give you 4 individual drive connections.  You will plug two of
> these into your two SSDs.
>
> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116097
>
> If you have no internal 2.5/3.5" drive brackets free for the SSDs and
> you'd prefer not to drill (more) holes in the chassis to directly mount
> them or a new cage for them, simply use some heavy duty Velcro squares,
> 2" is fine.
>
> Worst case scenario you're looking at less than $1000 to cure your IO
> bottlenecks, or at the very least mitigate them to a minor annoyance
> instead of a show stopper.  And if you free up some money for some
> external JBODs and drives in the future, you can route 2 of the unused
> SFF-8087 connectors of the Intel Expander out the back panel to attach
> expander JBOD enclosures, using one of these and 2 more of the 8087
> cables up above:
>
>
> http://www.ebay.com/itm/8-Port-SAS-SATA-6G-Dual-SFF-8088-mini-SAS-to-SFF-8087-PCIe-Adapter-w-LP-Bracket-/390508767029
>
> I'm sure someone makes a 3 port model but 10 minutes of searching didn't
> turn one up.  These panel adapters are application specific.  Most are
> made to be mounted in a disk enclosure where the HBA/RAID card is on the
> outside of the chassis, on the other end of the 8088 cable.  This two
> port model is designed to be inside a server chassis, where the HBA
> connects to the internal 8087 ports.  Think Ethernet x-over cable.
>
> The 9260-4i supports up to 128 drives.  This Intel expander and a panel
> connector allow you to get there with external JBODs.  The only caveat
> being that you're limited to "only" 4.8 GB/s to/from all the disks.
>
> --
> Stan
>

Hi Stan,

Thanks for the great advice, I think you are on to something there.  I will
look into doing this in the next week or so when I have more time.  I added
'nobarrier' to my mount options.

Thanks again, I will let you know how it goes after I have upgraded.

Best,

Mike

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Fwd: Sudden File System Corruption
  2013-12-07 18:36             ` Mike Dacre
@ 2013-12-08  5:22               ` Stan Hoeppner
  2013-12-08 15:03                 ` Emmanuel Florac
  0 siblings, 1 reply; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-08  5:22 UTC (permalink / raw)
  To: Mike Dacre; +Cc: xfs@oss.sgi.com

On 12/7/2013 12:36 PM, Mike Dacre wrote:
> On Sat, Dec 7, 2013 at 3:12 AM, Stan Hoeppner <stan@hardwarefreak.com>wrote:
> 
>> On 12/6/2013 4:14 PM, Mike Dacre wrote:
>>> On Fri, Dec 6, 2013 at 12:58 AM, Stan Hoeppner <stan@hardwarefreak.com
>>> wrote:
>> ...
>>> UUID=a58bf1db-0d64-4a2d-8e03-aad78dbebcbe /science                xfs
>>> defaults,inode64          1 0
>>
>> Your RAID card has persistent write cache (BBWC) and we know it's
>> enabled from tool your output.  By default XFS assumes BBWC is not
>> present, and uses write barriers to ensure order/consistency.   Using
>> barriers on top of BBWC will be detrimental to write performance, for a
>> couple of reasons:
>>
>> 1.  Prevents the controller from optimizing writeback patterns
>> 2.  A portion, or all of, the write cache is frequently flushed
>>
>> Add 'nobarrier' to your mount options to avoid this problem.  It should
>> speed up many, if not all, write operations considerably, which will in
>> turn decrease seek contention amongst jobs.  Currently your write cache
>> isn't working nearly as well as it should, and in fact could be
>> operating horribly.
>>
>>> On the slave nodes, I managed to reduce the demand on the disks by adding
>>> the actimeo=60 mount option.  Prior to doing this I would sometimes see
>> the
>>> disk being negatively affected by enormous numbers of getattr requests.
>>>  Here is the fstab mount on the nodes:
>>>
>>> 192.168.2.1:/science                      /science                nfs
>>> defaults,vers=3,nofail,actimeo=60,bg,hard,intr,rw  0 0
>>
>> One minute attribute cache lifetime seems maybe a little high for a
>> compute cluster.  But if you've had no ill effects and it squelched the
>> getattr flood this is good.
>>
>> ...
>>> Correct, I am not consciously aligning the XFS to the RAID geometry, I
>>> actually didn't know that was possible.
>>
>> XFS alignment is not something to worry about in this case.
>>
>> ...
>>>> So it's a small compute cluster using NFS over Infiniband for shared
>>>> file access to a low performance RAID6 array.  The IO resource sharing
>>>> is automatic.  But AFAIK there's no easy way to enforce IO quotas on
>>>> users or processes, if at all.  You may simply not have sufficient IO to
>>>> go around.  Let's ponder that.
>>>
>>> I have tried a few things to improve IO allocation.  BetterLinux has a
>>> cgroup control suite that allows on-the-fly user-level IO adjustments;
>>> however, I found it to be quite cumbersome.
>>
>> This isn't going to work well because a tiny IO stream can seek the
>> disks to death, such as a complex find command, ls -R, etc.  A single
>> command such as these can generate thousands of seeks.  Shaping/limiting
>> user IO won't affect this.
>>
>> ...
>>>> Looking at the math, you currently have approximately 14*150=2100
>>>> seeks/sec capability with 14x 7.2k RPM data spindles.  That's less than
>>>> 100 seeks/sec per compute node, i.e. each node is getting about 2/3rd of
>>>> the performance of a single SATA disk from this array.  This simply
>>>> isn't sufficient for servicing a 23 node cluster, unless all workloads
>>>> are compute bound, and none IO/seek bound.  Given the overload/crash
>>>> that brought you to our attention, I'd say some of your workloads are
>>>> obviously IO/seek bound.  I'd say you probably need more/faster disks.
>>>> Or you need to identify which jobs are IO/seek heavy and schedule them
>>>> so they're not running concurrently.
>>>
>>> Yes, this is a problem.  We sadly lack the resources to do much better
>> than
>>> this, we have recently been adding extra storage by just chaining
>> together
>>> USB3 drives with RAID and LVM... which is cumbersome and slow, but
>> cheaper.
>>
>> USB disk is generally a recipe for disaster.  Plenty of horror stories
>> on both this list and linux-raid regarding USB connected drives,
>> enclosures, etc.  I pray you don't run into those problems.
>>
>>> My current solution is to be on the alert for high IO jobs, and to move
>>> them to a specific torque queue that limits the number of concurrent
>> jobs.
>>>  This works, but I have not found a way to do it automatically.
>>>  Thankfully, with a 12 member lab, it is actually not terribly complex to
>>> handle, but I would definitely prefer a more comprehensive solution.  I
>>> don't doubt that the huge IO and seek demands we put on these disks will
>>> cause more problems in the future.
>>
>> Your LSI 9260 controller supports using SSDs for read/write flash cache.
>>  LSI charges $279 for it.  It's called CacheCade Pro:
>>
>>
>> http://www.lsi.com/products/raid-controllers/pages/megaraid-cachecade-pro-software.aspx
>> .
>>
>>
>> Connect two good quality fast SSDs to the controller, such as:
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820147192
>>
>> Two SSDs, mirrored, to prevent cached writes from being lost if a single
>> SSD fails.  You now have a ~90K IOPS, 128GB, 500MB/s low latency
>> read/write cache in front of your RAID6 array.  This should go a long
>> way toward eliminating your bottlenecks.  You can accomplish this for
>> ~$550 assuming you have two backplane drive slots free for the SSDs.  If
>> not, you add one of these for $279:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207
>>
>> This is an Intel 24 port SAS expander, the same device as in your drive
>> backplane.  SAS expanders can be daisy chained many deep.  You can drop
>> it into a PCIe x4 or greater slot from which it only draws power--no
>> data pins are connected.  Or if no slots are available you can mount it
>> to the side wall of your rack server chassis and power it via the 4 pin
>> Molex plug.  This requires a drill, brass or plastic standoffs, and DIY
>> skills.  I use this option as it provides a solid mount for un/plugging
>> the SAS cables, and being side mounted neither it nor the cables
>> interfere with airflow.
>>
>> You'll plug the 9260-4i into one port of the Intel expander.  You'll
>> need another SFF-8087 cable for this:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16812652015
>>
>> You will plug your drive backplane cable into another of the 6 SFF-8087
>> ports on the Intel.  Into a 3rd port you will plug an SFF-8087 breakout
>> cable to give you 4 individual drive connections.  You will plug two of
>> these into your two SSDs.
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16816116097
>>
>> If you have no internal 2.5/3.5" drive brackets free for the SSDs and
>> you'd prefer not to drill (more) holes in the chassis to directly mount
>> them or a new cage for them, simply use some heavy duty Velcro squares,
>> 2" is fine.
>>
>> Worst case scenario you're looking at less than $1000 to cure your IO
>> bottlenecks, or at the very least mitigate them to a minor annoyance
>> instead of a show stopper.  And if you free up some money for some
>> external JBODs and drives in the future, you can route 2 of the unused
>> SFF-8087 connectors of the Intel Expander out the back panel to attach
>> expander JBOD enclosures, using one of these and 2 more of the 8087
>> cables up above:
>>
>>
>> http://www.ebay.com/itm/8-Port-SAS-SATA-6G-Dual-SFF-8088-mini-SAS-to-SFF-8087-PCIe-Adapter-w-LP-Bracket-/390508767029
>>
>> I'm sure someone makes a 3 port model but 10 minutes of searching didn't
>> turn one up.  These panel adapters are application specific.  Most are
>> made to be mounted in a disk enclosure where the HBA/RAID card is on the
>> outside of the chassis, on the other end of the 8088 cable.  This two
>> port model is designed to be inside a server chassis, where the HBA
>> connects to the internal 8087 ports.  Think Ethernet x-over cable.
>>
>> The 9260-4i supports up to 128 drives.  This Intel expander and a panel
>> connector allow you to get there with external JBODs.  The only caveat
>> being that you're limited to "only" 4.8 GB/s to/from all the disks.
>>
>> --
>> Stan
>>
> 
> Hi Stan,
> 
> Thanks for the great advice, I think you are on to something there.  I will

You're welcome.  Full disclosure:  I should have mentioned that I
haven't used CacheCade yet myself.  My statements WRT performance are
based on available literature and understanding of the technology.

That said, considering the 9260-4i is $439 MSRP and the key to unlock
the CacheCade feature in the firmware is $279, well more than half the
price of the RAID card, LSI obviously feels there is serious performance
value in this feature.  If there weren't, the price of CacheCade would
be much lower.

> look into doing this in the next week or so when I have more time.  I added
> 'nobarrier' to my mount options.

Just be sure to remount the filesystem so this option becomes active.
Apologies for stating the obvious, but many people forget this step.
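
For reference, this is a sketch of the resulting fstab line (UUID,
mountpoint and existing options copied from earlier in this thread;
whether 'nobarrier' is appropriate depends on trusting the BBWC as
discussed above):

```
# /etc/fstab -- nobarrier added to the existing XFS options
UUID=a58bf1db-0d64-4a2d-8e03-aad78dbebcbe /science  xfs  defaults,inode64,nobarrier  1 0
```

After editing fstab, 'mount -o remount,nobarrier /science' activates the
option without a full unmount, and 'grep science /proc/mounts' confirms
it took effect.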

> Thanks again, I will let you know how it goes after I have upgraded.

I'd evaluate the impact of the noop elevator and nobarrier before
spending on SSD caching.  It may turn out you don't need it, yet.  But
yes, either way, it would be great to be kept abreast of your progress.
 If you do implement CacheCade I'm sure a wider audience would be
interested in reading of your experience with it.  There are probably
more than a few users on this list who have LSI gear.
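
As an aside, the seek-budget arithmetic quoted earlier in this thread is
easy to reproduce; the figures below (14 data spindles, ~150 seeks/sec
per 7.2k RPM drive, 23 nodes) are the rough rule-of-thumb assumptions
used upthread, not measurements:

```python
# Rough seek budget for the array discussed in this thread.
# Assumptions from upthread: a 16-drive RAID6 leaves 14 data spindles,
# a 7.2k RPM SATA disk manages ~150 random seeks/sec, 23 NFS clients.
data_spindles = 16 - 2
seeks_per_drive = 150
nodes = 23

array_seeks = data_spindles * seeks_per_drive
per_node = array_seeks / nodes

print(array_seeks)      # 2100 seeks/sec for the whole array
print(round(per_node))  # ~91 seeks/sec per node -- less than one SATA disk
```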

-- 
Stan


* Re: Sudden File System Corruption
  2013-12-08  5:22               ` Stan Hoeppner
@ 2013-12-08 15:03                 ` Emmanuel Florac
  2013-12-09  0:58                   ` Stan Hoeppner
  0 siblings, 1 reply; 25+ messages in thread
From: Emmanuel Florac @ 2013-12-08 15:03 UTC (permalink / raw)
  To: stan; +Cc: Mike Dacre, xfs@oss.sgi.com

On Sat, 07 Dec 2013 23:22:07 -0600, you wrote:

> > Thanks for the great advice, I think you are on to something
> > there.  I will  
> 
> You're welcome.  Full disclosure:  I should have mentioned that I
> haven't used CacheCade yet myself.  My statements WRT performance are
> based on available literature and understanding of the technology.

I haven't tested CacheCade thoroughly, though I have a license code
somewhere.  However, I've used the equivalent Adaptec feature, and one
SSD roughly doubled the IOPS of a RAID-6 array of 15k RPM SAS drives,
from about 4200 IOPS to 7500 IOPS.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: Sudden File System Corruption
  2013-12-06 23:15             ` Mike Dacre
@ 2013-12-08 22:20               ` Dave Chinner
  0 siblings, 0 replies; 25+ messages in thread
From: Dave Chinner @ 2013-12-08 22:20 UTC (permalink / raw)
  To: Mike Dacre; +Cc: Ben Myers, xfs


[ For future reference - can people keep triage on the public list
so everyone can see that the problem is being worked on? ]

On Fri, Dec 06, 2013 at 03:15:33PM -0800, Mike Dacre wrote:
> On Fri, Dec 6, 2013 at 2:56 PM, Ben Myers <bpm@sgi.com> wrote:
> > It's great that you have this.  And an interesting repair log.
> > The good news is that it doesn't look like the corruption that
> > xfs_repair doesn't fix, the bad news is that I don't recognise
> > it.
> 
> Here is the repair log from right after the corruption happened.
> The repair was successful.

If xfs_repair didn't report any freespace corruption, then it's
because it didn't see any. And that's not actually surprising for
this sort of shutdown followed by log recovery failures.

What it means is that the corruption was detected pretty much
immediately after it occurred, and the shutdown confined it to the
log before it could be propagated to the in-place metadata.  Which
generally means the shutdown occurred within 30s of the corruption.

In my experience, this sort of "corruption confined to the log"
shutdown is usually a result of some kind of memory corruption that
is captured accidentally in the log due to object relogging (i.e. in
a dirty region from a previous change that is not yet committed to
the log) prior to it being detected in a transaction.

Without being able to see the before/after log recovery filesystem
images, there's nothing we can do to track this down further.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Sudden File System Corruption
  2013-12-08 15:03                 ` Emmanuel Florac
@ 2013-12-09  0:58                   ` Stan Hoeppner
  2013-12-09  1:40                     ` Dave Chinner
  2013-12-09  9:49                     ` Emmanuel Florac
  0 siblings, 2 replies; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-09  0:58 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: Mike Dacre, xfs@oss.sgi.com

On 12/8/2013 9:03 AM, Emmanuel Florac wrote:
> On Sat, 07 Dec 2013 23:22:07 -0600, you wrote:
> 
>>> Thanks for the great advice, I think you are on to something
>>> there.  I will  
>>
>> You're welcome.  Full disclosure:  I should have mentioned that I
>> haven't used CacheCade yet myself.  My statements WRT performance are
>> based on available literature and understanding of the technology.
> 
> I haven't tested CacheCade thoroughly, though I have a license code
> somewhere.  However, I've used the equivalent Adaptec feature, and one
> SSD roughly doubled the IOPS of a RAID-6 array of 15k RPM SAS drives,
> from about 4200 IOPS to 7500 IOPS.

Emmanuel, do you recall which SSD you used here?  7500 IOPS is very low
by today's standards.  What I'm wondering is whether you had an older
low-IOPS SSD, or a modern high-IOPS SSD that performed way below its
specs in this application.

The Samsung 840 Pro I recommended is rated at 90K 4K write IOPS and
actually hits that mark in IOmeter testing at a queue depth of 7 and
greater:
http://www.tomshardware.com/reviews/840-pro-ssd-toggle-mode-2,3302-3.html

Its processor is a 3 core ARM Cortex R4 so it should excel in this RAID
cache application, which will likely have gobs of concurrency, and thus
a high queue depth.

Found a review of CacheCade 2.0.  Their testing shows near actual SSD
throughput.  The Micron P300 has 44K/16K read/write IOPS and their
testing hits 30K.  So you should be able to hit close to ~90K read/write
IOPS with the Samsung 840s.

http://www.storagereview.com/lsi_megaraid_cachecade_pro_20_review

-- 
Stan


* Re: Sudden File System Corruption
  2013-12-09  0:58                   ` Stan Hoeppner
@ 2013-12-09  1:40                     ` Dave Chinner
  2013-12-09 19:51                       ` Stan Hoeppner
  2013-12-09  9:49                     ` Emmanuel Florac
  1 sibling, 1 reply; 25+ messages in thread
From: Dave Chinner @ 2013-12-09  1:40 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: Mike Dacre, xfs@oss.sgi.com

On Sun, Dec 08, 2013 at 06:58:07PM -0600, Stan Hoeppner wrote:
> On 12/8/2013 9:03 AM, Emmanuel Florac wrote:
> > On Sat, 07 Dec 2013 23:22:07 -0600, you wrote:
> > 
> >>> Thanks for the great advice, I think you are on to something
> >>> there.  I will  
> >>
> >> You're welcome.  Full disclosure:  I should have mentioned that I
> >> haven't used CacheCade yet myself.  My statements WRT performance are
> >> based on available literature and understanding of the technology.
> > 
> > I haven't tested CacheCade thoroughly, though I have a license code
> > somewhere.  However, I've used the equivalent Adaptec feature, and one
> > SSD roughly doubled the IOPS of a RAID-6 array of 15k RPM SAS drives,
> > from about 4200 IOPS to 7500 IOPS.
> 
> Emmanuel, do you recall which SSD you used here?  7500 IOPS is very low
> by today's standards.  What I'm wondering is whether you had an older
> low-IOPS SSD, or a modern high-IOPS SSD that performed way below its
> specs in this application.

It's most likely limited by the RAID firmware implementation, not
the SSD.

> 
> The Samsung 840 Pro I recommended is rated at 90K 4K write IOPS and
> actually hits that mark in IOmeter testing at a queue depth of 7 and
> greater:
> http://www.tomshardware.com/reviews/840-pro-ssd-toggle-mode-2,3302-3.html

Most RAID controllers can't saturate the IOPS capability of a single
modern SSD - the LSI 2208 in my largest test box can't sustain much
more than 30k write IOPS with the 1GB FBWC set to writeback mode,
even though the writes are spread across 4 SSDs that can do about
200k IOPS between them.

> Its processor is a 3 core ARM Cortex R4 so it should excel in this RAID
> cache application, which will likely have gobs of concurrency, and thus
> a high queue depth.

That is probably 2x more powerful than the RAID controller's CPU...

> Found a review of CacheCade 2.0.  Their testing shows near actual SSD
> throughput.  The Micron P300 has 44K/16K read/write IOPS and their
> testing hits 30K.  So you should be able to hit close to ~90K read/write
> IOPS with the Samsung 840s.
> 
> http://www.storagereview.com/lsi_megaraid_cachecade_pro_20_review

Like all benchmarks, take them with a grain of salt. There's nothing
there about the machine that it was actually tested on, and the data
sets used for most of the tests were a small fraction of the size of
the SSD (i.e. all the storagemark tests used a dataset smaller than
10GB, and the rest were sequential IO).

IOW, it was testing SSD resident performance only, not the
performance you'd see when the cache is full and having to page
random data in and out of the SSD cache to/from spinning disks.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Sudden File System Corruption
  2013-12-09  0:58                   ` Stan Hoeppner
  2013-12-09  1:40                     ` Dave Chinner
@ 2013-12-09  9:49                     ` Emmanuel Florac
  1 sibling, 0 replies; 25+ messages in thread
From: Emmanuel Florac @ 2013-12-09  9:49 UTC (permalink / raw)
  To: stan; +Cc: Mike Dacre, xfs@oss.sgi.com

On Sun, 08 Dec 2013 18:58:07 -0600, you wrote:

> Emmanuel do you recall which SSD you used here? 

Sure, it was an Intel X25 32 GB.  Definitely both tiny and slow by
today's standards.

> 7500 IOPS is very low by today's standards. 

That's a benchmark run on the whole 3.5 TB array, not the SSD alone. The
SSD alone easily reached 20 KIOPS. 

> What I'm wondering is if you had an older low
> IOPS SSD, or, a modern high IOPS rated SSD that performed way below
> its specs in this application.

Definitely an older SSD. BTW the 51645 tested was capable of about 25
KIOPS.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


* Re: Sudden File System Corruption
  2013-12-05  2:55 Sudden File System Corruption Mike Dacre
                   ` (2 preceding siblings ...)
  2013-12-05 17:40 ` Ben Myers
@ 2013-12-09 19:04 ` Eric Sandeen
  3 siblings, 0 replies; 25+ messages in thread
From: Eric Sandeen @ 2013-12-09 19:04 UTC (permalink / raw)
  To: Mike Dacre, xfs

On 12/4/13, 8:55 PM, Mike Dacre wrote:
> Hi Folks,
> 
> Apologies if this is the wrong place to post or if this has been answered already.
> 
> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i.  It has an XFS filesystem and has been online for over a year.  It is accessed by 23 different machines connected via Infiniband over NFS v3.  I haven't had any major problems yet, one drive failed but it was easily replaced.
> 
> However, today the drive suddenly stopped responding and started returning IO errors when any requests were made.  This happened while it was being accessed by  5 different users, one was doing a very large rm operation (rm *sh on thousands on files in a directory).  Also, about 30 minutes before we had connected the globus connect endpoint to allow easy file transfers to SDSC.
> 
> I rebooted the machine which hosts it and checked the RAID6 logs, no physical problems with the drives at all.  I tried to mount and got the following error:
> 
> XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
> mount: Structure needs cleaning

I've seen a similar problem w/ a customer on a similar (proper) RHEL6 kernel.

Just to rule something in or out, do you regularly use xfs_fsr on this filesystem?

Is this something you can reliably reproduce?

thanks,
-Eric

> I ran xfs_check and got the following message:
> ERROR: The filesystem has valuable metadata changes in a log which needs to
> be replayed.  Mount the filesystem to replay the log, and unmount it before
> re-running xfs_check.  If you are unable to mount the filesystem, then use
> the xfs_repair -L option to destroy the log and attempt a repair.
> Note that destroying the log may cause corruption -- please attempt a mount
> of the filesystem before doing this.
> 
> 
> I checked the log and found the following message:
> 
> Dec  4 18:26:33 fruster kernel: XFS (sda1): Mounting Filesystem
> Dec  4 18:26:33 fruster kernel: XFS (sda1): Starting recovery (logdev: internal)
> Dec  4 18:26:36 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
> Dec  4 18:26:36 fruster kernel: 
> Dec  4 18:26:36 fruster kernel: Pid: 5491, comm: mount Not tainted 2.6.32-358.23.2.el6.x86_64 #1
> Dec  4 18:26:36 fruster kernel: Call Trace:
> Dec  4 18:26:36 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046de2d>] ? xlog_recover_process_efi+0x1bd/0x200 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa04796ea>] ? xfs_trans_ail_cursor_set+0x1a/0x30 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046ded2>] ? xlog_recover_process_efis+0x62/0xc0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0471f34>] ? xlog_recover_finish+0x24/0xd0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa046a3ac>] ? xfs_log_mount_finish+0x2c/0x30 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa0475a61>] ? xfs_mountfs+0x421/0x6a0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048d6f4>] ? xfs_fs_fill_super+0x224/0x2e0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffff811847ce>] ? get_sb_bdev+0x18e/0x1d0
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048d4d0>] ? xfs_fs_fill_super+0x0/0x2e0 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffffa048b5b8>] ? xfs_fs_get_sb+0x18/0x20 [xfs]
> Dec  4 18:26:36 fruster kernel: [<ffffffff81183c1b>] ? vfs_kern_mount+0x7b/0x1b0
> Dec  4 18:26:36 fruster kernel: [<ffffffff81183dc2>] ? do_kern_mount+0x52/0x130
> Dec  4 18:26:36 fruster kernel: [<ffffffff811a3f22>] ? do_mount+0x2d2/0x8d0
> Dec  4 18:26:36 fruster kernel: [<ffffffff811a45b0>] ? sys_mount+0x90/0xe0
> Dec  4 18:26:36 fruster kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
> Dec  4 18:26:36 fruster kernel: XFS (sda1): Failed to recover EFIs
> Dec  4 18:26:36 fruster kernel: XFS (sda1): log mount finish failed
> 
> 
> I went back and looked at the log from around the time the drive died and found this message:
> Dec  4 17:58:16 fruster kernel: XFS: Internal error XFS_WANT_CORRUPTED_GOTO at line 1510 of file fs/xfs/xfs_alloc.c.  Caller 0xffffffffa0432ba1
> Dec  4 17:58:16 fruster kernel: 
> Dec  4 17:58:16 fruster kernel: Pid: 4548, comm: nfsd Not tainted 2.6.32-358.23.2.el6.x86_64 #1
> Dec  4 17:58:16 fruster kernel: Call Trace:
> Dec  4 17:58:16 fruster kernel: [<ffffffffa045b0ef>] ? xfs_error_report+0x3f/0x50 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0430c2b>] ? xfs_free_ag_extent+0x58b/0x750 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0432ba1>] ? xfs_free_extent+0x101/0x130 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa043c89d>] ? xfs_bmap_finish+0x15d/0x1a0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa04626ff>] ? xfs_itruncate_finish+0x15f/0x320 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa047e370>] ? xfs_inactive+0x330/0x480 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa04793f4>] ? _xfs_trans_commit+0x214/0x2a0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa048b9a0>] ? xfs_fs_clear_inode+0xa0/0xd0 [xfs]
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119d31c>] ? clear_inode+0xac/0x140
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119dad6>] ? generic_delete_inode+0x196/0x1d0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119db75>] ? generic_drop_inode+0x65/0x80
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119c9c2>] ? iput+0x62/0x70
> Dec  4 17:58:16 fruster kernel: [<ffffffff81199610>] ? dentry_iput+0x90/0x100
> Dec  4 17:58:16 fruster kernel: [<ffffffff8119c278>] ? d_delete+0xe8/0xf0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8118fe99>] ? vfs_unlink+0xd9/0xf0
> Dec  4 17:58:16 fruster kernel: [<ffffffffa071cf4f>] ? nfsd_unlink+0x1af/0x250 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0723f03>] ? nfsd3_proc_remove+0x83/0x120 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa071543e>] ? nfsd_dispatch+0xfe/0x240 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa068e624>] ? svc_process_common+0x344/0x640 [sunrpc]
> Dec  4 17:58:16 fruster kernel: [<ffffffff81063990>] ? default_wake_function+0x0/0x20
> Dec  4 17:58:16 fruster kernel: [<ffffffffa068ec60>] ? svc_process+0x110/0x160 [sunrpc]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0715b62>] ? nfsd+0xc2/0x160 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffffa0715aa0>] ? nfsd+0x0/0x160 [nfsd]
> Dec  4 17:58:16 fruster kernel: [<ffffffff81096a36>] ? kthread+0x96/0xa0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0ca>] ? child_rip+0xa/0x20
> Dec  4 17:58:16 fruster kernel: [<ffffffff810969a0>] ? kthread+0x0/0xa0
> Dec  4 17:58:16 fruster kernel: [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
> Dec  4 17:58:16 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x8) called from line 3863 of file fs/xfs/xfs_bmap.c.  Return address = 0xffffffffa043c8d6
> Dec  4 17:58:16 fruster kernel: XFS (sda1): Corruption of in-memory data detected.  Shutting down filesystem
> Dec  4 17:58:16 fruster kernel: XFS (sda1): Please umount the filesystem and rectify the problem(s)
> Dec  4 17:58:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:58:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:59:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 17:59:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:00:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:00:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:01:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:01:49 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> Dec  4 18:02:05 fruster kernel: XFS (sda1): xfs_do_force_shutdown(0x1) called from line 1061 of file fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa04856e3
> Dec  4 18:02:19 fruster kernel: XFS (sda1): xfs_log_force: error 5 returned.
> 
> 
> I have attached the complete log from the time it died until now.
> 
> In the end, I successfully repaired the filesystem with `xfs_repair -L /dev/sda1`.  However, I am nervous that some files may have been corrupted.
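[Editor's sketch of the sequence xfs_check's message prescribes, using the device name from this thread; the mount point is a placeholder, and step 2 is a last resort because zeroing the log discards uncommitted metadata:]

```shell
# 1. Preferred path: mount so XFS replays the log, then unmount cleanly.
mount /dev/sda1 /mnt/array && umount /mnt/array

# 2. Only if the mount itself fails with a corruption error:
#    zero the log and repair. This can lose recently written metadata.
xfs_repair -L /dev/sda1

# 3. After repairing, look for orphaned files and verify against backups.
mount /dev/sda1 /mnt/array
ls /mnt/array/lost+found
```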
> 
> Do any of you have any idea what could have caused this problem?
> 
> Thanks,
> 
> Mike
> 
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-09  1:40                     ` Dave Chinner
@ 2013-12-09 19:51                       ` Stan Hoeppner
  2013-12-09 22:21                         ` Dave Chinner
  2013-12-09 22:24                         ` Emmanuel Florac
  0 siblings, 2 replies; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-09 19:51 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Mike Dacre, xfs@oss.sgi.com

On 12/8/2013 7:40 PM, Dave Chinner wrote:
> On Sun, Dec 08, 2013 at 06:58:07PM -0600, Stan Hoeppner wrote:
>> On 12/8/2013 9:03 AM, Emmanuel Florac wrote:
>>> Le Sat, 07 Dec 2013 23:22:07 -0600 vous écriviez:
>>>
>>>>> Thanks for the great advice, I think you are on to something
>>>>> there.  I will  
>>>>
>>>> You're welcome.  Full disclosure:  I should have mentioned that I
>>>> haven't used CacheCade yet myself.  My statements WRT performance are
>>>> based on available literature and understanding of the technology.
>>>
>>> I didn't test cachecade thoroughly, though I have a license code
>>> somewhere; however, I've used the equivalent Adaptec feature, and one SSD
>>> roughly doubled the IOPS of a RAID-6 array of 15k RPM SAS drives, from
>>> about 4200 IOPS to 7500 IOPS.
>>
>> Emmanuel do you recall which SSD you used here?  7500 IOPS is very low
>> by today's standards.  What I'm wondering is if you had an older low
>> IOPS SSD, or, a modern high IOPS rated SSD that performed way below its
>> specs in this application.
> 
> It's most likely limited by the RAID firmware implementation, not
> the SSD.

In Emmanuel's case I'd guess the X25 32GB is applying a little more
pressure to the brake calipers than his RAID card.  The 32GB X25 is
rated at 33K read IOPS but an abysmal 3.3K write IOPS.  So his 15K SAS
rust is actually capable of more write IOPS, at 4.2K.

http://ark.intel.com/products/56595/

His Adaptec 51645 has a 1.2GHz dual core PPC RAID ASIC and is rated at
250K IOPS.  This figure probably includes some wishful thinking on
Adaptec's part, but clearly the RAID ASIC is much faster than the Intel
X25 SSD, which is universally known to be a very very low performer.

>> The Samsung 840 Pro I recommended is rated at 90K 4K write IOPS and
>> actually hits that mark in IOmeter testing at a queue depth of 7 and
>> greater:
>> http://www.tomshardware.com/reviews/840-pro-ssd-toggle-mode-2,3302-3.html
> 
> Most RAID controllers can't saturate the IOPS capability of a single
> modern SSD - the LSI 2208 in my largest test box can't sustain much
> more than 30k write IOPS with the 1GB FBWC set to writeback mode,
> even though the writes are spread across 4 SSDs that can do about
> 200k IOPS between them.

2208 card w/4 SSDs and only 30K IOPS?  And you've confirmed these SSDs
do individually have 50K IOPS?  Four such SSDs should be much higher
than 30K with FastPath.  Do you have FastPath enabled?  If not it's now
a freebie with firmware 5.7 or later.  Used to be a pay option.  If
you're using an LSI RAID card w/SSDs you're spinning in the mud without
FastPath.

>> Its processor is a 3 core ARM Cortex R4 so it should excel in this RAID
>> cache application, which will likely have gobs of concurrency, and thus
>> a high queue depth.
> 
> That is probably 2x more powerful than the RAID controller's CPU...

3x 300MHz ARM cores at 0.5W vs 1x 800MHz PPC core at ~10W?  The PPC core
has significantly more transistors, larger caches, higher IPC.  I'd say
this Sammy chip has a little less hardware performance than a single LSI
core, but not much less.  Two of them would definitely have higher
throughput than one LSI core.

>> Found a review of CacheCade 2.0.  Their testing shows near actual SSD
>> throughput.  The Micron P300 has 44K/16K read/write IOPS and their
>> testing hits 30K.  So you should be able to hit close to ~90K read/write
>> IOPS with the Samsung 840s.
>>
>> http://www.storagereview.com/lsi_megaraid_cachecade_pro_20_review
> 
> Like all benchmarks, take them with a grain of salt. There's nothing
> there about the machine that it was actually tested on, and the data
> sets used for most of the tests were a small fraction of the size of
> the SSD (i.e. all the storagemark tests used a dataset smaller than
> 10GB, and the rest were sequential IO).

The value in these isn't in the absolute numbers, but the relative
before/after difference with CacheCade enabled.

> IOW, it was testing SSD resident performance only, not the
> performance you'd see when the cache is full and having to page
> random data in and out of the SSD cache to/from spinning disks.

The CacheCade algorithm seems to be a bit smarter than that, and one has
some configuration flexibility.  If one has a 128 GB SSD and splits it
50/50 between read/write cache, that leaves 64 GB write cache.  The
algorithm isn't going to send large streaming writes to SSD when the
rust array is capable of greater throughput.

So the 64 GB write cache will be pretty much dedicated to small random
write IOs and some small streaming writes where the DRAM cache can't
flush to rust fast enough.  Coincidentally, fast random write IO is
where SSD cache makes the most difference, same as DRAM cache, by
decreasing real time seek rate of the rust.  I'm guessing most workloads
aren't going to do enough random write IOPS to fill 64 GB, and then
cause cache thrashing while the SSD tries to flush to the rust.

The DRAM cache on LSI controllers, in default firmware mode, buffers
every write and then flushes it to disk, often with optimized ordering.  In
CacheCade mode only some writes are buffered to the SSD, and these
bypass the DRAM cache completely via FastPath.  The logic is load adaptive.

So an obvious, and huge, advantage to this is that one can have a mixed
workload with say a 1GB/s streaming write going through DRAM cache to
the rust, with a concurrent 20K IOPS random write workload going
directly to SSD cache.  Neither workload negatively impacts the other.
With a pure rust array the IOPS workload seeks the disks to death and
the streaming write crawls at a few MB/s.

The workload that Mike originally described is similar to the above, and
thus a perfect fit for CacheCade + FastPath.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-09 19:51                       ` Stan Hoeppner
@ 2013-12-09 22:21                         ` Dave Chinner
  2013-12-09 22:30                           ` Emmanuel Florac
  2013-12-09 22:24                         ` Emmanuel Florac
  1 sibling, 1 reply; 25+ messages in thread
From: Dave Chinner @ 2013-12-09 22:21 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: Mike Dacre, xfs@oss.sgi.com

On Mon, Dec 09, 2013 at 01:51:22PM -0600, Stan Hoeppner wrote:
> On 12/8/2013 7:40 PM, Dave Chinner wrote:
> > On Sun, Dec 08, 2013 at 06:58:07PM -0600, Stan Hoeppner wrote:
> >> On 12/8/2013 9:03 AM, Emmanuel Florac wrote:
> >>> Le Sat, 07 Dec 2013 23:22:07 -0600 vous écriviez:
> >> The Samsung 840 Pro I recommended is rated at 90K 4K write IOPS and
> >> actually hits that mark in IOmeter testing at a queue depth of 7 and
> >> greater:
> >> http://www.tomshardware.com/reviews/840-pro-ssd-toggle-mode-2,3302-3.html
> > 
> > Most RAID controllers can't saturate the IOPS capability of a single
> > modern SSD - the LSI 2208 in my largest test box can't sustain much
> > more than 30k write IOPS with the 1GB FBWC set to writeback mode,
> > even though the writes are spread across 4 SSDs that can do about
> > 200k IOPS between them.
> 
> 2208 card w/4 SSDs and only 30K IOPS?  And you've confirmed these SSDs
> do individually have 50K IOPS? 

Of course - OCZ Vertex4 drives connected to my workstation easily
sustain that. Behind a RAID controller, nothing near it. I can get
70kiops out of the 4 of them on read, but the RAID controller is the
bottleneck.
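[A quick way to compare direct-attached vs behind-controller random write
IOPS is a raw fio run; this is a sketch, the target device is a
placeholder, and the run destroys its contents:]

```shell
# 4k random writes, direct I/O, queue depth 32, for 60 seconds.
# WARNING: overwrites /dev/sdX -- point it at a scratch device only.
fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
```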

> Four such SSDs should be much higher
> than 30K with FastPath.  Do you have FastPath enabled?

It's supposed to be enabled by default in the vendor firmware and
cannot be disabled.  There's no obvious documentation on how to set
it up, so I figured it was simply enabled for my "virtual RAID0
drive per SSD" setup.

After googling around a bit, I found that this method of exporting
the drives isn't sufficient - you have to specifically configure the
caching correctly i.e. you have to turn off readahead and change it
to use writethrough caching. 
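For reference, those two property changes look something like the
following with MegaCli; the binary name, adapter index, and exact
property spellings are assumptions to check against the card's own
documentation:

```shell
# Disable readahead on every virtual drive of adapter 0 (assumed syntax)
MegaCli64 -LDSetProp NORA -LAll -a0

# Switch those virtual drives to write-through caching
MegaCli64 -LDSetProp WT -LAll -a0

# Verify the resulting cache policy
MegaCli64 -LDInfo -LAll -a0
```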

/me changes the settings and reboots everything.

Wow, I get 33,000 IOPS now. That was worth the change...

Hold on, let me run something I know is utterly write IO bound

/me runs mkfs.ext4 and...

Oh, great, *another* goddamn hang in the virtio blk_mq code.....

> If not it's now
> a freebie with firmware 5.7 or later.  Used to be a pay option.  If
> you're using an LSI RAID card w/SSDs you're spinning in the mud without
> FastPath.

Yeah, well, it's still 2.5x faster than the 1078 controller the
drives were previously behind, so...

> >> Its processor is a 3 core ARM Cortex R4 so it should excel in this RAID
> >> cache application, which will likely have gobs of concurrency, and thus
> >> a high queue depth.
> > 
> > That is probably 2x more powerful than the RAID controller's CPU...
> 
> 3x 300MHz ARM cores at 0.5W vs 1x 800MHz PPC core at ~10W?  The PPC core
> has significantly more transistors, larger caches, higher IPC.  I'd say
> this Sammy chip has a little less hardware performance than a singe LSI
> core, but not much less.  Two of them would definitely have higher
> throughput than one LSI core.

Keep in mind that there's more than just CPUs on those SoCs. Often
the CPUs are just marshalling agents for hardware offloads, and
those little ARM SoCs are full of hardware accelerators...

> >> Found a review of CacheCade 2.0.  Their testing shows near actual SSD
> >> throughput.  The Micron P300 has 44K/16K read/write IOPS and their
> >> testing hits 30K.  So you should be able to hit close to ~90K read/write
> >> IOPS with the Samsung 840s.
> >>
> >> http://www.storagereview.com/lsi_megaraid_cachecade_pro_20_review
> > 
> > Like all benchmarks, take them with a grain of salt. There's nothing
> > there about the machine that it was actually tested on, and the data
> > sets used for most of the tests were a small fraction of the size of
> > the SSD (i.e. all the storagemark tests used a dataset smaller than
> > 10GB, and the rest were sequential IO).
> 
> The value in these isn't in the absolute numbers, but the relative
> before/after difference with CacheCade enabled.
> 
> > IOW, it was testing SSD resident performance only, not the
> > performance you'd see when the cache is full and having to page
> > random data in and out of the SSD cache to/from spinning disks.
> 
> The CacheCade algorithm seems to be a bit smarter than that, and one has
> some configuration flexibility.  If one has a 128 GB SSD and splits it
> 50/50 between read/write cache, that leaves 64 GB write cache.  The
> algorithm isn't going to send large streaming writes to SSD when the
> rust array is capable of greater throughput.

Still, the benchmarks didn't stress any of this, and were completely
resident in the SSD. It's not indicative of the smarts that the
controller might have, nor of what happens in real world workloads
which have to operate on 24x7 timescales, not a few minutes of
benchmarking...

So, while the tech might be great, the benchmarks sucked at
demonstrating that.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-09 19:51                       ` Stan Hoeppner
  2013-12-09 22:21                         ` Dave Chinner
@ 2013-12-09 22:24                         ` Emmanuel Florac
  1 sibling, 0 replies; 25+ messages in thread
From: Emmanuel Florac @ 2013-12-09 22:24 UTC (permalink / raw)
  To: stan; +Cc: Mike Dacre, xfs@oss.sgi.com

Le Mon, 09 Dec 2013 13:51:22 -0600 vous écriviez:

> His Adaptec 51645 has a 1.2GHz dual core PPC RAID ASIC and is rated at
> 250K IOPS.  This figure probably includes some wishful thinking on
> Adaptec's part, but clearly the RAID ASIC is much faster than the
> Intel X25 SSD, which is universally known to be a very very low
> performer.

From my own benchmarks, I evaluated the real IOPS performance of these
cards at about 45K random 8K IOPS.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-09 22:21                         ` Dave Chinner
@ 2013-12-09 22:30                           ` Emmanuel Florac
  2013-12-10  3:39                             ` Stan Hoeppner
  0 siblings, 1 reply; 25+ messages in thread
From: Emmanuel Florac @ 2013-12-09 22:30 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Mike Dacre, Stan Hoeppner, xfs@oss.sgi.com

Le Tue, 10 Dec 2013 09:21:31 +1100 vous écriviez:

> So, while the tech might be great, the benchmarks sucked at
> demonstrating that.

And now that we have enhance-io, bcache and friends, I'm actually more
confident of using these than some sort of hidden pseudo-hardware magic.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-09 22:30                           ` Emmanuel Florac
@ 2013-12-10  3:39                             ` Stan Hoeppner
  2013-12-10  8:45                               ` Emmanuel Florac
  0 siblings, 1 reply; 25+ messages in thread
From: Stan Hoeppner @ 2013-12-10  3:39 UTC (permalink / raw)
  To: Emmanuel Florac, Dave Chinner; +Cc: Mike Dacre, xfs@oss.sgi.com

On 12/9/2013 4:30 PM, Emmanuel Florac wrote:
> Le Tue, 10 Dec 2013 09:21:31 +1100 vous écriviez:
> Dave Chinner wrote:
>>
>> So, while the tech might be great, the benchmarks sucked at
>> demonstrating that.

It's pretty difficult to find comprehensive benchmark results,
especially for niche products such as CacheCade.

> And now that we have enhance-io, bcache and friends, I'm actually more
> confident of using these than some sort of hidden pseudo-hardware magic.

Enhance-IO is $295 per Linux server per year.  Last I saw, Cleancache
isn't fully supported by XFS yet in a stable vanilla release.  AFAIK
bcache isn't fully baked yet.  Coupled with the fact that Mike seems to
be limited to the CentOS ecosystem, I figured CacheCade was his best
option at this time.  It's been around a couple of years, works,
verified decent performance, and no additional kernel software is
required.  And it's relatively easy to configure.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Sudden File System Corruption
  2013-12-10  3:39                             ` Stan Hoeppner
@ 2013-12-10  8:45                               ` Emmanuel Florac
  0 siblings, 0 replies; 25+ messages in thread
From: Emmanuel Florac @ 2013-12-10  8:45 UTC (permalink / raw)
  To: stan; +Cc: Mike Dacre, xfs@oss.sgi.com

Le Mon, 09 Dec 2013 21:39:52 -0600 vous écriviez:

> > And now that we have enhance-io, bcache and friends, I'm actually
> > more confident of using these than some sort of hidden
> > pseudo-hardware magic.  
> 
> Enhance-IO is $295 per Linux server per year.

Depends upon what you want, I'm using this one:

https://github.com/stec-inc/EnhanceIO

So far it seems to work very well.
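For anyone curious, setting up a cache with that EnhanceIO tree goes
roughly like this; the eio_cli options shown (policy, mode, device
names) are from memory and should be checked against the repo's README:

```shell
# Create a write-through cache named "hdd_cache": /dev/sdc (SSD)
# fronting /dev/sdb (rust). Options are assumed -- verify before use.
eio_cli create -d /dev/sdb -s /dev/sdc -p lru -m wt -c hdd_cache

# Inspect and, later, tear down the cache
eio_cli info
eio_cli delete -c hdd_cache
```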

>  Last I saw, Cleancache
> isn't fully supported by XFS yet in a stable vanilla release.  AFAIK
> bcache isn't fully baked yet.  Coupled with the fact that Mike seems
> to be limited to the CentOS ecosystem, I figured CacheCade was his
> best option at this time.  It's been around a couple of years, works,
> verified decent performance, and no additional kernel software is
> required.  And it's relatively easy to configure.

Sure.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2013-12-10  8:48 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-12-05  2:55 Sudden File System Corruption Mike Dacre
2013-12-05  3:40 ` Dave Chinner
2013-12-05  3:46   ` Mike Dacre
2013-12-05  3:59     ` Dave Chinner
2013-12-05  8:10 ` Stan Hoeppner
     [not found]   ` <CAPd9ww9hsOFK6pxqRY-YtLLAkkJHCuSi1BaM4n9=2XTjNVAn2Q@mail.gmail.com>
2013-12-05 15:58     ` Fwd: " Mike Dacre
2013-12-06  8:58       ` Stan Hoeppner
     [not found]         ` <CAPd9ww8+W2VX2HAfxEkVN5mL1a_+=HDAStf1126WSE33Vb=VsQ@mail.gmail.com>
2013-12-06 23:15           ` Fwd: " Mike Dacre
2013-12-07 11:12           ` Stan Hoeppner
2013-12-07 18:36             ` Mike Dacre
2013-12-08  5:22               ` Stan Hoeppner
2013-12-08 15:03                 ` Emmanuel Florac
2013-12-09  0:58                   ` Stan Hoeppner
2013-12-09  1:40                     ` Dave Chinner
2013-12-09 19:51                       ` Stan Hoeppner
2013-12-09 22:21                         ` Dave Chinner
2013-12-09 22:30                           ` Emmanuel Florac
2013-12-10  3:39                             ` Stan Hoeppner
2013-12-10  8:45                               ` Emmanuel Florac
2013-12-09 22:24                         ` Emmanuel Florac
2013-12-09  9:49                     ` Emmanuel Florac
2013-12-05 17:40 ` Ben Myers
     [not found]   ` <20131205175053.GG1935@sgi.com>
     [not found]     ` <CAPd9ww9YFbMEe-dM96zHsbRJgQuBHfF=ipromch1Yw6SzPUftg@mail.gmail.com>
     [not found]       ` <20131206002308.GS10553@sgi.com>
     [not found]         ` <CAPd9ww8XDzGbSZsEEoCmSuJ+KBYUWqHeRON1sFr6bG1fZ6af7w@mail.gmail.com>
     [not found]           ` <20131206225612.GU10553@sgi.com>
2013-12-06 23:15             ` Mike Dacre
2013-12-08 22:20               ` Dave Chinner
2013-12-09 19:04 ` Eric Sandeen

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox