public inbox for linux-xfs@vger.kernel.org
* 3.9.0: XFS rootfs corruption
       [not found] <2105365384.7582278.1367825507929.JavaMail.root@redhat.com>
@ 2013-05-06  7:50 ` CAI Qian
  2013-05-06 14:31   ` Eric Sandeen
  0 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-05-06  7:50 UTC (permalink / raw)
  To: xfs

Saw this on several different Power7 systems after a kdump reboot. The systems have
xfsprogs-3.1.10 and the rootfs is on LVM. I never saw this in any of the RC releases.

] Reached target Basic System.  
[    4.919316] bio: create slab <bio-1> at 1 
[    5.078616] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled 
[    5.081925] XFS (dm-1): Mounting Filesystem 
[    5.168530] XFS (dm-1): Starting recovery (logdev: internal) 
[    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc 
[    5.333575]  
[    5.333600] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1 
[    5.333609] Call Trace: 
[    5.333619] [c0000003e7e02b40] [c000000000014e48] .show_stack+0x78/0x1e0 (unreliable) 
[    5.333635] [c0000003e7e02c10] [c00000000074be70] .dump_stack+0x28/0x3c 
[    5.333690] [c0000003e7e02c80] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
[    5.333747] [c0000003e7e02cf0] [d000000002396e84] .__xfs_dir3_data_check+0x784/0x820 [xfs] 
[    5.333805] [c0000003e7e02df0] [d000000002396fdc] .xfs_dir3_data_verify+0xbc/0xe0 [xfs] 
[    5.333871] [c0000003e7e02e70] [d00000000239703c] .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs] 
[    5.333936] [c0000003e7e02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
[    5.334003] [c0000003e7e03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    5.334055] [c0000003e7e030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    5.334117] [c0000003e7e03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
[    5.334180] [c0000003e7e03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    5.334237] [c0000003e7e03310] [d0000000023b1904] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
[    5.334305] [c0000003e7e033d0] [d0000000023b1c4c] .xlog_recover_process_data+0x26c/0x340 [xfs] 
[    5.334372] [c0000003e7e034a0] [d0000000023b2108] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
[    5.334438] [c0000003e7e03610] [d0000000023b2360] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
[    5.334503] [c0000003e7e036b0] [d0000000023b2400] .xlog_do_recover+0x20/0x150 [xfs] 
[    5.334570] [c0000003e7e03740] [d0000000023b25c4] .xlog_recover+0x94/0x100 [xfs] 
[    5.334647] [c0000003e7e037d0] [d0000000023bcf84] .xfs_log_mount+0x144/0x1e0 [xfs] 
[    5.334705] [c0000003e7e03870] [d0000000023b6098] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    5.334768] [c0000003e7e03930] [d00000000236435c] .xfs_fs_fill_super+0x31c/0x3b0 [xfs] 
[    5.334801] [c0000003e7e039d0] [c000000000217028] .mount_bdev+0x258/0x2b0 
[    5.334855] [c0000003e7e03aa0] [d000000002361c78] .xfs_fs_mount+0x18/0x30 [xfs] 
[    5.334878] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230 
[    5.334890] [c0000003e7e03bd0] [c00000000023a9f8] .vfs_kern_mount+0x58/0x140 
[    5.334901] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10 
[    5.334912] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110 
[    5.334924] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    5.334945] c00000001bee2000: 58 44 32 44 09 50 00 40 0a 50 00 40 0b 50 00 40  XD2D.P.@.P.@.P.@ 
[    5.334957] c00000001bee2010: 00 00 00 00 00 11 a3 8e 32 62 65 61 68 5f 74 61  ........2beah_ta 
[    5.334968] c00000001bee2020: 73 6b 5f 65 64 33 33 63 61 62 36 2d 32 65 30 31  sk_ed33cab6-2e01 
[    5.334979] c00000001bee2030: 2d 34 63 34 34 2d 38 63 31 65 2d 66 65 37 36 35  -4c44-8c1e-fe765 
[    5.334992] XFS (dm-1): Internal error xfs_dir3_data_write_verify at line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000234db94 
[    5.334992]  
[    5.335017] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1 
[    5.335025] Call Trace: 
[    5.335032] [c0000003e7e02c10] [c000000000014e48] .show_stack+0x78/0x1e0 (unreliable) 
[    5.335046] [c0000003e7e02ce0] [c00000000074be70] .dump_stack+0x28/0x3c 
[    5.335099] [c0000003e7e02d50] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
[    5.335153] [c0000003e7e02dc0] [d00000000234ffac] .xfs_corruption_error+0x7c/0xb0 [xfs] 
[    5.335220] [c0000003e7e02e70] [d000000002397148] .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs] 
[    5.335284] [c0000003e7e02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
[    5.335337] [c0000003e7e03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    5.335403] [c0000003e7e030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    5.335464] [c0000003e7e03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
[    5.335527] [c0000003e7e03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    5.335584] [c0000003e7e03310] [d0000000023b1904] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
[    5.335650] [c0000003e7e033d0] [d0000000023b1c4c] .xlog_recover_process_data+0x26c/0x340 [xfs] 
[    5.335718] [c0000003e7e034a0] [d0000000023b2108] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
[    5.335785] [c0000003e7e03610] [d0000000023b2360] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
[    5.335842] [c0000003e7e036b0] [d0000000023b2400] .xlog_do_recover+0x20/0x150 [xfs] 
[    5.335909] [c0000003e7e03740] [d0000000023b25c4] .xlog_recover+0x94/0x100 [xfs] 
[    5.335976] [c0000003e7e037d0] [d0000000023bcf84] .xfs_log_mount+0x144/0x1e0 [xfs] 
[    5.336033] [c0000003e7e03870] [d0000000023b6098] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    5.336097] [c0000003e7e03930] [d00000000236435c] .xfs_fs_fill_super+0x31c/0x3b0 [xfs] 
[    5.336121] [c0000003e7e039d0] [c000000000217028] .mount_bdev+0x258/0x2b0 
[    5.336174] [c0000003e7e03aa0] [d000000002361c78] .xfs_fs_mount+0x18/0x30 [xfs] 
[    5.336206] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230 
[    5.336218] [c0000003e7e03bd0] [c00000000023a9f8] .vfs_kern_mount+0x58/0x140 
[    5.336229] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10 
[    5.336240] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110 
[    5.336251] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    5.336262] XFS (dm-1): Corruption detected. Unmount and run xfs_repair 
[    5.336281] XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1364 of file fs/xfs/xfs_buf.c.  Return address = 0xd00000000234de84 
[    5.336295] XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem 
[    5.336305] XFS (dm-1): Please umount the filesystem and rectify the problem(s) 
[    5.336320] XFS (dm-1): metadata I/O error: block 0x8cfa0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336333] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336357] XFS (dm-1): metadata I/O error: block 0xb2250 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336369] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336385] XFS (dm-1): metadata I/O error: block 0xddd00 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336397] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336422] XFS (dm-1): metadata I/O error: block 0xde228 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336434] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336450] XFS (dm-1): metadata I/O error: block 0x25e6e0 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336462] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336486] XFS (dm-1): metadata I/O error: block 0x55bd70 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336499] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336514] XFS (dm-1): metadata I/O error: block 0x562370 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336526] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336552] XFS (dm-1): metadata I/O error: block 0x1900002 ("xlog_recover_iodone") error 5 numblks 1 
[    5.336564] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336579] XFS (dm-1): metadata I/O error: block 0x1900018 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336591] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336617] XFS (dm-1): metadata I/O error: block 0x1900590 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336629] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336644] XFS (dm-1): metadata I/O error: block 0x19005f0 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336656] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336680] XFS (dm-1): metadata I/O error: block 0x1900600 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336719] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336751] XFS (dm-1): metadata I/O error: block 0x1900c10 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336767] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336798] XFS (dm-1): metadata I/O error: block 0x197c7d0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336816] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336838] XFS (dm-1): metadata I/O error: block 0x32685a0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.336855] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336886] XFS (dm-1): metadata I/O error: block 0x32c7fd8 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336904] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336926] XFS (dm-1): metadata I/O error: block 0x4b00002 ("xlog_recover_iodone") error 5 numblks 1 
[    5.336950] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.336968] XFS (dm-1): metadata I/O error: block 0x4be1120 ("xlog_recover_iodone") error 5 numblks 8 
[    5.336982] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337000] XFS (dm-1): metadata I/O error: block 0x4be3820 ("xlog_recover_iodone") error 5 numblks 16 
[    5.337013] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337039] XFS (dm-1): metadata I/O error: block 0x4c3dbf0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.337053] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337071] XFS (dm-1): metadata I/O error: block 0x4c48d80 ("xlog_recover_iodone") error 5 numblks 16 
[    5.337085] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337111] XFS (dm-1): metadata I/O error: block 0x4d88328 ("xlog_recover_iodone") error 5 numblks 8 
[    5.337125] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337143] XFS (dm-1): metadata I/O error: block 0x4e05ad8 ("xlog_recover_iodone") error 5 numblks 8 
[    5.337156] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337184] XFS (dm-1): metadata I/O error: block 0x5066bf0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.337198] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337216] XFS (dm-1): metadata I/O error: block 0x506a808 ("xlog_recover_iodone") error 5 numblks 8 
[    5.337229] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.337300] XFS (dm-1): metadata I/O error: block 0x8a618 ("xlog_recover_iodone") error 117 numblks 8 
[    5.337315] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023ada70 
[    5.348410] XFS (dm-1): log mount/recovery failed: error 117 
[    5.348491] XFS (dm-1): log mount failed 
dracut-initqueue[275]: mount: mount /dev/mapper/rhel_ibm--p730--06--lp1-root on /sysroot failed: Structure needs cleaning  
dracut-initqueue[275]: Warning: Failed to mount -t xfs -o ro,ro /dev/mapper/rhel_ibm--p730--06--lp1-root /sysroot  
dracut-initqueue[275]: Warning: *** An error occurred during the file system check.  
dracut-initqueue[275]: Warning: *** Dropping you to a shell; the system will try

Also, I never saw any of these on other architectures like x86-64, but I started getting
the following there in 3.9.0. I am unsure whether they are related.

[ 3224.369782] ============================================================================= 
[ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison overwritten 
[ 3224.370017] ----------------------------------------------------------------------------- 
[ 3224.370017]  
[ 3224.370017] INFO: 0xffff880031199fb8-0xffff880031199fb8. First byte 0x6a instead of 0x6b 
[ 3224.370017] INFO: Allocated in kmem_zone_alloc+0x67/0xf0 [xfs] age=355660 cpu=0 pid=1846 
[ 3224.370017] 	__slab_alloc+0x474/0x4f2 
[ 3224.370017] 	kmem_cache_alloc+0x192/0x1e0 
[ 3224.370017] 	kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017] 	kmem_zone_zalloc+0x1d/0x50 [xfs] 
[ 3224.370017] 	xfs_efi_init+0x35/0xa0 [xfs] 
[ 3224.370017] 	xfs_trans_get_efi+0x21/0x40 [xfs] 
[ 3224.370017] 	xfs_bmap_finish+0x66/0x1a0 [xfs] 
[ 3224.370017] 	xfs_inactive+0x3b8/0x470 [xfs] 
[ 3224.370017] 	xfs_fs_evict_inode+0x84/0xc0 [xfs] 
[ 3224.370017] 	evict+0xa7/0x1a0 
[ 3224.370017] 	iput+0x105/0x190 
[ 3224.370017] 	do_unlinkat+0x1d9/0x230 
[ 3224.370017] 	SyS_unlink+0x16/0x20 
[ 3224.370017] 	system_call_fastpath+0x16/0x1b 
[ 3224.370017] INFO: Freed in xfs_efi_item_free+0x21/0x40 [xfs] age=352306 cpu=2 pid=260 
[ 3224.370017] 	__slab_free+0x35/0x328 
[ 3224.370017] 	kmem_cache_free+0x1d4/0x1f0 
[ 3224.370017] 	xfs_efi_item_free+0x21/0x40 [xfs] 
[ 3224.370017] 	__xfs_efi_release+0x53/0x60 [xfs] 
[ 3224.370017] 	xfs_efi_release+0x2d/0x50 [xfs] 
[ 3224.370017] 	xfs_efd_item_committed+0x26/0x40 [xfs] 
[ 3224.370017] 	xfs_trans_committed_bulk+0x9a/0x2a0 [xfs] 
[ 3224.370017] 	xlog_cil_committed+0x3b/0xf0 [xfs] 
[ 3224.370017] 	xlog_state_do_callback+0x16d/0x2b0 [xfs] 
[ 3224.370017] 	xlog_state_done_syncing+0x76/0xa0 [xfs] 
[ 3224.370017] 	xlog_iodone+0x4b/0xa0 [xfs] 
[ 3224.370017] 	xfs_buf_iodone_work+0x5e/0xc0 [xfs] 
[ 3224.370017] 	process_one_work+0x175/0x400 
[ 3224.370017] 	worker_thread+0x11b/0x370 
[ 3224.370017] 	kthread+0xc0/0xd0 
[ 3224.370017] 	ret_from_fork+0x7c/0xb0 
[ 3224.370017] INFO: Slab 0xffffea0000c46600 objects=22 used=22 fp=0x          (null) flags=0x10000000004080 
[ 3224.370017] INFO: Object 0xffff880031199f48 @offset=8008 fp=0xffff88003119bbb8 
[ 3224.370017]  
[ 3224.370017] Bytes b4 ffff880031199f38: 7e 3f 27 00 01 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ~?'.....ZZZZZZZZ 
[ 3224.370017] Object ffff880031199f48: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199f58: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199f68: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199f78: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199f88: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199f98: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199fa8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199fb8: 6a 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  jkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199fc8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199fd8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199fe8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff880031199ff8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a008: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a018: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a028: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a038: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a048: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a058: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a068: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a078: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a088: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a098: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a0a8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a0b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk 
[ 3224.370017] Object ffff88003119a0c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5  kkkkkkkkkkkkkkk. 
[ 3224.370017] Redzone ffff88003119a0d8: bb bb bb bb bb bb bb bb                          ........ 
[ 3224.370017] Padding ffff88003119a218: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ 
[ 3224.370017] CPU: 3 PID: 5993 Comm: rm Tainted: GF   B        3.9.0+ #1 
[ 3224.370017] Hardware name: Dell Computer Corporation PowerEdge 2800/0NJ022, BIOS A07 04/25/2008 
[ 3224.370017]  ffff880031199f48 ffff88003723f9b8 ffffffff815f105e ffff88003723f9e8 
[ 3224.370017]  ffffffff8117266e ffff880031199fb9 ffff88003768e880 000000000000006b 
[ 3224.370017]  ffff880031199f48 ffff88003723fa38 ffffffff81172802 ffff880031199fb8 
[ 3224.370017] Call Trace: 
[ 3224.370017]  [<ffffffff815f105e>] dump_stack+0x19/0x1b 
[ 3224.370017]  [<ffffffff8117266e>] print_trailer+0xfe/0x160 
[ 3224.370017]  [<ffffffff81172802>] check_bytes_and_report+0xe2/0x120 
[ 3224.370017]  [<ffffffff81172faf>] check_object+0x1cf/0x250 
[ 3224.370017]  [<ffffffffa02e1ba7>] ? kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017]  [<ffffffff815ed2af>] alloc_debug_processing+0x67/0x109 
[ 3224.370017]  [<ffffffff815edce5>] __slab_alloc+0x474/0x4f2 
[ 3224.370017]  [<ffffffffa02f6336>] ? xfs_bmap_del_extent+0x576/0xca0 [xfs] 
[ 3224.370017]  [<ffffffffa02e1ba7>] ? kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017]  [<ffffffff81175012>] kmem_cache_alloc+0x192/0x1e0 
[ 3224.370017]  [<ffffffffa02e1ba7>] ? kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017]  [<ffffffffa02e1ba7>] kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017]  [<ffffffffa02e1c4d>] kmem_zone_zalloc+0x1d/0x50 [xfs] 
[ 3224.370017]  [<ffffffffa032c365>] xfs_efi_init+0x35/0xa0 [xfs] 
[ 3224.370017]  [<ffffffffa032f051>] xfs_trans_get_efi+0x21/0x40 [xfs] 
[ 3224.370017]  [<ffffffffa02f6c66>] xfs_bmap_finish+0x66/0x1a0 [xfs] 
[ 3224.370017]  [<ffffffffa02e1ba7>] ? kmem_zone_alloc+0x67/0xf0 [xfs] 
[ 3224.370017]  [<ffffffffa0315a49>] xfs_itruncate_extents+0xf9/0x2c0 [xfs] 
[ 3224.370017]  [<ffffffffa02dff6d>] xfs_inactive+0x34d/0x470 [xfs] 
[ 3224.370017]  [<ffffffffa02dd674>] xfs_fs_evict_inode+0x84/0xc0 [xfs] 
[ 3224.370017]  [<ffffffff811a7447>] evict+0xa7/0x1a0 
[ 3224.370017]  [<ffffffff811a7c25>] iput+0x105/0x190 
[ 3224.370017]  [<ffffffff8119b399>] do_unlinkat+0x1d9/0x230 
[ 3224.370017]  [<ffffffff8119e1eb>] SyS_unlinkat+0x1b/0x40 
[ 3224.370017]  [<ffffffff815ff942>] system_call_fastpath+0x16/0x1b 
[ 3224.370017] FIX xfs_efi_item: Restoring 0xffff880031199fb8-0xffff880031199fb8=0x6b 
[ 3224.370017]  
[ 3224.370017] FIX xfs_efi_item: Marking all objects used 

CAI Qian

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: 3.9.0: XFS rootfs corruption
  2013-05-06  7:50 ` 3.9.0: XFS rootfs corruption CAI Qian
@ 2013-05-06 14:31   ` Eric Sandeen
  2013-05-07  7:53     ` CAI Qian
  0 siblings, 1 reply; 14+ messages in thread
From: Eric Sandeen @ 2013-05-06 14:31 UTC (permalink / raw)
  To: CAI Qian; +Cc: xfs

On 5/6/13 2:50 AM, CAI Qian wrote:
> Saw this on several different Power7 systems after a kdump reboot. The systems have
> xfsprogs-3.1.10 and the rootfs is on LVM. I never saw this in any of the RC releases.
> 
> ] Reached target Basic System.  
> [    4.919316] bio: create slab <bio-1> at 1 
> [    5.078616] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled 
> [    5.081925] XFS (dm-1): Mounting Filesystem 
> [    5.168530] XFS (dm-1): Starting recovery (logdev: internal) 
> [    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc 

here:

        /*
         * Need to have seen all the entries and all the bestfree slots.
         */
        XFS_WANT_CORRUPTED_RETURN(freeseen == 7);

I hope Dave knows offhand what this might mean.  :)

Could you get a metadump of the filesystem in question?
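
In case it helps, a rough sketch of how that is usually captured, using the
xfs_metadump and xfs_mdrestore tools from xfsprogs (the device path below is a
placeholder for the affected LV, and the commands are echoed rather than
executed so the sketch is safe to paste; drop the "run" wrapper to actually
run them, with the filesystem unmounted):

```shell
# Sketch of capturing an XFS metadata dump. xfs_metadump copies metadata
# only (no file contents) and obfuscates filenames by default, so the
# image is normally safe to post.
DEV=/dev/mapper/rhel_ibm--p730--06--lp1-root   # placeholder device path

run() { echo "+ $*"; }   # dry-run wrapper; remove to execute for real

run xfs_metadump -g "$DEV" /tmp/rootfs.metadump   # -g shows progress
run gzip -9 /tmp/rootfs.metadump                  # metadumps compress well
# Whoever receives it can rebuild a sparse image for inspection with:
run xfs_mdrestore rootfs.metadump rootfs.img
```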

> [    5.333575]  
> [    5.333600] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1 
> [    5.333609] Call Trace: 
> [    5.333619] [c0000003e7e02b40] [c000000000014e48] .show_stack+0x78/0x1e0 (unreliable) 
> [    5.333635] [c0000003e7e02c10] [c00000000074be70] .dump_stack+0x28/0x3c 
> [    5.333690] [c0000003e7e02c80] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
> [    5.333747] [c0000003e7e02cf0] [d000000002396e84] .__xfs_dir3_data_check+0x784/0x820 [xfs] 
> [    5.333805] [c0000003e7e02df0] [d000000002396fdc] .xfs_dir3_data_verify+0xbc/0xe0 [xfs] 
> [    5.333871] [c0000003e7e02e70] [d00000000239703c] .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs] 
> [    5.333936] [c0000003e7e02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
> [    5.334003] [c0000003e7e03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
> [    5.334055] [c0000003e7e030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
> [    5.334117] [c0000003e7e03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
> [    5.334180] [c0000003e7e03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
> [    5.334237] [c0000003e7e03310] [d0000000023b1904] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
> [    5.334305] [c0000003e7e033d0] [d0000000023b1c4c] .xlog_recover_process_data+0x26c/0x340 [xfs] 
> [    5.334372] [c0000003e7e034a0] [d0000000023b2108] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
> [    5.334438] [c0000003e7e03610] [d0000000023b2360] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
> [    5.334503] [c0000003e7e036b0] [d0000000023b2400] .xlog_do_recover+0x20/0x150 [xfs] 
> [    5.334570] [c0000003e7e03740] [d0000000023b25c4] .xlog_recover+0x94/0x100 [xfs] 
> [    5.334647] [c0000003e7e037d0] [d0000000023bcf84] .xfs_log_mount+0x144/0x1e0 [xfs] 
> [    5.334705] [c0000003e7e03870] [d0000000023b6098] .xfs_mountfs+0x3c8/0x780 [xfs] 
> [    5.334768] [c0000003e7e03930] [d00000000236435c] .xfs_fs_fill_super+0x31c/0x3b0 [xfs] 
> [    5.334801] [c0000003e7e039d0] [c000000000217028] .mount_bdev+0x258/0x2b0 
> [    5.334855] [c0000003e7e03aa0] [d000000002361c78] .xfs_fs_mount+0x18/0x30 [xfs] 
> [    5.334878] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230 
> [    5.334890] [c0000003e7e03bd0] [c00000000023a9f8] .vfs_kern_mount+0x58/0x140 
> [    5.334901] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10 
> [    5.334912] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110 
> [    5.334924] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98 
> [    5.334945] c00000001bee2000: 58 44 32 44 09 50 00 40 0a 50 00 40 0b 50 00 40  XD2D.P.@.P.@.P.@ 
> [    5.334957] c00000001bee2010: 00 00 00 00 00 11 a3 8e 32 62 65 61 68 5f 74 61  ........2beah_ta 
> [    5.334968] c00000001bee2020: 73 6b 5f 65 64 33 33 63 61 62 36 2d 32 65 30 31  sk_ed33cab6-2e01 
> [    5.334979] c00000001bee2030: 2d 34 63 34 34 2d 38 63 31 65 2d 66 65 37 36 35  -4c44-8c1e-fe765 
> [    5.334992] XFS (dm-1): Internal error xfs_dir3_data_write_verify at line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000234db94 
> [    5.334992]  
> [    5.335017] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1 
> [    5.335025] Call Trace: 
> [    5.335032] [c0000003e7e02c10] [c000000000014e48] .show_stack+0x78/0x1e0 (unreliable) 
> [    5.335046] [c0000003e7e02ce0] [c00000000074be70] .dump_stack+0x28/0x3c 
> [    5.335099] [c0000003e7e02d50] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
> [    5.335153] [c0000003e7e02dc0] [d00000000234ffac] .xfs_corruption_error+0x7c/0xb0 [xfs] 
> [    5.335220] [c0000003e7e02e70] [d000000002397148] .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs] 
> [    5.335284] [c0000003e7e02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
> [    5.335337] [c0000003e7e03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
> [    5.335403] [c0000003e7e030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
> [    5.335464] [c0000003e7e03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
> [    5.335527] [c0000003e7e03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
> [    5.335584] [c0000003e7e03310] [d0000000023b1904] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
> [    5.335650] [c0000003e7e033d0] [d0000000023b1c4c] .xlog_recover_process_data+0x26c/0x340 [xfs] 
> [    5.335718] [c0000003e7e034a0] [d0000000023b2108] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
> [    5.335785] [c0000003e7e03610] [d0000000023b2360] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
> [    5.335842] [c0000003e7e036b0] [d0000000023b2400] .xlog_do_recover+0x20/0x150 [xfs] 
> [    5.335909] [c0000003e7e03740] [d0000000023b25c4] .xlog_recover+0x94/0x100 [xfs] 
> [    5.335976] [c0000003e7e037d0] [d0000000023bcf84] .xfs_log_mount+0x144/0x1e0 [xfs] 
> [    5.336033] [c0000003e7e03870] [d0000000023b6098] .xfs_mountfs+0x3c8/0x780 [xfs] 
> [    5.336097] [c0000003e7e03930] [d00000000236435c] .xfs_fs_fill_super+0x31c/0x3b0 [xfs] 
> [    5.336121] [c0000003e7e039d0] [c000000000217028] .mount_bdev+0x258/0x2b0 
> [    5.336174] [c0000003e7e03aa0] [d000000002361c78] .xfs_fs_mount+0x18/0x30 [xfs] 
> [    5.336206] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230 
> [    5.336218] [c0000003e7e03bd0] [c00000000023a9f8] .vfs_kern_mount+0x58/0x140 
> [    5.336229] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10 
> [    5.336240] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110 
> [    5.336251] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98 


> [    5.348410] XFS (dm-1): log mount/recovery failed: error 117 
> [    5.348491] XFS (dm-1): log mount failed 
> dracut-initqueue[275]: mount: mount /dev/mapper/rhel_ibm--p730--06--lp1-root on /sysroot failed: Structure needs cleaning  
> dracut-initqueue[275]: Warning: Failed to mount -t xfs -o ro,ro /dev/mapper/rhel_ibm--p730--06--lp1-root /sysroot  
> dracut-initqueue[275]: Warning: *** An error occurred during the file system check.  
> dracut-initqueue[275]: Warning: *** Dropping you to a shell; the system will try
> 
> Also, I never saw any of these on other architectures like x86-64, but I started getting
> the following there in 3.9.0. I am unsure whether they are related.
> 
> [ 3224.369782] ============================================================================= 
> [ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison overwritten 
> [ 3224.370017] ----------------------------------------------------------------------------- 

  2: 'F' if any module was force loaded by "insmod -f", ' ' if all
     modules were loaded normally.

Force-loaded modules, what's that from?
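
For what it's worth, the taint value is exported at /proc/sys/kernel/tainted
and can be decoded by bit position. A quick sketch, assuming the letter
assignments from Documentation/oops-tracing.txt for 3.9-era kernels (bit 1 is
F, forced module load; bit 5 is B, bad page referenced); the value 34 just
reproduces the F and B flags seen in this report:

```shell
# Decode a kernel taint bitmask into its flag letters (sketch).
decode_taint() {
    flags="P F S R M B U D A W C I O"   # letters for bits 0..12
    out=""
    bit=0
    for f in $flags; do
        if [ $(( ($1 >> bit) & 1 )) -eq 1 ]; then
            out="$out$f"
        fi
        bit=$((bit + 1))
    done
    echo "$out"
}

# On a live system: decode_taint "$(cat /proc/sys/kernel/tainted)"
decode_taint 34    # prints FB: forced module load + bad page, as above
```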




* Re: 3.9.0: XFS rootfs corruption
  2013-05-06 14:31   ` Eric Sandeen
@ 2013-05-07  7:53     ` CAI Qian
  2013-05-07 19:08       ` Eric Sandeen
  0 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-05-07  7:53 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



----- Original Message -----
> From: "Eric Sandeen" <sandeen@sandeen.net>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: xfs@oss.sgi.com
> Sent: Monday, May 6, 2013 10:31:01 PM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On 5/6/13 2:50 AM, CAI Qian wrote:
> > Saw this on several different Power7 systems after a kdump reboot. The systems have
> > xfsprogs-3.1.10 and the rootfs is on LVM. I never saw this in any of the RC releases.
> > 
> > ] Reached target Basic System.
> > [    4.919316] bio: create slab <bio-1> at 1
> > [    5.078616] SGI XFS with ACLs, security attributes, large block/inode
> > numbers, no debug enabled
> > [    5.081925] XFS (dm-1): Mounting Filesystem
> > [    5.168530] XFS (dm-1): Starting recovery (logdev: internal)
> > [    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of
> > file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc
> 
> here:
> 
>         /*
>          * Need to have seen all the entries and all the bestfree slots.
>          */
>         XFS_WANT_CORRUPTED_RETURN(freeseen == 7);
> 
> I hope Dave knows offhand what this might mean.  :)
> 
> Could you get a metadump of the filesystem in question?
Err, I am less familiar with this area. May I ask how I can do that?
> 
> > [    5.333575]
> > [    5.333600] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1
> > [    5.333609] Call Trace:
> > [    5.333619] [c0000003e7e02b40] [c000000000014e48] .show_stack+0x78/0x1e0
> > (unreliable)
> > [    5.333635] [c0000003e7e02c10] [c00000000074be70] .dump_stack+0x28/0x3c
> > [    5.333690] [c0000003e7e02c80] [d00000000234ff14]
> > .xfs_error_report+0x54/0x70 [xfs]
> > [    5.333747] [c0000003e7e02cf0] [d000000002396e84]
> > .__xfs_dir3_data_check+0x784/0x820 [xfs]
> > [    5.333805] [c0000003e7e02df0] [d000000002396fdc]
> > .xfs_dir3_data_verify+0xbc/0xe0 [xfs]
> > [    5.333871] [c0000003e7e02e70] [d00000000239703c]
> > .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs]
> > [    5.333936] [c0000003e7e02f20] [d00000000234db94]
> > ._xfs_buf_ioapply+0xd4/0x400 [xfs]
> > [    5.334003] [c0000003e7e03060] [d00000000234dfcc]
> > .xfs_buf_iorequest+0x4c/0xe0 [xfs]
> > [    5.334055] [c0000003e7e030f0] [d00000000234e0c4]
> > .xfs_bdstrat_cb+0x64/0x120 [xfs]
> > [    5.334117] [c0000003e7e03180] [d00000000234e284]
> > .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs]
> > [    5.334180] [c0000003e7e03270] [d00000000234f318]
> > .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
> > [    5.334237] [c0000003e7e03310] [d0000000023b1904]
> > .xlog_recover_commit_trans+0xd4/0x1b0 [xfs]
> > [    5.334305] [c0000003e7e033d0] [d0000000023b1c4c]
> > .xlog_recover_process_data+0x26c/0x340 [xfs]
> > [    5.334372] [c0000003e7e034a0] [d0000000023b2108]
> > .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs]
> > [    5.334438] [c0000003e7e03610] [d0000000023b2360]
> > .xlog_do_log_recovery+0xa0/0x120 [xfs]
> > [    5.334503] [c0000003e7e036b0] [d0000000023b2400]
> > .xlog_do_recover+0x20/0x150 [xfs]
> > [    5.334570] [c0000003e7e03740] [d0000000023b25c4]
> > .xlog_recover+0x94/0x100 [xfs]
> > [    5.334647] [c0000003e7e037d0] [d0000000023bcf84]
> > .xfs_log_mount+0x144/0x1e0 [xfs]
> > [    5.334705] [c0000003e7e03870] [d0000000023b6098]
> > .xfs_mountfs+0x3c8/0x780 [xfs]
> > [    5.334768] [c0000003e7e03930] [d00000000236435c]
> > .xfs_fs_fill_super+0x31c/0x3b0 [xfs]
> > [    5.334801] [c0000003e7e039d0] [c000000000217028]
> > .mount_bdev+0x258/0x2b0
> > [    5.334855] [c0000003e7e03aa0] [d000000002361c78]
> > .xfs_fs_mount+0x18/0x30 [xfs]
> > [    5.334878] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230
> > [    5.334890] [c0000003e7e03bd0] [c00000000023a9f8]
> > .vfs_kern_mount+0x58/0x140
> > [    5.334901] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10
> > [    5.334912] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110
> > [    5.334924] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98
> > [    5.334945] c00000001bee2000: 58 44 32 44 09 50 00 40 0a 50 00 40 0b 50
> > 00 40  XD2D.P.@.P.@.P.@
> > [    5.334957] c00000001bee2010: 00 00 00 00 00 11 a3 8e 32 62 65 61 68 5f
> > 74 61  ........2beah_ta
> > [    5.334968] c00000001bee2020: 73 6b 5f 65 64 33 33 63 61 62 36 2d 32 65
> > 30 31  sk_ed33cab6-2e01
> > [    5.334979] c00000001bee2030: 2d 34 63 34 34 2d 38 63 31 65 2d 66 65 37
> > 36 35  -4c44-8c1e-fe765
> > [    5.334992] XFS (dm-1): Internal error xfs_dir3_data_write_verify at
> > line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000234db94
> > [    5.334992]
> > [    5.335017] CPU: 2 PID: 372 Comm: mount Tainted: G        W    3.9.0+ #1
> > [    5.335025] Call Trace:
> > [    5.335032] [c0000003e7e02c10] [c000000000014e48] .show_stack+0x78/0x1e0
> > (unreliable)
> > [    5.335046] [c0000003e7e02ce0] [c00000000074be70] .dump_stack+0x28/0x3c
> > [    5.335099] [c0000003e7e02d50] [d00000000234ff14]
> > .xfs_error_report+0x54/0x70 [xfs]
> > [    5.335153] [c0000003e7e02dc0] [d00000000234ffac]
> > .xfs_corruption_error+0x7c/0xb0 [xfs]
> > [    5.335220] [c0000003e7e02e70] [d000000002397148]
> > .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs]
> > [    5.335284] [c0000003e7e02f20] [d00000000234db94]
> > ._xfs_buf_ioapply+0xd4/0x400 [xfs]
> > [    5.335337] [c0000003e7e03060] [d00000000234dfcc]
> > .xfs_buf_iorequest+0x4c/0xe0 [xfs]
> > [    5.335403] [c0000003e7e030f0] [d00000000234e0c4]
> > .xfs_bdstrat_cb+0x64/0x120 [xfs]
> > [    5.335464] [c0000003e7e03180] [d00000000234e284]
> > .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs]
> > [    5.335527] [c0000003e7e03270] [d00000000234f318]
> > .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
> > [    5.335584] [c0000003e7e03310] [d0000000023b1904]
> > .xlog_recover_commit_trans+0xd4/0x1b0 [xfs]
> > [    5.335650] [c0000003e7e033d0] [d0000000023b1c4c]
> > .xlog_recover_process_data+0x26c/0x340 [xfs]
> > [    5.335718] [c0000003e7e034a0] [d0000000023b2108]
> > .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs]
> > [    5.335785] [c0000003e7e03610] [d0000000023b2360]
> > .xlog_do_log_recovery+0xa0/0x120 [xfs]
> > [    5.335842] [c0000003e7e036b0] [d0000000023b2400]
> > .xlog_do_recover+0x20/0x150 [xfs]
> > [    5.335909] [c0000003e7e03740] [d0000000023b25c4]
> > .xlog_recover+0x94/0x100 [xfs]
> > [    5.335976] [c0000003e7e037d0] [d0000000023bcf84]
> > .xfs_log_mount+0x144/0x1e0 [xfs]
> > [    5.336033] [c0000003e7e03870] [d0000000023b6098]
> > .xfs_mountfs+0x3c8/0x780 [xfs]
> > [    5.336097] [c0000003e7e03930] [d00000000236435c]
> > .xfs_fs_fill_super+0x31c/0x3b0 [xfs]
> > [    5.336121] [c0000003e7e039d0] [c000000000217028]
> > .mount_bdev+0x258/0x2b0
> > [    5.336174] [c0000003e7e03aa0] [d000000002361c78]
> > .xfs_fs_mount+0x18/0x30 [xfs]
> > [    5.336206] [c0000003e7e03b10] [c000000000218040] .mount_fs+0x70/0x230
> > [    5.336218] [c0000003e7e03bd0] [c00000000023a9f8]
> > .vfs_kern_mount+0x58/0x140
> > [    5.336229] [c0000003e7e03c80] [c00000000023d5f0] .do_mount+0x280/0xb10
> > [    5.336240] [c0000003e7e03d70] [c00000000023df30] .SyS_mount+0xb0/0x110
> > [    5.336251] [c0000003e7e03e30] [c000000000009e54] syscall_exit+0x0/0x98
> 
> 
> > [    5.348410] XFS (dm-1): log mount/recovery failed: error 117
> > [    5.348491] XFS (dm-1): log mount failed
> > dracut-initqueue[275]: mount: mount
> > /dev/mapper/rhel_ibm--p730--06--lp1-root on /sysroot failed: Structure
> > needs cleaning
> > dracut-initqueue[275]: Warning: Failed to mount -t xfs -o ro,ro
> > /dev/mapper/rhel_ibm--p730--06--lp1-root /sysroot
> > dracut-initqueue[275]: Warning: *** An error occurred during the file
> > system check.
> > dracut-initqueue[275]: Warning: *** Dropping you to a shell; the system
> > will try
> > 
> > Also, never saw any of those in other architectures like x64, but started
> > getting those there in 3.9.0.
> > Unsure if those are related.
> > 
> > [ 3224.369782]
> > =============================================================================
> > [ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison
> > overwritten
> > [ 3224.370017]
> > -----------------------------------------------------------------------------
> 
>   2: 'F' if any module was force loaded by "insmod -f", ' ' if all
>      modules were loaded normally.
> 
> Force loaded modules, what's that from?
This could have happened just after booting finished, or while we were running a stress test
that loads (modprobe *) and unloads (modprobe -r *) every module. Again, those warnings
could be totally unrelated to the rootfs corruption above.
CAI Qian
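The load/unload stress described above amounts to roughly the following (a sketch with made-up module names, not the actual test script; `-n` dry-run flags are added here so the sketch is harmless to paste, whereas the real test ran the commands for real):

```shell
# Sketch of the module load/unload stress loop (illustrative only).
# MODULES is a placeholder list; the real test walked every module
# on the system, e.g. the output of lsmod.
MODULES="loop fuse xfs"                      # example names only

for m in $MODULES; do
    modprobe -rn "$m" 2>/dev/null || true    # dry-run unload
    modprobe -n  "$m" 2>/dev/null || true    # dry-run reload
done
echo "cycled: $MODULES"
```

A buggy module exercised by such a loop could plausibly scribble on memory, which is why the poison-overwrite warning may be unrelated to the on-disk corruption.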
> 
> 
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: 3.9.0: XFS rootfs corruption
  2013-05-07  7:53     ` CAI Qian
@ 2013-05-07 19:08       ` Eric Sandeen
  2013-05-14  2:28         ` CAI Qian
                           ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Eric Sandeen @ 2013-05-07 19:08 UTC (permalink / raw)
  To: CAI Qian; +Cc: xfs

On 5/7/13 2:53 AM, CAI Qian wrote:
> 
> 
> ----- Original Message -----
>> From: "Eric Sandeen" <sandeen@sandeen.net>
>> To: "CAI Qian" <caiqian@redhat.com>
>> Cc: xfs@oss.sgi.com
>> Sent: Monday, May 6, 2013 10:31:01 PM
>> Subject: Re: 3.9.0: XFS rootfs corruption
>>
>> On 5/6/13 2:50 AM, CAI Qian wrote:
>>> Saw this on several different Power7 systems after kdump reboot. It has
>>> xfsprogs-3.1.10
>>> and rootfs is on LVM. Never saw one of those in any of the RC releases.
>>>
>>> ] Reached target Basic System.
>>> [    4.919316] bio: create slab <bio-1> at 1
>>> [    5.078616] SGI XFS with ACLs, security attributes, large block/inode
>>> numbers, no debug enabled
>>> [    5.081925] XFS (dm-1): Mounting Filesystem
>>> [    5.168530] XFS (dm-1): Starting recovery (logdev: internal)
>>> [    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of
>>> file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc
>>
>> here:
>>
>>         /*
>>          * Need to have seen all the entries and all the bestfree slots.
>>          */
>>         XFS_WANT_CORRUPTED_RETURN(freeseen == 7);
>>
>> I hope Dave knows offhand what this might mean.  :)
>>
>> Could you get a metadump of the filesystem in question?
> Err, I'm less familiar here. May I ask how I can do that?

since it's the root fs, you might need to do it from some sort of rescue
shell, then just do xfs_metadump /dev/<device> <metadump filename>

the resulting file should compress further with something like bzip2.
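In command form, that boils down to something like this (the device path and output file name here are placeholders, not taken from this thread):

```shell
# Sketch, assuming a rescue shell where the rootfs LV is NOT mounted.
DEV=/dev/mapper/rhel-root       # example device path
OUT=rootfs.metadump             # example output name

if [ -b "$DEV" ] && command -v xfs_metadump >/dev/null 2>&1; then
    # xfs_metadump copies metadata only (no file contents) and
    # obfuscates names by default, so the image is safe to share
    xfs_metadump "$DEV" "$OUT"
    bzip2 -9 "$OUT"             # compresses well; attach/post the .bz2
else
    echo "skipping: $DEV or xfs_metadump not available on this system"
fi
```

Since no file data is captured, the compressed metadump is usually small enough to post or link on the list.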

...

>>> Also, never saw any of those in other architectures like x64, but started
>>> getting those there in 3.9.0.
>>> Unsure if those are related.
>>>
>>> [ 3224.369782]
>>> =============================================================================
>>> [ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison
>>> overwritten
>>> [ 3224.370017]
>>> -----------------------------------------------------------------------------
>>
>>   2: 'F' if any module was force loaded by "insmod -f", ' ' if all
>>      modules were loaded normally.
>>
>> Force loaded modules, what's that from?
> This could have happened just after booting finished, or while we were running a stress test
> that loads (modprobe *) and unloads (modprobe -r *) every module. Again, those warnings
> could be totally unrelated to the rootfs corruption above.
> CAI Qian

hmmm :)  So any one of those modules could have caused memory corruption I guess.

If you can hit it reliably you might try to narrow it down to whether it
is a particular module causing it.

-Eric


* Re: 3.9.0: XFS rootfs corruption
  2013-05-07 19:08       ` Eric Sandeen
@ 2013-05-14  2:28         ` CAI Qian
  2013-05-14  3:17           ` Dave Chinner
  2013-05-22  4:10         ` CAI Qian
  2013-06-03  8:09         ` CAI Qian
  2 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-05-14  2:28 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs

The same problem reproduces with 3.10-rc1 on two Power7 systems. I am going to
get a metadump this time.

CAI Qian


[  OK  ] Started Setup Virtual Console.
[  OK  ] Reached target System Initialization.
[    1.430947] device-mapper: uevent: version 1.0.3 
[    1.431120] device-mapper: ioctl: 4.24.0-ioctl (2013-01-15) initialised: dm-devel@redhat.com 
[  OK  ] Started dracut pre-udev hook.
         Starting udev Kernel Device Manager... 
[    1.453958] systemd-udevd[244]: starting version 197 
[  OK  ] Started udev Kernel Device Manager.
         Starting dracut pre-trigger hook... 
[  OK  ] Started dracut pre-trigger hook.
         Starting udev Coldplug all Devices... 
[  OK  ] Started udev Coldplug all Devices.
         Starting Show Plymouth Boot Screen... 
         Starting dracut initqueue hook... 
[    1.546875] ibmvscsi 30000003: SRP_VERSION: 16.a 
[    1.547169] scsi0 : IBM POWER Virtual SCSI Adapter 1.5.9 
[    1.547430] ibmvscsi 30000003: partner initialization complete 
[    1.547533] ibmvscsi 30000003: host srp version: 16.a, host partition vios (1), OS 3, max io 262144 
[    1.547684] ibmvscsi 30000003: Client reserve enabled 
[    1.547713] ibmvscsi 30000003: sent SRP login 
[    1.547798] ibmvscsi 30000003: SRP_LOGIN succeeded 
[    1.564079] scsi 0:0:1:0: Direct-Access     AIX      VDASD            0001 PQ: 0 ANSI: 3 
[    1.608450] sd 0:0:1:0: [sda] 209715200 512-byte logical blocks: (107 GB/100 GiB) 
[    1.608555] sd 0:0:1:0: [sda] Write Protect is off 
[    1.608653] sd 0:0:1:0: [sda] Cache data unavailable 
[    1.608663] sd 0:0:1:0: [sda] Assuming drive cache: write through 
[    1.609140] sd 0:0:1:0: [sda] Cache data unavailable 
[    1.609152] sd 0:0:1:0: [sda] Assuming drive cache: write through 
[    1.621164]  sda: sda1 sda2 sda3 
[    1.621841] sd 0:0:1:0: [sda] Cache data unavailable 
[    1.621849] sd 0:0:1:0: [sda] Assuming drive cache: write through 
[    1.621858] sd 0:0:1:0: [sda] Attached SCSI disk 
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Basic System.
[    1.871350] bio: create slab <bio-1> at 1 
[    2.030633] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled 
[    2.033824] XFS (dm-1): Mounting Filesystem 
[    2.180799] XFS (dm-1): Starting recovery (logdev: internal) 
[    2.658307] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000239703c 
[    2.658307]  
[    2.658352] CPU: 14 PID: 372 Comm: mount Not tainted 3.10.0-rc1 #1 
[    2.658360] Call Trace: 
[    2.658370] [c0000003e7c02b40] [c000000000014e28] .show_stack+0x78/0x1e0 (unreliable) 
[    2.658387] [c0000003e7c02c10] [c000000000747834] .dump_stack+0x28/0x3c 
[    2.658441] [c0000003e7c02c80] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
[    2.658497] [c0000003e7c02cf0] [d000000002396ee4] .__xfs_dir3_data_check+0x784/0x820 [xfs] 
[    2.658553] [c0000003e7c02df0] [d00000000239703c] .xfs_dir3_data_verify+0xbc/0xe0 [xfs] 
[    2.658617] [c0000003e7c02e70] [d00000000239709c] .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs] 
[    2.658670] [c0000003e7c02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
[    2.658732] [c0000003e7c03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    2.658784] [c0000003e7c030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    2.658837] [c0000003e7c03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
[    2.658898] [c0000003e7c03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    2.658964] [c0000003e7c03310] [d0000000023b1964] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
[    2.659031] [c0000003e7c033d0] [d0000000023b1cac] .xlog_recover_process_data+0x26c/0x340 [xfs] 
[    2.659089] [c0000003e7c034a0] [d0000000023b2168] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
[    2.659155] [c0000003e7c03610] [d0000000023b23c0] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
[    2.659221] [c0000003e7c036b0] [d0000000023b2460] .xlog_do_recover+0x20/0x150 [xfs] 
[    2.659287] [c0000003e7c03740] [d0000000023b2624] .xlog_recover+0x94/0x100 [xfs] 
[    2.659344] [c0000003e7c037d0] [d0000000023bcfe4] .xfs_log_mount+0x144/0x1e0 [xfs] 
[    2.659410] [c0000003e7c03870] [d0000000023b60f8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    2.659473] [c0000003e7c03930] [d0000000023643ac] .xfs_fs_fill_super+0x32c/0x3c0 [xfs] 
[    2.659498] [c0000003e7c039d0] [c000000000215e08] .mount_bdev+0x258/0x2b0 
[    2.659561] [c0000003e7c03aa0] [d000000002361cb8] .xfs_fs_mount+0x18/0x30 [xfs] 
[    2.659583] [c0000003e7c03b10] [c000000000216e10] .mount_fs+0x70/0x220 
[    2.659595] [c0000003e7c03bd0] [c000000000239708] .vfs_kern_mount+0x58/0x140 
[    2.659615] [c0000003e7c03c80] [c00000000023c330] .do_mount+0x2b0/0xb00 
[    2.659626] [c0000003e7c03d70] [c00000000023cc30] .SyS_mount+0xb0/0x110 
[    2.659639] [c0000003e7c03e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    2.659651] c0000003eb671000: 58 44 32 44 09 50 00 40 0a 50 00 40 0b 50 00 40  XD2D.P.@.P.@.P.@ 
[    2.659663] c0000003eb671010: 00 00 00 00 00 a0 78 53 32 62 65 61 68 5f 74 61  ......xS2beah_ta 
[    2.659674] c0000003eb671020: 73 6b 5f 32 31 39 39 63 63 39 37 2d 64 66 32 31  sk_2199cc97-df21 
[    2.659694] c0000003eb671030: 2d 34 66 63 31 2d 39 39 61 63 2d 32 64 64 34 39  -4fc1-99ac-2dd49 
[    2.659707] XFS (dm-1): Internal error xfs_dir3_data_write_verify at line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000234db94 
[    2.659707]  
[    2.659723] CPU: 14 PID: 372 Comm: mount Not tainted 3.10.0-rc1 #1 
[    2.659731] Call Trace: 
[    2.659737] [c0000003e7c02c10] [c000000000014e28] .show_stack+0x78/0x1e0 (unreliable) 
[    2.659751] [c0000003e7c02ce0] [c000000000747834] .dump_stack+0x28/0x3c 
[    2.659812] [c0000003e7c02d50] [d00000000234ff14] .xfs_error_report+0x54/0x70 [xfs] 
[    2.659874] [c0000003e7c02dc0] [d00000000234ffac] .xfs_corruption_error+0x7c/0xb0 [xfs] 
[    2.659929] [c0000003e7c02e70] [d0000000023971a8] .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs] 
[    2.659992] [c0000003e7c02f20] [d00000000234db94] ._xfs_buf_ioapply+0xd4/0x400 [xfs] 
[    2.660053] [c0000003e7c03060] [d00000000234dfcc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    2.660115] [c0000003e7c030f0] [d00000000234e0c4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    2.660169] [c0000003e7c03180] [d00000000234e284] .__xfs_buf_delwri_submit+0x104/0x2a0 [xfs] 
[    2.660230] [c0000003e7c03270] [d00000000234f318] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    2.660296] [c0000003e7c03310] [d0000000023b1964] .xlog_recover_commit_trans+0xd4/0x1b0 [xfs] 
[    2.660363] [c0000003e7c033d0] [d0000000023b1cac] .xlog_recover_process_data+0x26c/0x340 [xfs] 
[    2.660420] [c0000003e7c034a0] [d0000000023b2168] .xlog_do_recovery_pass+0x3e8/0x5a0 [xfs] 
[    2.660485] [c0000003e7c03610] [d0000000023b23c0] .xlog_do_log_recovery+0xa0/0x120 [xfs] 
[    2.660552] [c0000003e7c036b0] [d0000000023b2460] .xlog_do_recover+0x20/0x150 [xfs] 
[    2.660608] [c0000003e7c03740] [d0000000023b2624] .xlog_recover+0x94/0x100 [xfs] 
[    2.660675] [c0000003e7c037d0] [d0000000023bcfe4] .xfs_log_mount+0x144/0x1e0 [xfs] 
[    2.660741] [c0000003e7c03870] [d0000000023b60f8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    2.660795] [c0000003e7c03930] [d0000000023643ac] .xfs_fs_fill_super+0x32c/0x3c0 [xfs] 
[    2.660817] [c0000003e7c039d0] [c000000000215e08] .mount_bdev+0x258/0x2b0 
[    2.660869] [c0000003e7c03aa0] [d000000002361cb8] .xfs_fs_mount+0x18/0x30 [xfs] 
[    2.660881] [c0000003e7c03b10] [c000000000216e10] .mount_fs+0x70/0x220 
[    2.660902] [c0000003e7c03bd0] [c000000000239708] .vfs_kern_mount+0x58/0x140 
[    2.660913] [c0000003e7c03c80] [c00000000023c330] .do_mount+0x2b0/0xb00 
[    2.660923] [c0000003e7c03d70] [c00000000023cc30] .SyS_mount+0xb0/0x110 
[    2.660944] [c0000003e7c03e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    2.660954] XFS (dm-1): Corruption detected. Unmount and run xfs_repair 
[    2.660965] XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1364 of file fs/xfs/xfs_buf.c.  Return address = 0xd00000000234de84 
[    2.660979] XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem 
[    2.660989] XFS (dm-1): Please umount the filesystem and rectify the problem(s) 
[    2.661013] XFS (dm-1): metadata I/O error: block 0xd6060 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661026] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661041] XFS (dm-1): metadata I/O error: block 0xd7940 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661053] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661069] XFS (dm-1): metadata I/O error: block 0xe0190 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661091] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661106] XFS (dm-1): metadata I/O error: block 0x379590 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661118] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661134] XFS (dm-1): metadata I/O error: block 0x45f5e0 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661154] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661169] XFS (dm-1): metadata I/O error: block 0x5483c0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661181] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661197] XFS (dm-1): metadata I/O error: block 0x576190 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661218] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661233] XFS (dm-1): metadata I/O error: block 0x57acc8 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661245] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661261] XFS (dm-1): metadata I/O error: block 0x1900002 ("xlog_recover_iodone") error 5 numblks 1 
[    2.661282] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661297] XFS (dm-1): metadata I/O error: block 0x1900018 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661309] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661324] XFS (dm-1): metadata I/O error: block 0x1900030 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661346] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661361] XFS (dm-1): metadata I/O error: block 0x19004f0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661373] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661389] XFS (dm-1): metadata I/O error: block 0x1900540 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661411] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661427] XFS (dm-1): metadata I/O error: block 0x1900558 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661439] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661454] XFS (dm-1): metadata I/O error: block 0x197fe10 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661476] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661491] XFS (dm-1): metadata I/O error: block 0x19b9f70 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661503] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661518] XFS (dm-1): metadata I/O error: block 0x1f02c50 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661541] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661557] XFS (dm-1): metadata I/O error: block 0x1f0cea0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661569] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661584] XFS (dm-1): metadata I/O error: block 0x4b00001 ("xlog_recover_iodone") error 5 numblks 1 
[    2.661605] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661621] XFS (dm-1): metadata I/O error: block 0x4b00002 ("xlog_recover_iodone") error 5 numblks 1 
[    2.661633] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661648] XFS (dm-1): metadata I/O error: block 0x4bad410 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661670] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661685] XFS (dm-1): metadata I/O error: block 0x4c0aca0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661697] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661713] XFS (dm-1): metadata I/O error: block 0x4c1b3d0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661725] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661740] XFS (dm-1): metadata I/O error: block 0x4d0aa68 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661771] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661786] XFS (dm-1): metadata I/O error: block 0x4d0aab8 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661808] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661823] XFS (dm-1): metadata I/O error: block 0x4f42630 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661836] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661851] XFS (dm-1): metadata I/O error: block 0x4f42640 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661872] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661888] XFS (dm-1): metadata I/O error: block 0x4f42cb8 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661900] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661915] XFS (dm-1): metadata I/O error: block 0x4f5e8f0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.661936] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.661952] XFS (dm-1): metadata I/O error: block 0x4f67a10 ("xlog_recover_iodone") error 5 numblks 8 
[    2.661974] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.662043] XFS (dm-1): metadata I/O error: block 0xd6018 ("xlog_recover_iodone") error 117 numblks 8 
[    2.662055] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd0000000023adad0 
[    2.677551] XFS (dm-1): log mount/recovery failed: error 117 
[    2.677627] XFS (dm-1): log mount failed 
dracut-initqueue[273]: mount: mount /dev/mapper/rhel_ibm--p730--06--lp1-root on /sysroot failed: Structure needs cleaning  
dracut-initqueue[273]: Warning: Failed to mount -t xfs -o ro,ro /dev/mapper/rhel_ibm--p730--06--lp1-root /sysroot  
dracut-initqueue[273]: Warning: *** An error occurred during the file system check.  
dracut-initqueue[273]: Warning: *** Dropping you to a shell; the system will try  
dracut-initqueue[273]: Warning: *** to mount the filesystem(s), when you leave the shell.  
[  OK  ] Started Show Plymouth Boot Screen.
[  OK  ] Reached target Basic System.
dracut-initqueue[273]: mount: mount /dev/mapper/rhel_ibm--p730--06--lp1-root on /sysroot failed: Structure needs cleaning  
dracut-initqueue[273]: Warning: Failed to mount -t xfs -o ro,ro /dev/mapper/rhel_ibm--p730--06--lp1-root /sysroot  
dracut-initqueue[273]: Warning: *** An error occurred during the file system check.  
dracut-initqueue[273]: Warning: *** Dropping you to a shell; the system will try  
dracut-initqueue[273]: Warning: *** to mount the filesystem(s), when you leave the shell.  
dracut-initqueue[273]: Warning: 
 
 
Entering emergency mode. Exit the shell to continue. 
Type "journalctl" to view system logs. 
 
(Repair:/
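For reference, the "error 117" in the log above is EUCLEAN, the errno XFS returns for on-disk corruption and the one userspace renders as "Structure needs cleaning"; the "error 5" metadata I/O failures are plain EIO. On Linux the mapping is easy to confirm:

```python
import errno
import os

# errno 117 on Linux is EUCLEAN, which userspace renders as
# "Structure needs cleaning" -- the message dracut printed above
print(errno.errorcode[117], "-", os.strerror(117))

# the "error 5" metadata I/O failures are ordinary EIO
print(errno.errorcode[5], "-", os.strerror(5))
```

So the dracut failure is just the mount syscall surfacing EUCLEAN from the failed log recovery.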

----- Original Message -----
> From: "Eric Sandeen" <sandeen@sandeen.net>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: xfs@oss.sgi.com
> Sent: Wednesday, May 8, 2013 3:08:05 AM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On 5/7/13 2:53 AM, CAI Qian wrote:
> > 
> > 
> > ----- Original Message -----
> >> From: "Eric Sandeen" <sandeen@sandeen.net>
> >> To: "CAI Qian" <caiqian@redhat.com>
> >> Cc: xfs@oss.sgi.com
> >> Sent: Monday, May 6, 2013 10:31:01 PM
> >> Subject: Re: 3.9.0: XFS rootfs corruption
> >>
> >> On 5/6/13 2:50 AM, CAI Qian wrote:
> >>> Saw this on several different Power7 systems after kdump reboot. It has
> >>> xfsprogs-3.1.10
> >>> and rootfs is on LVM. Never saw one of those in any of the RC releases.
> >>>
> >>> ] Reached target Basic System.
> >>> [    4.919316] bio: create slab <bio-1> at 1
> >>> [    5.078616] SGI XFS with ACLs, security attributes, large block/inode
> >>> numbers, no debug enabled
> >>> [    5.081925] XFS (dm-1): Mounting Filesystem
> >>> [    5.168530] XFS (dm-1): Starting recovery (logdev: internal)
> >>> [    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176
> >>> of
> >>> file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc
> >>
> >> here:
> >>
> >>         /*
> >>          * Need to have seen all the entries and all the bestfree slots.
> >>          */
> >>         XFS_WANT_CORRUPTED_RETURN(freeseen == 7);
> >>
> >> I hope Dave knows offhand what this might mean.  :)
> >>
> >> Could you get a metadump of the filesystem in question?
> > Err, I'm less familiar here. May I ask how I can do that?
> 
> since it's the root fs, you might need to do it from some sort of rescue
> shell, then just do xfs_metadump /dev/<device> <metadump filename>
> 
> the resulting file should compress further with something like bzip2.
> 
> ...
> 
> >>> Also, never saw any of those in other architectures like x64, but started
> >>> getting those there in 3.9.0.
> >>> Unsure if those are related.
> >>>
> >>> [ 3224.369782]
> >>> =============================================================================
> >>> [ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison
> >>> overwritten
> >>> [ 3224.370017]
> >>> -----------------------------------------------------------------------------
> >>
> >>   2: 'F' if any module was force loaded by "insmod -f", ' ' if all
> >>      modules were loaded normally.
> >>
> >> Force loaded modules, what's that from?
> > This could have happened just after booting finished, or while we were running a
> > stress test that loads (modprobe *) and unloads (modprobe -r *) every module.
> > Again, those warnings could be totally unrelated to the rootfs corruption above.
> > CAI Qian
> 
> hmmm :)  So any one of those modules could have caused memory corruption I
> guess.
> 
> If you can hit it reliably you might try to narrow it down to whether it
> is a particular module causing it.
> 
> -Eric
> 
> 


* Re: 3.9.0: XFS rootfs corruption
  2013-05-14  2:28         ` CAI Qian
@ 2013-05-14  3:17           ` Dave Chinner
  0 siblings, 0 replies; 14+ messages in thread
From: Dave Chinner @ 2013-05-14  3:17 UTC (permalink / raw)
  To: CAI Qian; +Cc: Eric Sandeen, xfs

On Mon, May 13, 2013 at 10:28:23PM -0400, CAI Qian wrote:
> The same problem reproduces with 3.10-rc1 on two Power7 systems. I am going to
> get a metadump this time.

Of course. It's detecting an on-disk corruption, so if you haven't
fixed it the kernel will still find it...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 3.9.0: XFS rootfs corruption
  2013-05-07 19:08       ` Eric Sandeen
  2013-05-14  2:28         ` CAI Qian
@ 2013-05-22  4:10         ` CAI Qian
  2013-05-22  8:48           ` CAI Qian
  2013-06-03  8:09         ` CAI Qian
  2 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-05-22  4:10 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs

OK, this has not been reproduced on 3.9-rc1 so far. It may be because the
rootfs became full after crash dump testing, though.
CAI Qian

----- Original Message -----
> From: "Eric Sandeen" <sandeen@sandeen.net>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: xfs@oss.sgi.com
> Sent: Wednesday, May 8, 2013 3:08:05 AM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On 5/7/13 2:53 AM, CAI Qian wrote:
> > 
> > 
> > ----- Original Message -----
> >> From: "Eric Sandeen" <sandeen@sandeen.net>
> >> To: "CAI Qian" <caiqian@redhat.com>
> >> Cc: xfs@oss.sgi.com
> >> Sent: Monday, May 6, 2013 10:31:01 PM
> >> Subject: Re: 3.9.0: XFS rootfs corruption
> >>
> >> On 5/6/13 2:50 AM, CAI Qian wrote:
> >>> Saw this on several different Power7 systems after kdump reboot. It has
> >>> xfsprogs-3.1.10
> >>> and rootfs in on LVM. Never saw one of those in any of the RC releases.
> >>>
> >>> ] Reached target Basic System.
> >>> [    4.919316] bio: create slab <bio-1> at 1
> >>> [    5.078616] SGI XFS with ACLs, security attributes, large block/inode
> >>> numbers, no debug enabled
> >>> [    5.081925] XFS (dm-1): Mounting Filesystem
> >>> [    5.168530] XFS (dm-1): Starting recovery (logdev: internal)
> >>> [    5.333575] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176
> >>> of
> >>> file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000002396fdc
> >>
> >> here:
> >>
> >>         /*
> >>          * Need to have seen all the entries and all the bestfree slots.
> >>          */
> >>         XFS_WANT_CORRUPTED_RETURN(freeseen == 7);
> >>
> >> I hope Dave knows offhand what this might mean.  :)
> >>
> >> Could you get a metadump of the filesystem in question?
> > Err, less familiar here. May I ask how can I do that?
> 
> since it's the root fs, you might need to do it from some sort of rescue
> shell, then just do xfs_metadump /dev/<device> <metadump filename>
> 
> the resulting file should compress further with something like bzip2.
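The capture described above can be sketched as a couple of commands (the device
path and output name below are assumed examples; substitute the real root LV):

```shell
# Sketch of the metadump capture described above; run from a rescue
# shell with the filesystem unmounted.  DEV is an assumed example
# path -- substitute the real root LV.
DEV=/dev/mapper/rhel-root
OUT=/tmp/rootfs.metadump

if command -v xfs_metadump >/dev/null 2>&1 && [ -b "$DEV" ]; then
    xfs_metadump "$DEV" "$OUT"   # file/dir names are obfuscated by default
    bzip2 -9 "$OUT"              # metadumps compress very well
    echo "wrote $OUT.bz2"
else
    echo "skipping: xfs_metadump (from xfsprogs) or $DEV not available"
fi
```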
> 
> ...
> 
> >>> Also, never saw any of those in other architectures like x64, but started
> >>> get those there in 3.9.0.
> >>> Unsure if those are related.
> >>>
> >>> [ 3224.369782]
> >>> =============================================================================
> >>> [ 3224.370017] BUG xfs_efi_item (Tainted: GF   B       ): Poison
> >>> overwritten
> >>> [ 3224.370017]
> >>> -----------------------------------------------------------------------------
> >>
> >>   2: 'F' if any module was force loaded by "insmod -f", ' ' if all
> >>      modules were loaded normally.
> >>
> >> Force loaded modules, what's that from?
> > This could be just happened after the booting done or we were running a
> > stress test later
> > that does load (modprobe *) and unload (modprobe -r *) every module. Again,
> > those warnings
> > could be totally unrelated to the above rootfs corruption.
> > CAI Qian
> 
> hmmm :)  So any one of those modules could have caused memory corruption I
> guess.
> 
> If you can hit it reliably you might try to narrow it down to whether it
> is a particular module causing it.
> 
> -Eric
> 
> 


* Re: 3.9.0: XFS rootfs corruption
  2013-05-22  4:10         ` CAI Qian
@ 2013-05-22  8:48           ` CAI Qian
  2013-05-22  9:46             ` Dave Chinner
  0 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-05-22  8:48 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



----- Original Message -----
> From: "CAI Qian" <caiqian@redhat.com>
> To: "Eric Sandeen" <sandeen@sandeen.net>
> Cc: xfs@oss.sgi.com
> Sent: Wednesday, May 22, 2013 12:10:07 PM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> OK, this has never been reproduced in 3.9-rc1 so far. It may be because the
> rootfs became full after the crash dump testing, though.
> CAI Qian
Oops, it is still there,
[    1.872402] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled 
[    1.882003] XFS (dm-1): Mounting Filesystem 
[    5.036445] XFS (dm-1): Starting recovery (logdev: internal) 
[    5.337985] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000245778c 
[    5.337985]  
[    5.338002] CPU: 15 PID: 425 Comm: mount Not tainted 3.10.0-rc2+ #1 
[    5.338007] Call Trace: 
[    5.338014] [c0000002e3782b90] [c000000000014e1c] .show_stack+0x7c/0x1f0 (unreliable) 
[    5.338024] [c0000002e3782c60] [c0000000007439dc] .dump_stack+0x28/0x3c 
[    5.338056] [c0000002e3782cd0] [d000000002410634] .xfs_error_report+0x54/0x70 [xfs] 
[    5.338088] [c0000002e3782d40] [d000000002457634] .__xfs_dir3_data_check+0x784/0x820 [xfs] 
[    5.338120] [c0000002e3782e40] [d00000000245778c] .xfs_dir3_data_verify+0xbc/0xe0 [xfs] 
[    5.338151] [c0000002e3782ec0] [d0000000024577ec] .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs] 
[    5.338181] [c0000002e3782f70] [d00000000240db44] ._xfs_buf_ioapply+0xd4/0x410 [xfs] 
[    5.338210] [c0000002e37830b0] [d00000000240df8c] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    5.338241] [c0000002e3783140] [d00000000240e084] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    5.338271] [c0000002e37831d0] [d00000000240e294] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs] 
[    5.338300] [c0000002e37832b0] [d00000000240f2d8] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    5.338334] [c0000002e3783350] [d000000002471fc4] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs] 
[    5.338366] [c0000002e3783410] [d0000000024722cc] .xlog_recover_process_data+0x25c/0x370 [xfs] 
[    5.338399] [c0000002e37834e0] [d000000002472528] .xlog_do_recovery_pass+0x148/0x590 [xfs] 
[    5.338431] [c0000002e3783650] [d000000002472a08] .xlog_do_log_recovery+0x98/0x110 [xfs] 
[    5.338463] [c0000002e37836e0] [d000000002472aa0] .xlog_do_recover+0x20/0x160 [xfs] 
[    5.338495] [c0000002e3783770] [d000000002472c78] .xlog_recover+0x98/0x110 [xfs] 
[    5.338527] [c0000002e3783800] [d00000000247d504] .xfs_log_mount+0x134/0x1d0 [xfs] 
[    5.338559] [c0000002e3783890] [d0000000024768e8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    5.338589] [c0000002e3783940] [d000000002424bbc] .xfs_fs_fill_super+0x30c/0x3a0 [xfs] 
[    5.338598] [c0000002e37839e0] [c000000000214a78] .mount_bdev+0x258/0x2a0 
[    5.338627] [c0000002e3783ab0] [d000000002422678] .xfs_fs_mount+0x18/0x30 [xfs] 
[    5.338635] [c0000002e3783b20] [c000000000215900] .mount_fs+0x70/0x230 
[    5.338643] [c0000002e3783be0] [c000000000237ee8] .vfs_kern_mount+0x58/0x130 
[    5.338650] [c0000002e3783c90] [c00000000023b0b0] .do_mount+0x2d0/0xb30 
[    5.338657] [c0000002e3783d70] [c00000000023b9c0] .SyS_mount+0xb0/0x110 
[    5.338664] [c0000002e3783e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    5.338672] c0000002d5220000: 58 44 32 44 09 50 00 40 0a 50 00 40 0b 50 00 40  XD2D.P.@.P.@.P.@ 
[    5.338679] c0000002d5220010: 00 00 00 00 08 23 e6 2d 32 62 65 61 68 5f 74 61  .....#.-2beah_ta 
[    5.338686] c0000002d5220020: 73 6b 5f 33 64 36 62 37 64 62 32 2d 61 35 35 37  sk_3d6b7db2-a557 
[    5.338693] c0000002d5220030: 2d 34 34 63 31 2d 38 65 64 36 2d 62 63 32 62 37  -44c1-8ed6-bc2b7 
[    5.338700] XFS (dm-1): Internal error xfs_dir3_data_write_verify at line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd00000000240db44 
[    5.338700]  
[    5.338710] CPU: 15 PID: 425 Comm: mount Not tainted 3.10.0-rc2+ #1 
[    5.338715] Call Trace: 
[    5.338718] [c0000002e3782c60] [c000000000014e1c] .show_stack+0x7c/0x1f0 (unreliable) 
[    5.338726] [c0000002e3782d30] [c0000000007439dc] .dump_stack+0x28/0x3c 
[    5.338755] [c0000002e3782da0] [d000000002410634] .xfs_error_report+0x54/0x70 [xfs] 
[    5.338785] [c0000002e3782e10] [d0000000024106cc] .xfs_corruption_error+0x7c/0xb0 [xfs] 
[    5.338816] [c0000002e3782ec0] [d0000000024578f8] .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs] 
[    5.338846] [c0000002e3782f70] [d00000000240db44] ._xfs_buf_ioapply+0xd4/0x410 [xfs] 
[    5.338875] [c0000002e37830b0] [d00000000240df8c] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    5.338906] [c0000002e3783140] [d00000000240e084] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    5.338936] [c0000002e37831d0] [d00000000240e294] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs] 
[    5.338965] [c0000002e37832b0] [d00000000240f2d8] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    5.338998] [c0000002e3783350] [d000000002471fc4] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs] 
[    5.339030] [c0000002e3783410] [d0000000024722cc] .xlog_recover_process_data+0x25c/0x370 [xfs] 
[    5.339063] [c0000002e37834e0] [d000000002472528] .xlog_do_recovery_pass+0x148/0x590 [xfs] 
[    5.339095] [c0000002e3783650] [d000000002472a08] .xlog_do_log_recovery+0x98/0x110 [xfs] 
[    5.339128] [c0000002e37836e0] [d000000002472aa0] .xlog_do_recover+0x20/0x160 [xfs] 
[    5.339160] [c0000002e3783770] [d000000002472c78] .xlog_recover+0x98/0x110 [xfs] 
[    5.339192] [c0000002e3783800] [d00000000247d504] .xfs_log_mount+0x134/0x1d0 [xfs] 
[    5.339226] [c0000002e3783890] [d0000000024768e8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    5.339256] [c0000002e3783940] [d000000002424bbc] .xfs_fs_fill_super+0x30c/0x3a0 [xfs] 
[    5.339264] [c0000002e37839e0] [c000000000214a78] .mount_bdev+0x258/0x2a0 
[    5.339293] [c0000002e3783ab0] [d000000002422678] .xfs_fs_mount+0x18/0x30 [xfs] 
[    5.339301] [c0000002e3783b20] [c000000000215900] .mount_fs+0x70/0x230 
[    5.339308] [c0000002e3783be0] [c000000000237ee8] .vfs_kern_mount+0x58/0x130 
[    5.339315] [c0000002e3783c90] [c00000000023b0b0] .do_mount+0x2d0/0xb30 
[    5.339322] [c0000002e3783d70] [c00000000023b9c0] .SyS_mount+0xb0/0x110 
[    5.339329] [c0000002e3783e30] [c000000000009e54] syscall_exit+0x0/0x98 
[    5.339335] XFS (dm-1): Corruption detected. Unmount and run xfs_repair 
[    5.339341] XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1364 of file fs/xfs/xfs_buf.c.  Return address = 0xd00000000240db70 
[    5.339350] XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem 
[    5.339356] XFS (dm-1): Please umount the filesystem and rectify the problem(s) 
[    5.339365] XFS (dm-1): metadata I/O error: block 0x2cb35d0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.339372] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339382] XFS (dm-1): metadata I/O error: block 0x2cb71d8 ("xlog_recover_iodone") error 5 numblks 8 
[    5.339389] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339398] XFS (dm-1): metadata I/O error: block 0x2fada78 ("xlog_recover_iodone") error 5 numblks 8 
[    5.339405] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339415] XFS (dm-1): metadata I/O error: block 0x3243eb0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.339422] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339431] XFS (dm-1): metadata I/O error: block 0x324ee10 ("xlog_recover_iodone") error 5 numblks 16 
[    5.339438] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339447] XFS (dm-1): metadata I/O error: block 0x324ee20 ("xlog_recover_iodone") error 5 numblks 16 
[    5.339454] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339463] XFS (dm-1): metadata I/O error: block 0x4150802 ("xlog_recover_iodone") error 5 numblks 1 
[    5.339471] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339480] XFS (dm-1): metadata I/O error: block 0x4323540 ("xlog_recover_iodone") error 5 numblks 8 
[    5.339487] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339496] XFS (dm-1): metadata I/O error: block 0x457c9b0 ("xlog_recover_iodone") error 5 numblks 16 
[    5.339503] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.339519] XFS (dm-1): metadata I/O error: block 0x2cb2158 ("xlog_recover_iodone") error 117 numblks 8 
[    5.339526] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000246d140 
[    5.412895] XFS (dm-1): log mount/recovery failed: error 117 
[    5.412943] XFS (dm-1): log mount failed 
[  FAILED  ] Failed to mount /sysroot.
See 'systemctl status sysroot.mount' for details.
[  DEPEND  ] Dependency failed for Initrd Root File System.
[  DEPEND  ] Dependency failed for Reload Configuration from the Real Root.
[    5.423354] systemd[1]: Starting Emergency Shell... 
[    5.428268] systemd[1]: Starting Journal Service... 
[    5.431426] systemd-journald[201]: Received SIGTERM 
[    5.432383] systemd[1]: Starting Journal Service... 
[    5.432961] systemd[1]: Started Journal Service. 
[    5.433743] systemd[1]: Stopped udev Kernel Device Manager. 
[    5.433777] systemd[1]: Stopping dracut pre-udev hook... 
[    5.433789] systemd[1]: Stopped dracut pre-udev hook. 
[    5.433829] systemd[1]: Stopping dracut cmdline hook... 
[    5.433840] systemd[1]: Stopped dracut cmdline hook. 
[    5.433875] systemd[1]: Stopping udev Kernel Socket. 
[    5.433911] systemd[1]: Closed udev Kernel Socket. 
[    5.433922] systemd[1]: Stopping udev Control Socket. 
[    5.433955] systemd[1]: Closed udev Control Socket. 
 
Generating "/run/initramfs/sosreport.txt" 
 
 
Entering emergency mode. Exit the shell to continue. 
Type "journalctl" to view system logs. 
You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot 
after mounting them and attach it to a bug report. 
 
 
:/#


* Re: 3.9.0: XFS rootfs corruption
  2013-05-22  8:48           ` CAI Qian
@ 2013-05-22  9:46             ` Dave Chinner
  2013-06-03  7:44               ` CAI Qian
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2013-05-22  9:46 UTC (permalink / raw)
  To: CAI Qian; +Cc: Eric Sandeen, xfs

On Wed, May 22, 2013 at 04:48:56AM -0400, CAI Qian wrote:
> Oops, it is still there,

Have you run xfs_repair -n  <dev>  to determine what is corrupted on
disk? Can you post the output when you do?
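For reference, the no-modify run can be captured straight to a file for
posting; a sketch (the device path is an assumed example):

```shell
# Run xfs_repair in no-modify mode (-n: report only, change nothing)
# and keep a copy of the output.  DEV is an assumed example path.
DEV=/dev/mapper/rhel-root
LOG=/tmp/xfs_repair-n.txt

if command -v xfs_repair >/dev/null 2>&1 && [ -b "$DEV" ]; then
    xfs_repair -n "$DEV" 2>&1 | tee "$LOG"
else
    echo "skipping: xfs_repair or $DEV not available" | tee "$LOG"
fi
```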

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 3.9.0: XFS rootfs corruption
  2013-05-22  9:46             ` Dave Chinner
@ 2013-06-03  7:44               ` CAI Qian
  0 siblings, 0 replies; 14+ messages in thread
From: CAI Qian @ 2013-06-03  7:44 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Eric Sandeen, xfs



----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: "Eric Sandeen" <sandeen@sandeen.net>, xfs@oss.sgi.com
> Sent: Wednesday, May 22, 2013 5:46:48 PM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On Wed, May 22, 2013 at 04:48:56AM -0400, CAI Qian wrote:
> > Oops, it is still there,
> 
> Have you run xfs_repair -n  <dev>  to determine what is corrupted on
> disk? Can you post the output when you do?
Here you go.
CAI Qian

:/# xfs_repair -n  /dev/mapper/rhel_ibm--p720--01--lp4-root 
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
agi_freecount 10, counted 8 in ag 1
sb_icount 149248, counted 149312
sb_ifree 216, counted 27
sb_fdblocks 10788304, counted 10784210
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
bad entry count in block 8388610 of directory inode 113571
bad entry count in block 8388610 of directory inode 1178385
bad entry count in block 8388610 of directory inode 1519037
        - agno = 1
bad entry count in block 8388610 of directory inode 67109016
bad entry count in block 8388610 of directory inode 67929825
bad entry count in block 8388610 of directory inode 69407749
bad entry count in block 8388610 of directory inode 69490381
bad entry count in block 8388610 of directory inode 69534546
bad entry count in block 8388610 of directory inode 69842112
        - agno = 2
bad entry count in block 8388610 of directory inode 134321722
bad entry count in block 8388610 of directory inode 134321726
bad entry count in block 8388610 of directory inode 136067648
bad entry count in block 8388610 of directory inode 144715871
        - agno = 3
bad entry count in block 8388610 of directory inode 201326727
bad entry count in block 8388610 of directory inode 201326754
bad entry count in block 8388610 of directory inode 201327172
bad entry count in block 8388610 of directory inode 201951914
bad entry count in block 8388610 of directory inode 202409289
bad entry count in block 8388610 of directory inode 206245507
bad entry count in block 8388610 of directory inode 206253502
bad entry count in block 8388610 of directory inode 206308010
bad entry count in block 8388610 of directory inode 206632072
bad entry count in block 8388610 of directory inode 212625436
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
entry "tmp.Iive34" at block 0 offset 2664 in directory inode 69369993 references free inode 69015237
	would clear inode number in entry at offset 2664...
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
entry "tmp.Iive34" in directory inode 69369993 points to free inode 69015237, would junk entry
bad hash table for directory inode 69369993 (no data entry): would rebuild
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.


* Re: 3.9.0: XFS rootfs corruption
  2013-05-07 19:08       ` Eric Sandeen
  2013-05-14  2:28         ` CAI Qian
  2013-05-22  4:10         ` CAI Qian
@ 2013-06-03  8:09         ` CAI Qian
  2013-06-04  4:36           ` Dave Chinner
  2 siblings, 1 reply; 14+ messages in thread
From: CAI Qian @ 2013-06-03  8:09 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs


> since it's the root fs, you might need to do it from some sort of rescue
> shell, then just do xfs_metadump /dev/<device> <metadump filename>
> 
> the resulting file should compress further with something like bzip2.
Hmm, there is no such command in the dracut rescue shell:
# xfs_metadump /dev/mapper/rhel_ibm--p720--01--lp4-root  metadump
sh: xfs_metadump: command not found
# xfs_<tab completion>
xfs_check   xfs_db      xfs_repair
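Since xfs_metadump is only a shell wrapper around xfs_db's "metadump" command,
and xfs_db is present in the rescue shell, the same dump can be taken directly;
a sketch (the device path is an assumed example):

```shell
# xfs_metadump is a wrapper around xfs_db's "metadump" command, so the
# dump can be taken with xfs_db alone.  DEV is an assumed example path.
DEV=/dev/mapper/rhel-root
OUT=/tmp/rootfs.metadump

if command -v xfs_db >/dev/null 2>&1 && [ -b "$DEV" ]; then
    xfs_db -r -c "metadump $OUT" "$DEV"   # -r: open the device read-only
else
    echo "skipping: xfs_db or $DEV not available"
fi
```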

Here were my other attempts to get it back again, but they seem to have
destroyed all the previous transactions:
:/mnt# xfs_check /dev/mapper/rhel_ibm--p720--01--lp4-root
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_check.  If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
:/# mount /dev/mapper/rhel_ibm--p720--01--lp4-root /mnt2
[  675.871991] XFS (dm-1): Mounting Filesystem
[  675.982416] XFS (dm-1): Starting recovery (logdev: internal)
[  676.029790] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 169 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000001644f5c
[  676.029790] 
[  676.029801] CPU: 1 PID: 463 Comm: mount Tainted: GF            3.10.0-rc4 #1
[  676.029805] Call Trace:
[  676.029811] [c0000001f82a2b90] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable)
[  676.029819] [c0000001f82a2c60] [c0000000007444fc] .dump_stack+0x28/0x3c
[  676.029846] [c0000001f82a2cd0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs]
[  676.029872] [c0000001f82a2d40] [d0000000016479f4] .__xfs_dir3_data_check+0x6c4/0x820 [xfs]
[  676.029898] [c0000001f82a2e40] [d000000001644f5c] .xfs_dir3_block_verify+0xbc/0xf0 [xfs]
[  676.029922] [c0000001f82a2ec0] [d00000000164510c] .xfs_dir3_block_write_verify+0x3c/0x1d0 [xfs]
[  676.029946] [c0000001f82a2f70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs]
[  676.029968] [c0000001f82a30b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs]
[  676.029991] [c0000001f82a3140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs]
[  676.030014] [c0000001f82a31d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs]
[  676.030037] [c0000001f82a32b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
[  676.030062] [c0000001f82a3350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs]
[  676.030088] [c0000001f82a3410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs]
[  676.030114] [c0000001f82a34e0] [d0000000016629f8] .xlog_do_recovery_pass+0x148/0x590 [xfs]
[  676.030139] [c0000001f82a3650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs]
[  676.030166] [c0000001f82a36e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs]
[  676.030191] [c0000001f82a3770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs]
[  676.030218] [c0000001f82a3800] [d00000000166d910] .xfs_log_mount+0xa0/0x1d0 [xfs]
[  676.030244] [c0000001f82a3890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs]
[  676.030267] [c0000001f82a3940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs]
[  676.030274] [c0000001f82a39e0] [c000000000214d58] .mount_bdev+0x258/0x2a0
[  676.030296] [c0000001f82a3ab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs]
[  676.030302] [c0000001f82a3b20] [c000000000215be0] .mount_fs+0x70/0x230
[  676.030308] [c0000001f82a3be0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130
[  676.030313] [c0000001f82a3c90] [c00000000023b390] .do_mount+0x2d0/0xb30
[  676.030319] [c0000001f82a3d70] [c00000000023bca0] .SyS_mount+0xb0/0x110
[  676.030324] [c0000001f82a3e30] [c000000000009e54] syscall_exit+0x0/0x98
[  676.030330] c0000001f9079000: 58 44 32 42 0a 68 02 d8 00 78 00 18 00 d8 00 18  XD2B.h...x......
[  676.030335] c0000001f9079010: 00 00 00 00 04 22 80 89 01 2e 00 01 e2 38 00 10  .....".......8..
[  676.030340] c0000001f9079020: 00 00 00 00 00 00 00 8f 02 2e 2e 67 67 65 00 20  ...........gge. 
[  676.030344] c0000001f9079030: 00 00 00 00 04 22 d0 8c 0c 74 6d 70 59 42 33 52  ....."...tmpYB3R
[  676.030350] XFS (dm-1): Internal error xfs_dir3_block_write_verify at line 109 of file fs/xfs/xfs_dir2_block.c.  Caller 0xd0000000015fdb74
[  676.030350] 
[  676.030357] CPU: 1 PID: 463 Comm: mount Tainted: GF            3.10.0-rc4 #1
[  676.030361] Call Trace:
[  676.030364] [c0000001f82a2c60] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable)
[  676.030370] [c0000001f82a2d30] [c0000000007444fc] .dump_stack+0x28/0x3c
[  676.030392] [c0000001f82a2da0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs]
[  676.030415] [c0000001f82a2e10] [d00000000160070c] .xfs_corruption_error+0x7c/0xb0 [xfs]
[  676.030440] [c0000001f82a2ec0] [d00000000164521c] .xfs_dir3_block_write_verify+0x14c/0x1d0 [xfs]
[  676.030463] [c0000001f82a2f70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs]
[  676.030485] [c0000001f82a30b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs]
[  676.030509] [c0000001f82a3140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs]
[  676.030532] [c0000001f82a31d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs]
[  676.030554] [c0000001f82a32b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
[  676.030580] [c0000001f82a3350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs]
[  676.030606] [c0000001f82a3410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs]
[  676.030632] [c0000001f82a34e0] [d0000000016629f8] .xlog_do_recovery_pass+0x148/0x590 [xfs]
[  676.030658] [c0000001f82a3650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs]
[  676.030684] [c0000001f82a36e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs]
[  676.030710] [c0000001f82a3770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs]
[  676.030735] [c0000001f82a3800] [d00000000166d910] .xfs_log_mount+0xa0/0x1d0 [xfs]
[  676.030761] [c0000001f82a3890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs]
[  676.030784] [c0000001f82a3940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs]
[  676.030791] [c0000001f82a39e0] [c000000000214d58] .mount_bdev+0x258/0x2a0
[  676.030814] [c0000001f82a3ab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs]
[  676.030820] [c0000001f82a3b20] [c000000000215be0] .mount_fs+0x70/0x230
[  676.030825] [c0000001f82a3be0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130
[  676.030830] [c0000001f82a3c90] [c00000000023b390] .do_mount+0x2d0/0xb30
[  676.030835] [c0000001f82a3d70] [c00000000023bca0] .SyS_mount+0xb0/0x110
[  676.030840] [c0000001f82a3e30] [c000000000009e54] syscall_exit+0x0/0x98
[  676.030844] XFS (dm-1): Corruption detected. Unmount and run xfs_repair
[  676.030849] XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1365 of file fs/xfs/xfs_buf.c.  Return address = 0xd0000000015fdba0
[  676.030855] XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem
[  676.030859] XFS (dm-1): Please umount the filesystem and rectify the problem(s)
[  676.030866] XFS (dm-1): metadata I/O error: block 0x1cacb80 ("xlog_recover_iodone") error 5 numblks 16
[  676.030872] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600
[  676.030908] XFS (dm-1): metadata I/O error: block 0x1a14580 ("xlog_recover_iodone") error 117 numblks 8
[  676.030913] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600
[  676.049998] XFS (dm-1): log mount/recovery failed: error 117
[  676.050042] XFS (dm-1): log mount failed
mount: mount /dev/mapper/rhel_ibm--p720--01--lp4-root on /mnt2 failed: Structure needs cleaning
:/# xfs_repair -L  /dev/mapper/rhel_ibm--p720--01--lp4-root
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
agi_freecount 10, counted 8 in ag 1
sb_icount 149248, counted 149312
sb_ifree 216, counted 27
sb_fdblocks 10788304, counted 10784210
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
bad entry count in block 8388610 of directory inode 113571
bad entry count in block 8388610 of directory inode 1178385
bad entry count in block 8388610 of directory inode 1519037
        - agno = 1
bad entry count in block 8388610 of directory inode 67109016
bad entry count in block 8388610 of directory inode 67929825
bad entry count in block 8388610 of directory inode 69407749
bad entry count in block 8388610 of directory inode 69490381
bad entry count in block 8388610 of directory inode 69534546
bad entry count in block 8388610 of directory inode 69842112
        - agno = 2
bad entry count in block 8388610 of directory inode 134321722
bad entry count in block 8388610 of directory inode 144715871
        - agno = 3
bad entry count in block 8388610 of directory inode 201326727
bad entry count in block 8388610 of directory inode 201326754
bad entry count in block 8388610 of directory inode 201327172
bad entry count in block 8388610 of directory inode 201951914
bad entry count in block 8388610 of directory inode 202409289
bad entry count in block 8388610 of directory inode 206245507
bad entry count in block 8388610 of directory inode 206253502
bad entry count in block 8388610 of directory inode 206308010
bad entry count in block 8388610 of directory inode 206632072
bad entry count in block 8388610 of directory inode 212625436
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
entry "tmp.Iive34" at block 0 offset 2664 in directory inode 69369993 references free inode 69015237
	clearing inode number in entry at offset 2664...
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
rebuilding directory inode 113571
rebuilding directory inode 1178385
rebuilding directory inode 1519037
rebuilding directory inode 67109016
rebuilding directory inode 67929825
bad hash table for directory inode 69369993 (no data entry): rebuilding
rebuilding directory inode 69369993
rebuilding directory inode 69407749
rebuilding directory inode 69490381
rebuilding directory inode 69534546
rebuilding directory inode 69842112
rebuilding directory inode 134321722
rebuilding directory inode 134321726
rebuilding directory inode 136067648
rebuilding directory inode 144715871
rebuilding directory inode 201326727
rebuilding directory inode 201326754
rebuilding directory inode 201327172
rebuilding directory inode 201951914
rebuilding directory inode 202409289
rebuilding directory inode 206245507
rebuilding directory inode 206253502
rebuilding directory inode 206308010
rebuilding directory inode 206632072
rebuilding directory inode 212625436
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
:/# mount /dev/mapper/rhel_ibm--p720--01--lp4-root /mnt2
[  715.323289] XFS (dm-1): Mounting Filesystem
[  715.410055] XFS (dm-1): Ending clean mount
:/# umount /mnt2
:/# xfs_check /dev/mapper/rhel_ibm--p720--01--lp4-root

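For reference, the failure messages above hinge on two errno values: the xlog_recover_iodone writes report "error 5" (plain I/O error) and the failed log recovery/mount surfaces "error 117", which mount(8) renders as "Structure needs cleaning". A minimal sketch confirming the Linux errno mapping (names and message strings assume a Linux libc; other platforms differ):

```python
import errno
import os

# "error 5" in the xlog_recover_iodone messages: a plain I/O error.
print(errno.EIO, os.strerror(errno.EIO))          # 5 Input/output error

# "error 117" from the failed log mount: EUCLEAN, which the mount
# utility prints as "Structure needs cleaning".
print(errno.EUCLEAN, os.strerror(errno.EUCLEAN))  # 117 Structure needs cleaning
```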
# cat sosreport.txt
+ cat /proc/self/mountinfo
1 1 0:1 / / rw shared:1 - rootfs rootfs rw
16 1 0:3 / /proc rw,nosuid,nodev,noexec,relatime shared:2 - proc proc rw
17 1 0:15 / /sys rw,nosuid,nodev,noexec,relatime shared:3 - sysfs sysfs rw
18 1 0:5 / /dev rw,nosuid shared:9 - devtmpfs devtmpfs rw,size=3995520k,nr_inodes=62430,mode=755
19 17 0:16 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:4 - securityfs securityfs rw
20 17 0:14 / /sys/fs/selinux rw,relatime shared:5 - selinuxfs selinuxfs rw
21 18 0:17 / /dev/shm rw,nosuid,nodev shared:10 - tmpfs tmpfs rw
22 18 0:10 / /dev/pts rw,nosuid,noexec,relatime shared:11 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
23 1 0:18 / /run rw,nosuid,nodev shared:12 - tmpfs tmpfs rw,mode=755
24 17 0:19 / /sys/fs/cgroup rw,nosuid,nodev,noexec shared:6 - tmpfs tmpfs rw,mode=755
25 24 0:20 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:7 - cgroup cgroup rw,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
26 17 0:21 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:8 - pstore pstore rw
27 24 0:22 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,cpuset
28 24 0:23 / /sys/fs/cgroup/cpu,cpuacct rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,cpuacct,cpu
29 24 0:24 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,memory
30 24 0:25 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,devices
31 24 0:26 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,freezer
32 24 0:27 / /sys/fs/cgroup/net_cls rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,net_cls
33 24 0:28 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,blkio
34 24 0:29 / /sys/fs/cgroup/perf_event rw,nosuid,nodev,noexec,relatime shared:20 - cgroup cgroup rw,perf_event
+ cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=3995520k,nr_inodes=62430,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs rw,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
+ blkid
/dev/sda2: UUID="95fcda14-51a9-4097-b4eb-602525a97253" TYPE="xfs" 
/dev/sda3: UUID="PgGxRo-wpgn-fGKu-zdGI-81AV-E9sK-Pijtcg" TYPE="LVM2_member" 
/dev/mapper/rhel_ibm--p720--01--lp4-swap: UUID="2adeb69f-1393-491a-bc8c-427f3982494b" TYPE="ext4" 
/dev/mapper/rhel_ibm--p720--01--lp4-root: UUID="30931897-8173-4ad6-8005-5e6c973977eb" TYPE="xfs" 
+ blkid -o udev
ID_FS_UUID=95fcda14-51a9-4097-b4eb-602525a97253
ID_FS_UUID_ENC=95fcda14-51a9-4097-b4eb-602525a97253
ID_FS_TYPE=xfs

ID_FS_UUID=PgGxRo-wpgn-fGKu-zdGI-81AV-E9sK-Pijtcg
ID_FS_UUID_ENC=PgGxRo-wpgn-fGKu-zdGI-81AV-E9sK-Pijtcg
ID_FS_TYPE=LVM2_member

ID_FS_UUID=2adeb69f-1393-491a-bc8c-427f3982494b
ID_FS_UUID_ENC=2adeb69f-1393-491a-bc8c-427f3982494b
ID_FS_TYPE=ext4

ID_FS_UUID=30931897-8173-4ad6-8005-5e6c973977eb
ID_FS_UUID_ENC=30931897-8173-4ad6-8005-5e6c973977eb
ID_FS_TYPE=xfs
+ ls -l /dev/disk/by-id /dev/disk/by-path /dev/disk/by-uuid
/dev/disk/by-id:
total 0
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 dm-name-rhel_ibm--p720--01--lp4-root -> ../../dm-1
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 dm-name-rhel_ibm--p720--01--lp4-swap -> ../../dm-0
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 dm-uuid-LVM-iLqOBO1yd7F60BDuy4rzDc22fi9RNRZTe3Q6gzBr9HzmDTEv78cn7FkLTLfzsUjZ -> ../../dm-1
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 dm-uuid-LVM-iLqOBO1yd7F60BDuy4rzDc22fi9RNRZThiXYwcS7qFFTS8LMTzuJhE9GaB4itczs -> ../../dm-0
lrwxrwxrwx 1 root 0  9 Jun  3 03:49 scsi-SAIX_VDASD_00f6db0f00004c0000000136a3035480.3 -> ../../sda
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-SAIX_VDASD_00f6db0f00004c0000000136a3035480.3-part1 -> ../../sda1
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-SAIX_VDASD_00f6db0f00004c0000000136a3035480.3-part2 -> ../../sda2
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-SAIX_VDASD_00f6db0f00004c0000000136a3035480.3-part3 -> ../../sda3

/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root 0  9 Jun  3 03:49 scsi-0:0:1:0 -> ../../sda
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-0:0:1:0-part1 -> ../../sda1
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-0:0:1:0-part2 -> ../../sda2
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 scsi-0:0:1:0-part3 -> ../../sda3

/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 2adeb69f-1393-491a-bc8c-427f3982494b -> ../../dm-0
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 30931897-8173-4ad6-8005-5e6c973977eb -> ../../dm-1
lrwxrwxrwx 1 root 0 10 Jun  3 03:49 95fcda14-51a9-4097-b4eb-602525a97253 -> ../../sda2
+ cat /proc/cmdline
BOOT_IMAGE=/vmlinux-3.10.0-rc4 root=/dev/mapper/rhel_ibm--p720--01--lp4-root ro rd.lvm.lv=rhel_ibm-p720-01-lp4/swap rd.lvm.lv=rhel_ibm-p720-01-lp4/root rd.md=0 rd.dm=0 vconsole.keymap=us crashkernel=256M rd.luks=0
+ '[' -f /etc/cmdline ']'
+ for _i in '/etc/cmdline.d/*.conf'
+ '[' -f /etc/cmdline.d/90lvm.conf ']'
+ echo /etc/cmdline.d/90lvm.conf
/etc/cmdline.d/90lvm.conf
+ cat /etc/cmdline.d/90lvm.conf
 rd.lvm.lv=rhel_ibm-p720-01-lp4/root 
 rd.lvm.lv=rhel_ibm-p720-01-lp4/swap 
+ for _i in '/etc/conf.d/*.conf'
+ '[' -f /etc/conf.d/systemd.conf ']'
+ echo /etc/conf.d/systemd.conf
/etc/conf.d/systemd.conf
+ cat /etc/conf.d/systemd.conf
systemdutildir="/usr/lib/systemd"
systemdsystemunitdir="/usr/lib/systemd/system"
systemdsystemconfdir="/etc/systemd/system"
+ command -v lvm
+ lvm pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               rhel_ibm-p720-01-lp4
  PV Size               99.51 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              25473
  Free PE               0
  Allocated PE          25473
  PV UUID               PgGxRo-wpgn-fGKu-zdGI-81AV-E9sK-Pijtcg
   
+ lvm vgdisplay
  --- Volume group ---
  VG Name               rhel_ibm-p720-01-lp4
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               99.50 GiB
  PE Size               4.00 MiB
  Total PE              25473
  Alloc PE / Size       25473 / 99.50 GiB
  Free  PE / Size       0 / 0   
  VG UUID               iLqOBO-1yd7-F60B-Duy4-rzDc-22fi-9RNRZT
   
+ lvm lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel_ibm-p720-01-lp4/swap
  LV Name                swap
  VG Name                rhel_ibm-p720-01-lp4
  LV UUID                hiXYwc-S7qF-FTS8-LMTz-uJhE-9GaB-4itczs
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                7.94 GiB
  Current LE             2032
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/rhel_ibm-p720-01-lp4/home
  LV Name                home
  VG Name                rhel_ibm-p720-01-lp4
  LV UUID                wbhpwz-urNr-mQAs-I0WG-xpjE-aOq3-qM48yJ
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                41.57 GiB
  Current LE             10641
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/rhel_ibm-p720-01-lp4/root
  LV Name                root
  VG Name                rhel_ibm-p720-01-lp4
  LV UUID                e3Q6gz-Br9H-zmDT-Ev78-cn7F-kLTL-fzsUjZ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                50.00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
+ command -v dmsetup
+ dmsetup ls --tree
rhel_ibm--p720--01--lp4-swap (253:0)
 `- (8:3)
rhel_ibm--p720--01--lp4-root (253:1)
 `- (8:3)
+ cat /proc/mdstat
Personalities : 
unused devices: <none>
+ command -v journalctl
+ journalctl -ab --no-pager -o short-monotonic
-- Logs begin at Mon 2013-06-03 03:49:14 UTC, end at Mon 2013-06-03 03:49:17 UTC. --
[    0.788500] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journal[178]: Allowing runtime journal files to grow to 394.2M.
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Reserving 256MB of memory at 128MB for crashkernel (System RAM: 8192MB)
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Allocated 1048576 bytes for 1024 pacas at c000000007f00000
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Using pSeries machine description
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Page sizes from device-tree:
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=12: shift=12, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=0
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=12: shift=16, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=7
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=12: shift=24, sllp=0x0000, avpnm=0x00000000, tlbiel=1, penc=56
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=16: shift=16, sllp=0x0110, avpnm=0x00000000, tlbiel=1, penc=1
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=16: shift=24, sllp=0x0110, avpnm=0x00000000, tlbiel=1, penc=8
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=24: shift=24, sllp=0x0100, avpnm=0x00000001, tlbiel=0, penc=0
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: base_shift=34: shift=34, sllp=0x0120, avpnm=0x000007ff, tlbiel=0, penc=3
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Page orders: linear mapping = 24, virtual = 16, io = 12
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Using 1TB segments
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Found initrd at 0xc000000004e00000:0xc000000005742a84
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: bootconsole [udbg0] enabled
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Partition configured for 28 cpus.
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: CPU maps initialized for 4 threads per core
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:  (thread shift is 2)
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Freed 983040 bytes for unused pacas
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Starting Linux PPC64 #1 SMP Mon Jun 3 00:01:47 EDT 2013
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: -----------------------------------------------------
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ppc64_pft_size                = 0x1c
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: physicalMemorySize            = 0x200000000
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: htab_hash_mask                = 0x1fffff
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: -----------------------------------------------------
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys cpuset
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys cpu
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys cpuacct
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Linux version 3.10.0-rc4 (root@ibm-p720-01-lp4.rhts.eng.bos.redhat.com) (gcc version 4.8.0 20130419 (Red Hat 4.8.0-3) (GCC) ) #1 SMP Mon Jun 3 00:01:47 EDT 2013
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [boot]0012 Setup Arch
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Node 0 Memory: 0x0-0x200000000
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PPC64 nvram contains 15360 bytes
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Zone ranges:
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   DMA      [mem 0x00000000-0x1ffffffff]
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   Normal   empty
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Movable zone start for each node
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Early memory node ranges
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   node   0: [mem 0x00000000-0x1ffffffff]
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: On node 0 totalpages: 131072
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   DMA zone: 112 pages used for memmap
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   DMA zone: 0 pages reserved
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:   DMA zone: 131072 pages, LIFO batch:1
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [boot]0015 Setup Done
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PERCPU: Embedded 2 pages/cpu @c000000001500000 s89088 r0 d41984 u131072
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pcpu-alloc: s89088 r0 d41984 u131072 alloc=1*1048576
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pcpu-alloc: [0] 16 17 18 19 20 21 22 23 [0] 24 25 26 27 -- -- -- -- 
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Built 1 zonelists in Node order, mobility grouping on.  Total pages: 130960
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Policy zone: DMA
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Kernel command line: BOOT_IMAGE=/vmlinux-3.10.0-rc4 root=/dev/mapper/rhel_ibm--p720--01--lp4-root ro rd.lvm.lv=rhel_ibm-p720-01-lp4/swap rd.lvm.lv=rhel_ibm-p720-01-lp4/root rd.md=0 rd.dm=0 vconsole.keymap=us crashkernel=256M rd.luks=0
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PID hash table entries: 4096 (order: -1, 32768 bytes)
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Sorting __ex_table...
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: freeing bootmem node 0
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Memory: 7991040k/8388608k available (16256k kernel code, 397568k reserved, 1728k data, 3083k bss, 5632k init)
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=28, Nodes=256
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Hierarchical RCU implementation.
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: 	RCU restricting CPUs from NR_CPUS=1024 to nr_cpu_ids=28.
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NR_IRQS:512 nr_irqs:512 16
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pic: no ISA interrupt controller
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: time_init: decrementer frequency = 512.000000 MHz
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: time_init: processor frequency   = 3000.000000 MHz
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: clocksource: timebase mult[1f40000] shift[24] registered
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: clockevent: decrementer mult[83126e98] shift[32] cpu[0]
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Console: colour dummy device 80x25
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: console [hvc0] enabled, bootconsole disabled
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: allocated 2097152 bytes of page_cgroup
[    0.000000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: please try 'cgroup_disable=memory' option if you don't want memory cgroups
[    0.004360] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pid_max: default: 32768 minimum: 301
[    0.004413] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Security Framework initialized
[    0.004423] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SELinux:  Initializing.
[    0.004433] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SELinux:  Starting in permissive mode
[    0.004602] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Dentry cache hash table entries: 1048576 (order: 7, 8388608 bytes)
[    0.006264] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Inode-cache hash table entries: 524288 (order: 6, 4194304 bytes)
[    0.007241] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Mount-cache hash table entries: 4096
[    0.008469] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys memory
[    0.008653] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys devices
[    0.008662] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys freezer
[    0.008666] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys net_cls
[    0.008669] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys blkio
[    0.008673] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing cgroup subsys perf_event
[    0.008836] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: EEH: pSeries platform initialized
[    0.008840] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: POWER7 performance monitor hardware support registered
[    0.021453] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Brought up 28 CPUs
[    0.021470] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Node 0 CPUs: 0-27
[    0.021586] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Enabling Asymmetric SMT scheduling
[    0.024690] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: devtmpfs: initialized
[    0.062630] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: EEH: devices created
[    0.065559] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: atomic64 test passed
[    0.065748] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 16
[    0.065799] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: EEH: No capable adapters found
[    0.065945] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: IBM eBus Device Driver
[    0.072208] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PCI: Probing PCI hardware
[    0.072218] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PCI: Probing PCI hardware done
[    0.072226] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: opal: Node not found
[    0.074665] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: bio: create slab <bio-0> at 0
[    0.075134] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: vgaarb: loaded
[    0.075288] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SCSI subsystem initialized
[    0.075378] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbfs
[    0.075403] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new interface driver hub
[    0.075462] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new device driver usb
[    0.075943] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NetLabel: Initializing
[    0.075949] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NetLabel:  domain hash size = 128
[    0.075956] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NetLabel:  protocols = UNLABELED CIPSOv4
[    0.075987] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NetLabel:  unlabeled traffic allowed by default
[    0.076115] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Switching to clocksource timebase
[    0.098959] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 2
[    0.099380] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: TCP established hash table entries: 65536 (order: 4, 1048576 bytes)
[    0.100592] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: TCP bind hash table entries: 65536 (order: 4, 1048576 bytes)
[    0.101581] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: TCP: Hash tables configured (established 65536 bind 65536)
[    0.101607] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: TCP: reno registered
[    0.101623] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: UDP hash table entries: 4096 (order: 1, 131072 bytes)
[    0.101795] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: UDP-Lite hash table entries: 4096 (order: 1, 131072 bytes)
[    0.102192] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 1
[    0.102211] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: PCI: CLS 0 bytes, default 128
[    0.102300] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Unpacking initramfs...
[    0.519156] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Freeing initrd memory: 9472K (c000000004e00000 - c000000005740000)
[    0.519536] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: RTAS daemon started
[    0.543329] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: IOMMU table initialized, virtual merging enabled
[    0.543648] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pseries_idle_driver registered
[    0.543923] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initialise module verification
[    0.544030] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: audit: initializing netlink socket (disabled)
[    0.544056] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: type=2000 audit(1370231353.530:1): initialized
[    0.753238] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: HugeTLB registered 16 MB page size, pre-allocated 0 pages
[    0.753258] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: HugeTLB registered 16 GB page size, pre-allocated 0 pages
[    0.757129] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: VFS: Disk quotas dquot_6.5.2
[    0.757347] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Dquot-cache hash table entries: 8192 (order 0, 65536 bytes)
[    0.758599] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: msgmni has been set to 15756
[    0.758805] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SELinux:  Registering netfilter hooks
[    0.759954] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: alg: No test for stdrng (krng)
[    0.759979] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 38
[    0.759989] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Key type asymmetric registered
[    0.759996] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Asymmetric key parser 'x509' registered
[    0.760084] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 252)
[    0.760222] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: io scheduler noop registered
[    0.760230] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: io scheduler deadline registered (default)
[    0.760305] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: io scheduler cfq registered
[    0.760470] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.760478] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: rpaphp: RPA HOT Plug PCI Controller Driver version: 0.1
[    0.761193] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
[    0.761764] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Linux agpgart interface v0.103
[    0.763936] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: loop: module loaded
[    0.763951] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: rdac: device handler registered
[    0.764109] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: hp_sw: device handler registered
[    0.764117] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: emc: device handler registered
[    0.764123] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: alua: device handler registered
[    0.764201] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: libphy: Fixed MDIO Bus: probed
[    0.764286] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.764307] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ehci-pci: EHCI PCI platform driver
[    0.764328] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.764361] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: uhci_hcd: USB Universal Host Controller Interface driver
[    0.764496] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbserial
[    0.764517] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbserial_generic
[    0.764536] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbserial: USB Serial support registered for generic
[    0.764658] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: mousedev: PS/2 mouse device common for all mice
[    0.765043] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: rtc-generic rtc-generic: rtc core: registered rtc-generic as rtc0
[    0.765359] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: cpuidle: using governor ladder
[    0.765831] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: cpuidle: using governor menu
[    0.765847] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: hidraw: raw HID events driver (C) Jiri Kosina
[    0.765923] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbhid
[    0.765926] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: usbhid: USB HID core driver
[    0.765973] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: drop_monitor: Initializing network drop monitor service
[    0.766045] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: TCP: cubic registered
[    0.766049] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Initializing XFRM netlink socket
[    0.766149] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 10
[    0.766358] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: NET: Registered protocol family 17
[    0.766411] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Running MSI bitmap self-tests ...
[    0.767492] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Loading module verification certificates
[    0.767510] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: X.509: Cert 0c13c183e928455e2f5c0b7b606a338a0d3028da is not yet valid
[    0.767515] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: MODSIGN: Problem loading in-kernel X.509 certificate (-129)
[    0.767525] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: registered taskstats version 1
[    0.767851] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: rtc-generic rtc-generic: setting system clock to 2013-06-03 03:49:14 UTC (1370231354)
[    0.768746] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Freeing unused kernel memory: 5632K (c000000000a60000 - c000000000fe0000)
[    0.779188] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: systemd 202 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ)
[    0.779254] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Running in initial RAM disk.
[    0.779607] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Set hostname to <ibm-p720-01-lp4.rhts.eng.bos.redhat.com>.
[    0.787752] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Expecting device dev-mapper-rhel_ibm\x2d\x2dp720\x2d\x2d01\x2d\x2dlp4\x2droot.device...
[    0.787992] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Expecting device dev-disk-by\x2duuid-95fcda14\x2d51a9\x2d4097\x2db4eb\x2d602525a97253.device...
[    0.788147] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Expecting device dev-mapper-rhel_ibm\x2d\x2dp720\x2d\x2d01\x2d\x2dlp4\x2dswap.device...
[    0.788297] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Timers.
[    0.788446] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Timers.
[    0.788462] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Journal Socket.
[    0.788682] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Listening on Journal Socket.
[    0.788955] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting dracut cmdline hook...
[    0.790075] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Load Kernel Modules.
[    0.790092] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Journal Service...
[    0.791000] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Journal Service.
[    0.791274] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting udev Kernel Socket.
[    0.791458] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Listening on udev Kernel Socket.
[    0.791532] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting udev Control Socket.
[    0.791718] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Listening on udev Control Socket.
[    0.791733] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Sockets.
[    0.791887] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Sockets.
[    0.791901] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Swap.
[    0.792053] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Swap.
[    0.792068] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Local File Systems.
[    0.792224] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Local File Systems.
[    0.791864] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journal[178]: Journal started
[    0.805628] ibm-p720-01-lp4.rhts.eng.bos.redhat.com dracut-cmdline[177]: dracut-6.93Server (Maipo) dracut-027-45.git20130430.el7
[    0.934170] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started dracut cmdline hook.
[    0.934730] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Setup Virtual Console...
[    0.935505] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting dracut pre-udev hook...
[    0.941679] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Setup Virtual Console.
[    0.957333] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    0.957931] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: dm_mod: module verification failed: signature and/or required key missing - tainting kernel
[    0.958574] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: device-mapper: uevent: version 1.0.3
[    0.958683] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: device-mapper: ioctl: 4.24.0-ioctl (2013-01-15) initialised: dm-devel@redhat.com
[    0.960262] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    0.960473] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    0.960650] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    0.974321] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started dracut pre-udev hook.
[    0.974880] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting udev Kernel Device Manager...
[    0.977472] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started udev Kernel Device Manager.
[    0.978018] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started dracut pre-trigger hook.
[    0.978558] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting udev Coldplug all Devices...
[    0.982832] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-udevd[275]: starting version 202
[    0.994254] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started udev Coldplug all Devices.
[    0.995203] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting dracut initqueue hook...
[    0.996300] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting System Initialization.
[    0.997467] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target System Initialization.
[    0.998328] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Show Plymouth Boot Screen...
[    1.001736] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    1.002251] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    1.002530] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    1.003372] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: SRP_VERSION: 16.a
[    1.003526] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: scsi0 : IBM POWER Virtual SCSI Adapter 1.5.9
[    1.003727] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: partner initialization complete
[    1.003786] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: host srp version: 16.a, host partition vios (1), OS 3, max io 1048576
[    1.003852] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: Client reserve enabled
[    1.003867] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: sent SRP login
[    1.003919] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: ibmvscsi 30000002: SRP_LOGIN succeeded
[    1.017643] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reloading.
[    1.016835] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: scsi 0:0:1:0: Direct-Access     AIX      VDASD            0001 PQ: 0 ANSI: 3
[    1.042738] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    1.042898] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    1.043785] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] 209715200 512-byte logical blocks: (107 GB/100 GiB)
[    1.043836] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Write Protect is off
[    1.043841] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Mode Sense: 17 00 00 08
[    1.043886] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Cache data unavailable
[    1.043896] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Assuming drive cache: write through
[    1.044193] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Cache data unavailable
[    1.044200] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Assuming drive cache: write through
[    1.062525] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel:  sda: sda1 sda2 sda3
[    1.062941] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Cache data unavailable
[    1.062946] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Assuming drive cache: write through
[    1.062951] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: sd 0:0:1:0: [sda] Attached SCSI disk
[    1.533971] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Show Plymouth Boot Screen.
[    1.534480] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[    1.534947] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Paths.
[    1.535413] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Paths.
[    1.535880] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Forward Password Requests to Plymouth Directory Watch.
[    1.536347] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Forward Password Requests to Plymouth Directory Watch.
[    1.536812] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Basic System.
[    1.537278] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Reached target Basic System.
[    1.578808] ibm-p720-01-lp4.rhts.eng.bos.redhat.com dracut-initqueue[288]: Scanning devices sda3  for LVM logical volumes rhel_ibm-p720-01-lp4/root rhel_ibm-p720-01-lp4/swap rhel_ibm-p720-01-lp4/swap rhel_ibm-p720-01-lp4/root
[    2.204698] ibm-p720-01-lp4.rhts.eng.bos.redhat.com dracut-initqueue[288]: inactive '/dev/rhel_ibm-p720-01-lp4/swap' [7.94 GiB] inherit
[    2.205127] ibm-p720-01-lp4.rhts.eng.bos.redhat.com dracut-initqueue[288]: inactive '/dev/rhel_ibm-p720-01-lp4/home' [41.57 GiB] inherit
[    2.205545] ibm-p720-01-lp4.rhts.eng.bos.redhat.com dracut-initqueue[288]: inactive '/dev/rhel_ibm-p720-01-lp4/root' [50.00 GiB] inherit
[    2.767245] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: bio: create slab <bio-1> at 1
[    3.062278] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Found device /dev/disk/by-uuid/30931897-8173-4ad6-8005-5e6c973977eb.
[    3.062838] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Found device /dev/disk/by-id/dm-uuid-LVM-iLqOBO1yd7F60BDuy4rzDc22fi9RNRZTe3Q6gzBr9HzmDTEv78cn7FkLTLfzsUjZ.
[    3.063381] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Found device /dev/disk/by-id/dm-name-rhel_ibm--p720--01--lp4-root.
[    3.063922] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Found device /dev/dm-1.
[    3.064461] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Found device /sys/devices/virtual/block/dm-1.
[    3.068665] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started dracut initqueue hook.
[    3.069238] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started dracut pre-mount hook.
[    3.069781] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Mounting /sysroot...
[    3.297432] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    3.297881] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Request for unknown module key 'Magrathea: Glacier signing key: 0c13c183e928455e2f5c0b7b606a338a0d3028da' err -11
[    3.308481] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
[    3.387164] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Mounting Filesystem
[    3.529607] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Starting recovery (logdev: internal)
[    3.583037] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 169 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000001644f5c

[    3.583048] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: CPU: 0 PID: 399 Comm: mount Tainted: GF            3.10.0-rc4 #1
[    3.583052] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Call Trace:
[    3.583057] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ab90] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable)
[    3.583064] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ac60] [c0000000007444fc] .dump_stack+0x28/0x3c
[    3.583091] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0acd0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs]
[    3.583113] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ad40] [d0000000016479f4] .__xfs_dir3_data_check+0x6c4/0x820 [xfs]
[    3.583133] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ae40] [d000000001644f5c] .xfs_dir3_block_verify+0xbc/0xf0 [xfs]
[    3.583153] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0aec0] [d00000000164510c] .xfs_dir3_block_write_verify+0x3c/0x1d0 [xfs]
[    3.583171] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0af70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs]
[    3.583188] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b0b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs]
[    3.583206] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs]
[    3.583223] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b1d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs]
[    3.583240] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b2b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
[    3.583262] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs]
[    3.583283] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs]
[    3.583305] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b4e0] [d0000000016629f8] .xlog_do_recovery_pass+0x148/0x590 [xfs]
[    3.583326] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs]
[    3.583348] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b6e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs]
[    3.583369] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs]
[    3.583392] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b800] [d00000000166d9a4] .xfs_log_mount+0x134/0x1d0 [xfs]
[    3.583414] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs]
[    3.583432] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs]
[    3.583439] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b9e0] [c000000000214d58] .mount_bdev+0x258/0x2a0
[    3.583458] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs]
[    3.583463] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bb20] [c000000000215be0] .mount_fs+0x70/0x230
[    3.583468] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bbe0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130
[    3.583473] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bc90] [c00000000023b390] .do_mount+0x2d0/0xb30
[    3.583478] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bd70] [c00000000023bca0] .SyS_mount+0xb0/0x110
[    3.583483] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0be30] [c000000000009e54] syscall_exit+0x0/0x98
[    3.583488] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: c0000001f907c000: 58 44 32 42 0a 68 02 d8 00 78 00 18 00 d8 00 18  XD2B.h...x......
[    3.583493] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: c0000001f907c010: 00 00 00 00 04 22 80 89 01 2e 00 01 e2 38 00 10  .....".......8..
[    3.583497] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: c0000001f907c020: 00 00 00 00 00 00 00 8f 02 2e 2e 67 67 65 00 20  ...........gge. 
[    3.583502] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: c0000001f907c030: 00 00 00 00 04 22 d0 8c 0c 74 6d 70 59 42 33 52  ....."...tmpYB3R
[    3.583507] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Internal error xfs_dir3_block_write_verify at line 109 of file fs/xfs/xfs_dir2_block.c.  Caller 0xd0000000015fdb74

[    3.583513] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: CPU: 0 PID: 399 Comm: mount Tainted: GF            3.10.0-rc4 #1
[    3.583517] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: Call Trace:
[    3.583520] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ac60] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable)
[    3.583525] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ad30] [c0000000007444fc] .dump_stack+0x28/0x3c
[    3.583542] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ada0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs]
[    3.583560] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0ae10] [d00000000160070c] .xfs_corruption_error+0x7c/0xb0 [xfs]
[    3.583581] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0aec0] [d00000000164521c] .xfs_dir3_block_write_verify+0x14c/0x1d0 [xfs]
[    3.583598] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0af70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs]
[    3.583615] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b0b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs]
[    3.583632] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs]
[    3.583649] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b1d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs]
[    3.583667] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b2b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs]
[    3.583688] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs]
[    3.583710] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs]
[    3.583752] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs]
[    3.583774] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b6e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs]
[    3.583795] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs]
[    3.583817] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b800] [d00000000166d9a4] .xfs_log_mount+0x134/0x1d0 [xfs]
[    3.583839] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs]
[    3.583857] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs]
[    3.583863] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0b9e0] [c000000000214d58] .mount_bdev+0x258/0x2a0
[    3.583880] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs]
[    3.583886] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bb20] [c000000000215be0] .mount_fs+0x70/0x230
[    3.583890] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bbe0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130
[    3.583895] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bc90] [c00000000023b390] .do_mount+0x2d0/0xb30
[    3.583900] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0bd70] [c00000000023bca0] .SyS_mount+0xb0/0x110
[    3.583905] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: [c00000001be0be30] [c000000000009e54] syscall_exit+0x0/0x98
[    3.583909] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Corruption detected. Unmount and run xfs_repair
[    3.583913] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1365 of file fs/xfs/xfs_buf.c.  Return address = 0xd0000000015fdba0
[    3.583918] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem
[    3.583922] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): Please umount the filesystem and rectify the problem(s)
[    3.583929] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): metadata I/O error: block 0x1cacb80 ("xlog_recover_iodone") error 5 numblks 16
[    3.583934] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600
[    3.583969] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): metadata I/O error: block 0x1a14580 ("xlog_recover_iodone") error 117 numblks 8
[    3.583975] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600
[    3.597774] ibm-p720-01-lp4.rhts.eng.bos.redhat.com mount[399]: mount: mount /dev/mapper/rhel_ibm--p720--01--lp4-root on /sysroot failed: Structure needs cleaning
[    3.598253] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: sysroot.mount mount process exited, code=exited status=32
[    3.598739] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Failed to mount /sysroot.
[    3.599240] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Dependency failed for Initrd Root File System.
[    3.599969] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Dependency failed for Reload Configuration from the Real Root.
[    3.600981] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Triggering OnFailure= dependencies of initrd-parse-etc.service.
[    3.601932] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Triggering OnFailure= dependencies of initrd-root-fs.target.
[    3.602389] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: MESSAGE=Unit sysroot.mount entered failed state.
[    3.602845] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping dracut initqueue hook...
[    3.603298] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped dracut initqueue hook.
[    3.603752] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped target Initrd File Systems.
[    3.604207] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping Basic System.
[    3.604660] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped Show Plymouth Boot Screen.
[    3.605113] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping Journal Service...
[    3.605525] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journal[178]: Journal stopped
[    3.609824] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journal[417]: Allowing runtime journal files to grow to 394.2M.
[    3.603202] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): log mount/recovery failed: error 117
[    3.603241] ibm-p720-01-lp4.rhts.eng.bos.redhat.com kernel: XFS (dm-1): log mount failed
[    3.605676] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Emergency Shell...
[    3.607284] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Journal Service...
[    3.607513] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped udev Kernel Device Manager.
[    3.607563] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping dracut pre-udev hook...
[    3.607578] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped dracut pre-udev hook.
[    3.607628] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping dracut cmdline hook...
[    3.607641] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopped dracut cmdline hook.
[    3.607690] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping udev Kernel Socket.
[    3.607742] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Closed udev Kernel Socket.
[    3.607757] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Stopping udev Control Socket.
[    3.607805] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Closed udev Control Socket.
[    3.611362] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journald[178]: Received SIGTERM
[    3.612376] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Starting Journal Service...
[    3.612900] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd[1]: Started Journal Service.
[    3.610950] ibm-p720-01-lp4.rhts.eng.bos.redhat.com systemd-journal[417]: Journal started

CAI Qian

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: 3.9.0: XFS rootfs corruption
  2013-06-03  8:09         ` CAI Qian
@ 2013-06-04  4:36           ` Dave Chinner
  2013-06-04  4:48             ` CAI Qian
  2013-06-04  5:02             ` CAI Qian
  0 siblings, 2 replies; 14+ messages in thread
From: Dave Chinner @ 2013-06-04  4:36 UTC (permalink / raw)
  To: CAI Qian; +Cc: Eric Sandeen, xfs

On Mon, Jun 03, 2013 at 04:09:06AM -0400, CAI Qian wrote:
[snip]

> :/# xfs_repair -L  /dev/mapper/rhel_ibm--p720--01--lp4-root
> Phase 1 - find and verify superblock...
> Phase 2 - using internal log
....

Now that you've repaired the filesystem, can you reproduce the
problem?

It looks somewhat like the same bug we fixed in 3.8-rc4 that Dave
Jones hit (37f1356 xfs: recalculate leaf entry pointer after
compacting a dir2 block), but if you've never repaired the damage on
disk that this problem caused then you'll just keep tripping over
it.

So, can you reproduce the problem now on this machine/filesystem?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: 3.9.0: XFS rootfs corruption
  2013-06-04  4:36           ` Dave Chinner
@ 2013-06-04  4:48             ` CAI Qian
  2013-06-04  5:02             ` CAI Qian
  1 sibling, 0 replies; 14+ messages in thread
From: CAI Qian @ 2013-06-04  4:48 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Eric Sandeen, xfs



----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: "Eric Sandeen" <sandeen@sandeen.net>, xfs@oss.sgi.com
> Sent: Tuesday, June 4, 2013 12:36:10 PM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On Mon, Jun 03, 2013 at 04:09:06AM -0400, CAI Qian wrote:
> [snip]
> 
> > :/# xfs_repair -L  /dev/mapper/rhel_ibm--p720--01--lp4-root
> > Phase 1 - find and verify superblock...
> > Phase 2 - using internal log
> ....
> 
> Now that you've repaired the filesystem, can you reproduce the
> problem?
> 
> It looks somewhat like the same bug we fixed in 3.8-rc4 that Dave
> Jones hit (37f1356 xfs: recalculate leaf entry pointer after
> compacting a dir2 block), but if you've never repaired the damage on
> disk that this problem caused then you'll just keep tripping over
> it.
> 
> So, can you reproduce the problem now on this machine/filesystem?
After the repair, the system boots up successfully. To corrupt it again, I suppose
I'll need to re-run the original workload, i.e.:
1) trinity
2) xfstests
3) kdump reboot.
CAI Qian
> 
> Cheers,
> 
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
> 


* Re: 3.9.0: XFS rootfs corruption
  2013-06-04  4:36           ` Dave Chinner
  2013-06-04  4:48             ` CAI Qian
@ 2013-06-04  5:02             ` CAI Qian
  1 sibling, 0 replies; 14+ messages in thread
From: CAI Qian @ 2013-06-04  5:02 UTC (permalink / raw)
  To: Dave Chinner; +Cc: Eric Sandeen, xfs



----- Original Message -----
> From: "Dave Chinner" <david@fromorbit.com>
> To: "CAI Qian" <caiqian@redhat.com>
> Cc: "Eric Sandeen" <sandeen@sandeen.net>, xfs@oss.sgi.com
> Sent: Tuesday, June 4, 2013 12:36:10 PM
> Subject: Re: 3.9.0: XFS rootfs corruption
> 
> On Mon, Jun 03, 2013 at 04:09:06AM -0400, CAI Qian wrote:
> [snip]
> 
> > :/# xfs_repair -L  /dev/mapper/rhel_ibm--p720--01--lp4-root
> > Phase 1 - find and verify superblock...
> > Phase 2 - using internal log
> ....
> 
> Now that you've repaired the filesystem, can you reproduce the
> problem?
> 
> It looks somewhat like the same bug we fixed in 3.8-rc4 that Dave
> Jones hit (37f1356 xfs: recalculate leaf entry pointer after
> compacting a dir2 block), but if you've never repaired the damage on
> disk that this problem caused then you'll just keep tripping over
> it.
BTW, this can still be reproduced on 3.10-rc4 by running the original reproducer.
[    1.718742] SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled 
[    1.720985] XFS (dm-1): Mounting Filesystem 
[    1.812149] XFS (dm-1): Starting recovery (logdev: internal) 
[    2.123553] XFS: Internal error XFS_WANT_CORRUPTED_RETURN at line 176 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd000000001647c0c 
[    2.123553]  
[    2.123564] CPU: 2 PID: 400 Comm: mount Not tainted 3.10.0-rc4 #1 
[    2.123568] Call Trace: 
[    2.123575] [c00000001bd8ab90] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable) 
[    2.123583] [c00000001bd8ac60] [c0000000007444fc] .dump_stack+0x28/0x3c 
[    2.123614] [c00000001bd8acd0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs] 
[    2.123644] [c00000001bd8ad40] [d000000001647ab4] .__xfs_dir3_data_check+0x784/0x820 [xfs] 
[    2.123673] [c00000001bd8ae40] [d000000001647c0c] .xfs_dir3_data_verify+0xbc/0xe0 [xfs] 
[    2.123702] [c00000001bd8aec0] [d000000001647c6c] .xfs_dir3_data_write_verify+0x3c/0x1c0 [xfs] 
[    2.123730] [c00000001bd8af70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs] 
[    2.123757] [c00000001bd8b0b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    2.123785] [c00000001bd8b140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    2.123812] [c00000001bd8b1d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs] 
[    2.123840] [c00000001bd8b2b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    2.123870] [c00000001bd8b350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs] 
[    2.123900] [c00000001bd8b410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs] 
[    2.123930] [c00000001bd8b4e0] [d0000000016629f8] .xlog_do_recovery_pass+0x148/0x590 [xfs] 
[    2.123959] [c00000001bd8b650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs] 
[    2.123988] [c00000001bd8b6e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs] 
[    2.124018] [c00000001bd8b770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs] 
[    2.124047] [c00000001bd8b800] [d00000000166d9a4] .xfs_log_mount+0x134/0x1d0 [xfs] 
[    2.124077] [c00000001bd8b890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    2.124105] [c00000001bd8b940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs] 
[    2.124111] [c00000001bd8b9e0] [c000000000214d58] .mount_bdev+0x258/0x2a0 
[    2.124139] [c00000001bd8bab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs] 
[    2.124145] [c00000001bd8bb20] [c000000000215be0] .mount_fs+0x70/0x230 
[    2.124150] [c00000001bd8bbe0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130 
[    2.124156] [c00000001bd8bc90] [c00000000023b390] .do_mount+0x2d0/0xb30 
[    2.124161] [c00000001bd8bd70] [c00000000023bca0] .SyS_mount+0xb0/0x110 
[    2.124167] [c00000001bd8be30] [c000000000009e54] syscall_exit+0x0/0x98 
[    2.124173] c00000001b5ab000: 58 44 32 44 09 90 00 40 0a 90 00 40 0b 90 00 40  XD2D...@...@...@ 
[    2.124178] c00000001b5ab010: 00 00 00 00 08 14 9c 07 2e 72 68 74 73 5f 74 61  .........rhts_ta 
[    2.124183] c00000001b5ab020: 73 6b 5f 4a 34 32 37 38 31 39 2d 53 37 33 31 31  sk_J427819-S7311 
[    2.124187] c00000001b5ab030: 35 36 2d 52 38 39 39 38 37 39 2d 54 31 32 38 37  56-R899879-T1287 
[    2.124193] XFS (dm-1): Internal error xfs_dir3_data_write_verify at line 271 of file fs/xfs/xfs_dir2_data.c.  Caller 0xd0000000015fdb74 
[    2.124193]  
[    2.124200] CPU: 2 PID: 400 Comm: mount Not tainted 3.10.0-rc4 #1 
[    2.124203] Call Trace: 
[    2.124206] [c00000001bd8ac60] [c000000000014eac] .show_stack+0x7c/0x1f0 (unreliable) 
[    2.124212] [c00000001bd8ad30] [c0000000007444fc] .dump_stack+0x28/0x3c 
[    2.124239] [c00000001bd8ada0] [d000000001600674] .xfs_error_report+0x54/0x70 [xfs] 
[    2.124267] [c00000001bd8ae10] [d00000000160070c] .xfs_corruption_error+0x7c/0xb0 [xfs] 
[    2.124296] [c00000001bd8aec0] [d000000001647d78] .xfs_dir3_data_write_verify+0x148/0x1c0 [xfs] 
[    2.124323] [c00000001bd8af70] [d0000000015fdb74] ._xfs_buf_ioapply+0xd4/0x410 [xfs] 
[    2.124351] [c00000001bd8b0b0] [d0000000015fdfbc] .xfs_buf_iorequest+0x4c/0xe0 [xfs] 
[    2.124379] [c00000001bd8b140] [d0000000015fe0b4] .xfs_bdstrat_cb+0x64/0x120 [xfs] 
[    2.124406] [c00000001bd8b1d0] [d0000000015fe2c4] .__xfs_buf_delwri_submit+0x154/0x2b0 [xfs] 
[    2.124434] [c00000001bd8b2b0] [d0000000015ff308] .xfs_buf_delwri_submit+0x38/0xd0 [xfs] 
[    2.124463] [c00000001bd8b350] [d000000001662494] .xlog_recover_commit_trans+0xf4/0x1a0 [xfs] 
[    2.124493] [c00000001bd8b410] [d00000000166279c] .xlog_recover_process_data+0x25c/0x370 [xfs] 
[    2.124522] [c00000001bd8b4e0] [d0000000016629f8] .xlog_do_recovery_pass+0x148/0x590 [xfs] 
[    2.124552] [c00000001bd8b650] [d000000001662ed8] .xlog_do_log_recovery+0x98/0x110 [xfs] 
[    2.124581] [c00000001bd8b6e0] [d000000001662f70] .xlog_do_recover+0x20/0x160 [xfs] 
[    2.124611] [c00000001bd8b770] [d000000001663148] .xlog_recover+0x98/0x110 [xfs] 
[    2.124640] [c00000001bd8b800] [d00000000166d9a4] .xfs_log_mount+0x134/0x1d0 [xfs] 
[    2.124670] [c00000001bd8b890] [d000000001666dc8] .xfs_mountfs+0x3c8/0x780 [xfs] 
[    2.124698] [c00000001bd8b940] [d000000001614c9c] .xfs_fs_fill_super+0x30c/0x3a0 [xfs] 
[    2.124703] [c00000001bd8b9e0] [c000000000214d58] .mount_bdev+0x258/0x2a0 
[    2.124731] [c00000001bd8bab0] [d000000001612758] .xfs_fs_mount+0x18/0x30 [xfs] 
[    2.124736] [c00000001bd8bb20] [c000000000215be0] .mount_fs+0x70/0x230 
[    2.124741] [c00000001bd8bbe0] [c0000000002381c8] .vfs_kern_mount+0x58/0x130 
[    2.124746] [c00000001bd8bc90] [c00000000023b390] .do_mount+0x2d0/0xb30 
[    2.124752] [c00000001bd8bd70] [c00000000023bca0] .SyS_mount+0xb0/0x110 
[    2.124757] [c00000001bd8be30] [c000000000009e54] syscall_exit+0x0/0x98 
[    2.124761] XFS (dm-1): Corruption detected. Unmount and run xfs_repair 
[    2.124766] XFS (dm-1): xfs_do_force_shutdown(0x8) called from line 1365 of file fs/xfs/xfs_buf.c.  Return address = 0xd0000000015fdba0 
[    2.124772] XFS (dm-1): Corruption of in-memory data detected.  Shutting down filesystem 
[    2.124776] XFS (dm-1): Please umount the filesystem and rectify the problem(s) 
[    2.124783] XFS (dm-1): metadata I/O error: block 0x32a55f0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.124789] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124795] XFS (dm-1): metadata I/O error: block 0x32ad118 ("xlog_recover_iodone") error 5 numblks 8 
[    2.124800] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124807] XFS (dm-1): metadata I/O error: block 0x35a5b60 ("xlog_recover_iodone") error 5 numblks 8 
[    2.124812] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124819] XFS (dm-1): metadata I/O error: block 0x3748af0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.124824] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124830] XFS (dm-1): metadata I/O error: block 0x37490f0 ("xlog_recover_iodone") error 5 numblks 16 
[    2.124835] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124842] XFS (dm-1): metadata I/O error: block 0x4b00002 ("xlog_recover_iodone") error 5 numblks 1 
[    2.124847] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124853] XFS (dm-1): metadata I/O error: block 0x4c1cc20 ("xlog_recover_iodone") error 5 numblks 16 
[    2.124858] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124865] XFS (dm-1): metadata I/O error: block 0x4d018b8 ("xlog_recover_iodone") error 5 numblks 8 
[    2.124870] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124876] XFS (dm-1): metadata I/O error: block 0x4dbde68 ("xlog_recover_iodone") error 5 numblks 8 
[    2.124881] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124888] XFS (dm-1): metadata I/O error: block 0x4f9c990 ("xlog_recover_iodone") error 5 numblks 16 
[    2.124893] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.124904] XFS (dm-1): metadata I/O error: block 0x32a55d0 ("xlog_recover_iodone") error 117 numblks 8 
[    2.124910] XFS (dm-1): xfs_do_force_shutdown(0x1) called from line 386 of file fs/xfs/xfs_log_recover.c.  Return address = 0xd00000000165d600 
[    2.198068] XFS (dm-1): log mount/recovery failed: error 117 
[    2.198106] XFS (dm-1): log mount failed 
[    2.200723] systemd[1]: Starting Emergency Shell... 
[  FAILED  ] Failed to mount /sysroot.
See 'systemctl status sysroot.mount' for details.
[  DEPEND  ] Dependency failed for Initrd Root File System.
[  DEPEND  ] Dependency failed for Reload Configuration from the Real Root.
[    2.201901] systemd[1]: Starting Journal Service... 
[    2.206066] systemd-journald[178]: Received SIGTERM 
[    2.207016] systemd[1]: Starting Journal Service... 
[    2.207434] systemd[1]: Started Journal Service. 
[    2.207815] systemd[1]: Stopped udev Kernel Device Manager. 
[    2.207845] systemd[1]: Stopping dracut pre-udev hook... 
[    2.207855] systemd[1]: Stopped dracut pre-udev hook. 
[    2.207894] systemd[1]: Stopping dracut cmdline hook... 
[    2.207904] systemd[1]: Stopped dracut cmdline hook. 
[    2.207940] systemd[1]: Stopping udev Kernel Socket. 
[    2.207978] systemd[1]: Closed udev Kernel Socket. 
[    2.207989] systemd[1]: Stopping udev Control Socket. 
[    2.208024] systemd[1]: Closed udev Control Socket. 
 
Generating "/run/initramfs/sosreport.txt" 
 
 
Entering emergency mode. Exit the shell to continue. 
Type "journalctl" to view system logs. 
You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot 
after mounting them and attach it to a bug report. 
 
 
:/#
[-- MARK -- Mon Jun  3 10:30:00 2013]
CAI Qian
> 
> So, can you reproduce the problem now on this machine/filesystem?
> 
> Cheers,
> 
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2013-06-04  5:02 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <2105365384.7582278.1367825507929.JavaMail.root@redhat.com>
2013-05-06  7:50 ` 3.9.0: XFS rootfs corruption CAI Qian
2013-05-06 14:31   ` Eric Sandeen
2013-05-07  7:53     ` CAI Qian
2013-05-07 19:08       ` Eric Sandeen
2013-05-14  2:28         ` CAI Qian
2013-05-14  3:17           ` Dave Chinner
2013-05-22  4:10         ` CAI Qian
2013-05-22  8:48           ` CAI Qian
2013-05-22  9:46             ` Dave Chinner
2013-06-03  7:44               ` CAI Qian
2013-06-03  8:09         ` CAI Qian
2013-06-04  4:36           ` Dave Chinner
2013-06-04  4:48             ` CAI Qian
2013-06-04  5:02             ` CAI Qian

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox