public inbox for linux-xfs@vger.kernel.org
 help / color / mirror / Atom feed
* easily reproducible filesystem crash on rebuilding array
@ 2014-12-11 11:39 Emmanuel Florac
  2014-12-11 15:52 ` Eric Sandeen
                   ` (2 more replies)
  0 siblings, 3 replies; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-11 11:39 UTC (permalink / raw)
  To: xfs


Here's the setup: hardware RAID controller (Adaptec 7xx5 series, latest
firmware), RAID-6 array (the problem occurred with different RAID widths,
sizes, and disk configurations), and different kernels from 3.2.x to
3.16.x.

What happens: while the array is rebuilding, simultaneously reading and
writing is a sure way to break the filesystem and, at times, corrupt
data.

If the array is NOT rebuilding, nothing ever happens. When using the
array in read-only mode while it rebuilds, nothing ever happens.
However, while the array is rebuilding, relatively heavy IO almost
certainly brings up something as follows:

Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repai<<<<<<<1<1<1>XFS (dm-0): Unmount and <<<<1<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<<<<1<1<1>XFS (dm-0): Unmount and run xf<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs<<<<<<<1<1<1>XFS (dm-0): Unmount and run<<<<<<<1<1><1>XFS (dm-0): Unmount and run<<<<<<<1><1<1>XFS (dm-0): Unmount and<<<<<<<1<1<1>XFS (dm-0): Unmount<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<1<1<1>XFS (dm-0): Unmount and run xfs_<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<1<1<1>XF<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:00:58 TEST-ADAPTEC kernel: <1<<<<<<1<1>XFS (dm-0): Unmount and run xf<<<<1<1>XFS (dm-0): Unmount and run xfs_repa<<<<<<<1<1><1>XFS (dm-0): Unmount and run xfs_re<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_r<<<<<<<1<1><1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:01:01 TEST-ADAPTEC kernel: <<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:01:01 TEST-ADAPTEC kernel: <<<<<<<1<1<1>XFS (dm-0): Unmount and run<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
Dec 10 17:01:02 TEST-ADAPTEC kernel: CPU: 6 PID: 16818 Comm: cp Tainted: G           O  3.16.7-storiq64-opteron #1
Dec 10 17:01:02 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 10 17:01:02 TEST-ADAPTEC kernel:  0000000000000000 0000000000000001 ffffffff814ca287 ffff88040404a4f8
Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81213f7d ffffffff81230203 ffff880200000001 ffff8802009ce703
Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffff8802aa193560 0000000000000001 0000000000000002 0000000000000000
Dec 10 17:01:02 TEST-ADAPTEC kernel: Call Trace:
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81213f7d>] ? xfs_alloc_fixup_trees+0x2dd/0x390
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81230203>] ? xfs_btree_get_rec+0x53/0x90
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812168a5>] ? xfs_alloc_ag_vextent_near+0x8a5/0xae0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81216ba5>] ? xfs_alloc_ag_vextent+0xc5/0x100
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812178c1>] ? xfs_alloc_vextent+0x441/0x5f0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8121f573>] ? xfs_bmap_btalloc_nullfb+0x73/0xe0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81226aa1>] ? xfs_bmap_btalloc+0x481/0x720
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812277ad>] ? xfs_bmapi_write+0x55d/0x9f0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8122a857>] ? xfs_btree_read_buf_block.constprop.28+0x87/0xc0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81231976>] ? xfs_da_grow_inode_int+0xd6/0x360
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8109669d>] ? up+0xd/0x40
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811fba30>] ? xfs_buf_unlock+0x10/0x60
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811fb49e>] ? xfs_buf_rele+0x4e/0x170
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8112d246>] ? cache_alloc_refill+0x96/0x2d0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8124b32f>] ? xfs_iread+0x11f/0x410
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8123508f>] ? xfs_dir2_grow_inode+0x6f/0x130
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812372b9>] ? xfs_dir2_sf_to_block+0xb9/0x5b0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812137be>] ? kmem_zone_alloc+0x6e/0xf0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8114ee0a>] ? unlock_new_inode+0x3a/0x60
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8124544b>] ? xfs_ialloc+0x29b/0x530
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8123edc3>] ? xfs_dir2_sf_addname+0x113/0x5d0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81235938>] ? xfs_dir_createname+0x168/0x1a0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81245f87>] ? xfs_create+0x547/0x710
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8120981c>] ? xfs_generic_create+0xdc/0x250
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811445c1>] ? vfs_create+0x71/0xc0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81144d45>] ? do_last.isra.62+0x735/0xd00
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811415d1>] ? link_path_walk+0x61/0x7e0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811453de>] ? path_openat+0xce/0x5f0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145a8b>] ? user_path_at_empty+0x6b/0xb0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145b97>] ? do_filp_open+0x47/0xb0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811519da>] ? __alloc_fd+0x3a/0x100
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81135bc0>] ? do_sys_open+0x140/0x230
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814d08a9>] ? system_call_fastpath+0x16/0x1b
Dec 10 17:01:02 TEST-ADAPTEC kernel: CPU: 6 PID: 16818 Comm: cp Tainted: G           O  3.16.7-storiq64-opteron #1
Dec 10 17:01:02 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 10 17:01:02 TEST-ADAPTEC kernel:  0000000000000000 000000000000000c ffffffff814ca287 ffff88040cde45c8
Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81212fdf ffff8803201b1000 ffff8802aa193c68 ffff88040be30000
Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81245d8b 0000000000000023 ffff8802aa193ba8 ffff8802aa193ba4
Dec 10 17:01:02 TEST-ADAPTEC kernel: Call Trace:
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81212fdf>] ? xfs_trans_cancel+0xef/0x110
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81245d8b>] ? xfs_create+0x34b/0x710
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8120981c>] ? xfs_generic_create+0xdc/0x250
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811445c1>] ? vfs_create+0x71/0xc0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81144d45>] ? do_last.isra.62+0x735/0xd00
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811415d1>] ? link_path_walk+0x61/0x7e0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811453de>] ? path_openat+0xce/0x5f0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145a8b>] ? user_path_at_empty+0x6b/0xb0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145b97>] ? do_filp_open+0x47/0xb0
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811519da>] ? __alloc_fd+0x3a/0x100
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81135bc0>] ? do_sys_open+0x140/0x230
Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814d08a9>] ? system_call_fastpath+0x16/0x1b
Dec 10 17:01:02 TEST-ADAPTEC kernel: XFS (dm-0): xfs_do_force_shutdown(0x8) called from line 959 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff81212ff8
Dec 10 17:01:25 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.
Dec 10 17:01:55 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.
Dec 10 17:02:55 TEST-ADAPTEC last message repeated 2 times

Any idea is welcome...

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-11 11:39 easily reproducible filesystem crash on rebuilding array Emmanuel Florac
@ 2014-12-11 15:52 ` Eric Sandeen
  2014-12-15 12:07 ` Emmanuel Florac
  2015-01-13 11:21 ` easily reproducible filesystem crash on rebuilding array Emmanuel Florac
  2 siblings, 0 replies; 22+ messages in thread
From: Eric Sandeen @ 2014-12-11 15:52 UTC (permalink / raw)
  To: Emmanuel Florac, xfs

On 12/11/14 5:39 AM, Emmanuel Florac wrote:
> 
> Here's the setup: hardware RAID controller (Adaptec 7xx5 series, latest
> firmware), RAID-6 array (problem occurred with different RAID width,
> sizes, and disk configuration), and different kernels from 3.2.x to
> 3.16.x.
> 
> What happens: while the array is rebuilding, simultaneously reading and
> writing is a sure way to break the filesystem and at times, corrupt
> data.
> 
> If the array is NOT rebuilding, nothing ever happens. When using the
> array in read-only mode while it rebuilds, nothing ever happens.
> However, while the array is rebuilding, relatively heavy IO almost
> certainly brings up something as follows:
> 
> Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repai<<<<<<<1<1<1>XFS (dm-0): Unmount and <<<<1<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<<<<1<1<1>XFS (dm-0): Unmount and run xf<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs<<<<<<<1<1<1>XFS (dm-0): Unmount and run<<<<<<<1<1><1>XFS (dm-0): Unmount and run<<<<<<<1><1<1>XFS (dm-0): Unmount and<<<<<<<1<1<1>XFS (dm-0): Unmount<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<1<1<1>XFS (dm-0): Unmount and run xfs_<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:00:56 TEST-ADAPTEC kernel: <1<<<1<1<1>XF<1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:00:58 TEST-ADAPTEC kernel: <1<<<<<<1<1>XFS (dm-0): Unmount and run xf<<<<1<1>XFS (dm-0): Unmount and run xfs_repa<<<<<<<1<1><1>XFS (dm-0): Unmount and run xfs_re<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_r<<<<<<<1<1><1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:01:01 TEST-ADAPTEC kernel: <<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair
> Dec 10 17:01:01 TEST-ADAPTEC kernel: <<<<<<<1<1<1>XFS (dm-0): Unmount and run<<<<<<<1<1<1>XFS (dm-0): Unmount and run xfs_repair

wow, that's a mess...

> Dec 10 17:01:02 TEST-ADAPTEC kernel: CPU: 6 PID: 16818 Comm: cp Tainted: G           O  3.16.7-storiq64-opteron #1
> Dec 10 17:01:02 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  0000000000000000 0000000000000001 ffffffff814ca287 ffff88040404a4f8
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81213f7d ffffffff81230203 ffff880200000001 ffff8802009ce703
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffff8802aa193560 0000000000000001 0000000000000002 0000000000000000
> Dec 10 17:01:02 TEST-ADAPTEC kernel: Call Trace:
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81213f7d>] ? xfs_alloc_fixup_trees+0x2dd/0x390

the actual WANT_CORRUPTED_GOTO isn't shown, but apparently xfs encountered
allocation btrees in a bad state.
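
For illustration, the failing check is roughly of this shape (a sketch of the
pattern, not the literal macro from your kernel version): when an allocation
btree record fails a sanity test, XFS reports an internal error (the "Unmount
and run xfs_repair" lines above) and bails out with EFSCORRUPTED instead of
allocating from suspect metadata.

/*
 * Illustrative sketch only, not the exact kernel macro.  The allocation
 * btree code wraps its consistency checks in something like this; a
 * failed check logs the corruption and aborts with EFSCORRUPTED.
 */
#define XFS_WANT_CORRUPTED_GOTO(x, l)				\
	{							\
		int fs_is_ok = (x);				\
		if (unlikely(!fs_is_ok)) {			\
			XFS_ERROR_REPORT("XFS_WANT_CORRUPTED_GOTO", \
					 XFS_ERRLEVEL_LOW, mp);	\
			error = EFSCORRUPTED;			\
			goto l;					\
		}						\
	}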

Given that this only happens when your raid array is under duress, I'd lay
odds on it being a storage problem, not a filesystem problem.

-Eric

> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81230203>] ? xfs_btree_get_rec+0x53/0x90
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812168a5>] ? xfs_alloc_ag_vextent_near+0x8a5/0xae0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81216ba5>] ? xfs_alloc_ag_vextent+0xc5/0x100
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812178c1>] ? xfs_alloc_vextent+0x441/0x5f0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8121f573>] ? xfs_bmap_btalloc_nullfb+0x73/0xe0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81226aa1>] ? xfs_bmap_btalloc+0x481/0x720
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812277ad>] ? xfs_bmapi_write+0x55d/0x9f0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8122a857>] ? xfs_btree_read_buf_block.constprop.28+0x87/0xc0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81231976>] ? xfs_da_grow_inode_int+0xd6/0x360
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8109669d>] ? up+0xd/0x40
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811fba30>] ? xfs_buf_unlock+0x10/0x60
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811fb49e>] ? xfs_buf_rele+0x4e/0x170
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8112d246>] ? cache_alloc_refill+0x96/0x2d0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8124b32f>] ? xfs_iread+0x11f/0x410
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8123508f>] ? xfs_dir2_grow_inode+0x6f/0x130
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812372b9>] ? xfs_dir2_sf_to_block+0xb9/0x5b0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff812137be>] ? kmem_zone_alloc+0x6e/0xf0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8114ee0a>] ? unlock_new_inode+0x3a/0x60
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8124544b>] ? xfs_ialloc+0x29b/0x530
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8123edc3>] ? xfs_dir2_sf_addname+0x113/0x5d0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81235938>] ? xfs_dir_createname+0x168/0x1a0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81245f87>] ? xfs_create+0x547/0x710
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8120981c>] ? xfs_generic_create+0xdc/0x250
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811445c1>] ? vfs_create+0x71/0xc0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81144d45>] ? do_last.isra.62+0x735/0xd00
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811415d1>] ? link_path_walk+0x61/0x7e0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811453de>] ? path_openat+0xce/0x5f0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145a8b>] ? user_path_at_empty+0x6b/0xb0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145b97>] ? do_filp_open+0x47/0xb0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811519da>] ? __alloc_fd+0x3a/0x100
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81135bc0>] ? do_sys_open+0x140/0x230
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814d08a9>] ? system_call_fastpath+0x16/0x1b
> Dec 10 17:01:02 TEST-ADAPTEC kernel: CPU: 6 PID: 16818 Comm: cp Tainted: G           O  3.16.7-storiq64-opteron #1
> Dec 10 17:01:02 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  0000000000000000 000000000000000c ffffffff814ca287 ffff88040cde45c8
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81212fdf ffff8803201b1000 ffff8802aa193c68 ffff88040be30000
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  ffffffff81245d8b 0000000000000023 ffff8802aa193ba8 ffff8802aa193ba4
> Dec 10 17:01:02 TEST-ADAPTEC kernel: Call Trace:
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81212fdf>] ? xfs_trans_cancel+0xef/0x110
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81245d8b>] ? xfs_create+0x34b/0x710
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff8120981c>] ? xfs_generic_create+0xdc/0x250
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811445c1>] ? vfs_create+0x71/0xc0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81144d45>] ? do_last.isra.62+0x735/0xd00
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811415d1>] ? link_path_walk+0x61/0x7e0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811453de>] ? path_openat+0xce/0x5f0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145a8b>] ? user_path_at_empty+0x6b/0xb0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81145b97>] ? do_filp_open+0x47/0xb0
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff811519da>] ? __alloc_fd+0x3a/0x100
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff81135bc0>] ? do_sys_open+0x140/0x230
> Dec 10 17:01:02 TEST-ADAPTEC kernel:  [<ffffffff814d08a9>] ? system_call_fastpath+0x16/0x1b
> Dec 10 17:01:02 TEST-ADAPTEC kernel: XFS (dm-0): xfs_do_force_shutdown(0x8) called from line 959 of file fs/xfs/xfs_trans.c.  Return address = 0xffffffff81212ff8
> Dec 10 17:01:25 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.
> Dec 10 17:01:55 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.
> Dec 10 17:02:55 TEST-ADAPTEC last message repeated 2 times
> 
> Any idea is welcome...
> 

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-11 11:39 easily reproducible filesystem crash on rebuilding array Emmanuel Florac
  2014-12-11 15:52 ` Eric Sandeen
@ 2014-12-15 12:07 ` Emmanuel Florac
  2014-12-15 12:25   ` Emmanuel Florac
  2015-01-13 11:21 ` easily reproducible filesystem crash on rebuilding array Emmanuel Florac
  2 siblings, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-15 12:07 UTC (permalink / raw)
  To: xfs

Le Thu, 11 Dec 2014 12:39:36 +0100
Emmanuel Florac <eflorac@intellique.com> écrivait:

> What happens: while the array is rebuilding, simultaneously reading
> and writing is a sure way to break the filesystem and at times,
> corrupt data.
> 

I've rerun the same test (heavy read/write IO while rebuilding) with
the disk drives' write cache off (that is, the RAID controller is running
in write-back mode, but the independent disks' caches are set to
write-through).

The filesystem got corrupted too, though much less severely than previously:
it came back online after an umount/mount cycle, and nothing at all
appears in the xfs_repair output. However, the IO error is odd, as the RAID
controller reported no such error.


Dec 12 00:40:18 TEST-ADAPTEC kernel: XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 383 of file fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
Dec 12 00:40:31 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.
Dec 12 00:41:02 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error 5 returned.


Still investigating...

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-15 12:07 ` Emmanuel Florac
@ 2014-12-15 12:25   ` Emmanuel Florac
  2014-12-15 20:10     ` Dave Chinner
  2014-12-16 11:08     ` easily reproducible filesystem crash on rebuilding array [XFS bug in my book] Emmanuel Florac
  0 siblings, 2 replies; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-15 12:25 UTC (permalink / raw)
  To: xfs

Le Mon, 15 Dec 2014 13:07:15 +0100
Emmanuel Florac <eflorac@intellique.com> écrivait:

> Dec 12 00:40:18 TEST-ADAPTEC kernel: XFS (dm-0):
> xfs_do_force_shutdown(0x1) called from line 383 of file
> fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
> Dec 12 00:40:31 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error
> 5 returned.
> Dec 12 00:41:02 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error
> 5 returned.
> 

Reading the source, I see that the error occurred in xfs_buf_read_map; I
suppose it's when xfsbufd tries to scan dirty metadata? This is a read
error, so it could very well be simple IO starvation at the controller
level (as the controller probably gives priority to whatever writes are
pending over reads).

Maybe setting xfsbufd_centisecs to the max could help here? Trying
right away... Any advice welcome.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-15 12:25   ` Emmanuel Florac
@ 2014-12-15 20:10     ` Dave Chinner
  2014-12-16  7:56       ` Christoph Hellwig
  2014-12-16 11:34       ` Emmanuel Florac
  2014-12-16 11:08     ` easily reproducible filesystem crash on rebuilding array [XFS bug in my book] Emmanuel Florac
  1 sibling, 2 replies; 22+ messages in thread
From: Dave Chinner @ 2014-12-15 20:10 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Mon, Dec 15, 2014 at 01:25:00PM +0100, Emmanuel Florac wrote:
> Le Mon, 15 Dec 2014 13:07:15 +0100
> Emmanuel Florac <eflorac@intellique.com> écrivait:
> 
> > Dec 12 00:40:18 TEST-ADAPTEC kernel: XFS (dm-0):
> > xfs_do_force_shutdown(0x1) called from line 383 of file
> > fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
> > Dec 12 00:40:31 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error
> > 5 returned.
> > Dec 12 00:41:02 TEST-ADAPTEC kernel: XFS (dm-0): xfs_log_force: error
> > 5 returned.
> > 
> 
> Reading the source I see that the error occurred in xfs_buf_read_map, I
> suppose it's when xfsbufd tries to scan dirty metadata?

a) we don't have an xfsbufd anymore, and b) the xfsbufd never
"scanned" or read metadata - it only wrote dirty buffers back to
disk.

> This is a read
> error, so it could very well be a simple IO starvation at the controller
> level (as the controller probably gives priority to whatever writes are
> pending over reads).

The controller is broken if it's returning EIO to reads when it
is busy.

> Maybe setting xfsbufd_centisecs to the max could help here?

Deprecated Sysctls
==================

  fs.xfs.xfsbufd_centisecs      (Min: 50  Default: 100  Max: 3000)
        Dirty metadata is now tracked by the log subsystem and
        flushing is driven by log space and idling demands. The
        xfsbufd no longer exists, so this sysctl does nothing.

        Due for removal in 3.14.

Seems like the removal patch is overdue....

> Trying
> right away... Any advice welcome.

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

I'd start with upgrading the firmware on your RAID controller and
turning the XFS error level up to 11....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-15 20:10     ` Dave Chinner
@ 2014-12-16  7:56       ` Christoph Hellwig
  2014-12-16 11:38         ` Emmanuel Florac
  2014-12-16 11:34       ` Emmanuel Florac
  1 sibling, 1 reply; 22+ messages in thread
From: Christoph Hellwig @ 2014-12-16  7:56 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Tue, Dec 16, 2014 at 07:10:36AM +1100, Dave Chinner wrote:
> The controller is broken if it's returning EIO to reads when it
> is busy.

What controller is this?  SCSI devices can return a QUEUE BUSY
indicator, so having a RAID controller do something similar doesn't
sound unusual.  But the driver needs to translate that into
a QUEUE BUSY so that the SCSI midlayer can handle it correctly.
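
As a rough sketch (hypothetical driver code, not aacraid's actual
implementation), the low-level driver's queuecommand path would do something
like the following so the midlayer requeues the command instead of failing it
back to the filesystem as EIO:

/*
 * Hedged sketch with hypothetical helpers (example_adapter,
 * adapter_is_saturated, example_submit); the return value is the point.
 * SCSI_MLQUEUE_HOST_BUSY tells the SCSI midlayer to requeue the command
 * and retry later instead of completing it with an error.
 */
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int example_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
{
	struct example_adapter *adap = shost_priv(shost);

	if (adapter_is_saturated(adap))
		return SCSI_MLQUEUE_HOST_BUSY;	/* busy, try again later */

	return example_submit(adap, cmd);	/* 0 when queued successfully */
}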

It might make sense to take this to linux-scsi with the driver
maintainer in Cc.

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array [XFS bug in my book]
  2014-12-15 12:25   ` Emmanuel Florac
  2014-12-15 20:10     ` Dave Chinner
@ 2014-12-16 11:08     ` Emmanuel Florac
  2014-12-16 20:04       ` Dave Chinner
  1 sibling, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-16 11:08 UTC (permalink / raw)
  To: xfs

Le Mon, 15 Dec 2014 13:25:00 +0100
Emmanuel Florac <eflorac@intellique.com> écrivait:

> Reading the source I see that the error occurred in xfs_buf_read_map, I
> suppose it's when xfsbufd tries to scan dirty metadata? This is a read
> error, so it could very well be a simple IO starvation at the
> controller level (as the controller probably gives priority to
> whatever writes are pending over reads).
> 
> Maybe setting xfsbufd_centisecs to the max could help here? Trying
> right away... Any advice welcome.
> 

Alas, same thing;

dmesg output:


ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
XFS (dm-0): Unmount and run xfs_repair
XFS (dm-0): First 64 bytes of corrupted metadata buffer:
ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
XFS (dm-0): metadata I/O error: block 0xeffffff40 ("xfs_trans_read_buf_map") error 117 numblks 16
XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 383 of file fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
XFS (dm-0): I/O Error Detected. Shutting down filesystem
XFS (dm-0): Please umount the filesystem and rectify the problem(s)
XFS (dm-0): xfs_imap_to_bp: xfs_trans_read_buf() returned error 117.
XFS (dm-0): xfs_log_force: error 5 returned.
XFS (dm-0): xfs_log_force: error 5 returned.

There is no IO error at the RAID controller level, at all. The buffer
hasn't been overwritten with zeros; I'm pretty sure it actually timed
out and just read nothing. This is not a case for an IO error IMO; a
retry would almost certainly succeed. After all, the problem occurred
after more than 8 hours of continuous heavy read/write activity.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-15 20:10     ` Dave Chinner
  2014-12-16  7:56       ` Christoph Hellwig
@ 2014-12-16 11:34       ` Emmanuel Florac
  2014-12-16 19:58         ` Dave Chinner
  1 sibling, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-16 11:34 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Le Tue, 16 Dec 2014 07:10:36 +1100
Dave Chinner <david@fromorbit.com> écrivait:

> 
> Deprecated Sysctls
> ==================
> 
>   fs.xfs.xfsbufd_centisecs      (Min: 50  Default: 100  Max: 3000)
>         Dirty metadata is now tracked by the log subsystem and
>         flushing is driven by log space and idling demands. The
>         xfsbufd no longer exists, so this sysctl does nothing.
> 
>         Due for removal in 3.14.
> 
> Seems like the removal patch is overdue....

Probably, because /proc/sys/fs/xfs/xfsbufd_centisecs is still there on my
3.16.7....

> 
> > Trying
> > right away... Any advice welcome.
> 
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

I think I included some of the info in the first message:

kernel 3.16.7, plain vanilla version.
xfsprogs 3.2.1

CPU is an Opteron 6212, 8 cores.


MemTotal:       16451948 kB
MemFree:          145756 kB
MemAvailable:   16190184 kB
Buffers:          146780 kB
Cached:         15457656 kB
SwapCached:            0 kB
Active:           304216 kB
Inactive:       15389180 kB
Active(anon):      80012 kB
Inactive(anon):    12844 kB
Active(file):     224204 kB
Inactive(file): 15376336 kB
Unevictable:        3444 kB
Mlocked:            3444 kB
SwapTotal:        976892 kB
SwapFree:         976892 kB
Dirty:           1334032 kB
Writeback:             0 kB
AnonPages:         92444 kB
Mapped:            30116 kB
Shmem:              1688 kB
Slab:             528524 kB
SReclaimable:     504668 kB
SUnreclaim:        23856 kB
KernelStack:        5008 kB
PageTables:         6204 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     9202864 kB
Committed_AS:     614164 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      334792 kB
VmallocChunk:   34359296000 kB
HardwareCorrupted:     0 kB
DirectMap4k:       10816 kB
DirectMap2M:     2068480 kB
DirectMap1G:    14680064 kB


# cat /proc/mounts 
rootfs / rootfs rw 0 0
/dev/root / reiserfs rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=1645196k,mode=755 0 0
tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev tmpfs rw,relatime,size=10240k,mode=755 0 0
tmpfs /run/shm tmpfs rw,nosuid,nodev,noexec,relatime,size=3485760k 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
rpc_pipefs /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
guitare:/mnt/raid/partage /mnt/partage nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.1.5,mountvers=3,mountport=50731,mountproto=udp,local_lock=none,addr=10.0.1.5
0 0 nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
taiko:/mnt/raid/shared/partage /mnt/shared nfs
rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.1.12,mountvers=3,mountport=56679,mountproto=udp,local_lock=none,addr=10.0.1.12
0 0 /dev/mapper/vg0-raid /mnt/raid xfs
rw,relatime,attr2,nobarrier,inode64,noquota 0 0

cat /proc/partitions 
major minor  #blocks  name

   8        0 54683228160 sda
   8        1    4881408 sda1
   8        2     976896 sda2
   8        3    4882432 sda3
   8        5 54672484352 sda5
 254        0 54672482304 dm-0

The RAID hardware is an Adaptec 71685 running the latest firmware
(32033). This is a 16-drive RAID-6 array of 4 TB HGST drives. The
problem occurs repeatedly with any combination of 7xx5 controllers and 3
or 4 TB HGST drives in RAID-6 arrays of various types, with XFS or JFS (it
never occurs with either ext4 or reiserfs).

As I mentioned, when the disk drives' cache is on, the corruption is
serious. With the disk cache off, the corruption is minimal; however, the
filesystem shuts down.

There's an LVM volume on sda5, which is the exercised partition.

The filesystem has been primed with a few (23) terabytes of mixed data
with small (a few KB or less), medium, and big (a few gigabytes or
more) files. Two simultaneous, long-running copies are made (cp -a
somedir someotherdir), while three simultaneous, long-running read
operations are run (md5sum -c mydir.md5 mydir), while the array is
busy rebuilding. Disk usage (as reported by iostat -mx 5) stays solidly
at 100%, with a continuous throughput of a few hundred megabytes per
second. The full test runs for about 12 hours (when not failing), and
ends up copying 6 TB or so and md5summing 12 TB or so.
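
For reference, the whole workload can be driven by a trivial launcher like
the one below (an untested sketch; the /mnt/raid paths and .md5 file names
are placeholders, not the actual test data set). Running the same five
commands from a shell gives the identical load; this just makes the test
repeatable.

/*
 * Untested sketch of a launcher for the workload described above: two
 * long-running "cp -a" copies and three "md5sum -c" verifications run
 * in parallel while the array rebuilds.  All paths are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void run(char *const argv[])
{
	pid_t pid = fork();

	if (pid == 0) {			/* child: exec the command */
		execvp(argv[0], argv);
		perror("execvp");
		_exit(127);
	} else if (pid < 0) {
		perror("fork");
		exit(1);
	}
}

int main(void)
{
	char *const cp1[] = { "cp", "-a", "/mnt/raid/set1", "/mnt/raid/copy1", NULL };
	char *const cp2[] = { "cp", "-a", "/mnt/raid/set2", "/mnt/raid/copy2", NULL };
	char *const md1[] = { "md5sum", "-c", "/mnt/raid/set3.md5", NULL };
	char *const md2[] = { "md5sum", "-c", "/mnt/raid/set4.md5", NULL };
	char *const md3[] = { "md5sum", "-c", "/mnt/raid/set5.md5", NULL };

	run(cp1); run(cp2); run(md1); run(md2); run(md3);

	while (wait(NULL) > 0)		/* wait for all five children */
		;
	return 0;
}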

> I'd start with upgrading the firmware on your RAID controller and
> turning the XFS error level up to 11....

The firmware is the latest available. How do I turn logging to 11
please?

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-16  7:56       ` Christoph Hellwig
@ 2014-12-16 11:38         ` Emmanuel Florac
  2014-12-16 17:21           ` Emmanuel Florac
  0 siblings, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-16 11:38 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

Le Mon, 15 Dec 2014 23:56:05 -0800
Christoph Hellwig <hch@infradead.org> écrivait:

> On Tue, Dec 16, 2014 at 07:10:36AM +1100, Dave Chinner wrote:
> > The controller is broken if it's returning EIO to reads when it
> > is busy.
> 
> What controller is this? 

ASR-71685, but the problem occurred several times with various ASR-7xx5
controllers and different firmware versions and drivers. 

> SCSI devices can return a QUEUE BUSY
> indicator, so having a RAID controller do something similar doesn't
> sound unusual.  But the driver needs to translate that into
> a QUEUE BUSY so that the SCSI midlayer can handle it correctly.
> 
> It might make sense to take this to linux-scsi with the driver
> maintainer in Cc.

The driver would be either aacraid or sd, then?

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-16 11:38         ` Emmanuel Florac
@ 2014-12-16 17:21           ` Emmanuel Florac
  0 siblings, 0 replies; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-16 17:21 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

Le Tue, 16 Dec 2014 12:38:58 +0100
Emmanuel Florac <eflorac@intellique.com> écrivait:

> > SCSI devices can return a QUEUE BUSY
> > indicator, so having a RAID controller do something similar doesn't
> > sound unusual.  But the driver needs to translate that into
> > a QUEUE BUSY so that the SCSI midlayer can handle it correctly.
> > 
> > It might make sense to take this to linux-scsi with the driver
> > maintainer in Cc.  

Just in case, I'll redo the test once more without LVM.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-16 11:34       ` Emmanuel Florac
@ 2014-12-16 19:58         ` Dave Chinner
  2014-12-17 11:21           ` Emmanuel Florac
  2014-12-18 15:40           ` Emmanuel Florac
  0 siblings, 2 replies; 22+ messages in thread
From: Dave Chinner @ 2014-12-16 19:58 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Tue, Dec 16, 2014 at 12:34:05PM +0100, Emmanuel Florac wrote:
> The RAID hardware is an adaptec 71685 running the latest firmware
> ( 32033 ). This is a 16 drives RAID-6 array of 4 TB HGST drives. The
> problem occurs repeatedly with any combination of 7xx5 controllers and 3
> or 4 TB HGST drives in RAID-6 of various types, with XFS or JFS (it
> never occurs with either ext4 or reiserfs).

Do you have systems with any other type of 3/4TB drives in them?

> As I mentioned, when the disk drives cache is on the corruption is
> serious. With disk cache off, the corruption is minimal, however the
> filesystem shuts down.

That really sounds like a hardware problem - maybe with the disk
drives themselves, not necessarily the controller.

> The filesystem has been primed with a few (23) terabytes of mixed data
> with both small (few KB or less), medium, and big (few gigabytes or
> more) files. Two simultaneous, long running copies are made ( cp -a
> somedir someotherdir) , while three simultaneous, long running read
> operations are run ( md5sum -c mydir.md5 mydir), while the array is
> busy rebuilding. Disk usage (as reported by iostat -mx 5) stays solidly
> at 100%, with a continuous throughput of a few hundred megabytes per
> second. The full test runs for about 12 hours (when not failing), and
> ends up copying 6 TB or so, and md5summing 12 TB or so.
> 
> > I'd start with upgrading the firmware on your RAID controller and
> > turning the XFS error level up to 11....
> 
> The firmware is the latest available. How do I turn logging to 11
> please ?

# echo 11 > /proc/sys/fs/xfs/error_level

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array [XFS bug in my book]
  2014-12-16 11:08     ` easily reproducible filesystem crash on rebuilding array [XFS bug in my book] Emmanuel Florac
@ 2014-12-16 20:04       ` Dave Chinner
  0 siblings, 0 replies; 22+ messages in thread
From: Dave Chinner @ 2014-12-16 20:04 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Tue, Dec 16, 2014 at 12:08:21PM +0100, Emmanuel Florac wrote:
> Le Mon, 15 Dec 2014 13:25:00 +0100
> Emmanuel Florac <eflorac@intellique.com> écrivait:
> 
> > Reading the source I see that the error occurred in xfs_buf_read_map, I
> > suppose it's when xfsbufd tries to scan dirty metadata? This is a read
> > error, so it could very well be a simple IO starvation at the
> > controller level (as the controller probably gives priority to
> > whatever writes are pending over reads).
> > 
> > Maybe setting xfsbufd_centisecs to the max could help here? Trying
> > right away... Any advice welcome.
> > 
> 
> Alas, same thing;
> 
> dmesg output:
> 
> 
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
> XFS (dm-0): Unmount and run xfs_repair
> XFS (dm-0): First 64 bytes of corrupted metadata buffer:
> ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
> XFS (dm-0): Unmount and run xfs_repair

So the underlying storage stack is returning zeros without any IO
errors here. It's probably a lookup operation, so it simply fails
and returns the error to userspace. Every one of these messages is a
separate read IO, but they are all returning zeros.

....

> XFS (dm-0): First 64 bytes of corrupted metadata buffer:
> ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): metadata I/O error: block 0xeffffff40 ("xfs_trans_read_buf_map") error 117 numblks 16
> XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 383 of file fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
> XFS (dm-0): I/O Error Detected. Shutting down filesystem
> XFS (dm-0): Please umount the filesystem and rectify the problem(s)
> XFS (dm-0): xfs_imap_to_bp: xfs_trans_read_buf() returned error 117.
> XFS (dm-0): xfs_log_force: error 5 returned.
> XFS (dm-0): xfs_log_force: error 5 returned.

And here the same read error has occurred in a dirty transaction,
and so the filesystem shut down.

> There is no IO error at the RAID controller level, at all. The buffer
> hasn't been overwritten with zeros, I'm pretty sure it actually timed
> out and just read nothing. This is not a case for an IO error IMO, a
> retry would almost certainly succeed; after all the problem occurred
> after more than 8 hours of continuous heavy read/write activity.

What you see above is a persistent corruption that has been
reported several times as XFS has errored out and then re-read
the data from disk multiple times. A retry would most certainly
return zeros again.
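
To make the verifier side concrete: below is a rough userspace rendering of
the idea behind xfs_inode_buf_verify(), not the kernel function itself.
Every on-disk inode header must carry the XFS_DINODE_MAGIC value 0x494e
("IN"), so a buffer of zeros fails the check on every read, and the read
completes with EFSCORRUPTED, the error 117 in your log.

/*
 * Rough userspace rendering of the check, not the kernel code itself.
 * An all-zero buffer has no "IN" magic anywhere, so it is rejected on
 * every read attempt, which is why the retries keep reporting the same
 * corruption.
 */
#include <stdbool.h>
#include <stddef.h>

static bool inode_buf_looks_sane(const unsigned char *buf,
				 size_t inode_size, size_t inode_count)
{
	for (size_t i = 0; i < inode_count; i++) {
		const unsigned char *dip = buf + i * inode_size;

		if (dip[0] != 'I' || dip[1] != 'N')
			return false;	/* whole buffer flagged as corrupt */
	}
	return true;
}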

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-16 19:58         ` Dave Chinner
@ 2014-12-17 11:21           ` Emmanuel Florac
  2014-12-18 15:40           ` Emmanuel Florac
  1 sibling, 0 replies; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-17 11:21 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Le Wed, 17 Dec 2014 06:58:15 +1100
Dave Chinner <david@fromorbit.com> écrivait:

> On Tue, Dec 16, 2014 at 12:34:05PM +0100, Emmanuel Florac wrote:
> > The RAID hardware is an adaptec 71685 running the latest firmware
> > ( 32033 ). This is a 16 drives RAID-6 array of 4 TB HGST drives. The
> > problem occurs repeatly with any combination of 7xx5 controllers
> > and 3 or 4 TB HGST drives in RAID-6 of various types, with XFS or
> > JFS (it never occurs with either ext4 or reiserfs).
> 
> Do you have systems with any other type of 3/4TB drives in them?

No, only HGST drives.
 
> > As I mentioned, when the disk drives cache is on the corruption is
> > serious. With disk cache off, the corruption is minimal, however the
> > filesystem shuts down.
> 
> That really sounds like a hardware problem - maybe with the disk
> drives themselves, not necessarily the controller.

Actually the problem occurs without any error in the controller log: no
IO error, no disk timeout, no bad block, nothing. So far I was pretty
confident about the Adaptec firmware being the culprit; I'm not so sure
now.


> > > I'd start with upgrading the firmware on your RAID controller and
> > > turning the XFS error level up to 11....
> > 
> > The firmware is the latest available. How do I turn logging to 11
> > please ?
> 
> # echo 11 > /proc/sys/fs/xfs/error_level
> 

Thanks, done, and now running again but *without using lvm* this time. I'm
changing one parameter at a time...

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-16 19:58         ` Dave Chinner
  2014-12-17 11:21           ` Emmanuel Florac
@ 2014-12-18 15:40           ` Emmanuel Florac
  2014-12-18 22:58             ` Dave Chinner
  1 sibling, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-18 15:40 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Le Wed, 17 Dec 2014 06:58:15 +1100
Dave Chinner <david@fromorbit.com> écrivait:

> > 
> > The firmware is the latest available. How do I turn logging to 11
> > please ?  
> 
> # echo 11 > /proc/sys/fs/xfs/error_level

OK, so now that I've set the error level up, I've rerun my test without
using LVM, and the FS crashed again, this time more seriously. Here's
the significant excerpt from /var/log/messages:

Dec 18 03:56:05 TEST-ADAPTEC -- MARK --
Dec 18 04:00:04 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
Dec 18 04:00:04 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 18 04:00:04 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
Dec 18 04:00:04 TEST-ADAPTEC kernel:  0000000000000000 ffff88040e2d5080 ffffffff814ca287 ffff88040e2d5120
Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff8800df925940 ffff88040e2d5120 ffff8800df925940
Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
Dec 18 04:00:04 TEST-ADAPTEC kernel: Call Trace:
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81070b40>] ? rescuer_thread+0x2b0/0x2b0
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff810774fc>] ? kthread+0xbc/0xe0
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814d07fc>] ? ret_from_fork+0x7c/0xb0
Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:05 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
Dec 18 04:00:05 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 18 04:00:05 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
Dec 18 04:00:05 TEST-ADAPTEC kernel:  0000000000000000 ffff8801f89af1c0 ffffffff814ca287 ffff8801f89af260
Dec 18 04:00:05 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff8800df925940 ffff8801f89af260 ffff8800df925940
Dec 18 04:00:05 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
Dec 18 04:00:05 TEST-ADAPTEC kernel: Call Trace:
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff81070b40>] ? rescuer_thread+0x2b0/0x2b0
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff810774fc>] ? kthread+0xbc/0xe0
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff814d07fc>] ? ret_from_fork+0x7c/0xb0
Dec 18 04:00:05 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:06 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
Dec 18 04:00:06 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 18 04:00:06 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
Dec 18 04:00:06 TEST-ADAPTEC kernel:  0000000000000000 ffff8801f89af340 ffffffff814ca287 ffff8801f89af3e0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff8800df925940 ffff8801f89af3e0 ffff8800df925940
Dec 18 04:00:06 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
Dec 18 04:00:06 TEST-ADAPTEC kernel: Call Trace:
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81070b40>] ? rescuer_thread+0x2b0/0x2b0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff810774fc>] ? kthread+0xbc/0xe0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814d07fc>] ? ret_from_fork+0x7c/0xb0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:06 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
Dec 18 04:00:06 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 18 04:00:06 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
Dec 18 04:00:06 TEST-ADAPTEC kernel:  0000000000000000 ffff8803185b1000 ffffffff814ca287 ffff880405268c40
Dec 18 04:00:06 TEST-ADAPTEC kernel:  ffffffff812314af ffff88040c509400 0000000000000000 ffff880405268ce0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  ffff880405268c40 ffff88041ec13b40 ffffffff811fbb0d ffff8800df925940
Dec 18 04:00:06 TEST-ADAPTEC kernel: Call Trace:
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff812314af>] ? xfs_da3_node_read_verify+0x4f/0x140
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81070b40>] ? rescuer_thread+0x2b0/0x2b0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff810774fc>] ? kthread+0xbc/0xe0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff814d07fc>] ? ret_from_fork+0x7c/0xb0
Dec 18 04:00:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 04:03:23 TEST-ADAPTEC kernel: stupdate_db     D ffff880312cf2f20     0 28856  28853 0x00000000
Dec 18 04:03:23 TEST-ADAPTEC kernel:  ffff88040e186950 0000000000000086 00000000000002d5 ffff880312cf2b10
Dec 18 04:03:23 TEST-ADAPTEC kernel:  0000000000013f00 ffff88010044bfd8 0000000000013f00 ffff880312cf2b10
Dec 18 04:03:23 TEST-ADAPTEC kernel:  0000000f000000f0 ffff88005e3cd830 7fffffffffffffff ffff880312cf2b10
Dec 18 04:03:23 TEST-ADAPTEC kernel: Call Trace:
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814cbc0a>] ? schedule_timeout+0x14a/0x1c0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8121eb67>] ? xfs_bmap_search_multi_extents+0xb7/0x130
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814cf2be>] ? __down_common+0x96/0xf0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814cbc00>] ? schedule_timeout+0x140/0x1c0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811fb8cd>] ? _xfs_buf_find+0x13d/0x290
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81096686>] ? down+0x36/0x40
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811fb6ff>] ? xfs_buf_lock+0x2f/0xc0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811fb8cd>] ? _xfs_buf_find+0x13d/0x290
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811fbb66>] ? xfs_buf_get_map+0x36/0x190
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811fc8d7>] ? xfs_buf_read_map+0x37/0x110
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8125cabb>] ? xfs_trans_read_buf_map+0x1bb/0x490
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81232316>] ? xfs_da_read_buf+0xd6/0x110
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81232369>] ? xfs_da3_node_read+0x19/0xb0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8109651b>] ? down_trylock+0x2b/0x40
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8123375e>] ? xfs_da3_node_lookup_int+0x5e/0x2e0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8123daff>] ? xfs_dir2_node_lookup+0x3f/0x140
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff812351d4>] ? xfs_dir2_isleaf+0x24/0x50
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff812357b5>] ? xfs_dir_lookup+0x185/0x1a0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8114d1b4>] ? __d_alloc+0x54/0x180
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81245127>] ? xfs_lookup+0x87/0x110
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8114d29f>] ? __d_alloc+0x13f/0x180
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81209700>] ? xfs_vn_lookup+0x50/0x90
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8113fec3>] ? lookup_dcache+0xa3/0xd0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8113f984>] ? lookup_real+0x14/0x50
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8113ff22>] ? __lookup_hash+0x32/0x50
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814c8ed3>] ? lookup_slow+0x3c/0xa2
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff811423ed>] ? path_lookupat+0x69d/0x6f0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8114246e>] ? filename_lookup.isra.49+0x2e/0x90
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81145a7f>] ? user_path_at_empty+0x5f/0xb0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff812bf9e8>] ? lockref_put_or_lock+0x48/0x80
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8113b599>] ? vfs_fstatat+0x39/0x90
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81138617>] ? __fput+0x157/0x1e0
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff8113b652>] ? SYSC_newlstat+0x12/0x30
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81012d21>] ? do_notify_resume+0x61/0x90
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff81138483>] ? fput+0x43/0x80
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814d0ae0>] ? int_signal+0x12/0x17
Dec 18 04:03:23 TEST-ADAPTEC kernel:  [<ffffffff814d08a9>] ? system_call_fastpath+0x16/0x1b
[... the same hung-task trace for stupdate_db (pid 28856) repeats every two minutes, from Dec 18 04:05:23 through Dec 18 04:21:23 ...]
Dec 18 04:36:05 TEST-ADAPTEC -- MARK --
Dec 18 04:56:05 TEST-ADAPTEC -- MARK --
Dec 18 05:16:05 TEST-ADAPTEC -- MARK --
Dec 18 05:36:05 TEST-ADAPTEC -- MARK --
Dec 18 05:56:05 TEST-ADAPTEC -- MARK --
Dec 18 06:16:05 TEST-ADAPTEC -- MARK --
Dec 18 06:25:06 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
Dec 18 06:25:06 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
Dec 18 06:25:06 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
Dec 18 06:25:06 TEST-ADAPTEC kernel:  0000000000000000 ffff88040d0070c0 ffffffff814ca287 ffff88040d007160
Dec 18 06:25:06 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff88040d007be0 ffff88040d007160 ffff8800df925940
Dec 18 06:25:06 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
Dec 18 06:25:06 TEST-ADAPTEC kernel: Call Trace:
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff81070b40>] ? rescuer_thread+0x2b0/0x2b0
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff810774fc>] ? kthread+0xbc/0xe0
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff814d07fc>] ? ret_from_fork+0x7c/0xb0
Dec 18 06:25:06 TEST-ADAPTEC kernel:  [<ffffffff81077440>] ? flush_kthread_worker+0x80/0x80
[... the same xfs_buf_iodone_work call trace is dumped twice more at Dec 18 06:25:06 ...]
Dec 18 06:36:05 TEST-ADAPTEC -- MARK --

And here is the output from xfs_repair version 3.2.2, run afterwards:
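(The repair itself has to run on an unmounted filesystem; the invocation was roughly as below, with placeholder mount point and device path since the exact command isn't shown:

    umount /mnt/testfs            # XFS must be unmounted before repairing
    xfs_repair -v /dev/testdev    # verbose repair pass; device path is a placeholder
)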

Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
agf_freeblks 1, counted 264995592 in ag 32
sb_fdblocks 8909579002, counted 9174574593
        - 16:30:21: scanning filesystem freespace - 51 of 51 allocation groups done
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - 16:30:21: scanning agi unlinked lists - 51 of 51 allocation groups done
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 45
        - agno = 15
        - agno = 30
        - agno = 31
        - agno = 46
        - agno = 1
        - agno = 16
        - agno = 47
        - agno = 32
        - agno = 17
imap claims a free inode 137438953625 is in use, correcting imap and clearing inode
cleared inode 137438953625
imap claims a free inode 137438953626 is in use, correcting imap and clearing inode
cleared inode 137438953626
imap claims a free inode 137438953627 is in use, correcting imap and clearing inode
cleared inode 137438953627
imap claims a free inode 137438953628 is in use, correcting imap and clearing inode
cleared inode 137438953628
imap claims a free inode 137438953629 is in use, correcting imap and clearing inode
cleared inode 137438953629
imap claims a free inode 137438953630 is in use, correcting imap and clearing inode
cleared inode 137438953630
imap claims a free inode 137438953631 is in use, correcting imap and clearing inode
cleared inode 137438953631
imap claims a free inode 137438953632 is in use, correcting imap and clearing inode
cleared inode 137438953632
imap claims a free inode 137438953633 is in use, correcting imap and clearing inode
cleared inode 137438953633
imap claims a free inode 137438953634 is in use, correcting imap and clearing inode
cleared inode 137438953634
imap claims a free inode 137438953635 is in use, correcting imap and clearing inode
cleared inode 137438953635
imap claims a free inode 137438953636 is in use, correcting imap and clearing inode
cleared inode 137438953636
imap claims a free inode 137438953637 is in use, correcting imap and clearing inode
cleared inode 137438953637
imap claims a free inode 137438953638 is in use, correcting imap and clearing inode
cleared inode 137438953638
imap claims a free inode 137438953639 is in use, correcting imap and clearing inode
cleared inode 137438953639
imap claims a free inode 137438953640 is in use, correcting imap and clearing inode
cleared inode 137438953640
imap claims a free inode 137438953641 is in use, correcting imap and clearing inode
cleared inode 137438953641
imap claims a free inode 137438953642 is in use, correcting imap and clearing inode
cleared inode 137438953642
imap claims a free inode 137438953643 is in use, correcting imap and clearing inode
cleared inode 137438953643
imap claims a free inode 137438953644 is in use, correcting imap and clearing inode
cleared inode 137438953644
imap claims a free inode 137438953645 is in use, correcting imap and clearing inode
cleared inode 137438953645
imap claims a free inode 137438953646 is in use, correcting imap and clearing inode
cleared inode 137438953646
imap claims a free inode 137438953647 is in use, correcting imap and clearing inode
cleared inode 137438953647
imap claims a free inode 137438953648 is in use, correcting imap and clearing inode
cleared inode 137438953648
imap claims a free inode 137438953649 is in use, correcting imap and clearing inode
cleared inode 137438953649
imap claims a free inode 137438953650 is in use, correcting imap and clearing inode
cleared inode 137438953650
imap claims a free inode 137438953651 is in use, correcting imap and clearing inode
cleared inode 137438953651
imap claims a free inode 137438953652 is in use, correcting imap and clearing inode
cleared inode 137438953652
imap claims a free inode 137438953653 is in use, correcting imap and clearing inode
cleared inode 137438953653
imap claims a free inode 137438953654 is in use, correcting imap and clearing inode
cleared inode 137438953654
imap claims a free inode 137438953655 is in use, correcting imap and clearing inode
cleared inode 137438953655
imap claims a free inode 137438953656 is in use, correcting imap and clearing inode
cleared inode 137438953656
imap claims a free inode 137438953657 is in use, correcting imap and clearing inode
cleared inode 137438953657
imap claims a free inode 137438953658 is in use, correcting imap and clearing inode
cleared inode 137438953658
imap claims a free inode 137438953659 is in use, correcting imap and clearing inode
cleared inode 137438953659
imap claims a free inode 137438953660 is in use, correcting imap and clearing inode
cleared inode 137438953660
imap claims a free inode 137438953661 is in use, correcting imap and clearing inode
cleared inode 137438953661
imap claims a free inode 137438953662 is in use, correcting imap and clearing inode
cleared inode 137438953662
imap claims a free inode 137438953663 is in use, correcting imap and clearing inode
cleared inode 137438953663
        - agno = 48
        - agno = 2
        - agno = 33
        - agno = 18
        - agno = 49
        - agno = 3
        - agno = 50
        - agno = 34
        - agno = 19
        - agno = 4
        - agno = 20
        - agno = 35
        - agno = 5
        - agno = 36
        - agno = 6
        - agno = 21
        - agno = 37
        - agno = 22
        - agno = 7
        - agno = 23
        - agno = 8
        - agno = 38
        - agno = 24
        - agno = 9
        - agno = 39
        - agno = 25
        - agno = 40
        - agno = 10
        - agno = 26
        - agno = 27
        - agno = 11
        - agno = 41
        - agno = 12
        - agno = 42
        - agno = 13
        - agno = 28
        - agno = 14
        - agno = 29
        - agno = 43
        - agno = 44
        - 16:30:22: process known inodes and inode discovery - 28864 of 28864 inodes done
        - process newly discovered inodes...
        - 16:30:22: process newly discovered inodes - 51 of 51 allocation groups done
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - 16:30:22: setting up duplicate extent list - 51 of 51 allocation groups done
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 0
        - agno = 6
        - agno = 4
        - agno = 5
        - agno = 3
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 12
        - agno = 11
        - agno = 13
        - agno = 14
        - agno = 15
        - agno = 16
        - agno = 17
        - agno = 18
        - agno = 19
        - agno = 20
        - agno = 21
        - agno = 22
        - agno = 23
        - agno = 24
        - agno = 25
        - agno = 26
        - agno = 27
        - agno = 28
        - agno = 29
        - agno = 30
        - agno = 31
        - agno = 32
        - agno = 33
        - agno = 34
entry ".." at block 0 offset 32 in directory inode 141733920918 references free inode 137438953625
	clearing inode number in entry at offset 32...
no .. entry for directory 141733920918
        - agno = 35
entry "Films" at block 0 offset 128 in directory inode 150323855504 references free inode 137438953625
	clearing inode number in entry at offset 128...
entry ".." at block 0 offset 32 in directory inode 150323855505 references free inode 137438953625
	clearing inode number in entry at offset 32...
no .. entry for directory 150323855505
        - agno = 36
        - agno = 37
        - agno = 38
        - agno = 39
        - agno = 40
        - agno = 41
        - agno = 42
        - agno = 43
        - agno = 44
        - agno = 45
        - agno = 46
        - agno = 47
        - agno = 48
        - agno = 49
        - agno = 50
        - 16:30:22: check for inodes claiming duplicate blocks - 28864 of 28864 inodes done
Phase 5 - rebuild AG headers and trees...
        - 16:30:22: rebuild AG headers and trees - 51 of 51 allocation groups done
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
bad hash table for directory inode 141733920918 (no data entry): rebuilding
rebuilding directory inode 141733920918
bad hash table for directory inode 150323855504 (no data entry): rebuilding
rebuilding directory inode 150323855504
bad hash table for directory inode 150323855505 (no data entry): rebuilding
rebuilding directory inode 150323855505
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 137541844000, moving to lost+found
disconnected inode 137541844001, moving to lost+found
disconnected inode 137541844002, moving to lost+found
disconnected inode 137541844003, moving to lost+found
disconnected inode 137541844004, moving to lost+found
disconnected inode 137541844005, moving to lost+found
disconnected inode 137541844006, moving to lost+found
disconnected inode 137541844007, moving to lost+found
disconnected inode 137541844008, moving to lost+found
disconnected inode 137541844009, moving to lost+found
disconnected inode 137541844010, moving to lost+found
disconnected inode 137541844011, moving to lost+found
disconnected inode 137541844012, moving to lost+found
disconnected inode 137541844013, moving to lost+found
disconnected inode 137541844014, moving to lost+found
disconnected inode 137541844015, moving to lost+found
disconnected inode 137541844016, moving to lost+found
disconnected inode 137541844017, moving to lost+found
disconnected inode 137541844018, moving to lost+found
disconnected inode 137541844019, moving to lost+found
disconnected inode 137541844020, moving to lost+found
disconnected inode 137541844021, moving to lost+found
disconnected inode 137541844022, moving to lost+found
disconnected inode 137541844023, moving to lost+found
disconnected inode 137541844024, moving to lost+found
disconnected inode 137541844025, moving to lost+found
disconnected inode 137541844026, moving to lost+found
disconnected inode 137541844027, moving to lost+found
disconnected inode 137541844028, moving to lost+found
disconnected inode 137541844029, moving to lost+found
disconnected inode 137541844030, moving to lost+found
disconnected inode 137541844031, moving to lost+found
disconnected inode 137541844032, moving to lost+found
disconnected inode 137541844033, moving to lost+found
disconnected inode 137541844034, moving to lost+found
disconnected inode 137541844035, moving to lost+found
disconnected inode 137541844036, moving to lost+found
disconnected inode 137541844037, moving to lost+found
disconnected inode 137541844038, moving to lost+found
disconnected inode 137541844039, moving to lost+found
disconnected inode 137541844040, moving to lost+found
disconnected inode 137541844041, moving to lost+found
disconnected inode 137541844042, moving to lost+found
disconnected inode 137541844043, moving to lost+found
disconnected inode 137541844044, moving to lost+found
disconnected inode 137541844045, moving to lost+found
disconnected inode 137541844046, moving to lost+found
disconnected inode 137541844047, moving to lost+found
disconnected inode 137541844048, moving to lost+found
disconnected inode 137541844049, moving to lost+found
disconnected inode 137541844050, moving to lost+found
disconnected inode 137541844051, moving to lost+found
disconnected inode 137541844052, moving to lost+found
disconnected inode 137541844053, moving to lost+found
disconnected inode 137541844054, moving to lost+found
disconnected inode 137541844055, moving to lost+found
disconnected inode 137541844056, moving to lost+found
disconnected inode 137541844057, moving to lost+found
disconnected inode 137541844058, moving to lost+found
disconnected inode 137541844059, moving to lost+found
disconnected inode 137541844060, moving to lost+found
disconnected inode 137541844061, moving to lost+found
disconnected inode 137541844062, moving to lost+found
disconnected inode 137541844063, moving to lost+found
disconnected inode 137626760928, moving to lost+found
disconnected inode 137626760929, moving to lost+found
disconnected inode 137626760930, moving to lost+found
disconnected inode 137626760931, moving to lost+found
disconnected inode 137626760932, moving to lost+found
disconnected inode 137626760933, moving to lost+found
disconnected inode 137626760934, moving to lost+found
disconnected inode 137626760935, moving to lost+found
disconnected inode 137626760936, moving to lost+found
disconnected inode 137626760937, moving to lost+found
disconnected inode 137626760938, moving to lost+found
disconnected inode 137626760939, moving to lost+found
disconnected inode 137626760940, moving to lost+found
disconnected inode 137626760941, moving to lost+found
disconnected inode 137626760942, moving to lost+found
disconnected inode 137626760943, moving to lost+found
disconnected inode 137626760944, moving to lost+found
disconnected inode 137626760945, moving to lost+found
disconnected inode 137626760946, moving to lost+found
disconnected inode 137626760947, moving to lost+found
disconnected inode 137626760948, moving to lost+found
disconnected inode 137626760949, moving to lost+found
disconnected inode 137626760950, moving to lost+found
disconnected inode 137626760951, moving to lost+found
disconnected inode 137626760952, moving to lost+found
disconnected inode 137626760953, moving to lost+found
disconnected inode 137626760954, moving to lost+found
disconnected inode 137626760955, moving to lost+found
disconnected inode 137626760956, moving to lost+found
disconnected inode 137626760957, moving to lost+found
disconnected inode 137626760958, moving to lost+found
disconnected inode 137626760959, moving to lost+found
disconnected inode 137626760960, moving to lost+found
disconnected inode 137626760961, moving to lost+found
disconnected inode 137626760962, moving to lost+found
disconnected inode 137626760963, moving to lost+found
disconnected inode 137626760964, moving to lost+found
disconnected inode 137626760965, moving to lost+found
disconnected inode 137626760966, moving to lost+found
disconnected inode 137626760967, moving to lost+found
disconnected inode 137626760968, moving to lost+found
disconnected inode 137626760969, moving to lost+found
disconnected inode 137626760970, moving to lost+found
disconnected inode 137626760971, moving to lost+found
disconnected inode 137626760972, moving to lost+found
disconnected inode 137626760973, moving to lost+found
disconnected inode 137626760974, moving to lost+found
disconnected inode 137626760975, moving to lost+found
disconnected inode 137626760976, moving to lost+found
disconnected inode 137626760977, moving to lost+found
disconnected inode 137626760978, moving to lost+found
disconnected inode 137626760979, moving to lost+found
disconnected inode 137626760980, moving to lost+found
disconnected inode 137626760981, moving to lost+found
disconnected inode 137626760982, moving to lost+found
disconnected inode 137626760983, moving to lost+found
disconnected inode 137626760984, moving to lost+found
disconnected inode 137626760985, moving to lost+found
disconnected inode 137626760986, moving to lost+found
disconnected inode 137626760987, moving to lost+found
disconnected inode 137626760988, moving to lost+found
disconnected inode 137626760989, moving to lost+found
disconnected inode 137626760990, moving to lost+found
disconnected inode 137626760991, moving to lost+found
disconnected inode 137626760992, moving to lost+found
disconnected inode 137626760993, moving to lost+found
disconnected inode 137626760994, moving to lost+found
disconnected inode 137626760995, moving to lost+found
disconnected inode 137626760996, moving to lost+found
disconnected inode 137626760997, moving to lost+found
disconnected inode 137626760998, moving to lost+found
disconnected inode 137626760999, moving to lost+found
disconnected inode 137626761000, moving to lost+found
disconnected inode 137626761001, moving to lost+found
disconnected inode 137626761002, moving to lost+found
disconnected inode 137626761003, moving to lost+found
disconnected inode 137626761004, moving to lost+found
disconnected inode 137626761005, moving to lost+found
disconnected inode 137626761006, moving to lost+found
disconnected inode 137626761007, moving to lost+found
disconnected inode 137626761008, moving to lost+found
disconnected inode 137626761009, moving to lost+found
disconnected inode 137626761010, moving to lost+found
disconnected inode 137626761011, moving to lost+found
disconnected inode 137626761012, moving to lost+found
disconnected inode 137626761013, moving to lost+found
disconnected inode 137626761014, moving to lost+found
disconnected inode 137626761015, moving to lost+found
disconnected inode 137626761016, moving to lost+found
disconnected inode 137626761017, moving to lost+found
disconnected inode 137626761018, moving to lost+found
disconnected inode 137626761019, moving to lost+found
disconnected inode 137626761020, moving to lost+found
disconnected inode 137626761021, moving to lost+found
disconnected inode 137626761022, moving to lost+found
disconnected inode 137626761023, moving to lost+found
disconnected inode 137626761024, moving to lost+found
disconnected inode 137626761025, moving to lost+found
disconnected inode 137626761026, moving to lost+found
disconnected inode 137626761027, moving to lost+found
disconnected inode 137626761028, moving to lost+found
disconnected inode 137626761029, moving to lost+found
disconnected inode 137626761030, moving to lost+found
disconnected inode 137626761031, moving to lost+found
disconnected inode 137626761032, moving to lost+found
disconnected inode 137626761033, moving to lost+found
disconnected inode 137626761034, moving to lost+found
disconnected inode 137626761035, moving to lost+found
disconnected inode 137626761036, moving to lost+found
disconnected inode 137626761037, moving to lost+found
disconnected inode 137626761038, moving to lost+found
disconnected inode 137626761039, moving to lost+found
disconnected inode 137626761040, moving to lost+found
disconnected inode 137626761041, moving to lost+found
disconnected inode 137626761042, moving to lost+found
disconnected inode 137626761043, moving to lost+found
disconnected inode 137626761044, moving to lost+found
disconnected inode 137626761045, moving to lost+found
disconnected inode 137626761046, moving to lost+found
disconnected inode 137626761047, moving to lost+found
disconnected inode 137626761048, moving to lost+found
disconnected inode 137626761049, moving to lost+found
disconnected inode 137626761050, moving to lost+found
disconnected inode 137626761051, moving to lost+found
disconnected inode 137626761052, moving to lost+found
disconnected inode 137626761053, moving to lost+found
disconnected inode 137626761054, moving to lost+found
disconnected inode 137626761055, moving to lost+found
disconnected inode 137626761056, moving to lost+found
disconnected inode 137626761057, moving to lost+found
disconnected inode 137626761058, moving to lost+found
disconnected inode 137626761059, moving to lost+found
disconnected inode 137626761060, moving to lost+found
disconnected inode 137626761061, moving to lost+found
disconnected inode 137626761062, moving to lost+found
disconnected inode 137626761063, moving to lost+found
disconnected inode 137626761064, moving to lost+found
disconnected inode 137626761065, moving to lost+found
disconnected inode 137626761066, moving to lost+found
disconnected inode 137626761067, moving to lost+found
disconnected inode 137626761068, moving to lost+found
disconnected inode 137626761069, moving to lost+found
disconnected inode 137626761070, moving to lost+found
disconnected inode 137626761071, moving to lost+found
disconnected inode 137626761072, moving to lost+found
disconnected inode 137626761073, moving to lost+found
disconnected inode 137626761074, moving to lost+found
disconnected inode 137626761075, moving to lost+found
disconnected inode 137626761076, moving to lost+found
disconnected inode 137626761077, moving to lost+found
disconnected inode 137626761078, moving to lost+found
disconnected inode 137626761079, moving to lost+found
disconnected inode 137626761080, moving to lost+found
disconnected inode 137626761081, moving to lost+found
disconnected inode 137626761082, moving to lost+found
disconnected inode 137626761083, moving to lost+found
disconnected inode 137626761084, moving to lost+found
disconnected inode 137626761085, moving to lost+found
disconnected inode 137626761086, moving to lost+found
disconnected inode 137626761087, moving to lost+found
disconnected inode 137626761088, moving to lost+found
disconnected inode 137626761089, moving to lost+found
disconnected inode 137626761090, moving to lost+found
disconnected inode 137626761091, moving to lost+found
disconnected inode 137626761092, moving to lost+found
disconnected inode 137626761093, moving to lost+found
disconnected inode 137626761094, moving to lost+found
disconnected inode 137626761095, moving to lost+found
disconnected inode 137626761096, moving to lost+found
disconnected inode 137626761097, moving to lost+found
disconnected inode 137626761098, moving to lost+found
disconnected inode 137626761099, moving to lost+found
disconnected inode 137626761100, moving to lost+found
disconnected inode 137626761101, moving to lost+found
disconnected inode 137626761102, moving to lost+found
disconnected inode 137626761103, moving to lost+found
disconnected inode 137626761104, moving to lost+found
disconnected inode 137626761105, moving to lost+found
disconnected inode 137626761106, moving to lost+found
disconnected inode 137626761107, moving to lost+found
disconnected inode 137626761108, moving to lost+found
disconnected inode 137626761109, moving to lost+found
disconnected inode 137626761110, moving to lost+found
disconnected inode 137626761111, moving to lost+found
disconnected inode 137626761112, moving to lost+found
disconnected inode 137626761113, moving to lost+found
disconnected inode 137626761114, moving to lost+found
disconnected inode 137626761115, moving to lost+found
disconnected inode 137626761116, moving to lost+found
disconnected inode 137626761117, moving to lost+found
disconnected inode 137626761118, moving to lost+found
disconnected inode 137626761119, moving to lost+found
disconnected inode 137626761216, moving to lost+found
disconnected inode 137626761217, moving to lost+found
disconnected inode 137626761218, moving to lost+found
disconnected inode 137626761219, moving to lost+found
disconnected inode 137626761220, moving to lost+found
disconnected inode 137626761221, moving to lost+found
disconnected inode 137626761222, moving to lost+found
disconnected inode 137626761223, moving to lost+found
disconnected inode 137626761224, moving to lost+found
disconnected inode 137626761225, moving to lost+found
disconnected inode 137626761226, moving to lost+found
disconnected inode 137626761227, moving to lost+found
disconnected inode 137626761228, moving to lost+found
disconnected inode 137626761229, moving to lost+found
disconnected inode 137626761230, moving to lost+found
disconnected inode 137626761231, moving to lost+found
disconnected inode 137626761232, moving to lost+found
disconnected inode 137626761233, moving to lost+found
disconnected inode 137626761234, moving to lost+found
disconnected inode 137626761235, moving to lost+found
disconnected inode 137626761236, moving to lost+found
disconnected inode 137626761237, moving to lost+found
disconnected inode 137626761238, moving to lost+found
disconnected inode 137626761239, moving to lost+found
disconnected inode 137626761240, moving to lost+found
disconnected inode 137626761241, moving to lost+found
disconnected inode 137626761242, moving to lost+found
disconnected inode 137626761243, moving to lost+found
disconnected inode 137626761244, moving to lost+found
disconnected inode 137626761245, moving to lost+found
disconnected inode 137626761246, moving to lost+found
disconnected inode 137626761247, moving to lost+found
disconnected inode 137626761248, moving to lost+found
disconnected inode 137626761249, moving to lost+found
disconnected inode 137626761250, moving to lost+found
disconnected inode 137626761251, moving to lost+found
disconnected inode 137626761252, moving to lost+found
disconnected inode 137626761253, moving to lost+found
disconnected inode 137626761254, moving to lost+found
disconnected inode 137626761255, moving to lost+found
disconnected inode 137626761256, moving to lost+found
disconnected inode 137626761257, moving to lost+found
disconnected inode 137626761258, moving to lost+found
disconnected inode 137626761259, moving to lost+found
disconnected inode 137626761260, moving to lost+found
disconnected inode 137626761261, moving to lost+found
disconnected inode 137626761262, moving to lost+found
disconnected inode 137626761263, moving to lost+found
disconnected inode 137626761264, moving to lost+found
disconnected inode 137626761265, moving to lost+found
disconnected inode 137626761266, moving to lost+found
disconnected inode 137626761267, moving to lost+found
disconnected inode 137626761268, moving to lost+found
disconnected inode 137626761269, moving to lost+found
disconnected inode 137626761270, moving to lost+found
disconnected inode 137626761271, moving to lost+found
disconnected inode 137626761272, moving to lost+found
disconnected inode 137626761273, moving to lost+found
disconnected inode 137626761274, moving to lost+found
disconnected inode 137626761275, moving to lost+found
disconnected inode 137626761276, moving to lost+found
disconnected inode 137626761277, moving to lost+found
disconnected inode 137626761278, moving to lost+found
disconnected inode 137626761279, moving to lost+found
disconnected inode 137626761312, moving to lost+found
disconnected inode 137626761313, moving to lost+found
disconnected inode 137626761314, moving to lost+found
disconnected inode 137626761315, moving to lost+found
disconnected inode 137626761316, moving to lost+found
disconnected inode 137626761317, moving to lost+found
disconnected inode 137626761318, moving to lost+found
disconnected inode 137626761319, moving to lost+found
disconnected inode 137626761320, moving to lost+found
disconnected inode 137626761321, moving to lost+found
disconnected inode 137626761322, moving to lost+found
disconnected inode 137626761323, moving to lost+found
disconnected inode 137626761324, moving to lost+found
disconnected inode 137626761325, moving to lost+found
disconnected inode 137626761326, moving to lost+found
disconnected inode 137626761327, moving to lost+found
disconnected inode 137626761328, moving to lost+found
disconnected inode 137626761329, moving to lost+found
disconnected inode 137626761330, moving to lost+found
disconnected inode 137626761331, moving to lost+found
disconnected inode 137626761332, moving to lost+found
disconnected inode 137626761333, moving to lost+found
disconnected inode 137626761334, moving to lost+found
disconnected inode 137626761335, moving to lost+found
disconnected inode 137626761336, moving to lost+found
disconnected inode 137626761337, moving to lost+found
disconnected inode 137626761338, moving to lost+found
disconnected inode 137626761339, moving to lost+found
disconnected inode 137626761340, moving to lost+found
disconnected inode 137626761341, moving to lost+found
disconnected inode 137626761342, moving to lost+found
disconnected inode 137626761343, moving to lost+found
disconnected inode 137626761344, moving to lost+found
disconnected inode 137626761345, moving to lost+found
disconnected inode 137626761346, moving to lost+found
disconnected inode 137626761347, moving to lost+found
disconnected inode 137626761348, moving to lost+found
disconnected inode 137626761349, moving to lost+found
disconnected inode 137626761350, moving to lost+found
disconnected inode 137626761351, moving to lost+found
disconnected inode 137626761352, moving to lost+found
disconnected inode 137626761353, moving to lost+found
disconnected inode 137626761354, moving to lost+found
disconnected inode 137626761355, moving to lost+found
disconnected inode 137626761356, moving to lost+found
disconnected inode 137626761357, moving to lost+found
disconnected inode 137626761358, moving to lost+found
disconnected inode 137626761359, moving to lost+found
disconnected inode 137626761360, moving to lost+found
disconnected inode 137626761361, moving to lost+found
disconnected inode 137626761362, moving to lost+found
disconnected inode 137626761363, moving to lost+found
disconnected inode 137626761364, moving to lost+found
disconnected inode 137626761365, moving to lost+found
disconnected inode 137626761366, moving to lost+found
disconnected inode 137626761367, moving to lost+found
disconnected inode 137626761368, moving to lost+found
disconnected inode 137626761369, moving to lost+found
disconnected inode 137626761370, moving to lost+found
disconnected inode 137626761371, moving to lost+found
disconnected inode 137626761372, moving to lost+found
disconnected inode 137626761373, moving to lost+found
disconnected inode 137626761374, moving to lost+found
disconnected inode 137626761375, moving to lost+found
disconnected inode 137626761376, moving to lost+found
disconnected inode 137626761377, moving to lost+found
disconnected inode 137626761378, moving to lost+found
disconnected inode 137626761379, moving to lost+found
disconnected inode 137626761380, moving to lost+found
disconnected inode 137626761381, moving to lost+found
disconnected inode 137626761382, moving to lost+found
disconnected inode 137626761383, moving to lost+found
disconnected inode 137626761384, moving to lost+found
disconnected inode 137626761385, moving to lost+found
disconnected inode 137626761386, moving to lost+found
disconnected inode 137626761387, moving to lost+found
disconnected inode 137626761388, moving to lost+found
disconnected inode 137626761389, moving to lost+found
disconnected inode 137626761390, moving to lost+found
disconnected inode 137626761391, moving to lost+found
disconnected inode 137626761392, moving to lost+found
disconnected inode 137626761393, moving to lost+found
disconnected inode 137626761394, moving to lost+found
disconnected inode 137626761395, moving to lost+found
disconnected inode 137626761396, moving to lost+found
disconnected inode 137626761397, moving to lost+found
disconnected inode 137626761398, moving to lost+found
disconnected inode 137626761399, moving to lost+found
disconnected inode 137626761400, moving to lost+found
disconnected inode 137626761401, moving to lost+found
disconnected inode 137626761402, moving to lost+found
disconnected inode 137626761403, moving to lost+found
disconnected inode 137626761404, moving to lost+found
disconnected inode 137626761405, moving to lost+found
disconnected inode 137626761406, moving to lost+found
disconnected inode 137626761407, moving to lost+found
disconnected inode 137626761408, moving to lost+found
disconnected inode 137626761409, moving to lost+found
disconnected inode 137626761410, moving to lost+found
disconnected inode 137626761411, moving to lost+found
disconnected inode 137626761412, moving to lost+found
disconnected inode 137626761413, moving to lost+found
disconnected inode 137626761414, moving to lost+found
disconnected inode 137626761415, moving to lost+found
disconnected inode 137626761416, moving to lost+found
disconnected inode 137626761417, moving to lost+found
disconnected inode 137626761418, moving to lost+found
disconnected inode 137626761419, moving to lost+found
disconnected inode 137626761420, moving to lost+found
disconnected inode 137626761421, moving to lost+found
disconnected inode 137626761422, moving to lost+found
disconnected inode 137626761423, moving to lost+found
disconnected inode 137626761424, moving to lost+found
disconnected inode 137626761425, moving to lost+found
disconnected inode 137626761426, moving to lost+found
disconnected inode 137626761427, moving to lost+found
disconnected inode 137626761428, moving to lost+found
disconnected inode 137626761429, moving to lost+found
disconnected inode 137626761430, moving to lost+found
disconnected inode 137626761431, moving to lost+found
disconnected inode 137626761432, moving to lost+found
disconnected inode 137626761433, moving to lost+found
disconnected inode 137626761434, moving to lost+found
disconnected inode 137626761435, moving to lost+found
disconnected inode 137626761436, moving to lost+found
disconnected inode 137626761437, moving to lost+found
disconnected inode 137626761438, moving to lost+found
disconnected inode 137626761439, moving to lost+found
disconnected inode 137626786560, moving to lost+found
disconnected inode 137626786561, moving to lost+found
disconnected inode 137626786562, moving to lost+found
disconnected inode 137626786563, moving to lost+found
disconnected inode 137626786564, moving to lost+found
disconnected inode 137626786565, moving to lost+found
disconnected inode 137626786566, moving to lost+found
disconnected inode 137626786567, moving to lost+found
disconnected inode 137626786568, moving to lost+found
disconnected inode 137626786569, moving to lost+found
disconnected inode 137626786570, moving to lost+found
disconnected inode 137626786571, moving to lost+found
disconnected inode 137626786572, moving to lost+found
disconnected inode 137626786573, moving to lost+found
disconnected inode 137626786574, moving to lost+found
disconnected inode 137626786575, moving to lost+found
disconnected inode 137626786576, moving to lost+found
disconnected inode 137626786577, moving to lost+found
disconnected inode 137626786578, moving to lost+found
disconnected inode 137626786579, moving to lost+found
disconnected inode 137626786580, moving to lost+found
disconnected inode 137626786581, moving to lost+found
disconnected inode 137626786582, moving to lost+found
disconnected inode 137626786583, moving to lost+found
disconnected inode 137626786584, moving to lost+found
disconnected inode 137626786585, moving to lost+found
disconnected inode 137626786586, moving to lost+found
disconnected inode 137626786587, moving to lost+found
disconnected inode 137626786588, moving to lost+found
disconnected inode 137626786589, moving to lost+found
disconnected inode 137626786590, moving to lost+found
disconnected inode 137626786591, moving to lost+found
disconnected inode 137626786592, moving to lost+found
disconnected inode 137626786593, moving to lost+found
disconnected inode 137626786594, moving to lost+found
disconnected inode 137626786595, moving to lost+found
disconnected inode 137626786596, moving to lost+found
disconnected inode 137626786597, moving to lost+found
disconnected inode 137626786598, moving to lost+found
disconnected inode 137626786599, moving to lost+found
disconnected inode 137626786600, moving to lost+found
disconnected inode 137626786601, moving to lost+found
disconnected inode 137626786602, moving to lost+found
disconnected inode 137626786603, moving to lost+found
disconnected inode 137626786604, moving to lost+found
disconnected inode 137626786605, moving to lost+found
disconnected inode 137626786606, moving to lost+found
disconnected inode 137626786607, moving to lost+found
disconnected inode 137626786608, moving to lost+found
disconnected inode 137626786609, moving to lost+found
disconnected inode 137626786610, moving to lost+found
disconnected inode 137626786611, moving to lost+found
disconnected inode 137626786612, moving to lost+found
disconnected inode 137626786613, moving to lost+found
disconnected inode 137626786614, moving to lost+found
disconnected inode 137626786615, moving to lost+found
disconnected inode 137626786616, moving to lost+found
disconnected inode 137626786617, moving to lost+found
disconnected inode 137626786618, moving to lost+found
disconnected inode 137626786619, moving to lost+found
disconnected inode 137626786620, moving to lost+found
disconnected inode 137626786621, moving to lost+found
disconnected inode 137626786622, moving to lost+found
disconnected inode 137626786623, moving to lost+found
disconnected inode 137626786624, moving to lost+found
disconnected inode 137626786625, moving to lost+found
disconnected inode 137626786626, moving to lost+found
disconnected inode 137626786627, moving to lost+found
disconnected inode 137626786628, moving to lost+found
disconnected inode 137626786629, moving to lost+found
disconnected inode 137626786630, moving to lost+found
disconnected inode 137626786631, moving to lost+found
disconnected inode 137626786632, moving to lost+found
disconnected inode 137626786633, moving to lost+found
disconnected inode 137626786634, moving to lost+found
disconnected inode 137626786635, moving to lost+found
disconnected inode 137626786636, moving to lost+found
disconnected inode 137626786637, moving to lost+found
disconnected inode 137626786638, moving to lost+found
disconnected inode 137626786639, moving to lost+found
disconnected inode 137626786640, moving to lost+found
disconnected inode 137626786641, moving to lost+found
disconnected inode 137626786642, moving to lost+found
disconnected inode 137626786643, moving to lost+found
disconnected inode 137626786644, moving to lost+found
disconnected inode 137626786645, moving to lost+found
disconnected inode 137626786646, moving to lost+found
disconnected inode 137626786647, moving to lost+found
disconnected inode 137626786648, moving to lost+found
disconnected inode 137626786649, moving to lost+found
disconnected inode 137626786650, moving to lost+found
disconnected inode 137626786651, moving to lost+found
disconnected inode 137626786652, moving to lost+found
disconnected inode 137626786653, moving to lost+found
disconnected inode 137626786654, moving to lost+found
disconnected inode 137626786655, moving to lost+found
disconnected inode 137626786656, moving to lost+found
disconnected inode 137626786657, moving to lost+found
disconnected inode 137626786658, moving to lost+found
disconnected inode 137626786659, moving to lost+found
disconnected inode 137626786660, moving to lost+found
disconnected inode 137626786661, moving to lost+found
disconnected inode 137626786662, moving to lost+found
disconnected inode 137626786663, moving to lost+found
disconnected inode 137626786664, moving to lost+found
disconnected inode 137626786665, moving to lost+found
disconnected inode 137626786666, moving to lost+found
disconnected inode 137626786667, moving to lost+found
disconnected inode 137626786668, moving to lost+found
disconnected inode 137626786669, moving to lost+found
disconnected inode 137626786670, moving to lost+found
disconnected inode 137626786671, moving to lost+found
disconnected inode 137626786672, moving to lost+found
disconnected inode 137626786673, moving to lost+found
disconnected inode 137626786674, moving to lost+found
disconnected inode 137626786675, moving to lost+found
disconnected inode 137626786676, moving to lost+found
disconnected inode 137626786677, moving to lost+found
disconnected inode 137626786678, moving to lost+found
disconnected inode 137626786679, moving to lost+found
disconnected inode 137626786680, moving to lost+found
disconnected inode 137626786681, moving to lost+found
disconnected inode 137626786682, moving to lost+found
disconnected inode 137626786683, moving to lost+found
disconnected inode 137626786684, moving to lost+found
disconnected inode 137626786685, moving to lost+found
disconnected inode 137626786686, moving to lost+found
disconnected inode 137626786687, moving to lost+found
disconnected inode 137626786720, moving to lost+found
disconnected inode 137626786721, moving to lost+found
disconnected inode 137626786722, moving to lost+found
disconnected inode 137626786723, moving to lost+found
disconnected inode 137626786724, moving to lost+found
disconnected inode 137626786725, moving to lost+found
disconnected inode 137626786726, moving to lost+found
disconnected inode 137626786727, moving to lost+found
disconnected inode 137626786728, moving to lost+found
disconnected inode 137626786729, moving to lost+found
disconnected inode 137626786730, moving to lost+found
disconnected inode 137626786731, moving to lost+found
disconnected inode 137626786732, moving to lost+found
disconnected inode 137626786733, moving to lost+found
disconnected inode 137626786734, moving to lost+found
disconnected inode 137626786735, moving to lost+found
disconnected inode 137626786736, moving to lost+found
disconnected inode 137626786737, moving to lost+found
disconnected inode 137626786738, moving to lost+found
disconnected inode 137626786739, moving to lost+found
disconnected inode 137626786740, moving to lost+found
disconnected inode 137626786741, moving to lost+found
disconnected inode 137626786742, moving to lost+found
disconnected inode 137626786743, moving to lost+found
disconnected inode 137626786744, moving to lost+found
disconnected inode 137626786745, moving to lost+found
disconnected inode 137626786746, moving to lost+found
disconnected inode 137626786747, moving to lost+found
disconnected inode 137626786748, moving to lost+found
disconnected inode 137626786749, moving to lost+found
disconnected inode 137626786750, moving to lost+found
disconnected inode 137626786751, moving to lost+found
disconnected inode 137626786752, moving to lost+found
disconnected inode 137626786753, moving to lost+found
disconnected inode 137626786754, moving to lost+found
disconnected inode 137626786755, moving to lost+found
disconnected inode 137626786756, moving to lost+found
disconnected inode 137626786757, moving to lost+found
disconnected inode 137626786758, moving to lost+found
disconnected inode 137626786759, moving to lost+found
disconnected inode 137626786760, moving to lost+found
disconnected inode 137626786761, moving to lost+found
disconnected inode 137626786762, moving to lost+found
disconnected inode 137626786763, moving to lost+found
disconnected inode 137626786764, moving to lost+found
disconnected inode 137626786765, moving to lost+found
disconnected inode 137626786766, moving to lost+found
disconnected inode 137626786767, moving to lost+found
disconnected inode 137626786768, moving to lost+found
disconnected inode 137626786769, moving to lost+found
disconnected inode 137626786770, moving to lost+found
disconnected inode 137626786771, moving to lost+found
disconnected inode 137626786772, moving to lost+found
disconnected inode 137626786773, moving to lost+found
disconnected inode 137626786774, moving to lost+found
disconnected inode 137626786775, moving to lost+found
disconnected inode 137626786776, moving to lost+found
disconnected inode 137626786777, moving to lost+found
disconnected inode 137626786778, moving to lost+found
disconnected inode 137626786779, moving to lost+found
disconnected inode 137626786780, moving to lost+found
disconnected inode 137626786781, moving to lost+found
disconnected inode 137626786782, moving to lost+found
disconnected inode 137626786783, moving to lost+found
disconnected inode 137626786784, moving to lost+found
disconnected inode 137626786785, moving to lost+found
disconnected inode 137626786786, moving to lost+found
disconnected inode 137626786787, moving to lost+found
disconnected inode 137626786788, moving to lost+found
disconnected inode 137626786789, moving to lost+found
disconnected inode 137626786790, moving to lost+found
disconnected inode 137626786791, moving to lost+found
disconnected inode 137626786792, moving to lost+found
disconnected inode 137626786793, moving to lost+found
disconnected inode 137626786794, moving to lost+found
disconnected inode 137626786795, moving to lost+found
disconnected inode 137626786796, moving to lost+found
disconnected inode 137626786797, moving to lost+found
disconnected inode 137626786798, moving to lost+found
disconnected inode 137626786799, moving to lost+found
disconnected inode 137626786800, moving to lost+found
disconnected inode 137626786801, moving to lost+found
disconnected inode 137626786802, moving to lost+found
disconnected inode 137626786803, moving to lost+found
disconnected inode 137626786804, moving to lost+found
disconnected inode 137626786805, moving to lost+found
disconnected inode 137626786806, moving to lost+found
disconnected inode 137626786807, moving to lost+found
disconnected inode 137626786808, moving to lost+found
disconnected inode 137626786809, moving to lost+found
disconnected inode 137626786810, moving to lost+found
disconnected inode 137626786811, moving to lost+found
disconnected inode 137626786812, moving to lost+found
disconnected inode 137626786813, moving to lost+found
disconnected inode 137626786814, moving to lost+found
disconnected inode 137626786815, moving to lost+found
disconnected inode 137626786816, moving to lost+found
disconnected inode 137626786817, moving to lost+found
disconnected inode 137626786818, moving to lost+found
disconnected inode 137626786819, moving to lost+found
disconnected inode 137626786820, moving to lost+found
disconnected inode 137626786821, moving to lost+found
disconnected inode 137626786822, moving to lost+found
disconnected inode 137626786823, moving to lost+found
disconnected inode 137626786824, moving to lost+found
disconnected inode 137626786825, moving to lost+found
disconnected inode 137626786826, moving to lost+found
disconnected inode 137626786827, moving to lost+found
disconnected inode 137626786828, moving to lost+found
disconnected inode 137626786829, moving to lost+found
disconnected inode 137626786830, moving to lost+found
disconnected inode 137626786831, moving to lost+found
disconnected inode 137626786832, moving to lost+found
disconnected inode 137626786833, moving to lost+found
disconnected inode 137626786834, moving to lost+found
disconnected inode 137626786835, moving to lost+found
disconnected inode 137626786836, moving to lost+found
disconnected inode 137626786837, moving to lost+found
disconnected inode 137626786838, moving to lost+found
disconnected inode 137626786839, moving to lost+found
disconnected inode 137626786840, moving to lost+found
disconnected inode 137626786841, moving to lost+found
disconnected inode 137626786842, moving to lost+found
disconnected inode 137626786843, moving to lost+found
disconnected inode 137626786844, moving to lost+found
disconnected inode 137626786845, moving to lost+found
disconnected inode 137626786846, moving to lost+found
disconnected inode 137626786847, moving to lost+found
disconnected inode 137626786880, moving to lost+found
disconnected inode 137626786881, moving to lost+found
disconnected inode 137626786882, moving to lost+found
disconnected inode 137626786883, moving to lost+found
disconnected inode 137626786884, moving to lost+found
disconnected inode 137626786885, moving to lost+found
disconnected inode 137626786886, moving to lost+found
disconnected inode 137626786887, moving to lost+found
disconnected inode 137626786888, moving to lost+found
disconnected inode 137626786889, moving to lost+found
disconnected inode 137626786890, moving to lost+found
disconnected inode 137626786891, moving to lost+found
disconnected inode 137626786892, moving to lost+found
disconnected inode 137626786893, moving to lost+found
disconnected inode 137626786894, moving to lost+found
disconnected inode 137626786895, moving to lost+found
disconnected inode 137626786896, moving to lost+found
disconnected inode 137626786897, moving to lost+found
disconnected inode 137626786898, moving to lost+found
disconnected inode 137626786899, moving to lost+found
disconnected inode 137626786900, moving to lost+found
disconnected inode 137626786901, moving to lost+found
disconnected inode 137626786902, moving to lost+found
disconnected inode 137626786903, moving to lost+found
disconnected inode 137626786904, moving to lost+found
disconnected inode 137626786905, moving to lost+found
disconnected inode 137626786906, moving to lost+found
disconnected inode 137626786907, moving to lost+found
disconnected inode 137626786908, moving to lost+found
disconnected inode 137626786909, moving to lost+found
disconnected inode 137626786910, moving to lost+found
disconnected inode 137626786911, moving to lost+found
disconnected inode 137626786912, moving to lost+found
disconnected inode 137626786913, moving to lost+found
disconnected inode 137626786914, moving to lost+found
disconnected inode 137626786915, moving to lost+found
disconnected inode 137626786916, moving to lost+found
disconnected inode 137626786917, moving to lost+found
disconnected inode 137626786918, moving to lost+found
disconnected inode 137626786919, moving to lost+found
disconnected inode 137626786920, moving to lost+found
disconnected inode 137626786921, moving to lost+found
disconnected inode 137626786922, moving to lost+found
disconnected inode 137626786923, moving to lost+found
disconnected inode 137626786924, moving to lost+found
disconnected inode 137626786925, moving to lost+found
disconnected inode 137626786926, moving to lost+found
disconnected inode 137626786927, moving to lost+found
disconnected inode 137626786928, moving to lost+found
disconnected inode 137626786929, moving to lost+found
disconnected inode 137626786930, moving to lost+found
disconnected inode 137626786931, moving to lost+found
disconnected inode 137626786932, moving to lost+found
disconnected inode 137626786933, moving to lost+found
disconnected inode 137626786934, moving to lost+found
disconnected inode 137626786935, moving to lost+found
disconnected inode 137626786936, moving to lost+found
disconnected inode 137626786937, moving to lost+found
disconnected inode 137626786938, moving to lost+found
disconnected inode 137626786939, moving to lost+found
disconnected inode 137626786940, moving to lost+found
disconnected inode 137626786941, moving to lost+found
disconnected inode 137626786942, moving to lost+found
disconnected inode 137626786943, moving to lost+found
disconnected inode 137626786944, moving to lost+found
disconnected inode 137626786945, moving to lost+found
disconnected inode 137626786946, moving to lost+found
disconnected inode 137626786947, moving to lost+found
disconnected inode 137626786948, moving to lost+found
disconnected inode 137626786949, moving to lost+found
disconnected inode 137626786950, moving to lost+found
disconnected inode 137626786951, moving to lost+found
disconnected inode 137626786952, moving to lost+found
disconnected inode 137626786953, moving to lost+found
disconnected inode 137626786954, moving to lost+found
disconnected inode 137626786955, moving to lost+found
disconnected inode 137626786956, moving to lost+found
disconnected inode 137626786957, moving to lost+found
disconnected inode 137626786958, moving to lost+found
disconnected inode 137626786959, moving to lost+found
disconnected inode 137626786960, moving to lost+found
disconnected inode 137626786961, moving to lost+found
disconnected inode 137626786962, moving to lost+found
disconnected inode 137626786963, moving to lost+found
disconnected inode 137626786964, moving to lost+found
disconnected inode 137626786965, moving to lost+found
disconnected inode 137626786966, moving to lost+found
disconnected inode 137626786967, moving to lost+found
disconnected inode 137626786968, moving to lost+found
disconnected inode 137626786969, moving to lost+found
disconnected inode 137626786970, moving to lost+found
disconnected inode 137626786971, moving to lost+found
disconnected inode 137626786972, moving to lost+found
disconnected inode 137626786973, moving to lost+found
disconnected inode 137626786974, moving to lost+found
disconnected inode 137626786975, moving to lost+found
disconnected inode 137626786976, moving to lost+found
disconnected inode 137626786977, moving to lost+found
disconnected inode 137626786978, moving to lost+found
disconnected inode 137626786979, moving to lost+found
disconnected inode 137626786980, moving to lost+found
disconnected inode 137626786981, moving to lost+found
disconnected inode 137626786982, moving to lost+found
disconnected inode 137626786983, moving to lost+found
disconnected inode 137626786984, moving to lost+found
disconnected inode 137626786985, moving to lost+found
disconnected inode 137626786986, moving to lost+found
disconnected inode 137626786987, moving to lost+found
disconnected inode 137626786988, moving to lost+found
disconnected inode 137626786989, moving to lost+found
disconnected inode 137626786990, moving to lost+found
disconnected inode 137626786991, moving to lost+found
disconnected inode 137626786992, moving to lost+found
disconnected inode 137626786993, moving to lost+found
disconnected inode 137626786994, moving to lost+found
disconnected inode 137626786995, moving to lost+found
disconnected inode 137626786996, moving to lost+found
disconnected inode 137626786997, moving to lost+found
disconnected inode 137626786998, moving to lost+found
disconnected inode 137626786999, moving to lost+found
disconnected inode 137626787000, moving to lost+found
disconnected inode 137626787001, moving to lost+found
disconnected inode 137626787002, moving to lost+found
disconnected inode 137626787003, moving to lost+found
disconnected inode 137626787004, moving to lost+found
disconnected inode 137626787005, moving to lost+found
disconnected inode 137626787006, moving to lost+found
disconnected inode 137626787007, moving to lost+found
disconnected inode 137626787040, moving to lost+found
disconnected inode 137626787041, moving to lost+found
disconnected inode 137626787042, moving to lost+found
disconnected inode 137626787043, moving to lost+found
disconnected inode 137626787044, moving to lost+found
disconnected inode 137626787045, moving to lost+found
disconnected inode 137626787046, moving to lost+found
disconnected inode 137626787047, moving to lost+found
disconnected inode 137626787048, moving to lost+found
disconnected inode 137626787049, moving to lost+found
disconnected inode 137626787050, moving to lost+found
disconnected inode 137626787051, moving to lost+found
disconnected inode 137626787052, moving to lost+found
disconnected inode 137626787053, moving to lost+found
disconnected inode 137626787054, moving to lost+found
disconnected inode 137626787055, moving to lost+found
disconnected inode 137626787056, moving to lost+found
disconnected inode 137626787057, moving to lost+found
disconnected inode 137626787058, moving to lost+found
disconnected inode 137626787059, moving to lost+found
disconnected inode 137626787060, moving to lost+found
disconnected inode 137626787061, moving to lost+found
disconnected inode 137626787062, moving to lost+found
disconnected inode 137626787063, moving to lost+found
disconnected inode 137626787064, moving to lost+found
disconnected inode 137626787065, moving to lost+found
disconnected inode 137626787066, moving to lost+found
disconnected inode 137626787067, moving to lost+found
disconnected inode 137626787068, moving to lost+found
disconnected inode 137626787069, moving to lost+found
disconnected inode 137626787070, moving to lost+found
disconnected inode 137626787071, moving to lost+found
disconnected inode 137626787072, moving to lost+found
disconnected inode 137626787073, moving to lost+found
disconnected inode 137626787074, moving to lost+found
disconnected inode 137626787075, moving to lost+found
disconnected inode 137626787076, moving to lost+found
disconnected inode 137626787077, moving to lost+found
disconnected inode 137626787078, moving to lost+found
disconnected inode 137626787079, moving to lost+found
disconnected inode 137626787080, moving to lost+found
disconnected inode 137626787081, moving to lost+found
disconnected inode 137626787082, moving to lost+found
disconnected inode 137626787083, moving to lost+found
disconnected inode 137626787084, moving to lost+found
disconnected inode 137626787085, moving to lost+found
disconnected inode 137626787086, moving to lost+found
disconnected inode 137626787087, moving to lost+found
disconnected inode 137626787088, moving to lost+found
disconnected inode 137626787089, moving to lost+found
disconnected inode 137626787090, moving to lost+found
disconnected inode 137626787091, moving to lost+found
disconnected inode 137626787092, moving to lost+found
disconnected inode 137626787093, moving to lost+found
disconnected inode 137626787094, moving to lost+found
disconnected inode 137626787095, moving to lost+found
disconnected inode 137626787096, moving to lost+found
disconnected inode 137626787097, moving to lost+found
disconnected inode 137626787098, moving to lost+found
disconnected inode 137626787099, moving to lost+found
disconnected inode 137626787100, moving to lost+found
disconnected inode 137626787101, moving to lost+found
disconnected inode 137626787102, moving to lost+found
disconnected inode 137626787103, moving to lost+found
disconnected inode 137626787104, moving to lost+found
disconnected inode 137626787105, moving to lost+found
disconnected inode 137626787106, moving to lost+found
disconnected inode 137626787107, moving to lost+found
disconnected inode 137626787108, moving to lost+found
disconnected inode 137626787109, moving to lost+found
disconnected inode 137626787110, moving to lost+found
disconnected inode 137626787111, moving to lost+found
disconnected inode 137626787112, moving to lost+found
disconnected inode 137626787113, moving to lost+found
disconnected inode 137626787114, moving to lost+found
disconnected inode 137626787115, moving to lost+found
disconnected inode 137626787116, moving to lost+found
disconnected inode 137626787117, moving to lost+found
disconnected inode 137626787118, moving to lost+found
disconnected inode 137626787119, moving to lost+found
disconnected inode 137626787120, moving to lost+found
disconnected inode 137626787121, moving to lost+found
disconnected inode 137626787122, moving to lost+found
disconnected inode 137626787123, moving to lost+found
disconnected inode 137626787124, moving to lost+found
disconnected inode 137626787125, moving to lost+found
disconnected inode 137626787126, moving to lost+found
disconnected inode 137626787127, moving to lost+found
disconnected inode 137626787128, moving to lost+found
disconnected inode 137626787129, moving to lost+found
disconnected inode 137626787130, moving to lost+found
disconnected inode 137626787131, moving to lost+found
disconnected inode 137626787132, moving to lost+found
disconnected inode 137626787133, moving to lost+found
disconnected inode 137626787134, moving to lost+found
disconnected inode 137626787135, moving to lost+found
disconnected inode 137626787136, moving to lost+found
disconnected inode 137626787137, moving to lost+found
disconnected inode 137626787138, moving to lost+found
disconnected inode 137626787139, moving to lost+found
disconnected inode 137626787140, moving to lost+found
disconnected inode 137626787141, moving to lost+found
disconnected inode 137626787142, moving to lost+found
disconnected inode 137626787143, moving to lost+found
disconnected inode 137626787144, moving to lost+found
disconnected inode 137626787145, moving to lost+found
disconnected inode 137626787146, moving to lost+found
disconnected inode 137626787147, moving to lost+found
disconnected inode 137626787148, moving to lost+found
disconnected inode 137626787149, moving to lost+found
disconnected inode 137626787150, moving to lost+found
disconnected inode 137626787151, moving to lost+found
disconnected inode 137626787152, moving to lost+found
disconnected inode 137626787153, moving to lost+found
disconnected inode 137626787154, moving to lost+found
disconnected inode 137626787155, moving to lost+found
disconnected inode 137626787156, moving to lost+found
disconnected inode 137626787157, moving to lost+found
disconnected inode 137626787158, moving to lost+found
disconnected inode 137626787159, moving to lost+found
disconnected inode 137626787160, moving to lost+found
disconnected inode 137626787161, moving to lost+found
disconnected inode 137626787162, moving to lost+found
disconnected inode 137626787163, moving to lost+found
disconnected inode 137626787164, moving to lost+found
disconnected inode 137626787165, moving to lost+found
disconnected inode 137626787166, moving to lost+found
disconnected inode 137626787167, moving to lost+found
disconnected inode 137626787200, moving to lost+found
disconnected inode 137626787201, moving to lost+found
disconnected inode 137626787202, moving to lost+found
disconnected inode 137626787203, moving to lost+found
disconnected inode 137626787204, moving to lost+found
disconnected inode 137626787205, moving to lost+found
disconnected inode 137626787206, moving to lost+found
disconnected inode 137626787207, moving to lost+found
disconnected inode 137626787208, moving to lost+found
disconnected inode 137626787209, moving to lost+found
disconnected inode 137626787210, moving to lost+found
disconnected inode 137626787211, moving to lost+found
disconnected inode 137626787212, moving to lost+found
disconnected inode 137626787213, moving to lost+found
disconnected inode 137626787214, moving to lost+found
disconnected inode 137626787215, moving to lost+found
disconnected inode 137626787216, moving to lost+found
disconnected dir inode 141733920918, moving to lost+found
disconnected dir inode 146028889164, moving to lost+found
disconnected dir inode 150323855505, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 4294866029 nlinks from 2 to 5
resetting inode 150323855504 nlinks from 13 to 12
Metadata corruption detected at block 0x10809dc640/0x1000
libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
Metadata corruption detected at block 0x10809dc640/0x1000
libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
done



-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-18 15:40           ` Emmanuel Florac
@ 2014-12-18 22:58             ` Dave Chinner
  2014-12-19 11:57               ` Emmanuel Florac
  0 siblings, 1 reply; 22+ messages in thread
From: Dave Chinner @ 2014-12-18 22:58 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Thu, Dec 18, 2014 at 04:40:42PM +0100, Emmanuel Florac wrote:
> On Wed, 17 Dec 2014 06:58:15 +1100
> Dave Chinner <david@fromorbit.com> wrote:
> 
> > > 
> > > The firmware is the latest available. How do I turn logging to 11
> > > please ?  
> > 
> > # echo 11 > /proc/sys/fs/xfs/error_level
> 
> OK, so now I've set the error level up, I've rerun my test without
> using LVM, and the FS crashed again, this time more seriously. Here's
> the significant excerpt from /var/log/messages:
> 
> Dec 18 03:56:05 TEST-ADAPTEC -- MARK --
> Dec 18 04:00:04 TEST-ADAPTEC kernel: CPU: 0 PID: 1738 Comm: kworker/0:1H Not tainted 3.16.7-storiq64-opteron #1
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0a       05/07/2013
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Workqueue: xfslogd xfs_buf_iodone_work
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  0000000000000000 ffff88040e2d5080 ffffffff814ca287 ffff88040e2d5120
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff811fbb0d ffff8800df925940 ffff88040e2d5120 ffff8800df925940
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  ffffffff810705a4 0000000000013f00 000000000deed450 ffff88040deed450
> Dec 18 04:00:04 TEST-ADAPTEC kernel: Call Trace:
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814ca287>] ? dump_stack+0x41/0x51
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff811fbb0d>] ? xfs_buf_iodone_work+0x8d/0xb0
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff810705a4>] ? process_one_work+0x174/0x420
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff81070c4b>] ? worker_thread+0x10b/0x500
> Dec 18 04:00:04 TEST-ADAPTEC kernel:  [<ffffffff814cc290>] ? __schedule+0x2e0/0x750

Where's the XFS error output? This is just the output from the
dump_stack() call in the xfs error message code...

Still, that's implying a write IO error being reported in IO
completion, not a read error, and that's different to the previous
issue you've reported. It's also indicative of an error coming from
the storage, not XFS...

Do these problems *only* happen during or after a RAID rebuild?

> Phase 7 - verify and correct link counts...
> resetting inode 4294866029 nlinks from 2 to 5
> resetting inode 150323855504 nlinks from 13 to 12
> Metadata corruption detected at block 0x10809dc640/0x1000
> libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> Metadata corruption detected at block 0x10809dc640/0x1000
> libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> done

I'd suggest you should be upgrading xfsprogs, because that's an
error that shouldn't happen at the end of a repair. If the latest
version (3.2.2) doesn't fix this problem, then please send me a
compressed metadump so I can work out what corruption xfs_repair
isn't fixing properly.
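
For reference, a minimal sketch of producing one, assuming the filesystem
sits on /dev/dm-0 (as in your logs) and is unmounted; the output path is
only an example:

  # dump the filesystem metadata only (no file data), with progress output
  xfs_metadump -g /dev/dm-0 /tmp/dm-0.metadump
  # compress the dump before sending it
  bzip2 -9 /tmp/dm-0.metadump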

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-18 22:58             ` Dave Chinner
@ 2014-12-19 11:57               ` Emmanuel Florac
  2014-12-19 23:06                 ` Dave Chinner
  0 siblings, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2014-12-19 11:57 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Fri, 19 Dec 2014 09:58:43 +1100 you wrote:

> 
> Where's the XFS error output? This is just the output from the
> dump_stack() call in the xfs error message code...

Where is it supposed to display its errors? I thought it would be
in /var/log/messages...

> > Still, that's implying a write IO error being reported in IO
> completion, not a read error, and that's different to the previous
> issue you've reported. It's also indicative of an error coming from
> the storage, not XFS...
>
> Do these problems *only* happen during or after a RAID rebuild?

Only while the rebuild process is running. All works fine afterwards.
 
> > Phase 7 - verify and correct link counts...
> > resetting inode 4294866029 nlinks from 2 to 5
> > resetting inode 150323855504 nlinks from 13 to 12
> > Metadata corruption detected at block 0x10809dc640/0x1000
> > libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> > Metadata corruption detected at block 0x10809dc640/0x1000
> > libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> > done
> 
> I'd suggest you should be upgrading xfsprogs, because that's an
> error that shouldn't happen at the end of a repair. If the latest
> version (3.2.2) doesn't fix this problem, then please send me a
> compressed metadump so I can work out what corruption xfs_repair
> isn't fixing properly.

It was 3.2.2 this time. However it seems to have fixed it anyway;
running it a second time displays nothing special.
Is a metadump of a clean filesystem of any use?

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-19 11:57               ` Emmanuel Florac
@ 2014-12-19 23:06                 ` Dave Chinner
  0 siblings, 0 replies; 22+ messages in thread
From: Dave Chinner @ 2014-12-19 23:06 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs

On Fri, Dec 19, 2014 at 12:57:20PM +0100, Emmanuel Florac wrote:
> On Fri, 19 Dec 2014 09:58:43 +1100 you wrote:
> 
> > 
> > Where's the XFS error output? This is just the output from the
> > dump_stack() call in the xfs error message code...
> 
> Where is it supposed to display its errors? I thought it would be
> in /var/log/messages...

Depends on how your system is configured. Have you turned down the
dmesg error level?
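
A rough way to check, assuming the usual procfs layout on a stock kernel:

  # current console log levels: current, default, minimum, boot-time default
  cat /proc/sys/kernel/printk
  # let even the lowest-priority kernel messages reach the console
  dmesg -n 8
  # and keep the XFS error verbosity turned up, as above
  echo 11 > /proc/sys/fs/xfs/error_level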

> > Still, that's implying a write IO error being reported in IO
> > completion, not a read error, and that's different to the previous
> > issue you've reported. It's also indicative of an error coming from
> > the storage, not XFS...
> >
> > Do these problems *only* happen during or after a RAID rebuild?
> 
> Only while the rebuild process is running. All works fine afterwards.

Which pretty much points to a RAID controller rebuild bug.

> > > Phase 7 - verify and correct link counts...
> > > resetting inode 4294866029 nlinks from 2 to 5
> > > resetting inode 150323855504 nlinks from 13 to 12
> > > Metadata corruption detected at block 0x10809dc640/0x1000
> > > libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> > > Metadata corruption detected at block 0x10809dc640/0x1000
> > > libxfs_writebufr: write verifer failed on bno 0x10809dc640/0x1000
> > > done
> > 
> > I'd suggest you should be upgrading xfsprogs, because that's an
> > error that shouldn't happen at the end of a repair. If the latest
> > version (3.2.2) doesn't fix this problem, then please send me a
> > compressed metadump so I can work out what corruption xfs_repair
> > isn't fixing properly.
> 
> It was 3.2.2 this time. However it seems to have fixed it anyway;
> running it a second time displays nothing special.
> Is a metadump of a clean filesystem of any use?

Not really.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2014-12-11 11:39 easily reproducible filesystem crash on rebuilding array Emmanuel Florac
  2014-12-11 15:52 ` Eric Sandeen
  2014-12-15 12:07 ` Emmanuel Florac
@ 2015-01-13 11:21 ` Emmanuel Florac
  2015-01-13 13:59   ` Emmanuel Florac
  2 siblings, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2015-01-13 11:21 UTC (permalink / raw)
  To: xfs

On Thu, 11 Dec 2014 12:39:36 +0100
Emmanuel Florac <eflorac@intellique.com> wrote:

> Here's the setup: hardware RAID controller (Adaptec 7xx5 series,
> latest firmware), RAID-6 array (problem occured with different RAID
> width, sizes, and disk configuration), and different kernels from
> 3.2.x to 3.16.x.
> 
> What happens: while the array is rebuilding, simultaneously reading
> and writing is a sure way to break the filesystem and at times,
> corrupt data.
> 
> If the array is NOT rebuilding, nothing ever happens. When using the
> array in read-only mode while it rebuilds, nothing ever happens.
> However, while the array is rebuilding, relatively heavy IO almost
> certainly brings up something as follows [snip]

So here's where I am at the moment:

* XFS v4 on a rebuilding Adaptec RAID fails under heavy IO, with or
  without LVM, with kernels 3.2.xx up to 3.17.7.

* Today I've run the same test with ext4: no problem whatsoever. I'm
  rechecking the md5 sums of all files to be sure (sketched below), but
  it looks OK so far after testing several terabytes.

I don't understand how the RAID firmware could send back bad data
(corrupted metadata AND data) to XFS but not to ext4...
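
Roughly what I mean by rechecking the md5 sums, assuming the test data is
mounted under /mnt/test (the path is just an example):

  # record checksums of every file before the rebuild starts
  find /mnt/test -type f -print0 | xargs -0 md5sum > /root/before.md5
  # ... kick off the rebuild and run the read/write load ...
  # afterwards, report any file whose checksum changed
  md5sum -c --quiet /root/before.md5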

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2015-01-13 11:21 ` easily reproducible filesystem crash on rebuilding array Emmanuel Florac
@ 2015-01-13 13:59   ` Emmanuel Florac
  0 siblings, 0 replies; 22+ messages in thread
From: Emmanuel Florac @ 2015-01-13 13:59 UTC (permalink / raw)
  To: xfs

On Tue, 13 Jan 2015 12:21:08 +0100
Emmanuel Florac <eflorac@intellique.com> wrote:

> * XFS v4 on a rebuilding Adaptec RAID fails under heavy IO, with or
>   without LVM, with kernels 3.2.xx up to 3.17.7.
> 
> * Today I've run the same test with ext4: no problem whatsoever. I'm
>   rechecking the md5 sums of all files to be sure, but it looks OK so far
>   after testing several terabytes.
> 
> I don't understand how the RAID firmware could send back bad data
> (corrupted metadata AND data) to XFS but not to ext4...
> 

Beep, the md5 sums are wrong. So we see why we love XFS: when the RAID
controller spits nonsense, it barks and cries and dies noisily :)

ext4 didn't notice any problem, though the data is mangled. This is
bad...

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
       [not found] <CAH-PCH7W4yTDRhAiKQwN_wQJMx2sTitQYrLNPcLYHvJRucXBjA@mail.gmail.com>
@ 2015-09-17  6:17 ` Emmanuel Florac
  2015-09-17  7:21   ` Ferenc Kovacs
  0 siblings, 1 reply; 22+ messages in thread
From: Emmanuel Florac @ 2015-09-17  6:17 UTC (permalink / raw)
  To: Ferenc Kovacs, xfs

On Wed, 16 Sep 2015 15:50:13 +0200, you wrote:

> Hi,
> 
> Have you found a resolution to your problem?
> We are facing a similar issue (fs corruption every time when
> rebuilding the array) with an Adaptec Series 8 RAID controller.

The problem stopped occurring after I deactivated the individual
disks' write caching in the "controller settings". This is only
adjustable through the RAID BIOS though, not through arcconf
apparently.
This seems to apply to all of the 5xx5, 6xx5 and 7xx5 series. I
haven't rebuilt an array with an 8xx5 yet (though I have some). I
don't know why this isn't the default setting (the default keeps the
disks' write cache active instead). Adaptec support and engineering
deny that the problem exists, though I've sent many logs and defined
a test procedure that reproduces the problem reliably enough.
Thank god for XFS. ext4 is completely unaware of the corruption, but
it's present nonetheless (verified by md5summing files before, during
and after the rebuild)...
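
As an aside, on setups where the member disks are visible to the kernel
(not the case behind these Adaptec controllers, which is why the RAID
BIOS is the only option here), the per-drive write cache (WCE) state
can at least be inspected from the OS. A rough sketch, assuming sdparm
is installed and /dev/sd* are the drives in question:

#!/usr/bin/env python3
# Rough sketch, not the procedure described above: report the WCE
# (write cache enable) bit for each disk the kernel exposes, via sdparm.
# Not applicable when the drives are hidden behind the RAID firmware.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    try:
        out = subprocess.run(
            ["sdparm", "--get=WCE", dev],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError) as err:
        out = f"query failed: {err}"
    print(f"{dev}: {out}")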

BTW I'm curious, where did you find this address? This isn't the one
I'm using on the XFS mailing list...

cheers
-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2015-09-17  6:17 ` Emmanuel Florac
@ 2015-09-17  7:21   ` Ferenc Kovacs
  2015-09-17 11:17     ` Emmanuel Florac
  0 siblings, 1 reply; 22+ messages in thread
From: Ferenc Kovacs @ 2015-09-17  7:21 UTC (permalink / raw)
  To: Emmanuel Florac; +Cc: xfs


[-- Attachment #1.1: Type: text/plain, Size: 1493 bytes --]

On 17 Sep 2015 at 8:17, "Emmanuel Florac" <eflorac@intellique.com> wrote:
>
> On Wed, 16 Sep 2015 15:50:13 +0200, you wrote:
>
> > Hi,
> >
> > Have you found a resolution to your problem?
> > We are facing a similar issue (fs corruption every time when
> > rebuilding the array) with an Adaptec Series 8 RAID controller.
>
> The problem stopped occurring after I deactivated the individual
> disks' write caching in the "controller settings". This is only
> adjustable through the RAID BIOS though, not through arcconf
> apparently.
> This seems to apply to all of the 5xx5, 6xx5 and 7xx5 series. I
> haven't rebuilt an array with an 8xx5 yet (though I have some). I
> don't know why this isn't the default setting (the default keeps the
> disks' write cache active instead). Adaptec support and engineering
> deny that the problem exists, though I've sent many logs and defined
> a test procedure that reproduces the problem reliably enough.
> Thank god for XFS. ext4 is completely unaware of the corruption, but
> it's present nonetheless (verified by md5summing files before, during
> and after the rebuild)...
>

Thanks for the info!

> BTW I'm curious, where did you find this address? This isn't the one
> I'm using on the XFS mailing list...

I was using the gmane archives to read your thread, and there the domain
part is truncated for privacy (even in your signature), so I googled your
mail handle with your full name and this address was the first result.

[-- Attachment #1.2: Type: text/html, Size: 1843 bytes --]

[-- Attachment #2: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: easily reproducible filesystem crash on rebuilding array
  2015-09-17  7:21   ` Ferenc Kovacs
@ 2015-09-17 11:17     ` Emmanuel Florac
  0 siblings, 0 replies; 22+ messages in thread
From: Emmanuel Florac @ 2015-09-17 11:17 UTC (permalink / raw)
  To: Ferenc Kovacs; +Cc: xfs

On Thu, 17 Sep 2015 09:21:07 +0200,
Ferenc Kovacs <tyra3l@gmail.com> wrote:

> > Thank god for XFS. ext4 is completely unaware of the corruption,
> > but it's present nonetheless (verified by md5summing files before,
> > during and after the rebuild)...
> >  
> 
> Thanks for the info!
> 

Be careful anyway: on one occasion I was able to get a slight
corruption even in this case. For some reason (cache flushing?) it's
the very end of the rebuild process (the last 15 minutes or so) that
is sensitive to corruption under heavy IO.

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2015-09-17 11:17 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-12-11 11:39 easily reproducible filesystem crash on rebuilding array Emmanuel Florac
2014-12-11 15:52 ` Eric Sandeen
2014-12-15 12:07 ` Emmanuel Florac
2014-12-15 12:25   ` Emmanuel Florac
2014-12-15 20:10     ` Dave Chinner
2014-12-16  7:56       ` Christoph Hellwig
2014-12-16 11:38         ` Emmanuel Florac
2014-12-16 17:21           ` Emmanuel Florac
2014-12-16 11:34       ` Emmanuel Florac
2014-12-16 19:58         ` Dave Chinner
2014-12-17 11:21           ` Emmanuel Florac
2014-12-18 15:40           ` Emmanuel Florac
2014-12-18 22:58             ` Dave Chinner
2014-12-19 11:57               ` Emmanuel Florac
2014-12-19 23:06                 ` Dave Chinner
2014-12-16 11:08     ` easily reproducible filesystem crash on rebuilding array [XFS bug in my book] Emmanuel Florac
2014-12-16 20:04       ` Dave Chinner
2015-01-13 11:21 ` easily reproducible filesystem crash on rebuilding array Emmanuel Florac
2015-01-13 13:59   ` Emmanuel Florac
     [not found] <CAH-PCH7W4yTDRhAiKQwN_wQJMx2sTitQYrLNPcLYHvJRucXBjA@mail.gmail.com>
2015-09-17  6:17 ` Emmanuel Florac
2015-09-17  7:21   ` Ferenc Kovacs
2015-09-17 11:17     ` Emmanuel Florac

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox