public inbox for linux-xfs@vger.kernel.org
* 3.2.9 and locking problem
@ 2012-03-09 19:28 Arkadiusz Miśkiewicz
  2012-03-12  0:53 ` Dave Chinner
  0 siblings, 1 reply; 7+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-03-09 19:28 UTC (permalink / raw)
  To: xfs


Are there any known bugs in the area visible in the tracebacks below? I have a system where
one operation (upgrading a single rpm package) causes the rpm process to hang in D-state; sysrq-w output below:

[  400.755253] SysRq : Show Blocked State
[  400.758507]   task                        PC stack   pid father
[  400.758507] rpm             D 0000000100005781     0  8732   8698 0x00000000
[  400.758507]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
[  400.758507]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
[  400.758507]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
[  400.758507] Call Trace:
[  400.758507]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
[  400.758507]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
[  400.758507]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
[  400.758507]  [<ffffffff8148b78a>] schedule+0x3a/0x50
[  400.758507]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
[  400.758507]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
[  400.758507]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
[  400.758507]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
[  400.758507]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
[  400.758507]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
[  400.758507]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
[  400.758507]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
[  400.758507]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
[  400.758507]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
[  400.758507]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
[  400.758507]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b
[  603.456635] INFO: task rpm:8732 blocked for more than 120 seconds.
[  603.456638] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  603.456642] rpm             D 0000000100005781     0  8732   8698 0x00000000
[  603.456649]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
[  603.456655]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
[  603.456660]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
[  603.456666] Call Trace:
[  603.456678]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
[  603.456683]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
[  603.456728]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
[  603.456735]  [<ffffffff8148b78a>] schedule+0x3a/0x50
[  603.456739]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
[  603.456744]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
[  603.456750]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
[  603.456754]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
[  603.456774]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
[  603.456794]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
[  603.456799]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
[  603.456819]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
[  603.456824]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
[  603.456829]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
[  603.456834]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
[  603.456839]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b
[  723.456628] INFO: task rpm:8732 blocked for more than 120 seconds.
[  723.456632] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  723.456636] rpm             D 0000000100005781     0  8732   8698 0x00000000
[  723.456643]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
[  723.456649]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
[  723.456654]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
[  723.456660] Call Trace:
[  723.456673]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
[  723.456677]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
[  723.456722]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
[  723.456729]  [<ffffffff8148b78a>] schedule+0x3a/0x50
[  723.456734]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
[  723.456738]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
[  723.456744]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
[  723.456749]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
[  723.456768]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
[  723.456789]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
[  723.456794]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
[  723.456814]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
[  723.456819]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
[  723.456824]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
[  723.456828]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
[  723.456833]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b
[  776.256220] SysRq : Show Blocked State
[  776.259443]   task                        PC stack   pid father
[  776.259443] rpm             D 0000000100005781     0  8732   8698 0x00000000
[  776.259443]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
[  776.259443]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
[  776.259443]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
[  776.259443] Call Trace:
[  776.259443]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
[  776.259443]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
[  776.259443]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
[  776.259443]  [<ffffffff8148b78a>] schedule+0x3a/0x50
[  776.259443]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
[  776.259443]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
[  776.259443]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
[  776.259443]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
[  776.259443]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
[  776.259443]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
[  776.259443]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
[  776.259443]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
[  776.259443]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
[  776.259443]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
[  776.259443]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
[  776.259443]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b
[  843.456611] INFO: task rpm:8732 blocked for more than 120 seconds.
[  843.456616] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  843.456619] rpm             D 0000000100005781     0  8732   8698 0x00000000
[  843.456626]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
[  843.456632]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
[  843.456637]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
[  843.456643] Call Trace:
[  843.456655]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
[  843.456660]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
[  843.456705]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
[  843.456712]  [<ffffffff8148b78a>] schedule+0x3a/0x50
[  843.456716]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
[  843.456721]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
[  843.456727]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
[  843.456731]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
[  843.456751]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
[  843.456771]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
[  843.456776]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
[  843.456796]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
[  843.456801]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
[  843.456806]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
[  843.456810]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
[  843.456816]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b

-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

* Re: 3.2.9 and locking problem
  2012-03-09 19:28 3.2.9 and locking problem Arkadiusz Miśkiewicz
@ 2012-03-12  0:53 ` Dave Chinner
  2012-03-12 13:43   ` Arkadiusz Miśkiewicz
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2012-03-12  0:53 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

On Fri, Mar 09, 2012 at 08:28:47PM +0100, Arkadiusz Miśkiewicz wrote:
> 
> Are there any bugs in area visible in tracebacks below? I have a system where one operation
> (upgrade of single rpm package) causes rpm process to hang in D-state, sysrq-w below:
> 
> [  400.755253] SysRq : Show Blocked State
> [  400.758507]   task                        PC stack   pid father
> [  400.758507] rpm             D 0000000100005781     0  8732   8698 0x00000000
> [  400.758507]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
> [  400.758507]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
> [  400.758507]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
> [  400.758507] Call Trace:
> [  400.758507]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
> [  400.758507]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
> [  400.758507]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> [  400.758507]  [<ffffffff8148b78a>] schedule+0x3a/0x50
> [  400.758507]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
> [  400.758507]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
> [  400.758507]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
> [  400.758507]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
> [  400.758507]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
> [  400.758507]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
> [  400.758507]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
> [  400.758507]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
> [  400.758507]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
> [  400.758507]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
> [  400.758507]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
> [  400.758507]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b

I can't see why we'd get a task stuck here - it's waiting on the
XFS_ILOCK_EXCL. The only reason for this is if we leaked an unlock
somewhere. It appears you can reproduce this fairly quickly, so
running an event trace via trace-cmd for all the xfs_ilock trace
points and posting the report output might tell us what inode is
blocked and where we leaked (if that is the cause).
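
For reference, the trace capture Dave is asking for might look roughly like the following. This is a sketch, not a verified recipe: it assumes trace-cmd is installed, the kernel exposes the xfs:xfs_ilock* tracepoints, and the package name is a placeholder for whatever reproduces the hang.

```shell
# Record all xfs_ilock-related tracepoints while running the reproducer,
# then produce a text report to post to the list.
trace-cmd record -e 'xfs:xfs_ilock*' -e 'xfs:xfs_iunlock' \
    rpm -Uvh some-package.rpm          # placeholder reproducer command
trace-cmd report > xfs_ilock_report.txt
```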

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

* Re: 3.2.9 and locking problem
  2012-03-12  0:53 ` Dave Chinner
@ 2012-03-12 13:43   ` Arkadiusz Miśkiewicz
  2012-03-12 16:14     ` Christoph Hellwig
  0 siblings, 1 reply; 7+ messages in thread
From: Arkadiusz Miśkiewicz @ 2012-03-12 13:43 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

On Monday 12 of March 2012, Dave Chinner wrote:
> On Fri, Mar 09, 2012 at 08:28:47PM +0100, Arkadiusz Miśkiewicz wrote:
> > Are there any bugs in area visible in tracebacks below? I have a system
> > where one operation (upgrade of single rpm package) causes rpm process
> > to hang in D-state, sysrq-w below:
> > 
> > [  400.755253] SysRq : Show Blocked State
> > [  400.758507]   task                        PC stack   pid father
> > [  400.758507] rpm             D 0000000100005781     0  8732   8698 0x00000000
> > [  400.758507]  ffff88021657dc48 0000000000000086 ffff880200000000 ffff88025126f480
> > [  400.758507]  ffff880252276630 ffff88021657dfd8 ffff88021657dfd8 ffff88021657dfd8
> > [  400.758507]  ffff880252074af0 ffff880252276630 ffff88024cb0d005 ffff88021657dcb0
> > [  400.758507] Call Trace:
> > [  400.758507]  [<ffffffff8114b22a>] ? kmem_cache_free+0x2a/0x110
> > [  400.758507]  [<ffffffff8114d2ed>] ? kmem_cache_alloc+0x11d/0x140
> > [  400.758507]  [<ffffffffa00df3c7>] ? kmem_zone_alloc+0x67/0xe0 [xfs]
> > [  400.758507]  [<ffffffff8148b78a>] schedule+0x3a/0x50
> > [  400.758507]  [<ffffffff8148d25d>] rwsem_down_failed_common+0xbd/0x150
> > [  400.758507]  [<ffffffff8148d303>] rwsem_down_write_failed+0x13/0x20
> > [  400.758507]  [<ffffffff812652a3>] call_rwsem_down_write_failed+0x13/0x20
> > [  400.758507]  [<ffffffff8148c8ed>] ? down_write+0x2d/0x40
> > [  400.758507]  [<ffffffffa00cf97c>] xfs_ilock+0xcc/0x120 [xfs]
> > [  400.758507]  [<ffffffffa00d4ace>] xfs_setattr_nonsize+0x1ce/0x5b0 [xfs]
> > [  400.758507]  [<ffffffff81265502>] ? __strncpy_from_user+0x22/0x60
> > [  400.758507]  [<ffffffffa00d52ab>] xfs_vn_setattr+0x1b/0x40 [xfs]
> > [  400.758507]  [<ffffffff8117c1a2>] notify_change+0x1a2/0x340
> > [  400.758507]  [<ffffffff8115ed80>] chown_common+0xd0/0xf0
> > [  400.758507]  [<ffffffff8115fe4c>] sys_chown+0xac/0x1a0
> > [  400.758507]  [<ffffffff81495112>] system_call_fastpath+0x16/0x1b
> 
> I can't see why we'd get a task stuck here - it's waiting on the
> XFS_ILOCK_EXCL. The only reason for this is if we leaked an unlock
> somewhere. It appears you can reproduce this fairly quickly, 

The Linux-VServer patch [1] seems to be messing with locking. It would be nice if you
could take a quick look at it to see whether it could be the guilty party.

On the other hand, I wasn't able to reproduce this on 3.0.22, and the vserver patch for
3.0.22 [2] does the same thing as the vserver patch for 3.2.9.

> so
> running an event trace via trace-cmd for all the xfs_ilock trace
> points and posting the report output might tell us what inode is
> blocked and where we leaked (if that is the cause).

I will try to get more information, but it will take some time (most likely
weeks) before I can take this machine down for debugging.

> Cheers,
> Dave.

1. http://vserver.13thfloor.at/Experimental/patch-3.2.9-vs2.3.2.7.diff
2. http://vserver.13thfloor.at/Experimental/patch-3.0.22-vs2.3.2.3.diff
-- 
Arkadiusz Miśkiewicz        PLD/Linux Team
arekm / maven.pl            http://ftp.pld-linux.org/

* Re: 3.2.9 and locking problem
  2012-03-12 13:43   ` Arkadiusz Miśkiewicz
@ 2012-03-12 16:14     ` Christoph Hellwig
  2012-03-12 17:49       ` Richard Ems
  2012-03-13  0:00       ` Dave Chinner
  0 siblings, 2 replies; 7+ messages in thread
From: Christoph Hellwig @ 2012-03-12 16:14 UTC (permalink / raw)
  To: Arkadiusz Miśkiewicz; +Cc: xfs

In the 3.2 version xfs_sync_flags does a double unlock of the ilock;
changing the last argument of xfs_trans_ijoin from XFS_ILOCK_EXCL to 0
will fix this.  The 3.0 version doesn't have this bug.

And btw, the additions to the on-disk format are gross - this stuff
broke before and will break again when we add new features.

* Re: 3.2.9 and locking problem
  2012-03-12 16:14     ` Christoph Hellwig
@ 2012-03-12 17:49       ` Richard Ems
  2012-03-12 18:02         ` Christoph Hellwig
  2012-03-13  0:00       ` Dave Chinner
  1 sibling, 1 reply; 7+ messages in thread
From: Richard Ems @ 2012-03-12 17:49 UTC (permalink / raw)
  To: xfs

Hi Christoph,

On 03/12/2012 05:14 PM, Christoph Hellwig wrote:
> In the 3.2 version xfs_sync_flags does a double unlock of the ilock -
> change the last argument of xfs_trans_ijoin from XFS_ILOCK_EXCL to 0
> will fix this.  The 3.0 version doesn't have this bug.

Is there already a fix, or will there be one in 3.2.10?

I am already using 3.2.x on 4 servers in production ...

Thanks,
Richard


-- 
Richard Ems       mail: Richard.Ems@Cape-Horn-Eng.com

Cape Horn Engineering S.L.
C/ Dr. J.J. Dómine 1, 5º piso
46011 Valencia
Tel : +34 96 3242923 / Fax 924
http://www.cape-horn-eng.com

* Re: 3.2.9 and locking problem
  2012-03-12 17:49       ` Richard Ems
@ 2012-03-12 18:02         ` Christoph Hellwig
  0 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2012-03-12 18:02 UTC (permalink / raw)
  To: Richard Ems; +Cc: xfs

On Mon, Mar 12, 2012 at 06:49:14PM +0100, Richard Ems wrote:
> Hi Christoph,
> 
> On 03/12/2012 05:14 PM, Christoph Hellwig wrote:
> > In the 3.2 version xfs_sync_flags does a double unlock of the ilock -
> > change the last argument of xfs_trans_ijoin from XFS_ILOCK_EXCL to 0
> > will fix this.  The 3.0 version doesn't have this bug.
> 
> Is there already a fix or will there be one for 3.2.10 ?
> 
> I am already using 3.2.x on 4 servers in production ...

It's code added by the vserver patch that doesn't exist in mainline.

* Re: 3.2.9 and locking problem
  2012-03-12 16:14     ` Christoph Hellwig
  2012-03-12 17:49       ` Richard Ems
@ 2012-03-13  0:00       ` Dave Chinner
  1 sibling, 0 replies; 7+ messages in thread
From: Dave Chinner @ 2012-03-13  0:00 UTC (permalink / raw)
  To: Christoph Hellwig; +Cc: xfs

On Mon, Mar 12, 2012 at 12:14:25PM -0400, Christoph Hellwig wrote:
> In the 3.2 version xfs_sync_flags does a double unlock of the ilock -
> change the last argument of xfs_trans_ijoin from XFS_ILOCK_EXCL to 0
> will fix this.  The 3.0 version doesn't have this bug.
> 
> And btw, the additions to the on-disk format are gross - this stuff
> broke before and will break again when we add new features.

Any idea why they aren't sending stuff like this upstream to us so
it can be implemented correctly, robustly and in a future-proof
manner? vserver users are going to be unhappy when their filesystems
get broken because they are using out-of-tree, incompatible on-disk
formats....

FWIW, are the vserver folks distributing modified versions of
xfsprogs to support these changes?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-03-09 19:28 3.2.9 and locking problem Arkadiusz Miśkiewicz
2012-03-12  0:53 ` Dave Chinner
2012-03-12 13:43   ` Arkadiusz Miśkiewicz
2012-03-12 16:14     ` Christoph Hellwig
2012-03-12 17:49       ` Richard Ems
2012-03-12 18:02         ` Christoph Hellwig
2012-03-13  0:00       ` Dave Chinner
