* Deadlock between block allocation and block truncation
From: Nikolay Borisov @ 2017-04-12 15:51 UTC (permalink / raw)
To: linux-xfs
Hello,
TL;DR
Two processes - one doing truncation, the other allocation. The truncating
process holds the AGF0 buffer and waits on the AGF1 buffer, since it has
to free an extent from that AG. The allocating process has failed its
first allocation attempt in xfs_bmap_btalloc (the first xfs_alloc_vextent
call) for a blkno falling in AG1, has returned with AG1's AGF buffer
locked, and is now trying a cyclic allocation beginning with AG0. The two
processes deadlock, each holding the AGF buffer the other requires.
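For illustration only (this is not XFS code): the situation above is a classic AB-BA inversion, which can be sketched as a check for a cycle between held and wanted buffers:

```python
# Illustrative sketch (not XFS code): each process waits on a buffer
# the other holds, which is exactly a two-node cycle in the wait-for
# graph, i.e. a deadlock.

def can_deadlock(held_a, wants_a, held_b, wants_b):
    """True if process A and process B each wait on a buffer the other holds."""
    return wants_a in held_b and wants_b in held_a

# Allocating process: holds AGF1, wants AGF0 (cyclic retry from AG0).
# Truncating process:  holds AGF0, wants AGF1 (freeing an extent in AG1).
assert can_deadlock(held_a={"AGF1"}, wants_a="AGF0",
                    held_b={"AGF0"}, wants_b="AGF1")
```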
---- Detailed crash analysis follows ----
I've been investigating a rather peculiar deadlock between block
allocation and block freeing. I've observed it on 3.12.72 as well as
on 4.4.0, but not on 4.11-rc6 (a 4.8 run is still pending).
However, I assume something in the code changed that made this race
harder to hit, rather than a patch purposefully eliminating it.
Essentially, running generic/299 sufficiently many times results in
the following deadlock between two processes:
ProcessA, doing allocation:
PID: 4459 TASK: ffff8800a553d340 CPU: 3 COMMAND: "fio"
#0 [ffff8800b81cf218] __schedule at ffffffff8165be8b
#1 [ffff8800b81cf270] schedule at ffffffff8165c80c
#2 [ffff8800b81cf288] schedule_timeout at ffffffff816608ef
#3 [ffff8800b81cf328] __down at ffffffff8165f791
#4 [ffff8800b81cf378] down at ffffffff8109a4a1
#5 [ffff8800b81cf398] xfs_buf_lock at ffffffff812e25e7
#6 [ffff8800b81cf3c0] _xfs_buf_find at ffffffff812e28f4
#7 [ffff8800b81cf410] xfs_buf_get_map at ffffffff812e2c8a
#8 [ffff8800b81cf450] xfs_buf_read_map at ffffffff812e3f30
#9 [ffff8800b81cf498] xfs_trans_read_buf_map at ffffffff81316a7c
#10 [ffff8800b81cf4d8] xfs_read_agf at ffffffff8129ee05
#11 [ffff8800b81cf538] xfs_alloc_read_agf at ffffffff8129ef54
#12 [ffff8800b81cf578] xfs_alloc_fix_freelist at ffffffff8129f3fc
#13 [ffff8800b81cf650] xfs_alloc_vextent at ffffffff8129f6e5
#14 [ffff8800b81cf6a8] xfs_bmap_btalloc at ffffffff812b38b4 <- Called from if (args.fsbno == NULLFSBLOCK && nullfb) block
#15 [ffff8800b81cf788] xfs_bmap_alloc at ffffffff812b3a8e
#16 [ffff8800b81cf798] xfs_bmapi_write at ffffffff812b4476
#17 [ffff8800b81cf8e0] xfs_iomap_write_direct at ffffffff812f2eea
#18 [ffff8800b81cf980] __xfs_get_blocks at ffffffff812da63b
#19 [ffff8800b81cfa18] xfs_get_blocks_direct at ffffffff812db1a7
#20 [ffff8800b81cfa28] __blockdev_direct_IO at ffffffff811e0384
#21 [ffff8800b81cfc58] xfs_vm_direct_IO at ffffffff812d962b
#22 [ffff8800b81cfca0] xfs_file_dio_aio_write at ffffffff812e9b6f
#23 [ffff8800b81cfd38] xfs_file_write_iter at ffffffff812ea167
#24 [ffff8800b81cfd68] aio_run_iocb at ffffffff811ef6a2
#25 [ffff8800b81cfe60] do_io_submit at ffffffff811f1076
#26 [ffff8800b81cff40] sys_io_submit at ffffffff811f13e0
#27 [ffff8800b81cff50] entry_SYSCALL_64_fastpath at ffffffff81661eb6
So what happened here is that this process was called with the following
xfs_bmalloca:
struct xfs_bmalloca {
firstblock = 0xffff8800b81cf930, # When read this address shows (NULLFSBLOCK): ffff8800b81cf930: ffffffffffffffff
flist = 0xffff8800b81cf940,
tp = 0xffff880133c951d0,
ip = 0xffff88013a313400,
prev = {
br_startoff = 1123392,
br_startblock = 88284,
br_blockcount = 32,
br_state = XFS_EXT_NORM
},
got = {
br_startoff = 1124352,
br_startblock = 1102322,
br_blockcount = 32,
br_state = XFS_EXT_NORM
},
offset = 1124320,
length = 32,
blkno = 1102290,
cur = 0x0,
idx = 1655,
nallocs = 0,
logflags = 0,
total = 36,
minlen = 1,
minleft = 3,
eof = false,
wasdel = false,
aeof = false,
conv = false,
userdata = 1 '\001',
flags = 8
}
Based on that, what likely happened is that we first called xfs_alloc_vextent with
args->type = XFS_ALLOCTYPE_START_BNO, set from xfs_bmap_btalloc_nullfb (which is
called since nullfb is true). In xfs_alloc_vextent we set args->agno = 1
and args->type = XFS_ALLOCTYPE_THIS_AG, and continue on to call xfs_alloc_fix_freelist.
It in turn reads AG1's AGF xfs_buf and eventually returns with this buffer locked,
added to the transaction, and args->agbp pointing to it. Subsequently we call
xfs_alloc_ag_vextent, which returns without an error, and this breaks out of the
loop. However, the following check in xfs_alloc_vextent triggers:
if (args->agbno == NULLAGBLOCK)
args->fsbno = NULLFSBLOCK;
So despite xfs_alloc_ag_vextent not returning an error apparently it couldn't
satisfy the block allocation. At this point we return to xfs_bmap_btalloc, which
goes on to retry the allocation, this time iterating through every AG. The following
command confirms this:
crash> dis -l ffffffff812b38b4
/home/nborisov/projects/kernel/source/fs/xfs/libxfs/xfs_bmap.c: 3850
Except this time xfs_alloc_vextent is entered with XFS_ALLOCTYPE_FIRST_AG, and
as soon as it starts iterating the AGs it blocks trying to acquire AGF0's
xfs_buf. Here is the buf this process is actually waiting on:
crash> struct xfs_buf.b_bn ffff8800a5f1c280
b_bn = 1
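To make the retry behaviour concrete, here is a hedged sketch of the cyclic AG walk (names and behaviour simplified from my reading of xfs_alloc_vextent, not the actual kernel code):

```python
# Simplified model of the cyclic AG walk in xfs_alloc_vextent:
# starting from some AG, try each AG in turn, wrapping around once.

def ag_walk(start_agno, agcount):
    """Order in which AGs are tried, starting at start_agno."""
    return [(start_agno + i) % agcount for i in range(agcount)]

# First attempt (START_BNO -> THIS_AG): only AG1 is tried, and fails.
# Retry (FIRST_AG): the walk begins at AG0, whose AGF buffer lock is
# what this process now blocks on while still holding AG1's AGF.
assert ag_walk(0, 4) == [0, 1, 2, 3]
assert ag_walk(1, 4) == [1, 2, 3, 0]
```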
At the same time it is holding the AGF1 buffer dirty. This is evident from the
items on the transaction's item list:
crash> struct -ox xfs_trans.t_items 0xffff880133c951d0
struct xfs_trans {
[ffff880133c95290] struct list_head t_items;
}
crash> list -s xfs_log_item_desc -l xfs_log_item_desc.lid_trans -H ffff880133c95290
ffff8800ba9955a8
struct xfs_log_item_desc {
lid_item = 0xffff88013956d8e8,
lid_trans = {
next = 0xffff8800ba995f88,
prev = 0xffff880133c95290
},
lid_flags = 0 '\000'
}
ffff8800ba995f88
struct xfs_log_item_desc {
lid_item = 0xffff8800a60b1570,
lid_trans = {
next = 0xffff880133c95290,
prev = 0xffff8800ba9955a8
},
lid_flags = 0 '\000'
}
crash> struct xfs_buf_log_item.bli_buf,bli_item 0xffff8800a60b1570
bli_buf = 0xffff8800a5f1d900
bli_item = {
li_ail = {
next = 0xffff8800b5d14640,
prev = 0xffff88013956d8e8
},
li_lsn = 8589944450,
li_desc = 0xffff8800ba995f80,
li_mountp = 0xffff8800a4e8b000,
li_ailp = 0xffff880139ef6f00,
li_type = 4668,
li_flags = 1,
li_bio_list = 0x0,
li_cb = 0xffffffff8130cbc0 <xfs_buf_iodone>,
li_ops = 0xffffffff8183d940 <xfs_buf_item_ops>,
li_cil = {
next = 0xffff8800a60b15c0,
prev = 0xffff8800a60b15c0
},
li_lv = 0x0,
li_seq = 82
}
crash> struct xfs_buf.b_bn 0xffff8800a5f1d900
b_bn = 7680001
Checking what 7680001 corresponds to:
xfs_db> agf 1
xfs_db> daddr
current daddr is 7680001
So this process is truly holding AGF1 locked.
On the other hand I have the truncation process - ProcessB, whose
call stack is:
PID: 4532 TASK: ffff8800a5b03780 CPU: 3 COMMAND: "xfs_io"
#0 [ffff8800a4dbb808] __schedule at ffffffff8165be8b
#1 [ffff8800a4dbb860] schedule at ffffffff8165c80c
#2 [ffff8800a4dbb878] schedule_timeout at ffffffff816608ef
#3 [ffff8800a4dbb918] __down at ffffffff8165f791
#4 [ffff8800a4dbb980] xfs_buf_lock at ffffffff812e25e7
#5 [ffff8800a4dbb9a8] _xfs_buf_find at ffffffff812e28f4
#6 [ffff8800a4dbb9f8] xfs_buf_get_map at ffffffff812e2c8a
#7 [ffff8800a4dbba38] xfs_buf_read_map at ffffffff812e3f30
#8 [ffff8800a4dbba80] xfs_trans_read_buf_map at ffffffff81316a7c
#9 [ffff8800a4dbbac0] xfs_read_agf at ffffffff8129ee05
#10 [ffff8800a4dbbb20] xfs_alloc_read_agf at ffffffff8129ef54
#11 [ffff8800a4dbbb60] xfs_alloc_fix_freelist at ffffffff8129f3fc
#12 [ffff8800a4dbbc38] xfs_free_extent at ffffffff8129ff1d
#13 [ffff8800a4dbbcd8] xfs_trans_free_extent at ffffffff813177d6
#14 [ffff8800a4dbbd08] xfs_bmap_finish at ffffffff812dea09
#15 [ffff8800a4dbbd40] xfs_itruncate_extents at ffffffff812f903e
#16 [ffff8800a4dbbdd8] xfs_setattr_size at ffffffff812f541b
#17 [ffff8800a4dbbe28] xfs_vn_setattr at ffffffff812f54fb
#18 [ffff8800a4dbbe50] notify_change at ffffffff811bf152
#19 [ffff8800a4dbbe90] do_truncate at ffffffff8119fcd5
#20 [ffff8800a4dbbf00] do_sys_ftruncate.constprop.4 at ffffffff811a003b
#21 [ffff8800a4dbbf40] sys_ftruncate at ffffffff811a00ce
#22 [ffff8800a4dbbf50] entry_SYSCALL_64_fastpath at ffffffff81661eb6
The first thing to inspect is what this process is waiting on:
crash> struct xfs_buf.b_bn ffff8800a5f1d900
b_bn = 7680001
So a buffer describing AGF1, which is already held by ProcessA.
Looking at the extent list it wants to free, things add up:
crash> struct xfs_bmap_free_t ffff8800a4dbbd98
struct xfs_bmap_free_t {
xbf_first = 0xffff8800ba9555e8,
xbf_count = 1,
xbf_low = 0
}
crash> struct xfs_bmap_free_item 0xffff8800ba9555e8
struct xfs_bmap_free_item {
xbfi_startblock = 1060329, # >> 20 = AGF1
xbfi_blockcount = 32,
xbfi_next = 0x0
}
Inspecting this process's transaction item list I get the following
items:
crash> list -s xfs_log_item_desc.lid_item,lid_flags -l xfs_log_item_desc.lid_trans -H ffff880133c95f40
ffff8800ba995f08
lid_item = 0xffff8800b5d147d0
lid_flags = 1 '\001'
ffff8800ba995248
lid_item = 0xffff8801394ed2b8
lid_flags = 1 '\001'
ffff8800ba9958a8
lid_item = 0xffff8801394ed3a0
lid_flags = 1 '\001'
ffff8800ba995448
lid_item = 0xffff8801394ed488
lid_flags = 1 '\001'
Printing each item's li_ops member reveals what they hold:
crash> struct xfs_log_item.li_ops 0xffff8800b5d147d0
li_ops = 0xffffffff8183d980 <xfs_efd_item_ops>
crash> struct xfs_log_item.li_ops 0xffff8801394ed2b8
li_ops = 0xffffffff8183d940 <xfs_buf_item_ops>
crash> struct xfs_log_item.li_ops 0xffff8801394ed3a0
li_ops = 0xffffffff8183d940 <xfs_buf_item_ops>
crash> struct xfs_log_item.li_ops 0xffff8801394ed488
li_ops = 0xffffffff8183d940 <xfs_buf_item_ops>
So one EFD item and three buf items. Printing the xfs_buf
from each of the respective buf items:
crash> struct xfs_buf_log_item.bli_buf 0xffff8801394ed2b8
bli_buf = 0xffff8800a5f1c280
crash> struct xfs_buf_log_item.bli_buf 0xffff8801394ed3a0
bli_buf = 0xffff8800b80a5180
crash> struct xfs_buf_log_item.bli_buf 0xffff8801394ed488
bli_buf = 0xffff8800a5f1e300
crash> struct xfs_buf.b_bn 0xffff8800a5f1c280
b_bn = 1 <= AGF0
crash> struct xfs_buf.b_bn 0xffff8800b80a5180
b_bn = 8
crash> struct xfs_buf.b_bn 0xffff8800a5f1e300
b_bn = 16
So this process is holding AGF0's buffer plus two more buffers. At
that point both processes are stuck, each waiting on a buffer held
by the other. The only things I'm unsure about on the allocation side are
why it doesn't release args->agbp when it fails to allocate from
that AG, and under what conditions the first xfs_alloc_vextent call
returns no error yet with args->agbno = NULLAGBLOCK, so that
args->fsbno is set to NULLFSBLOCK.
On the other hand, I haven't managed to figure out why the truncating process,
wanting to free a block residing in AG1, holds AGF0 locked.
The only commit which contains anything remotely similar to this is
e04426b9202b ("xfs: move allocation stack switch up to xfs_bmapi_allocate"),
but it dates way back, and that fix is already in 4.4.
* Re: Deadlock between block allocation and block truncation
From: Christoph Hellwig @ 2017-04-12 16:10 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: linux-xfs
Hi Nikolay,
I guess the culprit is that truncate can free up to two extents in
the same transaction and thus try to lock two different AGs without
requiring them to be in increasing order.
Does the one liner below fix the problem for you?
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 7605d8396596..29f2cd5afb04 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -58,7 +58,7 @@ kmem_zone_t *xfs_inode_zone;
* Used in xfs_itruncate_extents(). This is the maximum number of extents
* freed from a file in a single transaction.
*/
-#define XFS_ITRUNC_MAX_EXTENTS 2
+#define XFS_ITRUNC_MAX_EXTENTS 1
STATIC int xfs_iflush_int(struct xfs_inode *, struct xfs_buf *);
STATIC int xfs_iunlink(struct xfs_trans *, struct xfs_inode *);
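The effect of the one-liner can be sketched as a toy model (illustrative only, not kernel code): truncate frees extents back to front, so their AGs can appear in any order; limiting a transaction to one freed extent means it holds at most one AGF at a time, and no AGF-ordering cycle can form on the truncate side.

```python
# Toy model of the fix: group the extents a truncate frees into
# transactions, and look at which AGF locks each transaction holds.

def agfs_locked_per_tx(extent_agnos, max_extents_per_tx):
    """Sets of AGF locks each truncate transaction would hold."""
    txs = [extent_agnos[i:i + max_extents_per_tx]
           for i in range(0, len(extent_agnos), max_extents_per_tx)]
    return [set(tx) for tx in txs]

extents = [1, 0]  # e.g. an extent in AG1 freed before one in AG0

# XFS_ITRUNC_MAX_EXTENTS = 2: one transaction locks two AGFs in
# arbitrary (here descending) order, enabling the inversion.
assert agfs_locked_per_tx(extents, 2) == [{0, 1}]
# XFS_ITRUNC_MAX_EXTENTS = 1: each transaction locks a single AGF.
assert all(len(s) == 1 for s in agfs_locked_per_tx(extents, 1))
```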
* Re: Deadlock between block allocation and block truncation
From: Nikolay Borisov @ 2017-04-12 16:12 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-xfs
On 12.04.2017 19:10, Christoph Hellwig wrote:
> Hi Nikolay,
>
> I guess the culprit is that truncate can free up to two extents in
> the same transaction and thus try to lock two different AGs without
> requiring them to be in increasing order.
>
> Does the one liner below fix the problem for you?
>
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index 7605d8396596..29f2cd5afb04 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -58,7 +58,7 @@ kmem_zone_t *xfs_inode_zone;
> * Used in xfs_itruncate_extents(). This is the maximum number of extents
> * freed from a file in a single transaction.
> */
> -#define XFS_ITRUNC_MAX_EXTENTS 2
> +#define XFS_ITRUNC_MAX_EXTENTS 1
>
> STATIC int xfs_iflush_int(struct xfs_inode *, struct xfs_buf *);
> STATIC int xfs_iunlink(struct xfs_trans *, struct xfs_inode *);
>
I will apply this to 3.12 and 4.4 and run tests, since I can reproduce
fairly reliably on those. You don't expect any fallout on older kernels,
do you?
* Re: Deadlock between block allocation and block truncation
From: Christoph Hellwig @ 2017-04-12 16:20 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: Christoph Hellwig, linux-xfs
On Wed, Apr 12, 2017 at 07:12:50PM +0300, Nikolay Borisov wrote:
> I will apply this to 3.12 and 4.4 and run tests, since I can reproduce
> fairly reliably on those. You don't expect any fallout on older kernels,
> do you?
No.
* Re: Deadlock between block allocation and block truncation
From: Nikolay Borisov @ 2017-04-12 17:44 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-xfs, darrick.wong
On 12.04.2017 19:10, Christoph Hellwig wrote:
> Hi Nikolay,
>
> I guess the culprit is that truncate can free up to two extents in
> the same transaction and thus try to lock two different AGs without
> requiring them to be in increasing order.
On the other hand, Darrick suggested that the problem might be in the
allocation path, due to it holding a dirty buffer for AGF1 while proceeding
to lock AGF0, resulting in a lock-order violation. The bli holding
AGF1 in the allocating task is:
crash> struct xfs_buf_log_item.bli_flags 0xffff8800a60b1570
bli_flags = 2
That's XFS_BLI_DIRTY. In Darrick's opinion, here is what
*should* happen:
"
djwong: either agf1 is clean and it needs to release that before going
for agf0, or agf1 is dirty and thus it cannot go for agf0
"
In this case AGF1 is dirty and the allocation path continues on to AGF0,
which is a clear lock-order violation.
On the truncation side, the bli flags for AGF0 are:
crash> struct -x xfs_buf_log_item.bli_flags 0xffff8801394ed2b8
bli_flags = 0xa => XFS_BLI_DIRTY | XFS_BLI_LOGGED
And then it proceeds to lock AGF1 (ascending order), which is correct.
Your patch is still likely to help this situation, though I'm
not sure it modifies the right side of the violation.
Regards,
Nikolay
* Re: Deadlock between block allocation and block truncation
From: Christoph Hellwig @ 2017-04-12 17:57 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: Christoph Hellwig, linux-xfs, darrick.wong
On Wed, Apr 12, 2017 at 08:44:32PM +0300, Nikolay Borisov wrote:
> "
> djwong: either agf1 is clean and it needs to release that before going
> for agf0, or agf1 is dirty and thus it cannot go for agf0
> "
Yes. Older kernels had some bugs in this area due to busy extent
tracking, where xfs_alloc_ag_vextent would fail despite
xfs_alloc_fix_freelist picking an AG and possibly dirtying the AGF.
My busy extent tracking changes for the asynchronous discard code in
4.11-rc should have fixed that.
But even with that in place I think that locking two AGFs in any
order in the truncate path is wrong.
* Re: Deadlock between block allocation and block truncation
From: Nikolay Borisov @ 2017-04-13 13:52 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-xfs
On 12.04.2017 19:10, Christoph Hellwig wrote:
> Hi Nikolay,
>
> I guess the culprit is that truncate can free up to two extents in
> the same transaction and thus try to lock two different AGs without
> requiring them to be in increasing order.
>
> Does the one liner below fix the problem for you?
So after 200 runs of generic/299 I didn't hit the deadlock, whereas
before it would hit within the first 30 or so. FWIW:
Tested-by: Nikolay Borisov <nborisov@suse.com>
On a different note - do you think that reducing the unmapped extents
from 2 to 1 would introduce any performance degradation during
truncation? Looking around the code, this define is only used when doing
truncation, so perhaps a better approach would be to turn this
xfs_bunmapi argument into a boolean signalling whether we are
truncating, and if so have xfs_bunmapi unmap all possible extents
from only a single AG. I'm going to sift through the git history to
figure out where truncate's maximum of 2 extents came from.
Regards,
Nikolay
* Re: Deadlock between block allocation and block truncation
From: Christoph Hellwig @ 2017-04-14 7:42 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: Christoph Hellwig, linux-xfs
On Thu, Apr 13, 2017 at 04:52:03PM +0300, Nikolay Borisov wrote:
> On a different note - do you think that reducing the unmapped extents
> from 2 to 1 would introduce any performance degradation during
> truncation?
There will be some. But now that we have the CIL it will just be
additional in-kernel overhead instead of overhead in the on-disk
log.
> Looking around the code this define is only used when doing
> truncation, so perhaps a better thing to do would be to turn this
> xfs_bunmapi arg to a boolean which signal whether we are doing
> truncation or not. And if it is set to true have xfs_bunmapi unmap all
> possible extents from only a single AG? I'm going to sift through the
> git history to figure out where this requirement of maximum 2 extent
> came to truncate, came.
We have the problem with all transactions that could lock multiple AGF
headers, so that's not going to cut it.
I think we could still free multiple extents per transaction if they are
all in the same AG. I'll need to check whether that's worth it. And on
top of that, I have started entirely reworking what is currently
xfs_bunmapi, but that will have to wait until after a fix for your issue.
* Re: Deadlock between block allocation and block truncation
From: Christoph Hellwig @ 2017-04-25 7:36 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: Christoph Hellwig, linux-xfs
FYI, I've been testing with the patch quite a bit and it seems to
be doing fine.
But I fear it's not the complete fix: with truncates, hole
punches, or other complicated operations, the bmap btree blocks
might be from different AGs than the data blocks, and we could
still run into these issues in theory. So I fear we might have
to come up with a solution where we roll into a new chained transaction
every time we encounter a new AG instead.
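A sketch of that rolling idea (hypothetical helper, not an implementation): walk the to-be-freed blocks and start a fresh transaction whenever the AG changes, so no transaction ever spans two AGs.

```python
# Hypothetical sketch of rolling into a new transaction per AG change;
# the helper name is invented for illustration, and agblklog = 20 is
# assumed from the geometry discussed earlier in the thread.

def split_on_ag_change(blocks, agblklog=20):
    """Group blocks so each group (one transaction) stays in one AG."""
    groups, cur, cur_ag = [], [], None
    for b in blocks:
        ag = b >> agblklog
        if cur and ag != cur_ag:  # new AG: roll to a fresh transaction
            groups.append(cur)
            cur = []
        cur.append(b)
        cur_ag = ag
    if cur:
        groups.append(cur)
    return groups

# bmbt blocks and data blocks from different AGs land in separate
# chained transactions, so only one AGF is ever locked at a time.
groups = split_on_ag_change([1102322, 1060329, 88284])
assert all(len({b >> 20 for b in g}) == 1 for g in groups)
```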