* [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Jeff Liu @ 2013-09-22 8:25 UTC
To: xfs@oss.sgi.com
From: Jie Liu <jeff.liu@oracle.com>
At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
the extent records were originally stored in the inline extent buffer
and we have to switch from it to a direct extent list for the newly
allocated extents. This is wrong. e.g,

Create a file with three extents as follows:

xfs_io -f -c "truncate 100m" /xfs/testme

for i in $(seq 0 5 10); do
    offset=$(($i * $((1 << 20))))
    xfs_io -c "pwrite $offset 1m" /xfs/testme
done

Inline
------
irec:   if_bytes   bytes_diff   new_size
1st     0          16           16
2nd     16         16           32

Switching
---------                                     rnew_size
3rd     32         16           48 + 32 = 80  roundup=128

In this case, the desired value of new_size should be 48; it will then
be rounded up to 64 and assigned to rnew_size.

However, this issue is masked because if_bytes is reset to the new_size
calculated at the beginning of xfs_iext_add() before that function
returns, which in turn makes rnew_size correct again. Hence, this
cannot be detected via xfstests.

This patch fixes the above problem and revises the new_size comment at
xfs_iext_realloc_direct() to make it more readable. It also fixes the
comment for switching from the inline extent buffer to a direct extent
list to reflect this change.
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
---
fs/xfs/xfs_inode_fork.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/fs/xfs/xfs_inode_fork.c b/fs/xfs/xfs_inode_fork.c
index dfb4226..7c6192a 100644
--- a/fs/xfs/xfs_inode_fork.c
+++ b/fs/xfs/xfs_inode_fork.c
@@ -1359,7 +1359,7 @@ xfs_iext_remove_indirect(
void
xfs_iext_realloc_direct(
xfs_ifork_t *ifp, /* inode fork pointer */
- int new_size) /* new size of extents */
+ int new_size) /* new size of extents after adding */
{
int rnew_size; /* real new size of extents */
@@ -1397,13 +1397,8 @@ xfs_iext_realloc_direct(
rnew_size - ifp->if_real_bytes);
}
}
- /*
- * Switch from the inline extent buffer to a direct
- * extent list. Be sure to include the inline extent
- * bytes in new_size.
- */
+ /* Switch from the inline extent buffer to a direct extent list */
else {
- new_size += ifp->if_bytes;
if (!is_power_of_2(new_size)) {
rnew_size = roundup_pow_of_two(new_size);
}
--
1.7.9.5
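To make the arithmetic in the commit message concrete, here is a
minimal user-space sketch of the sizing logic (an illustration only,
assuming 16-byte incore extent records; roundup_pow2() stands in for
the kernel's roundup_pow_of_two()):

#include <stdio.h>

/* Round v up to the next power of two, like roundup_pow_of_two(). */
static unsigned int roundup_pow2(unsigned int v)
{
        unsigned int r = 1;

        while (r < v)
                r <<= 1;
        return r;
}

int main(void)
{
        unsigned int rec_size = 16;     /* bytes per extent record */
        unsigned int if_bytes = 32;     /* two inline records in use */
        unsigned int ext_diff = 1;      /* adding one more record */

        /* xfs_iext_add() already computes the size *after* adding: */
        unsigned int new_size = if_bytes + ext_diff * rec_size;  /* 48 */

        /* Old path: if_bytes was added again -> 80, rounded to 128. */
        unsigned int old = roundup_pow2(new_size + if_bytes);

        /* Fixed path: round up the already-correct 48 -> 64. */
        unsigned int fixed = roundup_pow2(new_size);

        printf("old rnew_size = %u, fixed rnew_size = %u\n", old, fixed);
        return 0;
}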
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Dave Chinner @ 2013-09-23 0:56 UTC
To: Jeff Liu; +Cc: xfs@oss.sgi.com
On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
> From: Jie Liu <jeff.liu@oracle.com>
>
> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
> the extent records were originally stored in the inline extent buffer
> and we have to switch from it to a direct extent list for the newly
> allocated extents. This is wrong. e.g,
>
> Create a file with three extents as follows:
>
> xfs_io -f -c "truncate 100m" /xfs/testme
>
> for i in $(seq 0 5 10); do
>     offset=$(($i * $((1 << 20))))
>     xfs_io -c "pwrite $offset 1m" /xfs/testme
> done
>
> Inline
> ------
> irec:   if_bytes   bytes_diff   new_size
> 1st     0          16           16
> 2nd     16         16           32
>
> Switching
> ---------                                     rnew_size
> 3rd     32         16           48 + 32 = 80  roundup=128
>
> In this case, the desired value of new_size should be 48; it will
> then be rounded up to 64 and assigned to rnew_size.
Ok, so it allocates 128 bytes instead of 64 bytes. It tracks that
allocation size correctly in ifp->if_real_bytes, and all it means is
that there are 4 extra empty slots in the extent array. The code
already handles having empty slots in the direct extent array, so
what impact is there as a result of the oversized initial allocation
that is currently happening?

i.e. if fixing the oversized allocation results in more memory
allocations due to resizing more regularly, then is there a benefit
to changing this code, given that the rewrite of the ifp->if_bytes
value in the case where we do inline->direct conversion prevents this
over-allocation from being a problem...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
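To illustrate why the extra slots themselves are harmless, here is a
small sketch of used versus allocated fork space (a simplified model
of the two fields named above; empty_slots() is a hypothetical helper,
not a kernel function):

#include <stdio.h>

#define EXT_REC_SIZE 16         /* bytes per incore extent record */

/* Simplified stand-in for the relevant xfs_ifork_t fields. */
struct ifork {
        int if_bytes;           /* bytes of extent records in use */
        int if_real_bytes;      /* bytes actually allocated */
};

/* Hypothetical helper: slots allocated but not yet in use. */
static int empty_slots(const struct ifork *ifp)
{
        return (ifp->if_real_bytes - ifp->if_bytes) / EXT_REC_SIZE;
}

int main(void)
{
        /* Oversized case from the thread: 3 records in a 128-byte array. */
        struct ifork oversized = { .if_bytes = 48, .if_real_bytes = 128 };
        /* Right-sized case: 3 records in a 64-byte array. */
        struct ifork fixed = { .if_bytes = 48, .if_real_bytes = 64 };

        /* Prints 5 vs 1: the oversized array carries 4 extra empty slots. */
        printf("oversized: %d empty slots\n", empty_slots(&oversized));
        printf("fixed:     %d empty slots\n", empty_slots(&fixed));
        return 0;
}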
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Jeff Liu @ 2013-09-23 4:47 UTC
To: Dave Chinner; +Cc: xfs@oss.sgi.com
Hi Dave,
On 09/23/2013 08:56 AM, Dave Chinner wrote:
> On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
>> From: Jie Liu <jeff.liu@oracle.com>
>>
>> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
>> the extent records were originally stored in the inline extent buffer
>> and we have to switch from it to a direct extent list for the newly
>> allocated extents. This is wrong. e.g,
>>
>> Create a file with three extents as follows:
>>
>> xfs_io -f -c "truncate 100m" /xfs/testme
>>
>> for i in $(seq 0 5 10); do
>>     offset=$(($i * $((1 << 20))))
>>     xfs_io -c "pwrite $offset 1m" /xfs/testme
>> done
>>
>> Inline
>> ------
>> irec:   if_bytes   bytes_diff   new_size
>> 1st     0          16           16
>> 2nd     16         16           32
>>
>> Switching
>> ---------                                     rnew_size
>> 3rd     32         16           48 + 32 = 80  roundup=128
>>
>> In this case, the desired value of new_size should be 48; it will
>> then be rounded up to 64 and assigned to rnew_size.
>
> Ok, so it allocates 128 bytes instead of 64 bytes. It tracks that
> allocation size correctly in ifp->if_real_bytes, and all it means is
> that there are 4 extra empty slots in the extent array. The code
> already handles having empty slots in the direct extent array, so
> what impact is there as a result of the oversized initial allocation
> that is currently happening?
>
> i.e. if fixing the oversized allocation results in more memory
> allocations due to resizing more regularly, then is there a benefit
> to changing this code, given that the rewrite of the ifp->if_bytes
> value in the case where we do inline->direct conversion prevents this
> over-allocation from being a problem...
I guess my current patch subject/description misled you. The result
of the over-allocation can be ignored, since it can be handled in the
direct extent array as empty slots.

Actually, what I want to say is that we don't need to perform
"new_size += ifp->if_bytes;" again at xfs_iext_realloc_direct(),
because the new_size at xfs_iext_add() is already the size of the
extents after adding, just as the variable's comment mentions.
Thanks,
-Jeff
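A sketch of that call flow may help; this is an abstraction of the two
functions discussed here (names and signatures simplified, not the
actual kernel code):

#include <stdio.h>

#define EXT_REC_SIZE 16

/* Sketch of xfs_iext_realloc_direct(): new_size already includes the
 * inline bytes, so adding if_bytes again double-counts them. */
static void realloc_direct(int if_bytes, int new_size, int buggy)
{
        if (buggy)
                new_size += if_bytes;   /* the removed line */
        printf("requested size before roundup: %d\n", new_size);
}

/* Sketch of xfs_iext_add(): new_size is the size *after* adding. */
static void iext_add(int if_bytes, int ext_diff, int buggy)
{
        int new_size = if_bytes + ext_diff * EXT_REC_SIZE;

        realloc_direct(if_bytes, new_size, buggy);
}

int main(void)
{
        iext_add(32, 1, 1);     /* old code:   48 + 32 = 80 */
        iext_add(32, 1, 0);     /* fixed code: 48 */
        return 0;
}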
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Dave Chinner @ 2013-09-23 23:56 UTC
To: Jeff Liu; +Cc: xfs@oss.sgi.com
On Mon, Sep 23, 2013 at 12:47:23PM +0800, Jeff Liu wrote:
> Hi Dave,
>
> On 09/23/2013 08:56 AM, Dave Chinner wrote:
>
> > On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
> >> From: Jie Liu <jeff.liu@oracle.com>
> >>
> >> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
> >> the extent records were originally stored in the inline extent buffer
> >> and we have to switch from it to a direct extent list for the newly
> >> allocated extents. This is wrong. e.g,
> >>
> >> Create a file with three extents as follows:
> >>
> >> xfs_io -f -c "truncate 100m" /xfs/testme
> >>
> >> for i in $(seq 0 5 10); do
> >>     offset=$(($i * $((1 << 20))))
> >>     xfs_io -c "pwrite $offset 1m" /xfs/testme
> >> done
> >>
> >> Inline
> >> ------
> >> irec:   if_bytes   bytes_diff   new_size
> >> 1st     0          16           16
> >> 2nd     16         16           32
> >>
> >> Switching
> >> ---------                                     rnew_size
> >> 3rd     32         16           48 + 32 = 80  roundup=128
> >>
> >> In this case, the desired value of new_size should be 48; it will
> >> then be rounded up to 64 and assigned to rnew_size.
> >
> > Ok, so it allocates 128 bytes instead of 64 bytes. It tracks that
> > allocation size correctly in ifp->if_real_bytes, and all it means is
> > that there are 4 extra empty slots in the extent array. The code
> > already handles having empty slots in the direct extent array, so
> > what impact is there as a result of the oversized initial allocation
> > that is currently happening?
> >
> > i.e. if fixing the oversized allocation results in more memory
> > allocations due to resizing more regularly, then is there a benefit
> > to changing this code, given that the rewrite of the ifp->if_bytes
> > value in the case where we do inline->direct conversion prevents this
> > over-allocation from being a problem...
>
> I guess my current patch subject/description misled you. The result
> of the over-allocation can be ignored, since it can be handled in the
> direct extent array as empty slots.
That's what I thought ;)
> Actually, what I want to say is that we don't need to perform
> "new_size += ifp->if_bytes;" again at xfs_iext_realloc_direct(),
> because the new_size at xfs_iext_add() is already the size of the
> extents after adding, just as the variable's comment mentions.
Yes, I understand.
What I'm really asking is whether there is any specific impact
you can measure as a result of changing the initial allocation size.
i.e. are there workloads where there is a measurable difference in
memory footprint or a noticeable performance impact of having to
reallocate the direct array more frequently as files grow and/or
shrink?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Jeff Liu @ 2013-09-24 12:57 UTC
To: Dave Chinner; +Cc: xfs@oss.sgi.com
On 09/24/2013 07:56 AM, Dave Chinner wrote:
> On Mon, Sep 23, 2013 at 12:47:23PM +0800, Jeff Liu wrote:
>> Hi Dave,
>>
>> On 09/23/2013 08:56 AM, Dave Chinner wrote:
>>
>>> On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
>>>> From: Jie Liu <jeff.liu@oracle.com>
>>>>
> >>>> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
> >>>> the extent records were originally stored in the inline extent buffer
> >>>> and we have to switch from it to a direct extent list for the newly
> >>>> allocated extents. This is wrong. e.g,
> >>>>
> >>>> Create a file with three extents as follows:
> >>>>
> >>>> xfs_io -f -c "truncate 100m" /xfs/testme
> >>>>
> >>>> for i in $(seq 0 5 10); do
> >>>>     offset=$(($i * $((1 << 20))))
> >>>>     xfs_io -c "pwrite $offset 1m" /xfs/testme
> >>>> done
> >>>>
> >>>> Inline
> >>>> ------
> >>>> irec:   if_bytes   bytes_diff   new_size
> >>>> 1st     0          16           16
> >>>> 2nd     16         16           32
> >>>>
> >>>> Switching
> >>>> ---------                                     rnew_size
> >>>> 3rd     32         16           48 + 32 = 80  roundup=128
> >>>>
> >>>> In this case, the desired value of new_size should be 48; it will
> >>>> then be rounded up to 64 and assigned to rnew_size.
>>>
> >>> Ok, so it allocates 128 bytes instead of 64 bytes. It tracks that
> >>> allocation size correctly in ifp->if_real_bytes, and all it means is
> >>> that there are 4 extra empty slots in the extent array. The code
> >>> already handles having empty slots in the direct extent array, so
> >>> what impact is there as a result of the oversized initial allocation
> >>> that is currently happening?
> >>>
> >>> i.e. if fixing the oversized allocation results in more memory
> >>> allocations due to resizing more regularly, then is there a benefit
> >>> to changing this code, given that the rewrite of the ifp->if_bytes
> >>> value in the case where we do inline->direct conversion prevents this
> >>> over-allocation from being a problem...
>>
>> I guess my current patch subject/description misled you. The result
>> of the over-allocation can be ignored, since it can be handled in the
>> direct extent array as empty slots.
>
> That's what I thought ;)
>
>> Actually, what I want to say is that we don't need to perform
>> "new_size += ifp->if_bytes;" again at xfs_iext_realloc_direct(),
>> because the new_size at xfs_iext_add() is already the size of the
>> extents after adding, just as the variable's comment mentions.
>
> Yes, I understand.
>
> What I'm really asking is whether there is any specific impact
> you can measure as a result of changing the initial allocation size.
> i.e. are there workloads where there is a measurable difference in
> memory footprint or a noticeable performance impact of having to
> reallocate the direct array more frequently as files grow and/or
> shrink?
Not yet observed any performance impact, but IMO this problem can
cause a difference in dynamic memory footprint when creating a large
number of files with 3 extents, plus additional kmalloc/kfree overhead
for files with 4 extents.

For the first case, the current code will allocate buffers from the
kmalloc-128 slab cache rather than kmalloc-64, hence it would occupy
more memory until being dropped from the cache, e.g,
# Create 10240 files with 3 extents
for ((i=0; i<10240; i++))
do
    xfs_io -f -c 'truncate 10m' /xfs/test_$i
    xfs_io -c 'pwrite 0 1' /xfs/test_$i 2>&1 >>/dev/null
    xfs_io -c 'pwrite 1m 1' /xfs/test_$i 2>&1 >>/dev/null
    xfs_io -c 'pwrite 5m 1' /xfs/test_$i 2>&1 >>/dev/null
done
# cat /proc/slabinfo
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>...
# Non-patched -- before creating files
kmalloc-128 5391 6176 128 32 1
kmalloc-64 21852 25152 64 64 1
# Afterwards -- the number of objects in the 128-byte slab increased
# significantly, while there is basically no change in the 64-byte slab
kmalloc-128 15381 15488 128 32 1
kmalloc-64 21958 25088 64 64 1
# patched -- before creating files
kmalloc-128 5751 7072 128 32 1
kmalloc-64 21420 24896 64 64 1
# Afterwards
kmalloc-128 6155 6688 128 32 1
kmalloc-64 30464 30464 64 64 1
With this patch, we can reduce the memory footprint for this particular scenario.
For the 2nd case, i.e., a file with 4 extents, it needs to resize the
direct extent list to add the fourth extent, because rnew_size is
re-initialized to 64 at the beginning of xfs_iext_realloc_direct()
while ifp->if_real_bytes is 128...

I cannot think of a convenient approach (perf kmem does not work on my
laptop for now) to demonstrate the consequence, but ftrace can be used
to figure out the different number of kmalloc calls. e.g,
# Create 4096 files with 4 extents and fetch the number of kmalloc calls.
# Keep the kmalloc tracepoint disabled while creating the 3-extent files,
# then enable it just for the writes that add the 4th extent.
echo 0 > /sys/kernel/debug/tracing/events/kmem/kmalloc/enable
echo > /sys/kernel/debug/tracing/trace
for ((i=0; i<4096; i++))
do
    xfs_io -f -c 'truncate 10m' /xfs/test_$i
    xfs_io -c 'pwrite 0 1' /xfs/test_$i 2>&1 >>/dev/null
    xfs_io -c 'pwrite 1m 1' /xfs/test_$i 2>&1 >>/dev/null
    xfs_io -c 'pwrite 5m 1' /xfs/test_$i 2>&1 >>/dev/null
done
echo 1 > /sys/kernel/debug/tracing/events/kmem/kmalloc/enable
for ((i=0; i<4096; i++))
do
    xfs_io -c 'pwrite 8m 1' /xfs/test_$i 2>&1 >>/dev/null
done
cat /sys/kernel/debug/tracing/trace | grep kmalloc | wc -l
# The number of kmalloc calls
Default     Patched
110364      103471
Thanks,
-Jeff
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Dave Chinner @ 2013-09-24 23:44 UTC
To: Jeff Liu; +Cc: xfs@oss.sgi.com
On Tue, Sep 24, 2013 at 08:57:30PM +0800, Jeff Liu wrote:
> On 09/24/2013 07:56 AM, Dave Chinner wrote:
>
> > On Mon, Sep 23, 2013 at 12:47:23PM +0800, Jeff Liu wrote:
> >> Hi Dave,
> >>
> >> On 09/23/2013 08:56 AM, Dave Chinner wrote:
> >>
> >>> On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
> >>>> From: Jie Liu <jeff.liu@oracle.com>
> >>>>
> >>>> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
> >>>> the extent records were originally stored in the inline extent buffer
> >>>> and we have to switch from it to a direct extent list for the newly
> >>>> allocated extents. This is wrong. e.g,
....
> >> Actually, what I want to say is that we don't need to perform
> >> "new_size += ifp->if_bytes;" again at xfs_iext_realloc_direct(),
> >> because the new_size at xfs_iext_add() is already the size of the
> >> extents after adding, just as the variable's comment mentions.
> >
> > Yes, I understand.
> >
> > What I'm really asking is whether there is any specific impact
> > you can measure as a result of changing the initial allocation size.
> > i.e. are there workloads where there is a measurable difference in
> > memory footprint or a noticeable performance impact of having to
> > reallocate the direct array more frequently as files grow and/or
> > shrink?
>
> Not yet observed any performance impact, but IMO this problem can
> cause a difference in dynamic memory footprint when creating a large
> number of files with 3 extents, plus additional kmalloc/kfree overhead
> for files with 4 extents.
>
> For the first case, the current code will allocate buffers from the
> kmalloc-128 slab cache rather than kmalloc-64, hence it would occupy
> more memory until being dropped from the cache, e.g,
>
> # Create 10240 files with 3 extents
> for ((i=0; i<10240; i++))
> do
>     xfs_io -f -c 'truncate 10m' /xfs/test_$i
>     xfs_io -c 'pwrite 0 1' /xfs/test_$i 2>&1 >>/dev/null
>     xfs_io -c 'pwrite 1m 1' /xfs/test_$i 2>&1 >>/dev/null
>     xfs_io -c 'pwrite 5m 1' /xfs/test_$i 2>&1 >>/dev/null
> done
>
> # cat /proc/slabinfo
> # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab>...
>
> # Non-patched -- before creating files
> kmalloc-128 5391 6176 128 32 1
> kmalloc-64 21852 25152 64 64 1
>
> # Afterwards -- the number of objects in the 128-byte slab increased
> # significantly, while there is basically no change in the 64-byte slab
> kmalloc-128 15381 15488 128 32 1
> kmalloc-64 21958 25088 64 64 1
>
>
> # patched -- before creating files
> kmalloc-128 5751 7072 128 32 1
> kmalloc-64 21420 24896 64 64 1
>
> # Afterwards
> kmalloc-128 6155 6688 128 32 1
> kmalloc-64 30464 30464 64 64 1
>
> With this patch, we can reduce the memory footprint for this particular scenario.
Ok, so it's used the kmalloc-64 slab much more effectively and not
touched the kmalloc-128 slab. Ok, so that's a measurable difference ;)
>
> For the 2nd case, i.e., a file with 4 extents, it needs to resize the
> direct extent list to add the fourth extent, because rnew_size is
> re-initialized to 64 at the beginning of xfs_iext_realloc_direct()
> while ifp->if_real_bytes is 128...
...
> # The number of kmalloc calls
> Default     Patched
> 110364      103471
And that demonstrates the impact, in that the oversized array gets
downsized again as the array grows. Ok, I'm convinced there is a net
win here :)
Reviewed-by: Dave Chinner <dchinner@redhat.com>
--
Dave Chinner
david@fromorbit.com
* Re: [PATCH] xfs: fix the wrong new_size/rnew_size at xfs_iext_realloc_direct()
From: Ben Myers @ 2013-10-01 22:33 UTC
To: Jeff Liu; +Cc: xfs@oss.sgi.com
On Sun, Sep 22, 2013 at 04:25:15PM +0800, Jeff Liu wrote:
> From: Jie Liu <jeff.liu@oracle.com>
>
> At xfs_iext_realloc_direct(), new_size is increased by if_bytes when
> the extent records were originally stored in the inline extent buffer
> and we have to switch from it to a direct extent list for the newly
> allocated extents. This is wrong. e.g,
>
> Create a file with three extents as follows:
>
> xfs_io -f -c "truncate 100m" /xfs/testme
>
> for i in $(seq 0 5 10); do
>     offset=$(($i * $((1 << 20))))
>     xfs_io -c "pwrite $offset 1m" /xfs/testme
> done
>
> Inline
> ------
> irec:   if_bytes   bytes_diff   new_size
> 1st     0          16           16
> 2nd     16         16           32
>
> Switching
> ---------                                     rnew_size
> 3rd     32         16           48 + 32 = 80  roundup=128
>
> In this case, the desired value of new_size should be 48; it will
> then be rounded up to 64 and assigned to rnew_size.
>
> However, this issue is masked because if_bytes is reset to the new_size
> calculated at the beginning of xfs_iext_add() before that function
> returns, which in turn makes rnew_size correct again. Hence, this
> cannot be detected via xfstests.
>
> This patch fixes the above problem and revises the new_size comment at
> xfs_iext_realloc_direct() to make it more readable. It also fixes the
> comment for switching from the inline extent buffer to a direct extent
> list to reflect this change.
>
> Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Applied.