public inbox for linux-xfs@vger.kernel.org

* xfstests, bad generic tests 009 and 308
@ 2015-09-18 16:38 Angelo Dureghello
  2015-09-18 22:44 ` Dave Chinner
  0 siblings, 1 reply; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-18 16:38 UTC (permalink / raw)
  To: xfs

Hi all,

I am working on ARM (32-bit arch), kernel 4.1.6, looking for the reason
behind some bad xfstests results.

-tests/generic/009
------------------
I get several "all holes" mismatches:

generic/009    [  842.949643] run fstests generic/009 at 2015-09-18 15:29:36
  - output mismatch (see /home/angelo/xfstests/results//generic/009.out.bad)
     --- tests/generic/009.out    2015-09-17 10:54:06.689071257 +0000
     +++ /home/angelo/xfstests/results//generic/009.out.bad 2015-09-18 15:29:41.412784177 +0000
     @@ -1,79 +1,45 @@
      QA output created by 009
          1. into a hole
     -0: [0..7]: hole
     -1: [8..23]: unwritten
     -2: [24..39]: hole
     +0: [0..39]: hole
      daa100df6e6711906b61c9ab5aa16032

Some other tests give the same kind of mismatch.


-tests/generic/308
------------------

I have CONFIG_LBDAF=y set.

On my target device this test creates a 16-terabyte file, testfile.308:

-rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308

while "df" shows the partition is nowhere near full:

/dev/mmcblk0p5   8378368   45252   8333116   1% /media/p5

and a subsequent "rm -f" on it pegs the CPU at 95%, forever.

This issue seems to have been known for a long time; it was discussed in
this thread:

http://oss.sgi.com/archives/xfs/2013-04/msg00273.html

I was wondering whether there was any special reason why Jeff's patch was
never applied.

I applied the patch anyway, and the tests pass.

-- 
Best regards,
Angelo Dureghello

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: xfstests, bad generic tests 009 and 308
  2015-09-18 16:38 xfstests, bad generic tests 009 and 308 Angelo Dureghello
@ 2015-09-18 22:44 ` Dave Chinner
  2015-09-21 11:13   ` Angelo Dureghello
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2015-09-18 22:44 UTC (permalink / raw)
  To: Angelo Dureghello; +Cc: xfs

On Fri, Sep 18, 2015 at 06:38:38PM +0200, Angelo Dureghello wrote:
> Hi all,
> 
> working on arm (32bit arch), kernel 4.1.6.

Is this a new platform?

Also, we need to know what compiler you are using, because we know
that certain versions of gcc miscompile XFS kernel code on arm
(4.6, 4.7 and certain versions of 4.8 are suspect) due to a
combination of compiler mis-optimisations and kernel bugs in the
arm 64 bit division asm implementation.

As such, it would be worthwhile trying gcc-4.9 and a 4.3-rc1 kernel
to see if the problems still occur.

> Looking to find the reason of some bad results on xfstests,
> 
> -tests/generic/009
> ------------------
> i get several "all holes" messages
> 
> generic/009    [  842.949643] run fstests generic/009 at 2015-09-18
> 15:29:36
>  - output mismatch (see
> /home/angelo/xfstests/results//generic/009.out.bad)
>     --- tests/generic/009.out    2015-09-17 10:54:06.689071257 +0000
>     +++ /home/angelo/xfstests/results//generic/009.out.bad
> 2015-09-18 15:29:41.412784177 +0000
>     @@ -1,79 +1,45 @@
>      QA output created by 009
>          1. into a hole
>     -0: [0..7]: hole
>     -1: [8..23]: unwritten
>     -2: [24..39]: hole
>     +0: [0..39]: hole
>      daa100df6e6711906b61c9ab5aa16032
> 
> also some other tests are giving the same bad notices.

Can you attach the entire
/home/angelo/xfstests/results//generic/009.out.bad file? I'm not
sure which of the tests this output comes from, so I need to
confirm which specific operations are resulting in errors.

> -tests/generic/308
> ------------------
> 
> I have now: CONFIG_LBDAF=y
> 
> In my target device this test creates a 16 Terabytes file 308.tempfile
> 
> -rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308
> 
> While "df" is not complaining about:
> 
> /dev/mmcblk0p5   8378368   45252   8333116   1% /media/p5
> 
> and next rm -f on it hands the cpu to 95%, forever.
> 
> This issue seems known from a long time, as it has been discussed in
> the thread:
> 
> http://oss.sgi.com/archives/xfs/2013-04/msg00273.html
> 
> I was wondering if there was any special reason why the Jeff patch has
> never been finally applied.

MAX_LFS_FILESIZE on 32 bits is 8TB, whereas xfs supports 16TB file
size on 32 bit systems. The specific issue this test covers was fixed
in commit 8695d27 ("xfs: fix infinite loop at
xfs_vm_writepage on 32bit system"):

http://oss.sgi.com/archives/xfs/2014-05/msg00447.html

And, as you may notice now, generic/308 is the test case for the
exact problem the above commit fixed. 
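To spell out the arithmetic behind those two limits (a sketch assuming
4 KiB pages, not the literal kernel definitions):

```shell
# Old 32-bit MAX_LFS_FILESIZE: (PAGE_SIZE << (BITS_PER_LONG - 1)) - 1
page_shift=12                                 # assumes 4 KiB pages
max_lfs=$(( (1 << (page_shift + 31)) - 1 ))   # 8 TiB - 1
# XFS limit on 32-bit: 2^32 page-cache indexes of 4 KiB each, minus one byte
xfs_max=$(( (1 << (page_shift + 32)) - 1 ))   # 16 TiB - 1
echo "$max_lfs"     # 8796093022207
echo "$xfs_max"     # 17592186044415 -- exactly the testfile.308 size above
```

Note that the second number matches the size ls reported for
testfile.308, one byte short of 16TiB.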

Can you find out exactly where the CPU is looping? sysrq-l will
help, as will running 'perf top -U -g' to show you the hot code
paths, and so on.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfstests, bad generic tests 009 and 308
  2015-09-18 22:44 ` Dave Chinner
@ 2015-09-21 11:13   ` Angelo Dureghello
  2015-09-21 11:18     ` Angelo Dureghello
  2015-09-21 22:52     ` Dave Chinner
  0 siblings, 2 replies; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-21 11:13 UTC (permalink / raw)
  To: xfs

[-- Attachment #1: Type: text/plain, Size: 4355 bytes --]

Hi Dave,

many thanks for the support. Sorry for the duplicate mail: after first
registering, my mails were not accepted, so I re-registered with a
company address.


On 19/09/2015 00:44, Dave Chinner wrote:
> On Fri, Sep 18, 2015 at 06:38:38PM +0200, Angelo Dureghello wrote:
>> Hi all,
>>
>> working on arm (32bit arch), kernel 4.1.6.
> Is this a new platform?
>
> Also, we need to know what compiler you are using, because we know
> that certain versions of gcc miscompile XFS kernel code on arm
> (4.6, 4.7 and certain versions of 4.8 are suspect) due to a
> combination of compiler mis-optimisations and kernel bugs in the
> arm 64 bit division asm implementation.
>
> As such, it would be worthwhile trying gcc-4.9 and a 4.3-rc1 kernel
> to see if the problems still occur.

I am actually using gcc-linaro-4.9-2015.05-x86_64_arm-linux-gnueabihf.

>> Looking to find the reason of some bad results on xfstests,
>>
>> -tests/generic/009
>> ------------------
>> i get several "all holes" messages
>>
>> generic/009    [  842.949643] run fstests generic/009 at 2015-09-18
>> 15:29:36
>>   - output mismatch (see
>> /home/angelo/xfstests/results//generic/009.out.bad)
>>      --- tests/generic/009.out    2015-09-17 10:54:06.689071257 +0000
>>      +++ /home/angelo/xfstests/results//generic/009.out.bad
>> 2015-09-18 15:29:41.412784177 +0000
>>      @@ -1,79 +1,45 @@
>>       QA output created by 009
>>           1. into a hole
>>      -0: [0..7]: hole
>>      -1: [8..23]: unwritten
>>      -2: [24..39]: hole
>>      +0: [0..39]: hole
>>       daa100df6e6711906b61c9ab5aa16032
>>
>> also some other tests are giving the same bad notices.
> Can you attach the entire
> /home/angelo/xfstests/results//generic/009.out.bad file? I'm not
> sure which of the tests this output comes from, so I need to
> confirm which specific operations are resulting in errors.
Sure. I ran the whole generic + shared + xfs test set; in total I have
38 failures, and I am now going through them one by one to understand
the reason. I have attached the 009 output.

>> -tests/generic/308
>> ------------------
>>
>> I have now: CONFIG_LBDAF=y
>>
>> In my target device this test creates a 16 Terabytes file 308.tempfile
>>
>> -rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308
>>
>> While "df" is not complaining about:
>>
>> /dev/mmcblk0p5   8378368   45252   8333116   1% /media/p5
>>
>> and next rm -f on it hands the cpu to 95%, forever.
>>
>> This issue seems known from a long time, as it has been discussed in
>> the thread:
>>
>> http://oss.sgi.com/archives/xfs/2013-04/msg00273.html
>>
>> I was wondering if there was any special reason why the Jeff patch has
>> never been finally applied.
> MAX_LFS_FILESIZE on 32 bits is 8TB, whereas xfs supports 16TB file
> size on 32 bit systems. The specific issue this test fixed was
> committed in commit 8695d27 ("xfs: fix infinite loop at
> xfs_vm_writepage on 32bit system")
>
> http://oss.sgi.com/archives/xfs/2014-05/msg00447.html
>
> And, as you may notice now, generic/308 is the test case for the
> exact problem the above commit fixed.

I have a recent git version of xfstests, but generic/308 shows:

#! /bin/bash
# FS QA Test No. 308
#
# Regression test for commit:
# f17722f ext4: Fix max file size and logical block counting of extent 
format file

> Can you find out exactly where the CPU is looping? sysrq-l will
> help, as will running 'perf top -U -g' to show you the hot code
> paths, and so on.

Strangely, the patch
http://oss.sgi.com/archives/xfs/2014-05/msg00447.html is already included
in the XFS code that ships with this 4.1.6 kernel, while only applying
Jeff's earlier patch
http://oss.sgi.com/archives/xfs/2013-04/msg00273.html fixes the issue and
makes test 308 pass.


I have a 16MB partition, and I am wondering why XFS allows test 308 to
create a 16TB file.

-rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308


When test 308 exits, rm is invoked and the system gets stuck in an
infinite loop:

root      5445  0.7  0.2   3760  3180 ttyS0    S+   10:53   0:00 
/bin/bash /home/angelo/xfstests/tests/generic/308
root      5674  100  0.0   1388   848 ttyS0    R+   10:53   0:27 rm -f 
/media/p5/testfile.308

I currently can't install the perf tools due to a Debian repository
issue, but let me know and I will enable sysrq if needed.

Best regards
Angelo


>
> Cheers,
>
> Dave.

-- 
Best regards,
Angelo Dureghello


[-- Attachment #2: 009.out.bad --]
[-- Type: text/plain, Size: 4743 bytes --]

QA output created by 009
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
daa100df6e6711906b61c9ab5aa16032
	4. hole -> data
0: [0..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	5. hole -> unwritten
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	6. data -> hole
0: [24..39]: hole
1b3779878366498b28c702ef88c4a773
	7. data -> unwritten
0: [32..39]: hole
1b3779878366498b28c702ef88c4a773
	8. unwritten -> hole
0: [24..39]: hole
daa100df6e6711906b61c9ab5aa16032
	9. unwritten -> data
0: [32..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	10. hole -> data -> hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
daa100df6e6711906b61c9ab5aa16032
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
daa100df6e6711906b61c9ab5aa16032
	4. hole -> data
0: [0..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	5. hole -> unwritten
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	6. data -> hole
0: [24..39]: hole
1b3779878366498b28c702ef88c4a773
	7. data -> unwritten
0: [32..39]: hole
1b3779878366498b28c702ef88c4a773
	8. unwritten -> hole
0: [24..39]: hole
daa100df6e6711906b61c9ab5aa16032
	9. unwritten -> data
0: [32..39]: hole
cc63069677939f69a6e8f68cae6a6dac
	10. hole -> data -> hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
daa100df6e6711906b61c9ab5aa16032
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
cc58a7417c2d7763adc45b6fcd3fa024
	4. hole -> data
cc58a7417c2d7763adc45b6fcd3fa024
	5. hole -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	6. data -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	7. data -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	8. unwritten -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	9. unwritten -> data
cc58a7417c2d7763adc45b6fcd3fa024
	10. hole -> data -> hole
f6aeca13ec49e5b266cd1c913cd726e3
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
f6aeca13ec49e5b266cd1c913cd726e3
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
	1. into a hole
0: [0..39]: hole
daa100df6e6711906b61c9ab5aa16032
	2. into allocated space
cc58a7417c2d7763adc45b6fcd3fa024
	3. into unwritten space
cc58a7417c2d7763adc45b6fcd3fa024
	4. hole -> data
cc58a7417c2d7763adc45b6fcd3fa024
	5. hole -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	6. data -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	7. data -> unwritten
cc58a7417c2d7763adc45b6fcd3fa024
	8. unwritten -> hole
cc58a7417c2d7763adc45b6fcd3fa024
	9. unwritten -> data
cc58a7417c2d7763adc45b6fcd3fa024
	10. hole -> data -> hole
f6aeca13ec49e5b266cd1c913cd726e3
	11. data -> hole -> data
f6aeca13ec49e5b266cd1c913cd726e3
	12. unwritten -> data -> unwritten
f6aeca13ec49e5b266cd1c913cd726e3
	13. data -> unwritten -> data
f6aeca13ec49e5b266cd1c913cd726e3
	14. data -> hole @ EOF
e1f024eedd27ea6b1c3e9b841c850404
	15. data -> hole @ 0
eecb7aa303d121835de05028751d301c
	16. data -> cache cold ->hole
eecb7aa303d121835de05028751d301c
	17. data -> hole in single block file
0000000 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*
0000200 0000 0000 0000 0000 0000 0000 0000 0000
*
0000400 cdcd cdcd cdcd cdcd cdcd cdcd cdcd cdcd
*


* Re: xfstests, bad generic tests 009 and 308
  2015-09-21 11:13   ` Angelo Dureghello
@ 2015-09-21 11:18     ` Angelo Dureghello
  2015-09-21 22:52     ` Dave Chinner
  1 sibling, 0 replies; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-21 11:18 UTC (permalink / raw)
  To: xfs

Sorry, in my previous mail please read s/16M/16G; it was a typo.

I am using an SD card with test partitions of 8G and 16G.

Best regards
Angelo



* Re: xfstests, bad generic tests 009 and 308
  2015-09-21 11:13   ` Angelo Dureghello
  2015-09-21 11:18     ` Angelo Dureghello
@ 2015-09-21 22:52     ` Dave Chinner
  2015-09-22 12:41       ` Angelo Dureghello
  2015-09-23 10:43       ` Yann Dupont - Veille Techno
  1 sibling, 2 replies; 14+ messages in thread
From: Dave Chinner @ 2015-09-21 22:52 UTC (permalink / raw)
  To: Angelo Dureghello; +Cc: xfs

On Mon, Sep 21, 2015 at 01:13:41PM +0200, Angelo Dureghello wrote:
> Hi Dave,
> 
> many thanks for the support. Sorry for the double mail, after
> first registering mails was not accepted, so i re-registered
> with a company mail.
> 
> 
> On 19/09/2015 00:44, Dave Chinner wrote:
> >On Fri, Sep 18, 2015 at 06:38:38PM +0200, Angelo Dureghello wrote:
> >>Hi all,
> >>
> >>working on arm (32bit arch), kernel 4.1.6.
> >Is this a new platform?
> >
> >Also, we need to know what compiler you are using, because we know
> >that certain versions of gcc miscompile XFS kernel code on arm
> >(4.6, 4.7 and certain versions of 4.8 are suspect) due to a
> >combination of compiler mis-optimisations and kernel bugs in the
> >arm 64 bit division asm implementation.
> >
> >As such, it would be worthwhile trying gcc-4.9 and a 4.3-rc1 kernel
> >to see if the problems still occur.
> 
> I am using actually gcc-linaro-4.9-2015.05-x86_64_arm-linux-gnueabihf

So gcc-4.9 patched with a bunch of stuff from linaro and built as a
cross compiler from x86-64 to 32 bit arm? ISTR we had a bunch of
different compiler problems at one point that only showed up in
kernels built with an x86-64 to arm cross-compiler.  In case you
haven't guessed, XFS has a history of being bitten by ARM compiler
problems. There's been a lot more problems in the past couple of
years than the historical trend, though.

As it is, I highly recommend that you try a current 4.3 kernel, as
there are several code fixes in the XFS kernel code that work around
compiler issues we know about. AFAIA, the do_div() asm bug that
trips recent gcc optimisations isn't in the upstream kernel yet, but
that can be worked around by setting CONFIG_CC_OPTIMIZE_FOR_SIZE=y
in your build.

> >>generic/009    [  842.949643] run fstests generic/009 at 2015-09-18
> >>15:29:36
> >>  - output mismatch (see
> >>/home/angelo/xfstests/results//generic/009.out.bad)
> >>     --- tests/generic/009.out    2015-09-17 10:54:06.689071257 +0000
> >>     +++ /home/angelo/xfstests/results//generic/009.out.bad
> >>2015-09-18 15:29:41.412784177 +0000
> >>     @@ -1,79 +1,45 @@
> >>      QA output created by 009
> >>          1. into a hole
> >>     -0: [0..7]: hole
> >>     -1: [8..23]: unwritten
> >>     -2: [24..39]: hole
> >>     +0: [0..39]: hole
> >>      daa100df6e6711906b61c9ab5aa16032
> >>
> >>also some other tests are giving the same bad notices.
> >Can you attach the entire
> >/home/angelo/xfstests/results//generic/009.out.bad file? I'm not
> >sure which of the tests this output comes from, so I need to
> >confirm which specific operations are resulting in errors.
> Sure, i completed the whole generic + shared + xfs tests.
> In total i have 38 errors. And trying now one by one to understand
> the reason.
> I attached the 009 output.
> 
> >>-tests/generic/308
> >>------------------
....
> I have recent git version of xfstests, but generic/308 shows
> 
> #! /bin/bash
> # FS QA Test No. 308
> #
> # Regression test for commit:
> # f17722f ext4: Fix max file size and logical block counting of
> extent format file

More than one filesystem had problems with maximum file sizes on 32
bit systems. Compare the contents of the test; don't stop reading
because the summary of the test makes you think the rest of the test
is unrelated to the problem at hand.

> I have a 16MB partition, and wondering why xfs allows from test 308
> to create a 16TB file.
> 
> -rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308

https://en.wikipedia.org/wiki/Sparse_file
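That is, the file is sparse: its apparent size says nothing about the
space it actually consumes. A quick illustration (a sketch; the 1T size
and the file path are purely illustrative):

```shell
# A sparse file has a huge apparent size but typically no allocated blocks
dir=$(mktemp -d)
truncate -s 1T "$dir/sparsefile"   # generic/308 does the same at ~16 TiB
ls -l "$dir/sparsefile"            # apparent size: 1099511627776 bytes
du -k "$dir/sparsefile"            # allocated: 0 blocks
```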

> QA output created by 009
> 	1. into a hole
> 0: [0..39]: hole
> daa100df6e6711906b61c9ab5aa16032
> 	2. into allocated space
> cc58a7417c2d7763adc45b6fcd3fa024
> 	3. into unwritten space
> daa100df6e6711906b61c9ab5aa16032

I don't need to look any further to see that something is badly
wrong here. This is telling me that no extents are being allocated
at all, which indicates either fiemap is broken, awk/sed is
broken or misbehaving (and hence mangling the output) or something
deep in the filesystem code is fundamentally broken in some
strange, silent way.

Can you create an xfs filesystem on your scratch device, and
manually run this command and post the output:

# mkfs.xfs -V
# mkfs.xfs <dev>
# mount <dev> /mnt/xfs
# xfs_io -f -c "pwrite 0 64k" -c sync \
	    -c "bmap -vp" -c "fiemap -v" \
	    -c "falloc 1024k 256k" -c sync \
	    -c "pwrite 1088k 64k" -c sync \
	    -c "bmap -vp" -c "fiemap -v" \
	    /mnt/xfs/testfile

and attach the entire output?

It would also be good if you can run this command under trace-cmd
and capture all the XFS events that occur during the test. See

http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F

for details, and attach the (compressed) report file.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfstests, bad generic tests 009 and 308
  2015-09-21 22:52     ` Dave Chinner
@ 2015-09-22 12:41       ` Angelo Dureghello
  2015-09-22 21:27         ` Dave Chinner
  2015-09-23 10:43       ` Yann Dupont - Veille Techno
  1 sibling, 1 reply; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-22 12:41 UTC (permalink / raw)
  To: xfs

[-- Attachment #1: Type: text/plain, Size: 4187 bytes --]

Hi Dave,

thanks again for following up on this.

>> I am using actually gcc-linaro-4.9-2015.05-x86_64_arm-linux-gnueabihf
> So gcc-4.9 patched with a bunch of stuff from linaro and built as a
> cross compiler from x86-64 to 32 bit arm? ISTR we had a bunch of
> different compiler problems at one point that only showed up in
> kernels build with a x86-64 to arm cross-compiler.  In case you
> haven't guessed, XFS has a history of being bitten by ARM compiler
> problems. There's been a lot more problems in the past couple of
> years than the historical trend, though.
>
> As it is, I highly recommend that you try a current 4.3 kernel, as
> there are several code fixes in the XFS kernel code that work around
> compiler issues we know about. AFAIA, the do_div() asm bug that
> trips recent gcc optimisations isn't in the upstream kernel yet, but
> that can be worked around by setting CONFIG_CC_OPTIMIZE_FOR_SIZE=y
> in your build.

Well, I updated to this toolchain recently, but I also built the kernel
with an i686 4.9 toolchain and got exactly the same test errors.
Yes, I am always cross-compiling for armhf, by the way.
I am using a 4.1.5-rt kernel from TI; I will try a more recent version
and let you know.

>> I have recent git version of xfstests, but generic/308 shows
>>
>> #! /bin/bash
>> # FS QA Test No. 308
>> #
>> # Regression test for commit:
>> # f17722f ext4: Fix max file size and logical block counting of
>> extent format file
> More that one filesystem had problems with maximum file sizes on 32
> bit systems. Compare the contents of the test; don't stop reading
> because the summary of the test makes you think the rest of the test
> is unrelated to the problem at hand.
>
>> I have a 16MB partition, and wondering why xfs allows from test 308
>> to create a 16TB file.
>>
>> -rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308
> https://en.wikipedia.org/wiki/Sparse_file
>
>> QA output created by 009
>> 	1. into a hole
>> 0: [0..39]: hole
>> daa100df6e6711906b61c9ab5aa16032
>> 	2. into allocated space
>> cc58a7417c2d7763adc45b6fcd3fa024
>> 	3. into unwritten space
>> daa100df6e6711906b61c9ab5aa16032
> I don't need to look any further to see that something is badly
> wrong here. This is telling me that no extents are being allocated
> at all, which indicates either fiemap is broken, awk/sed is
> broken or misbehaving (and hence mangling the output) or something
> deep in the filesystem code is fundamentally broken in some
> strange, silent way.
>
> Can you create an xfs filesystem on your scratch device, and
> manually run this command and post the output:
>
> # mkfs.xfs -V
> # mkfs.xfs <dev>
> # mount <dev> /mnt/xfs
> # xfs_io -f -c "pwrite 0 64k" -c sync \
> 	    -c "bmap -vp" -c "fiemap -v" \
> 	    -c "falloc 1024k 256k" -c sync \
> 	    -c "pwrite 1088k 64k" -c sync \
> 	    -c "bmap -vp" -c "fiemap -v" \
> 	    /mnt/xfs/testfile
>
> and attach the entire output?

I attached the output. I can be completely wrong, but the filesystem
seems quite reliable for rootfs operations so far; at least, I have
never had any issue after installing and removing a great many Debian
packages. The only issues I have seen come from test 308 which, if left
running too long, damages the fs.

> It would also be good if you can run this command under trace-cmd
> and capture all the XFS events that occur during the test. See

OK, about test 308: the two xfs_io operations pass; it stops on the rm
at test exit, while trying to remove the 16TB file.

# Create a sparse file with an extent lays at one block before old s_maxbytes
offset=$(((2**32 - 2) * $block_size))
$XFS_IO_PROG -f -c "pwrite $offset $block_size" -c fsync $testfile >$seqres.full 2>&1

rm can correctly remove this file (size 17592186040320),

# Write to the block after the extent just created
offset=$(((2**32 - 1) * $block_size))
$XFS_IO_PROG -f -c "pwrite $offset $block_size" -c fsync $testfile >>$seqres.full 2>&1

while rm hangs removing this file (size 17592186044415).
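The two sizes work out as follows, assuming 4 KiB blocks and assuming
the second write is clamped at the 16TB - 1 limit, which matches the ls
output (a sketch of the arithmetic, not the test's actual code):

```shell
block_size=4096                              # assumes 4 KiB blocks
# First pwrite: one block ending just below the 2^32-page boundary
off1=$(( (2**32 - 2) * block_size ))
size1=$(( off1 + block_size ))               # 17592186040320 -- rm succeeds
# Second pwrite: the last addressable block; if XFS caps the file size
# at 16 TiB - 1 on 32-bit, the write ends up one byte short
off2=$(( (2**32 - 1) * block_size ))
max=$(( 2**44 - 1 ))
size2=$(( off2 + block_size ))
if [ "$size2" -gt "$max" ]; then size2=$max; fi
echo "$size1 $size2"                         # 17592186040320 17592186044415
```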

Magic sysrq l or t is not helping; nothing useful comes out. But I
collected the strace log; the issue is at unlinkat().



Many thanks
Best regards
Angelo


-- 
Best regards,
Angelo Dureghello


[-- Attachment #2: mkfs_output.txt --]
[-- Type: text/plain, Size: 2702 bytes --]

root[249] ~
# mkfs.xfs -V
mkfs.xfs version 3.2.1
root[250] ~
# mkfs.xfs /dev/mmcblk0p6
mkfs.xfs: /dev/mmcblk0p6 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
root[251] ~
# mkfs.xfs -f /dev/mmcblk0p6
meta-data=/dev/mmcblk0p6         isize=256    agcount=4, agsize=551008 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2204032, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root[252] ~
# mount /dev/mmcblk0p6 /media/p6                
[38946.599770] XFS (mmcblk0p6): Mounting V4 Filesystem
[38946.800708] XFS (mmcblk0p6): Ending clean mount

# xfs_io -f -c "pwrite 0 64k" -c sync \
                       -c "bmap -vp" -c "fiemap -v" \
                       -c "falloc 1024k 256k" -c sync \
                       -c "pwrite 1088k 64k" -c sync \
                       -c "bmap -vp" -c "fiemap -v" \
                       /media/p6/testfile
wrote 65536/65536 bytes at offset 0
64 KiB, 16 ops; 0.0000 sec (96.154 MiB/sec and 24615.3846 ops/sec)
command "sync" not found
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x1
command "sync" not found
wrote 65536/65536 bytes at offset 1114112
64 KiB, 16 ops; 0.0000 sec (112.208 MiB/sec and 28725.3142 ops/sec)
command "sync" not found
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
   1: [128..2047]:     hole                                  1920
   2: [2048..2559]:    2144..2655        0 (2144..2655)       512 10000
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x0
   1: [128..2047]:     hole              1920
   2: [2048..2175]:    2144..2271         128 0x800
   3: [2176..2303]:    2272..2399         128   0x0
   4: [2304..2559]:    2400..2655         256 0x801

Step by step xfs_io
-------------------

Since the "sync" command is missing, I also tried running command by
command, with a shell "sync" after each command. The output is exactly
the same.



[-- Attachment #3: strace_rm_308.txt --]
[-- Type: text/plain, Size: 3618 bytes --]

-rw-------  1 root root 17592186040320 Sep 22 10:55 testfile.308


rm ok

-rw-------  1 root root 17592186044415 Sep 22 10:58 testfile.308

total 496
drwxr-xr-x 11 root root           4096 Sep 22 10:58 .
drwxr-xr-x  4 root root            101 Sep 22 08:40 ..
-rw-------  1 root root         131072 Sep 21 14:50 008.24410
-rw-------  1 root root         131072 Sep 21 14:52 008.2913
-rw-------  1 root root           4096 Sep 21 14:50 009.25001
-rw-------  1 root root           4096 Sep 21 14:52 009.3505
-rw-------  1 root root          81920 Sep 21 14:50 012.26531
-rw-------  1 root root          49152 Sep 21 14:52 012.5039
drwxr-xr-x  3 root root           4096 Sep 21 15:16 14536
drwxr-xr-x  3 root root             38 Sep 21 15:25 16802
drwxr-xr-x  3 root root           4096 Sep 21 15:26 18438
drwxr-xr-x  3 root root             38 Sep 21 15:27 21042
drwxr-xr-x  3 root root             38 Sep 21 14:50 27593
drwxr-xr-x  3 root root             70 Sep 21 15:08 8274
drwxr-xr-x  3 root root             15 Sep 21 14:53 fsstress.5653.1
drwxr-xr-x 22 root root           4096 Sep 21 14:53 fsstress.5653.2
-rw-------  1 root root 17592186044415 Sep 22 11:44 testfile.308
drwxr-xr-x  2 root root             20 Sep 21 15:12 tmp
root[252] vpc24 /media/p5
# rm testfile.308 

# strace rm testfile.308 
execve("/bin/rm", ["rm", "testfile.308"], [/* 16 vars */]) = 0
brk(0)                                  = 0x2a000
uname({sys="Linux", node="vpc24", ...}) = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6f36000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=34089, ...}) = 0
mmap2(NULL, 34089, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb6f2d000
close(3)                                = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/arm-linux-gnueabihf/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\1\1\1\3\0\0\0\0\0\0\0\0\3\0(\0\1\0\0\0Mw\1\0004\0\0\0"..., 512) = 512
lseek(3, 899996, SEEK_SET)              = 899996
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2880) = 2880
lseek(3, 896548, SEEK_SET)              = 896548
read(3, "A4\0\0\0aeabi\0\1*\0\0\0\0057-A\0\6\n\7A\10\1\t\2\n\3\f"..., 53) = 53
fstat64(3, {st_mode=S_IFREG|0755, st_size=902876, ...}) = 0
mmap2(NULL, 972200, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb6e23000
mprotect(0xb6efc000, 61440, PROT_NONE)  = 0
mmap2(0xb6f0b000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd8000) = 0xb6f0b000
mmap2(0xb6f0e000, 9640, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb6f0e000
close(3)                                = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb6f2c000
set_tls(0xb6f2c850, 0xb6f39050, 0xb6f2cf38, 0xb6f2c850, 0xb6f39050) = 0
mprotect(0xb6f0b000, 8192, PROT_READ)   = 0
mprotect(0x28000, 4096, PROT_READ)      = 0
mprotect(0xb6f38000, 4096, PROT_READ)   = 0
munmap(0xb6f2d000, 34089)               = 0
brk(0)                                  = 0x2a000
brk(0x4b000)                            = 0x4b000
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B115200 opost isig icanon echo ...}) = 0
newfstatat(AT_FDCWD, "testfile.308", {st_mode=S_IFREG|0600, st_size=17592186044415, ...}, AT_SYMLINK_NOFOLLOW) = 0
geteuid32()                             = 0
unlinkat(AT_FDCWD, "testfile.308", 0

[-- Attachment #4: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: xfstests, bad generic tests 009 and 308
  2015-09-22 12:41       ` Angelo Dureghello
@ 2015-09-22 21:27         ` Dave Chinner
  2015-09-23  9:15           ` Angelo Dureghello
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2015-09-22 21:27 UTC (permalink / raw)
  To: Angelo Dureghello; +Cc: xfs

On Tue, Sep 22, 2015 at 02:41:06PM +0200, Angelo Dureghello wrote:
> >>I have a 16MB partition, and wondering why xfs allows from test 308
> >>to create a 16TB file.
> >>
> >>-rw------- 1 root root  17592186044415 Sep 18 09:40 testfile.308
> >https://en.wikipedia.org/wiki/Sparse_file
> >
> >>QA output created by 009
> >>	1. into a hole
> >>0: [0..39]: hole
> >>daa100df6e6711906b61c9ab5aa16032
> >>	2. into allocated space
> >>cc58a7417c2d7763adc45b6fcd3fa024
> >>	3. into unwritten space
> >>daa100df6e6711906b61c9ab5aa16032
> >I don't need to look any further to see that something is badly
> >wrong here. This is telling me that no extents are being allocated
> >at all, which indicates either fiemap is broken, awk/sed is
> >broken or misbehaving (and hence mangling the output) or something
> >deep in the filesystem code is fundamentally broken in some
> >strange, silent way.
> >
> >Can you create an xfs filesystem on your scratch device, and
> >manually run this command and post the output:
> >
> ># mkfs.xfs -V
> ># mkfs.xfs <dev>
> ># mount <dev> /mnt/xfs
> ># xfs_io -f -c "pwrite 0 64k" -c sync \
> >	    -c "bmap -vp" -c "fiemap -v" \
> >	    -c "falloc 1024k 256k" -c sync \
> >	    -c "pwrite 1088k 64k" -c sync \
> >	    -c "bmap -vp" -c "fiemap -v" \
> >	    /mnt/xfs/testfile
> >
> >and attach the entire output?
> 
> I attached the output.

Urk, the command should be "fsync", not "sync". Regardless, the
last bmap/fiemap pair shows something interesting:

bmap-vp:

> /media/p6/testfile:
>  EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
>    0: [0..127]:        96..223           0 (96..223)          128 00000
>    1: [128..2047]:     hole                                  1920
>    2: [2048..2559]:    2144..2655        0 (2144..2655)       512 10000

fiemap -v:

> /media/p6/testfile:
>  EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>    0: [0..127]:        96..223            128   0x0
>    1: [128..2047]:     hole              1920
>    2: [2048..2175]:    2144..2271         128 0x800
>    3: [2176..2303]:    2272..2399         128   0x0
>    4: [2304..2559]:    2400..2655         256 0x801

Note that they are different - the former shows an unwritten extent
of 256k @ offset 1MB, the latter shows that extent split by 64k of
data @ 1088k.
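
For reference, the FLAGS column in the fiemap output above is a bitmask;
a tiny helper (flag values copied from <linux/fiemap.h>, the helper name
itself is made up here) decodes the two bits that appear in it:

```shell
# Decode the fiemap FLAGS column. Bit values from <linux/fiemap.h>:
#   0x001 = FIEMAP_EXTENT_LAST (last extent in the file)
#   0x800 = FIEMAP_EXTENT_UNWRITTEN (preallocated, not yet written)
# So 0x800 is unwritten, 0x801 is unwritten and last, 0x0 is plain data.
decode_flags() {
    out=""
    [ $(( $1 & 0x800 )) -ne 0 ] && out="${out}unwritten,"
    [ $(( $1 & 0x001 )) -ne 0 ] && out="${out}last,"
    echo "${out%,}"
}

decode_flags 0x800   # unwritten
decode_flags 0x801   # unwritten,last
```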

The bmap -vp output is incorrect - it is supposed to sync data first
and so should look the same as the fiemap output. Can you run this
test again, this time with s/sync/fsync so the files are clean when
bmap/fiemap are run? Can you run it a second time (umount/mkfs
again) but with fiemap run first? i.e '-c "fiemap -v" -c "bmap -vp" \' 
instead of the original order?

Next, can you compile your kernel with CONFIG_XFS_DEBUG=y and rerun
the tests? Does anything interesting appear in dmesg during the test
run?

> I can be completely wrong, but file system
> seems quite reliable for rootfs operations until now. At least,
> never had any issue after installing and removing several and several
> debian packages.

right, normal distro operation doesn't use preallocation or hole
punching, so you won't have seen issues with that.

> Ok, about test 308, the 2 xfs_io operations passes, it stops on the
> rm exiting
> the tests, while trying to erase the 16t file.
> 
> # Create a sparse file with an extent lays at one block before old
> s_maxbytes
> offset=$(((2**32 - 2) * $block_size))
> $XFS_IO_PROG -f -c "pwrite $offset $block_size" -c fsync $testfile
> >$seqres.full 2>&1
> 
> rm can correctly remove this file (17592186040320)

What is the large number here?

offset:		(2**32 - 2) * 4096 = 17592186036224
end file size:	(2**32 - 2) * 4096 + 4096 = 17592186040320

So it is the end file size.
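
The arithmetic can be rechecked directly in the shell (block_size=4096,
as in the mkfs output elsewhere in the thread):

```shell
# Recompute the two numbers from the test; bash arithmetic is 64-bit
# even on 32-bit hosts, so 2**32 does not overflow here.
block_size=4096
offset=$(( (2**32 - 2) * block_size ))   # one block before old s_maxbytes
size=$(( offset + block_size ))          # end-of-file after the 1-block write
echo "$offset $size"                     # 17592186036224 17592186040320
```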

> # Write to the block after the extent just created
> offset=$(((2**32 - 1) * $block_size))
> $XFS_IO_PROG -f -c "pwrite $offset $block_size" -c fsync $testfile
> >>$seqres.full 2>&1

This should fail with -EFBIG

> while rm hangs on removing this file (17592186044415)

offset:         (2**32 - 1) * 4096 = 17592186040320
end file size:	(2**32 - 1) * 4096 + 4096 = 17592186044416

Hmmm - that file is truncated by one byte. We set sb->s_maxbytes to
17592186044415 in xfs_max_file_offset() on 32 bit systems, so this
truncation is expected.
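
That limit lines up with a 32-bit page-cache index and 4k pages - a
back-of-envelope check, not the exact kernel expression from
xfs_max_file_offset():

```shell
# 2^32 page indexes * 4096 bytes per page, minus one to get the largest
# addressable byte offset; matches the 17592186044415 quoted above.
maxbytes=$(( 2**32 * 4096 - 1 ))
echo "$maxbytes"   # 17592186044415
```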

Most definitely need to run this under CONFIG_XFS_DEBUG=y - you
should enable this whenever running xfstests to check things are
working correctly as it enables all sorts of internal consistency
and constraint checking (i.e. checking things that "should never,
ever happen" haven't actually occurred).

> Magic sysrq l or t is not helping, nothing useful comes out.
> But i collected the strace log. Since the issue is at unlinkat().

Not actually useful - I need to know what is happening inside the
unlinkat() call.  I'm going to need a trace-cmd event dump of that
xfs_io command and the subsequent rm (at least for the first couple
of seconds of the rm). Please put the output file from the trace-cmd
record command on a tmpfs filesystem so it doesn't pollute the xfs
event trace ;)

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfstests, bad generic tests 009 and 308
  2015-09-22 21:27         ` Dave Chinner
@ 2015-09-23  9:15           ` Angelo Dureghello
  2015-09-23 22:25             ` Dave Chinner
  0 siblings, 1 reply; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-23  9:15 UTC (permalink / raw)
  To: xfs

[-- Attachment #1: Type: text/plain, Size: 2888 bytes --]

Hi Dave,

many thanks.

On 22/09/2015 23:27, Dave Chinner wrote:
> Urk, the command should be "fsync", not "sync". Regardless, the
> last bmap/fiemap pair shows something interesting:
>
> bmap-vp:
>
>> /media/p6/testfile:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
>>     0: [0..127]:        96..223           0 (96..223)          128 00000
>>     1: [128..2047]:     hole                                  1920
>>     2: [2048..2559]:    2144..2655        0 (2144..2655)       512 10000
> fiemap -v:
>
>> /media/p6/testfile:
>>   EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
>>     0: [0..127]:        96..223            128   0x0
>>     1: [128..2047]:     hole              1920
>>     2: [2048..2175]:    2144..2271         128 0x800
>>     3: [2176..2303]:    2272..2399         128   0x0
>>     4: [2304..2559]:    2400..2655         256 0x801
> Note that they are different - the former shows an unwritten extent
> of 256k @ offset 1MB, the latter shows that extent split by 64k of
> data @ 1088k.
>
> The bmap -vp output is incorrect - it is supposed to sync data first
> and so should look the same as the fiemap output. Can you run this
> test again, this time with s/sync/fsync so the files are clean when
> bmap/fiemap are run? Can you run it a second time (umount/mkfs
> again) but with fiemap run first? i.e '-c "fiemap -v" -c "bmap -vp" \'
> instead of the original order?
>
> Next, can you compile your kernel with CONFIG_XFS_DEBUG=y and rerun
> the tests? Does anything interesting appear in dmesg during the test
> run?

Done, see mkfs_output_2.txt attached

> Not actually useful - I need to know what is happening inside the
> unlinkat() call.  I'm going to need a trace-cmd event dump of that
> xfs_io command and the subsequent rm (at least for the first couple
> of seconds of the rm). Please put the output file from the trace-cmd
> record command on a tmpfs filesystem so it doesn't pollute the xfs
> event trace ;)
>
I set some traces inside fs/namei.c  do_unlinkat()

root[243] vpc24 (master) /home/angelo/xfstests
# ./start_xfs_test.sh
QA output created by 308
[  144.822616] XFS (mmcblk0p5): Mounting V4 Filesystem
[  145.074537] XFS (mmcblk0p5): Starting recovery (logdev: internal)
[  145.107298] XFS (mmcblk0p5): Ending recovery (logdev: internal)
Silence is golden
[  145.413606] do_unlinkat(): entering
[  145.417124] do_unlinkat(): retry
[  145.421156] do_unlinkat(): retry_delegate
[  145.425920] do_unlinkat(): vfs_unlink returns 0
[  145.430950] do_unlinkat(): exit2

At least that function "seems" to complete, but, as in my previous
message, it looks like strace was not showing anything beyond it.

I captured about 10 seconds of events after the "hang" on 308. Hope
they are enough.

File is quite long, so you can read it from here:
http://sysam.it/~angelo/events.txt

bye,

-- 
Best regards,
Angelo Dureghello


[-- Attachment #2: mkfs_output_2.txt --]
[-- Type: text/plain, Size: 4048 bytes --]


# xfs_io -f -c "pwrite 0 64k" -c fsync \
>     -c "bmap -vp" -c "fiemap -v" \
>     -c "falloc 1024k 256k" -c fsync \
>     -c "pwrite 1088k 64k" -c fsync \
>     -c "bmap -vp" -c "fiemap -v" \
>     /media/p6/testfile
wrote 65536/65536 bytes at offset 0
64 KiB, 16 ops; 0.0000 sec (10.271 MiB/sec and 2629.4166 ops/sec)
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x1
wrote 65536/65536 bytes at offset 1114112
64 KiB, 16 ops; 0.0000 sec (11.857 MiB/sec and 3035.4771 ops/sec)
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
   1: [128..2047]:     hole                                  1920
   2: [2048..2175]:    2144..2271        0 (2144..2271)       128 10000
   3: [2176..2303]:    2272..2399        0 (2272..2399)       128 00000
   4: [2304..2559]:    2400..2655        0 (2400..2655)       256 10000
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x0
   1: [128..2047]:     hole              1920
   2: [2048..2175]:    2144..2271         128 0x800
   3: [2176..2303]:    2272..2399         128   0x0
   4: [2304..2559]:    2400..2655         256 0x801


root[251] host ~
# mkfs.xfs -f /dev/mmcblk0p6 
meta-data=/dev/mmcblk0p6         isize=256    agcount=4, agsize=551008 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2204032, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root[252] host ~
# mount /dev/mmcblk0p6 /media/p6
[ 1000.531021] XFS (mmcblk0p6): Mounting V4 Filesystem
[ 1000.813339] XFS (mmcblk0p6): Ending clean mount
root[253] host ~
# xfs_io -f -c "pwrite 0 64k" -c fsync \
>      -c "bmap -vp" -c "fiemap -v" \
>      -c "falloc 1024k 256k" -c fsync \
>      -c "pwrite 1088k 64k" -c fsync \
>      -c "fiemap -v" -c "bmap -vp" \
>      /media/p6/testfile
wrote 65536/65536 bytes at offset 0
64 KiB, 16 ops; 0.0000 sec (10.126 MiB/sec and 2592.3526 ops/sec)
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x1
wrote 65536/65536 bytes at offset 1114112
64 KiB, 16 ops; 0.0000 sec (11.519 MiB/sec and 2948.7652 ops/sec)
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
   0: [0..127]:        96..223            128   0x0
   1: [128..2047]:     hole              1920
   2: [2048..2175]:    2144..2271         128 0x800
   3: [2176..2303]:    2272..2399         128   0x0
   4: [2304..2559]:    2400..2655         256 0x801
/media/p6/testfile:
 EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
   0: [0..127]:        96..223           0 (96..223)          128 00000
   1: [128..2047]:     hole                                  1920
   2: [2048..2175]:    2144..2271        0 (2144..2271)       128 10000
   3: [2176..2303]:    2272..2399        0 (2272..2399)       128 00000
   4: [2304..2559]:    2400..2655        0 (2400..2655)       256 10000
root[254] host ~


[-- Attachment #3: Type: text/plain, Size: 121 bytes --]


* Re: xfstests, bad generic tests 009 and 308
  2015-09-21 22:52     ` Dave Chinner
  2015-09-22 12:41       ` Angelo Dureghello
@ 2015-09-23 10:43       ` Yann Dupont - Veille Techno
  2015-09-23 22:04         ` Dave Chinner
  1 sibling, 1 reply; 14+ messages in thread
From: Yann Dupont - Veille Techno @ 2015-09-23 10:43 UTC (permalink / raw)
  To: xfs

Le 22/09/2015 00:52, Dave Chinner a écrit :
> As it is, I highly recommend that you try a current 4.3 kernel, as 
> there are several code fixes in the XFS kernel code that work around 
> compiler issues we know about. AFAIA, the do_div() asm bug that trips 
> recent gcc optimisations isn't in the upstream kernel yet, but that 
> can be worked around by setting CONFIG_CC_OPTIMIZE_FOR_SIZE=y in your 
> build. 

Hi dave,

I can confirm that CONFIG_CC_OPTIMIZE_FOR_SIZE=y is (was?) the only way
for me to get reliable XFS kernel code on different arm platforms
(Marvell Kirkwood, Allwinner A20, Amlogic S805), no matter what recent
gcc version I've been using.

I must admit I was cross-compiling from x86-64 too, but I think (not
sure) that it was also the case with native gcc.

I must also admit that I haven't tried for some months, because
CONFIG_CC_OPTIMIZE_FOR_SIZE=y was the silver bullet for the arm xfs
kernel crashes. The crash was difficult to understand because it occurs
quite randomly (i.e. it can take several hours to trigger).

If there's a patch floating around for gcc (or the kernel), I'm
interested in testing it.

Cheers,



* Re: xfstests, bad generic tests 009 and 308
  2015-09-23 10:43       ` Yann Dupont - Veille Techno
@ 2015-09-23 22:04         ` Dave Chinner
  2015-09-24  8:20           ` Yann Dupont - Veille Techno
  0 siblings, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2015-09-23 22:04 UTC (permalink / raw)
  To: Yann Dupont - Veille Techno; +Cc: xfs

On Wed, Sep 23, 2015 at 12:43:21PM +0200, Yann Dupont - Veille Techno wrote:
> Le 22/09/2015 00:52, Dave Chinner a écrit :
> >As it is, I highly recommend that you try a current 4.3 kernel, as
> >there are several code fixes in the XFS kernel code that work
> >around compiler issues we know about. AFAIA, the do_div() asm bug
> >that trips recent gcc optimisations isn't in the upstream kernel
> >yet, but that can be worked around by setting
> >CONFIG_CC_OPTIMIZE_FOR_SIZE=y in your build.
> 
> Hi dave,
> 
> I can confirm that CONFIG_CC_OPTIMIZE_FOR_SIZE=y is (was ?) the only
> way for me to have reliable XFS kernel code on different arm
> platforms (Marvell kirkwood, Allwinner A20, Amlogic S805), no matter
> what recent gcc version I've been using.
> 
> I must admit I was cross-compiling from X86-64 too, but I think (not
> sure) that it was also the case with native gcc.
> 
> I must also admit that I didn't tried since some months, because
> CONFIG_CC_OPTIMIZE_FOR_SIZE=y was the silver bullet for arm xfs
> kernel crashes. This crash was difficult to understand because it
> occurs quite randomly (I.e it can take several hours to trigger)
> 
> If there's a patch floating around for gcc (or kernel), I'm
> interested to test.

See this subthread from august:

http://oss.sgi.com/archives/xfs/2015-08/msg00234.html

AFAICT, the do_div patch to fix the problem has not yet been picked
up - it's not in the 4.3-rc2 kernel...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfstests, bad generic tests 009 and 308
  2015-09-23  9:15           ` Angelo Dureghello
@ 2015-09-23 22:25             ` Dave Chinner
  0 siblings, 0 replies; 14+ messages in thread
From: Dave Chinner @ 2015-09-23 22:25 UTC (permalink / raw)
  To: Angelo Dureghello; +Cc: xfs

On Wed, Sep 23, 2015 at 11:15:28AM +0200, Angelo Dureghello wrote:
> Hi Dave,
> 
> many thanks.
> 
> On 22/09/2015 23:27, Dave Chinner wrote:
> >Urk, the command should be "fsync", not "sync". Regardless, the
> >last bmap/fiemap pair shows something interesting:
> >
> >bmap-vp:
> >
> >>/media/p6/testfile:
> >>  EXT: FILE-OFFSET      BLOCK-RANGE      AG AG-OFFSET        TOTAL FLAGS
> >>    0: [0..127]:        96..223           0 (96..223)          128 00000
> >>    1: [128..2047]:     hole                                  1920
> >>    2: [2048..2559]:    2144..2655        0 (2144..2655)       512 10000
> >fiemap -v:
....
> >and so should look the same as the fiemap output. Can you run this
> >test again, this time with s/sync/fsync so the files are clean when

Ok, so a preceding fsync results in bmap displaying all data ranges
being written. Hmmm - I'll need to look into that, it's likely not
a problem but just a longstanding bmap wart that fiemap doesn't have...

> >Next, can you compile your kernel with CONFIG_XFS_DEBUG=y and rerun
> >the tests? Does anything interesting appear in dmesg during the test
> >run?

Nothing in dmesg?

> >Not actually useful - I need to know what is happening inside the
> >unlinkat() call.  I'm going to need a trace-cmd event dump of that
> >xfs_io command and the subsequent rm (at least for the first couple
> >of seconds of the rm). Please put the output file from the trace-cmd
> >record command on a tmpfs filesystem so it doesn't pollute the xfs
> >event trace ;)
> >
> I set some traces inside fs/namei.c  do_unlinkat()
> 
> root[243] vpc24 (master) /home/angelo/xfstests
> # ./start_xfs_test.sh
> QA output created by 308
> [  144.822616] XFS (mmcblk0p5): Mounting V4 Filesystem
> [  145.074537] XFS (mmcblk0p5): Starting recovery (logdev: internal)
> [  145.107298] XFS (mmcblk0p5): Ending recovery (logdev: internal)
> Silence is golden
> [  145.413606] do_unlinkat(): entering
> [  145.417124] do_unlinkat(): retry
> [  145.421156] do_unlinkat(): retry_delegate
> [  145.425920] do_unlinkat(): vfs_unlink returns 0
> [  145.430950] do_unlinkat(): exit2

I think you'll find it's the deferred __fput() run from
task_work_run() that does all the work of freeing the extents in
the file. task_work_run() is executed before the process returns
to userspace....

> At least that function "seems" to complete, but, as in my previous
> message, it looks like strace was not showing anything beyond it.
> 
> I captured about 10 seconds of events after the "hang" on 308. Hope
> they are
> enough.

I need to see the events that lead up to the hang, so you need to
start tracing before you run the test script, then stop tracing once
the hang has occurred. If the trace doesn't have events from the
processes the test runs, then you haven't captured the right
events...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: xfstests, bad generic tests 009 and 308
  2015-09-23 22:04         ` Dave Chinner
@ 2015-09-24  8:20           ` Yann Dupont - Veille Techno
  2015-09-27  0:40             ` Angelo Dureghello
  0 siblings, 1 reply; 14+ messages in thread
From: Yann Dupont - Veille Techno @ 2015-09-24  8:20 UTC (permalink / raw)
  To: Dave Chinner; +Cc: xfs

Le 24/09/2015 00:04, Dave Chinner a écrit :
> On Wed, Sep 23, 2015 at 12:43:21PM +0200, Yann Dupont - Veille Techno wrote:
>> Le 22/09/2015 00:52, Dave Chinner a écrit :
>>> As it is, I highly recommend that you try a current 4.3 kernel, as
>>> there are several code fixes in the XFS kernel code that work
>>> around compiler issues we know about. AFAIA, the do_div() asm bug
>>> that trips recent gcc optimisations isn't in the upstream kernel
>>> yet, but that can be worked around by setting
>>> CONFIG_CC_OPTIMIZE_FOR_SIZE=y in your build.
>> Hi dave,
>>
>> I can confirm that CONFIG_CC_OPTIMIZE_FOR_SIZE=y is (was ?) the only
>> way for me to have reliable XFS kernel code on different arm
>> platforms (Marvell kirkwood, Allwinner A20, Amlogic S805), no matter
>> what recent gcc version I've been using.
>>
>> I must admit I was cross-compiling from X86-64 too, but I think (not
>> sure) that it was also the case with native gcc.
>>
>> I must also admit that I didn't tried since some months, because
>> CONFIG_CC_OPTIMIZE_FOR_SIZE=y was the silver bullet for arm xfs
>> kernel crashes. This crash was difficult to understand because it
>> occurs quite randomly (I.e it can take several hours to trigger)
>>
>> If there's a patch floating around for gcc (or kernel), I'm
>> interested to test.
> See this subthread from august:
>
> http://oss.sgi.com/archives/xfs/2015-08/msg00234.html

Oh, missed this thread.

Thanks a lot for the pointer, will try this patch !
Cheers,


* Re: xfstests, bad generic tests 009 and 308
  2015-09-24  8:20           ` Yann Dupont - Veille Techno
@ 2015-09-27  0:40             ` Angelo Dureghello
  0 siblings, 0 replies; 14+ messages in thread
From: Angelo Dureghello @ 2015-09-27  0:40 UTC (permalink / raw)
  To: xfs

[-- Attachment #1: Type: text/plain, Size: 2414 bytes --]

Hi Dave and all,

The 99% CPU loop on tests/generic/308 (on "rm") also happens on
i686 (32-bit), kernel 4.2.0 (gcc 4.9.1).

So we can exclude a cross-compilation issue, or an ARM-specific issue.
It should just be a 32-bit-arch-wide issue.

With some effort I found out the reason; in my opinion it doesn't have
to be fixed in xfs. I proposed this patch:

http://marc.info/?l=linux-kernel&m=144330858305518&w=2

Let's see if the list replies.
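
To illustrate why this looks 32-bit specific (this is my interpretation
of the thread, not the content of the patch itself; the numbers are from
the test transcripts above):

```shell
# The file test 308 leaves behind is truncated to 17592186044415 bytes,
# so its last byte sits in page index 2^32 - 1. On a 32-bit arch, an
# unsigned long page index incremented past that wraps to 0 - a plausible
# way for a page-cache walk during unlink to loop forever.
last_byte=$(( 17592186044415 - 1 ))
last_index=$(( last_byte / 4096 ))
echo "$last_index"                      # 4294967295 = 2^32 - 1
echo $(( (last_index + 1) % 2**32 ))    # a 32-bit index would wrap to 0
```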


I couldn't yet make progress on the other "all holes" errors; I will
look into that. As far as I know, tests such as 009 seem to give the
same errors to non-ARM users too.

Will investigate further.

Best regards,
Angelo Dureghello




On 24/09/2015 10:20, Yann Dupont - Veille Techno wrote:
> Le 24/09/2015 00:04, Dave Chinner a écrit :
>> On Wed, Sep 23, 2015 at 12:43:21PM +0200, Yann Dupont - Veille Techno 
>> wrote:
>>> Le 22/09/2015 00:52, Dave Chinner a écrit :
>>>> As it is, I highly recommend that you try a current 4.3 kernel, as
>>>> there are several code fixes in the XFS kernel code that work
>>>> around compiler issues we know about. AFAIA, the do_div() asm bug
>>>> that trips recent gcc optimisations isn't in the upstream kernel
>>>> yet, but that can be worked around by setting
>>>> CONFIG_CC_OPTIMIZE_FOR_SIZE=y in your build.
>>> Hi dave,
>>>
>>> I can confirm that CONFIG_CC_OPTIMIZE_FOR_SIZE=y is (was ?) the only
>>> way for me to have reliable XFS kernel code on different arm
>>> platforms (Marvell kirkwood, Allwinner A20, Amlogic S805), no matter
>>> what recent gcc version I've been using.
>>>
>>> I must admit I was cross-compiling from X86-64 too, but I think (not
>>> sure) that it was also the case with native gcc.
>>>
>>> I must also admit that I didn't tried since some months, because
>>> CONFIG_CC_OPTIMIZE_FOR_SIZE=y was the silver bullet for arm xfs
>>> kernel crashes. This crash was difficult to understand because it
>>> occurs quite randomly (I.e it can take several hours to trigger)
>>>
>>> If there's a patch floating around for gcc (or kernel), I'm
>>> interested to test.
>> See this subthread from august:
>>
>> http://oss.sgi.com/archives/xfs/2015-08/msg00234.html
>
> Oh, missed this thread.
>
> Thanks a lot for the pointer, will try this patch !
> Cheers,
>

-- 
Best regards,
Angelo Dureghello


[-- Attachment #2: ftrace_rm_308.txt --]
[-- Type: text/plain, Size: 15971 bytes --]

# cat /sys/kernel/debug/tracing/trace | head -1000
# tracer: function_graph
#
# CPU  DURATION                  FUNCTION CALLS
# |     |   |                     |   |   |   |
 1)   0.814 us    |  xfs_file_open();
 0)               |  xfs_xattr_get() {
 0)   0.650 us    |    xfs_attr_get();
 0)   5.856 us    |  }
 0)               |  xfs_file_read_iter() {
 0)   1.301 us    |    xfs_ilock();
 0)   1.139 us    |    xfs_iunlock();
 0) + 19.032 us   |  }
 0)               |  xfs_file_read_iter() {
 0)   0.814 us    |    xfs_ilock();
 0)   0.813 us    |    xfs_iunlock();
 0) + 15.942 us   |  }
 0)               |  xfs_file_read_iter() {
 0)   0.488 us    |    xfs_ilock();
 0)   0.813 us    |    xfs_iunlock();
 0) + 12.363 us   |  }
 0)               |  xfs_vn_follow_link() {
 0)               |    xfs_readlink() {
 0)   0.814 us    |      xfs_ilock();
 0)   0.651 us    |      xfs_iunlock();
 0)   9.109 us    |    }
 0) + 14.315 us   |  }
 0)   0.488 us    |  xfs_file_open();
 0)               |  xfs_file_read_iter() {
 0)   0.813 us    |    xfs_ilock();
 0)   0.814 us    |    xfs_iunlock();
 0) + 13.501 us   |  }
 0)               |  xfs_file_read_iter() {
 0)   0.651 us    |    xfs_ilock();
 0)   0.651 us    |    xfs_iunlock();
 0) + 12.200 us   |  }
 0)   0.976 us    |  xfs_file_mmap();
 0)   0.651 us    |  xfs_file_mmap();
 0)               |  xfs_filemap_fault() {
 0)   0.813 us    |    xfs_ilock();
 0)   0.813 us    |    xfs_iunlock();
 0) + 11.550 us   |  }
 0)   0.813 us    |  xfs_file_mmap();
 0)   0.813 us    |  xfs_file_mmap();
 0)               |  xfs_filemap_fault() {
 0)   0.813 us    |    xfs_ilock();
 0)   0.651 us    |    xfs_iunlock();
 0) + 10.410 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.650 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.976 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.488 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.326 us    |                xfs_bmbt_get_startoff();
 0)   0.650 us    |                xfs_bmbt_get_blockcount();
 0) + 76.291 us   |              }
 0)   0.488 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.488 us    |                __xfs_bmbt_get_all();
 0)   4.067 us    |              }
 0) + 92.070 us   |            }
 0) + 95.648 us   |          }
 0) ! 104.270 us  |        }
 0)   0.650 us    |        xfs_iunlock();
 0) ! 117.608 us  |      }
 0) ! 128.832 us  |    }
 0) ! 132.899 us  |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.651 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.976 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.325 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.325 us    |                xfs_bmbt_get_startoff();
 0)   0.326 us    |                xfs_bmbt_get_blockcount();
 0)   7.808 us    |              }
 0)   0.488 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.488 us    |                __xfs_bmbt_get_all();
 0)   4.067 us    |              }
 0) + 22.286 us   |            }
 0) + 26.027 us   |          }
 0) + 33.347 us   |        }
 0)   0.651 us    |        xfs_iunlock();
 0) + 49.613 us   |      }
 0) + 60.186 us   |    }
 0) + 64.090 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.651 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.650 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.326 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.325 us    |                xfs_bmbt_get_startoff();
 0)   0.326 us    |                xfs_bmbt_get_blockcount();
 0)   7.645 us    |              }
 0)   0.325 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.325 us    |                __xfs_bmbt_get_all();
 0)   4.067 us    |              }
 0) + 21.960 us   |            }
 0) + 25.376 us   |          }
 0) + 32.696 us   |        }
 0)   0.650 us    |        xfs_iunlock();
 0) + 44.245 us   |      }
 0) + 54.005 us   |    }
 0) + 57.747 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.488 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.813 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.325 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.326 us    |                xfs_bmbt_get_startoff();
 0)   0.325 us    |                xfs_bmbt_get_blockcount();
 0)   7.645 us    |              }
 0)   0.488 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.326 us    |                __xfs_bmbt_get_all();
 0)   4.229 us    |              }
 0) + 22.285 us   |            }
 0) + 25.701 us   |          }
 0) + 32.696 us   |        }
 0)   0.651 us    |        xfs_iunlock();
 0) + 44.733 us   |      }
 0) + 54.330 us   |    }
 0) + 57.909 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.651 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.650 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.162 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.325 us    |                xfs_bmbt_get_startoff();
 0)   0.325 us    |                xfs_bmbt_get_blockcount();
 0)   9.110 us    |              }
 0)   0.326 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.325 us    |                __xfs_bmbt_get_all();
 0)   4.067 us    |              }
 0) + 23.750 us   |            }
 0) + 27.328 us   |          }
 0) + 34.648 us   |        }
 0)   0.651 us    |        xfs_iunlock();
 0) + 46.522 us   |      }
 0) + 56.283 us   |    }
 0) + 59.861 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.488 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.814 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.325 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.326 us    |                xfs_bmbt_get_startoff();
 0)   0.488 us    |                xfs_bmbt_get_blockcount();
 0)   7.808 us    |              }
 0)   0.325 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.488 us    |                __xfs_bmbt_get_all();
 0)   3.904 us    |              }
 0) + 21.960 us   |            }
 0) + 25.376 us   |          }
 0) + 32.696 us   |        }
 0)   0.651 us    |        xfs_iunlock();
 0) + 44.896 us   |      }
 0) + 54.656 us   |    }
 0) + 58.072 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.488 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.651 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.325 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.163 us    |                xfs_bmbt_get_startoff();
 0)   0.325 us    |                xfs_bmbt_get_blockcount();
 0)   7.808 us    |              }
 0)   0.325 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.325 us    |                __xfs_bmbt_get_all();
 0)   3.742 us    |              }
 0) + 21.960 us   |            }
 0) + 25.539 us   |          }
 0) + 32.696 us   |        }
 0)   0.651 us    |        xfs_iunlock();
 0) + 44.733 us   |      }
 0) + 54.493 us   |    }
 0) + 57.909 us   |  }
 0)   0.814 us    |  xfs_file_open();
 0)   0.976 us    |  xfs_vn_getattr();
 0)   1.139 us    |  xfs_file_mmap();
 0)               |  xfs_vn_follow_link() {
 0)               |    xfs_readlink() {
 0)   0.651 us    |      xfs_ilock();
 0)   0.650 us    |      xfs_iunlock();
 0)   9.435 us    |    }
 0) + 14.477 us   |  }
 0)   0.651 us    |  xfs_file_open();
 0)               |  xfs_file_read_iter() {
 0)   0.814 us    |    xfs_ilock();
 0)   0.813 us    |    xfs_iunlock();
 0) + 17.731 us   |  }
 0)   1.138 us    |  xfs_file_llseek();
 0)               |  xfs_file_read_iter() {
 0)   0.650 us    |    xfs_ilock();
 0)   0.814 us    |    xfs_iunlock();
 0) + 18.707 us   |  }
 0)   0.488 us    |  xfs_file_llseek();
 0)               |  xfs_file_read_iter() {
 0)   0.651 us    |    xfs_ilock();
 0)   0.650 us    |    xfs_iunlock();
 0) + 12.200 us   |  }
 0)   0.651 us    |  xfs_vn_getattr();
 0)   0.651 us    |  xfs_file_mmap();
 0)   0.976 us    |  xfs_file_mmap();
 0)               |  xfs_filemap_fault() {
 0)   0.813 us    |    xfs_ilock();
 0)   0.650 us    |    xfs_iunlock();
 0) + 10.736 us   |  }
 0)               |  xfs_file_release() {
 0)               |    xfs_release() {
 0)   0.488 us    |      xfs_can_free_eofblocks();
 0)               |      xfs_free_eofblocks() {
 0)   0.813 us    |        xfs_ilock();
 0)               |        xfs_bmapi_read() {
 0)   0.326 us    |          xfs_isilocked();
 0)               |          xfs_bmap_search_extents() {
 0)               |            xfs_bmap_search_multi_extents() {
 0)               |              xfs_iext_bno_to_ext() {
 0)   0.326 us    |                xfs_bmbt_get_startoff();
 0)   0.325 us    |                xfs_bmbt_get_blockcount();
 0)   8.133 us    |              }
 0)   0.325 us    |              xfs_iext_get_ext();
 0)               |              xfs_bmbt_get_all() {
 0)   0.326 us    |                __xfs_bmbt_get_all();
 0)   3.904 us    |              }
 0) + 22.285 us   |            }
 0) + 25.864 us   |          }
 0) + 33.672 us   |        }
 0)   0.650 us    |        xfs_iunlock();
 0) + 46.523 us   |      }
 0) + 57.421 us   |    }
 0) + 61.488 us   |  }
 0)   1.138 us    |  xfs_vn_getattr();
 0)               |  xfs_vn_unlink() {
 0)               |    xfs_remove() {
 0)               |      xfs_trans_alloc() {
 0)   1.464 us    |        _xfs_trans_alloc();
 0)   6.669 us    |      }
 0)               |      xfs_trans_reserve() {
 0)   1.464 us    |        xfs_mod_fdblocks();
 0)               |        xfs_log_reserve() {
 0)   0.650 us    |          xfs_log_calc_unit_res();
 0) + 13.501 us   |        }
 0) + 23.262 us   |      }
 0)   0.976 us    |      xfs_ilock();
 0)               |      xfs_lock_two_inodes() {
 0)   0.651 us    |        xfs_ilock();
 0)   0.976 us    |        xfs_ilock_nowait();
 0)   9.110 us    |      }
 0)   0.326 us    |      xfs_isilocked();
 0)   1.464 us    |      xfs_trans_add_item();
 0)   0.488 us    |      xfs_isilocked();
 0)   1.301 us    |      xfs_trans_add_item();
 0)   0.488 us    |      xfs_isilocked();
 0)   0.163 us    |      xfs_isilocked();
 0)               |      xfs_droplink() {
 0)   0.325 us    |        xfs_isilocked();
 0)   0.326 us    |        xfs_isilocked();
 0)               |        xfs_iunlink() {
 0)               |          xfs_read_agi() {
 0)               |            xfs_buf_read_map() {
 0)               |              xfs_buf_get_map() {
 0)               |                _xfs_buf_find() {
 0)   1.301 us    |                  xfs_perag_get();
 0)   0.651 us    |                  xfs_perag_put();
 0)   1.302 us    |                  xfs_buf_trylock();
 0) + 16.754 us   |                }
 0) + 20.984 us   |              }
 0) + 25.051 us   |            }
 0)   1.301 us    |            xfs_trans_add_item();
 0) + 34.974 us   |          }
 0) + 40.992 us   |        }
 0) + 54.656 us   |      }
 0)               |      xfs_dir_removename() {
 0)               |        xfs_default_hashname() {
 0)   0.325 us    |          xfs_da_hashname();
 0)   4.392 us    |        }
 0)               |        xfs_dir2_sf_removename() {
 0)   0.651 us    |          xfs_da_compname();
 0)               |          xfs_dir2_sfe_get_ino() {
 0)   0.326 us    |            xfs_dir2_sf_get_ino.isra.8();
 0)   4.066 us    |          }
 0)   0.488 us    |          xfs_dir2_sf_entsize();
 0)   0.651 us    |          xfs_idata_realloc();
 0)               |          xfs_dir2_sf_check.isra.6() {
 0)               |            xfs_dir2_sf_get_parent_ino() {
 0)   0.325 us    |              xfs_dir2_sf_get_ino.isra.8();
 0)   3.904 us    |            }
 0)   7.971 us    |          }
 0)   0.326 us    |          xfs_isilocked();
 0) + 36.600 us   |        }
 0) + 53.030 us   |      }
 0)   0.488 us    |      xfs_bmap_finish();
 0)               |      xfs_trans_commit() {
 0)   0.326 us    |        xfs_isilocked();
 0)               |        xfs_iextents_copy() {
 0)   0.488 us    |          xfs_isilocked();
 0)   0.488 us    |          xfs_bmap_trace_exlist();
 0)   0.325 us    |          xfs_iext_get_ext();
 0)   0.488 us    |          xfs_bmbt_get_startblock();
 0)               |          xfs_validate_extents() {
 0)   0.326 us    |            xfs_iext_get_ext();
 0)               |            xfs_bmbt_get_all() {
 0)   0.325 us    |              __xfs_bmbt_get_all();
 0)   3.904 us    |            }
 0) + 11.224 us   |          }
 0) + 30.744 us   |        }
 0)   0.163 us    |        xfs_isilocked();
 0)   0.488 us    |        xfs_next_bit();
 0)   0.326 us    |        xfs_next_bit();
 0)   0.325 us    |        xfs_next_bit();
 0)   0.325 us    |        xfs_next_bit();
 0)   0.488 us    |        xfs_buf_offset();
 0)               |        xfs_log_done() {
 0)   0.488 us    |          xfs_log_space_wake();
 0)   1.302 us    |          xfs_log_ticket_put();
 0) + 12.525 us   |        }
 0)               |        xfs_trans_unreserve_and_mod_sb() {
 0)   1.138 us    |          xfs_mod_fdblocks();
 0)   5.043 us    |        }
 0)               |        xfs_trans_free_items() {
 0)   0.488 us    |          xfs_isilocked();
 0)   1.302 us    |          xfs_iunlock();
 0)   1.302 us    |          xfs_trans_free_item_desc();
 0)   0.488 us    |          xfs_isilocked();
 0)   0.651 us    |          xfs_iunlock();
 0)   1.301 us    |          xfs_trans_free_item_desc();
 0)   1.301 us    |          xfs_buf_unlock();
 0)   0.814 us    |          xfs_buf_rele();
 0)   0.976 us    |          xfs_trans_free_item_desc();
 0) ! 135.176 us  |        }
 0)               |        xfs_trans_free() {
 0)   0.325 us    |          xfs_extent_busy_clear();
 0)   5.694 us    |        }
 0) ! 247.090 us  |      }
 0) ! 452.702 us  |    }
 0) ! 457.256 us  |  }
 0)   0.488 us    |  xfs_fs_drop_inode();
 0)               |  xfs_fs_evict_inode() {


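[Editor's note: a function_graph trace like the one above is typically
captured through tracefs roughly as follows. This procedure sketch was
added for context and is not from the original mail; it assumes the
standard /sys/kernel/debug/tracing layout and reuses the mount point
from earlier in the thread.]

    cd /sys/kernel/debug/tracing
    echo 0 > tracing_on
    echo function_graph > current_tracer
    echo 'xfs_*' > set_ftrace_filter      # trace only XFS entry points
    echo > trace                          # clear the ring buffer
    echo 1 > tracing_on
    rm -f /media/p5/testfile.308          # reproduce the stuck rm
    echo 0 > tracing_on
    head -1000 trace
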

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2015-09-27  0:40 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-18 16:38 xfstests, bad generic tests 009 and 308 Angelo Dureghello
2015-09-18 22:44 ` Dave Chinner
2015-09-21 11:13   ` Angelo Dureghello
2015-09-21 11:18     ` Angelo Dureghello
2015-09-21 22:52     ` Dave Chinner
2015-09-22 12:41       ` Angelo Dureghello
2015-09-22 21:27         ` Dave Chinner
2015-09-23  9:15           ` Angelo Dureghello
2015-09-23 22:25             ` Dave Chinner
2015-09-23 10:43       ` Yann Dupont - Veille Techno
2015-09-23 22:04         ` Dave Chinner
2015-09-24  8:20           ` Yann Dupont - Veille Techno
2015-09-27  0:40             ` Angelo Dureghello
  -- strict thread matches above, loose matches on Subject: below --
2015-09-18 16:16 angelo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox