* Performance impact of mkfs.xfs vs mkfs.xfs -f
@ 2015-08-25 20:32 Shrinand Javadekar
2015-08-25 21:24 ` Shrinand Javadekar
2015-08-25 21:44 ` Eric Sandeen
0 siblings, 2 replies; 11+ messages in thread
From: Shrinand Javadekar @ 2015-08-25 20:32 UTC (permalink / raw)
To: xfs
Hi,
I have 23 disks formatted with XFS on a single server. The workload is
Openstack Swift. See this email from a few months ago about the
details:
http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
I am observing some strange behavior and would like to get some
feedback about why this is happening.
I formatted the disks with xfs (mkfs.xfs) and deployed Openstack Swift
on it. Writing 100GB of data into Swift in batches of 20GB each gave
us the following throughput:
20 GB: 93MB/s
40 GB: 65MB/s
60 GB: 52MB/s
80 GB: 50MB/s
100 GB: 48MB/s
I then re-formatted the disks with mkfs.xfs -f and ran the experiment
again. This time I got the following throughput:
20 GB: 118MB/s
40 GB: 95MB/s
60 GB: 74MB/s
80 GB: 68MB/s
100 GB: 63MB/s
I've seen similar results twice.
Any ideas why this might be happening?
Thanks in advance.
-Shri
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 20:32 Performance impact of mkfs.xfs vs mkfs.xfs -f Shrinand Javadekar
@ 2015-08-25 21:24 ` Shrinand Javadekar
2015-08-25 21:44 ` Eric Sandeen
1 sibling, 0 replies; 11+ messages in thread
From: Shrinand Javadekar @ 2015-08-25 21:24 UTC (permalink / raw)
To: xfs
Does previously existing data on disk affect fragmentation?
On Tue, Aug 25, 2015 at 1:32 PM, Shrinand Javadekar
<shrinand@maginatics.com> wrote:
> Hi,
>
> I have 23 disks formatted with XFS on a single server. The workload is
> Openstack Swift. See this email from a few months ago about the
> details:
>
> http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
>
> I am observing some strange behavior and would like to get some
> feedback about why this is happening.
>
> I formatted the disks with xfs (mkfs.xfs) and deployed Openstack Swift
> on it. Writing 100GB of data into Swift in batches of 20GB each gave
> us the following throughput:
>
> 20 GB: 93MB/s
> 40 GB: 65MB/s
> 60 GB: 52MB/s
> 80 GB: 50MB/s
> 100 GB: 48MB/s
>
> I then re-formatted the disks with mkfs.xfs -f and ran the experiment
> again. This time I got the following throughput:
>
> 20 GB: 118MB/s
> 40 GB: 95MB/s
> 60 GB: 74MB/s
> 80 GB: 68MB/s
> 100 GB: 63MB/s
>
> I've seen similar results twice.
>
> Any ideas why this might be happening?
>
> Thanks in advance.
> -Shri
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 20:32 Performance impact of mkfs.xfs vs mkfs.xfs -f Shrinand Javadekar
2015-08-25 21:24 ` Shrinand Javadekar
@ 2015-08-25 21:44 ` Eric Sandeen
2015-08-25 23:09 ` Shrinand Javadekar
1 sibling, 1 reply; 11+ messages in thread
From: Eric Sandeen @ 2015-08-25 21:44 UTC (permalink / raw)
To: Shrinand Javadekar, xfs
On 8/25/15 3:32 PM, Shrinand Javadekar wrote:
> Hi,
>
> I have 23 disks formatted with XFS on a single server. The workload is
> Openstack Swift. See this email from a few months ago about the
> details:
>
> http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
>
> I am observing some strange behavior and would like to get some
> feedback about why this is happening.
>
> I formatted the disks with xfs (mkfs.xfs) and deployed Openstack Swift
> on it. Writing 100GB of data into Swift in batches of 20GB each gave
> us the following throughput:
>
> 20 GB: 93MB/s
> 40 GB: 65MB/s
> 60 GB: 52MB/s
> 80 GB: 50MB/s
> 100 GB: 48MB/s
>
> I then re-formatted the disks with mkfs.xfs -f and ran the experiment
> again. This time I got the following throughput:
>
> 20 GB: 118MB/s
> 40 GB: 95MB/s
> 60 GB: 74MB/s
> 80 GB: 68MB/s
> 100 GB: 63MB/s
>
> I've seen similar results twice.
How did you do the above twice, out of curiosity? If it's the same set of disks,
the 3rd mkfs would require "-f" to overwrite the old format.
> Any ideas why this might be happening?
With the paucity of information you've provided, nope!
What version of xfsprogs are you using?
What was the output of mkfs.xfs each time; did the geometry differ?
-f sets force_overwrite, which only does 3 things:
1) overwrites existing filesystem signatures
2) zeroes out old xfs structures on disk
3) allows mkfs to proceed on a misaligned device
I don't see why any of those behaviors would change runtime behavior.
Maybe you have other variables in your performance testing, and two
tests isn't enough to sort out noise?
-Eric
> Thanks in advance.
> -Shri
>
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 21:44 ` Eric Sandeen
@ 2015-08-25 23:09 ` Shrinand Javadekar
2015-08-25 23:43 ` Dave Chinner
0 siblings, 1 reply; 11+ messages in thread
From: Shrinand Javadekar @ 2015-08-25 23:09 UTC (permalink / raw)
To: Eric Sandeen; +Cc: xfs
Thanks for the reply, Eric. Please see my responses inline:
On Tue, Aug 25, 2015 at 2:44 PM, Eric Sandeen <sandeen@sandeen.net> wrote:
> On 8/25/15 3:32 PM, Shrinand Javadekar wrote:
>> Hi,
>>
>> I have 23 disks formatted with XFS on a single server. The workload is
>> Openstack Swift. See this email from a few months ago about the
>> details:
>>
>> http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
>>
>> I am observing some strange behavior and would like to get some
>> feedback about why this is happening.
>>
>> I formatted the disks with xfs (mkfs.xfs) and deployed Openstack Swift
>> on it. Writing 100GB of data into Swift in batches of 20GB each gave
>> us the following throughput:
>>
>> 20 GB: 93MB/s
>> 40 GB: 65MB/s
>> 60 GB: 52MB/s
>> 80 GB: 50MB/s
>> 100 GB: 48MB/s
>>
>> I then re-formatted the disks with mkfs.xfs -f and ran the experiment
>> again. This time I got the following throughput:
>>
>> 20 GB: 118MB/s
>> 40 GB: 95MB/s
>> 60 GB: 74MB/s
>> 80 GB: 68MB/s
>> 100 GB: 63MB/s
>>
>> I've seen similar results twice.
>
> How did you do the above twice, out of curiosity? If it's the same set of disks,
> the 3rd mkfs would require "-f" to overwrite the old format.
I did this on 2 different setups.
Formatted the new disks with mkfs.xfs. Ran the workload.
Reformatted the disks with mkfs.xfs -f. Ran the workload.
>
>> Any ideas why this might be happening?
>
> With the paucity of information you've provided, nope!
Apologies. What more information can I provide?
>
> What version of xfsprogs are you using?
# xfs_repair -V
xfs_repair version 3.1.9
> What was the output of mkfs.xfs each time; did the geometry differ?
I have the output of xfs_info /mount/point from the first experiment
and that of mkfs.xfs -f. One difference I see is that reformatting
adds projid32bit=0 for the inode section.
>
> -f sets force_overwrite, which only does 3 things:
>
> 1) overwrites existing filesystem signatures
> 2) zeroes out old xfs structures on disk
> 3) allows mkfs to proceed on a misaligned device
>
> I don't see why any of those behaviors would change runtime behavior.
>
> Maybe you have other variables in your performance testing, and two
> tests isn't enough to sort out noise?
We have seen this again on a third setup belonging to one of my
colleagues. What more data can I look at to identify the differences?
>
> -Eric
>
>> Thanks in advance.
>> -Shri
>>
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 23:09 ` Shrinand Javadekar
@ 2015-08-25 23:43 ` Dave Chinner
2015-08-26 0:39 ` Carlos E. R.
2015-08-26 17:48 ` Shrinand Javadekar
0 siblings, 2 replies; 11+ messages in thread
From: Dave Chinner @ 2015-08-25 23:43 UTC (permalink / raw)
To: Shrinand Javadekar; +Cc: Eric Sandeen, xfs
On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar wrote:
> I did this on 2 different setups.
Details?
> Formatted the new disks with mkfs.xfs. Ran the workload.
> Reformatted the disks with mkfs.xfs -f. Ran the workload.
>
> >
> >> Any ideas why this might be happening?
> >
> > With the paucity of information you've provided, nope!
>
> Apologies. What more information can I provide?
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> > What version of xfsprogs are you using?
>
> # xfs_repair -V
> xfs_repair version 3.1.9
That's pretty old.
> > What was the output of mkfs.xfs each time; did the geometry differ?
>
> I have the output of xfs_info /mount/point from the first experiment
> and that of mkfs.xfs -f. One difference I see is that reformatting
> adds projid32bit=0 for the inode section.
xfs_info didn't get projid32bit status output until 3.2.0.
Anyway, please post the output so we can see the differences for
ourselves. What we need is mkfs output in both cases, and xfs_info
output in both cases after mount.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 23:43 ` Dave Chinner
@ 2015-08-26 0:39 ` Carlos E. R.
2015-08-26 1:09 ` Dave Chinner
2015-08-26 17:48 ` Shrinand Javadekar
1 sibling, 1 reply; 11+ messages in thread
From: Carlos E. R. @ 2015-08-26 0:39 UTC (permalink / raw)
To: XFS mailing list
On 2015-08-26 01:43, Dave Chinner wrote:
> On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar
> wrote:
>> Formatted the new disks with mkfs.xfs. Ran the workload.
>> Reformatted the disks with mkfs.xfs -f. Ran the workload.
> Anyway, please post the output so we can see the differences for
> ourselves. What we need is mkfs output in both cases, and xfs_info
> output in both cases after mount.
Suggestion (for the OP):
To reformat a third time without "-f", you can reformat as ext4, then
format a second time as xfs. But to imitate a new disk, you have to
zero it with dd.
Then you can replay the test and obtain the requested data :-)
--
Cheers / Saludos,
Carlos E. R.
(from 13.1 x86_64 "Bottle" (Minas Tirith))
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-26 0:39 ` Carlos E. R.
@ 2015-08-26 1:09 ` Dave Chinner
2015-08-26 7:25 ` Martin Steigerwald
0 siblings, 1 reply; 11+ messages in thread
From: Dave Chinner @ 2015-08-26 1:09 UTC (permalink / raw)
To: Carlos E. R.; +Cc: XFS mailing list
On Wed, Aug 26, 2015 at 02:39:11AM +0200, Carlos E. R. wrote:
> On 2015-08-26 01:43, Dave Chinner wrote:
> > On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar
> > wrote:
>
> >> Formatted the new disks with mkfs.xfs. Ran the workload.
> >> Reformatted the disks with mkfs.xfs -f. Ran the workload.
>
>
> > Anyway, please post the output so we can see the differences for
> > ourselves. What we need is mkfs output in both cases, and xfs_info
> > output in both cases after mount.
>
> Suggestion (for the OP):
>
> To reformat a third time without "-f", you can reformat as ext4, then
> format a second time as xfs.
That doesn't work - mkfs.xfs detects that the device has an ext4
filesystem on it, and demands you use -f to overwrite it.
> But to imitate a new disk, you have to
> zero it with dd.
Only the first MB or so - enough for blkid not to be able to see a
filesystem signature on it.
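For example (a rough sketch run against a scratch file-backed image, which is an assumption here for safety; on real hardware you would point dd at the actual device node instead):

```shell
# Scratch file stands in for the disk (hypothetical; a real run targets e.g. /dev/sdX).
img=$(mktemp)
printf 'XFSB' > "$img"                        # plant a fake filesystem magic
# Zero the first MB, enough that blkid no longer sees a signature:
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc 2>/dev/null
head -c 4 "$img" | od -An -c | tr -d ' \n'    # prints \0\0\0\0, the magic is gone
rm -f "$img"
```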
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-26 1:09 ` Dave Chinner
@ 2015-08-26 7:25 ` Martin Steigerwald
0 siblings, 0 replies; 11+ messages in thread
From: Martin Steigerwald @ 2015-08-26 7:25 UTC (permalink / raw)
To: xfs; +Cc: Carlos E. R.
On Wednesday, 26 August 2015 at 11:09:23, Dave Chinner wrote:
> On Wed, Aug 26, 2015 at 02:39:11AM +0200, Carlos E. R. wrote:
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA256
> >
> > On 2015-08-26 01:43, Dave Chinner wrote:
> > > On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar
> > >
> > > wrote:
> > >> Formatted the new disks with mkfs.xfs. Ran the workload.
> > >> Reformatted the disks with mkfs.xfs -f. Ran the workload.
> > >
> > > Anyway, please post the output so we can see the differences for
> > > ourselves. What we need is mkfs output in both cases, and xfs_info
> > >
> > > output in both cases after mount.
> >
> > Suggestion (for the OP):
> >
> > To reformat a third time without "-f", you can reformat as ext4, then
> > format a second time as xfs.
>
> That doesn't work - mkfs.xfs detects that the device has an ext4
> filesystem on it, and demands you use -f to overwrite it.
>
> > But to imitate a new disk, you have to
> > zero it with dd.
>
> Only the first MB or so - enough for blkid not to be able to see a
> filesystem signature on it.
Or just use the wipefs command.
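For reference, a minimal wipefs sketch (run here against a scratch file-backed image, which is an assumption for safety; on a real disk you would pass the device node, and -a is destructive):

```shell
# Scratch image stands in for the disk (hypothetical; real runs target e.g. /dev/sdX).
img=$(mktemp)
truncate -s 16M "$img"
wipefs "$img"        # list detected signatures (nothing on a fresh image)
wipefs -a "$img"     # erase all detected signatures in place (destructive!)
rm -f "$img"
```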
Thanks,
--
Martin
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-25 23:43 ` Dave Chinner
2015-08-26 0:39 ` Carlos E. R.
@ 2015-08-26 17:48 ` Shrinand Javadekar
2015-08-26 18:44 ` Eric Sandeen
2015-08-26 19:04 ` Eric Sandeen
1 sibling, 2 replies; 11+ messages in thread
From: Shrinand Javadekar @ 2015-08-26 17:48 UTC (permalink / raw)
To: Dave Chinner; +Cc: Eric Sandeen, xfs
Please see my responses inline. I am seeing this behavior again.
On Tue, Aug 25, 2015 at 4:43 PM, Dave Chinner <david@fromorbit.com> wrote:
> On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar wrote:
>> I did this on 2 different setups.
>
> Details?
[Shri] On hardware box 1:
1. # of disks: 23
2. Type: Rotational disks
3. Ran mkfs.xfs and mounted disks
4. Installed Swift
5. Ran benchmark
6. Stopped Swift
7. unmounted disks
8. mkfs.xfs -f on all 23 disks
9. mounted disks
10. Installed Swift
11. Ran benchmark
Benchmark #s are as reported earlier.
The same steps mentioned above were performed on hardware box #2.
>
>> Formatted the new disks with mkfs.xfs. Ran the workload.
>> Reformatted the disks with mkfs.xfs -f. Ran the workload.
>>
>> >
>> >> Any ideas why this might be happening?
>> >
>> > With the paucity of information you've provided, nope!
>>
>> Apologies. What more information can I provide?
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
[Shri] I mentioned this in my first email. Here's the information:
http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
>
>> > What version of xfsprogs are you using?
>>
>> # xfs_repair -V
>> xfs_repair version 3.1.9
>
> That's pretty old.
[Shri] We're using xfsprogs version 3.1.9, whereas the kernel is a newer
one: 3.16.0-38-generic. Does that matter?
For example, one of my colleagues found that formatting with crc
enabled is only available in newer versions of xfsprogs.
>
>> > What was the output of mkfs.xfs each time; did the geometry differ?
>>
>> I have the output of xfs_info /mount/point from the first experiment
>> and that of mkfs.xfs -f. One difference I see is that reformatting
>> adds projid32bit=0 for the inode section.
>
> xfs_info didn't get projid32bit status output until 3.2.0.
>
> Anyway, please post the output so we can see the differences for
> ourselves. What we need is mkfs output in both cases, and xfs_info
> output in both cases after mount.
Step 1: mkfs.xfs
Good formatting: http://pastie.org/private/new2zmwvdqvgm7h7coc4g
else:
meta-data=/dev/mapper/35000c50062e6a567-part2 isize=256 agcount=4,
agsize=183141504 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=732566016, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Step 2: xfs_info
meta-data=/dev/mapper/35000c50062e6a567-part2 isize=256 agcount=4,
agsize=183141504 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=732566016, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Step 3: Ran the benchmark (each run of 20GB)
Bandwidth (KB/s): 62294.9
Bandwidth (KB/s): 34407.7
Bandwidth (KB/s): 26949.8
Step 4: mkfs.xfs -f
Good formatting: http://pastie.org/private/bmzfateuuneddwg1zgymq
else:
meta-data=/dev/mapper/35000c50062e6a567-part2 isize=256 agcount=4,
agsize=183141504 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=732566016, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Step 5: xfs_info
meta-data=/dev/mapper/35000c50062e6a567-part2 isize=256 agcount=4,
agsize=183141504 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=732566016, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Step 6: Ran the same Benchmark (each run of 20GB):
Bandwidth (KB/s): 97061.6
Bandwidth (KB/s): 42811.7
Bandwidth (KB/s): 32111.7
-Shri
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-26 17:48 ` Shrinand Javadekar
@ 2015-08-26 18:44 ` Eric Sandeen
2015-08-26 19:04 ` Eric Sandeen
1 sibling, 0 replies; 11+ messages in thread
From: Eric Sandeen @ 2015-08-26 18:44 UTC (permalink / raw)
To: Shrinand Javadekar, Dave Chinner; +Cc: xfs
On 8/26/15 12:48 PM, Shrinand Javadekar wrote:
> Please see my responses inline. I am seeing this behavior again.
>
> On Tue, Aug 25, 2015 at 4:43 PM, Dave Chinner <david@fromorbit.com> wrote:
>> On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar wrote:
>>> I did this on 2 different setups.
>>
>> Details?
>
> [Shri] On hardware box 1:
>
> 1. # of disks: 23
> 2. Type: Rotational disks
> 3. Ran mkfs.xfs and mounted disks
> 4. Installed Swift
> 5. Ran benchmark
Details of "the benchmark?" (buffered or direct? IO sizes, file layout etc?)
> 6. Stopped Swift
> 7. unmounted disks
> 8. mkfs.xfs -f on all 23 disks
> 9. mounted disks
> 10. Installed Swift
> 11. Ran benchmark
<snip>
>>>> What version of xfsprogs are you using?
>>>
>>> # xfs_repair -V
>>> xfs_repair version 3.1.9
>>
>> That's pretty old.
>
> [Shri] We're using xfs progs version 3.1.9 whereas the kernel is newer
> one: 3.16.0-38-generic. Does that matter?
> For e.g. one of my colleagues found that the formatting with crc
> enabled is only available in newer version of xfsprogs.
It's fine to use xfsprogs 3.1.9 with kernel 3.16. (In fact nothing
is going to be problematic, other than possibly running into unknown
features if one is too far out of sync with the other. In that case,
you'd just get a hard stop on the unknown feature, not a cryptic
behavior...)
>>
>>>> What was the output of mkfs.xfs each time; did the geometry differ?
>>>
>>> I have the output of xfs_info /mount/point from the first experiment
>>> and that of mkfs.xfs -f. One difference I see is that reformatting
>>> adds projid32bit=0 for the inode section.
>>
>> xfs_info didn't get projid32bit status output until 3.2.0.
>>
>> Anyway, please post the output so we can see the differences for
>> ourselves. What we need is mkfs output in both cases, and xfs_info
>> output in both cases after mount.
>
> Step 1: mkfs.xfs
<snip>
Ok, the mkfs output & xfs_info output is identical with and without
-f (as they should be).
What is your storage, i.e. what's behind
/dev/mapper/35000c50062e6a567-part2 ? Is it thinly-provisioned, or
anything else interesting like that? (thin provisioning probably
shouldn't matter, because in theory we discard the whole device
on mkfs anyway). But is it possible that the first benchmark
primed the storage in some way? To that end, what does:
1) mkfs.xfs, benchmark
2) benchmark
show? is the 2nd one faster as well?
Or, possibly:
1) mkfs.xfs, benchmark
2) mkfs.xfs -f, benchmark
3) wipefs, mkfs.xfs, benchmark
That would leave old xfs superblocks in place for the 3rd test, and
not wiped by mkfs itself, but I can't imagine why that would matter.
(mkfs should reinitialize them anyway, I think the call to
zero_old_xfs_structures() is just so that an xfs_repair search for
backups won't find old unrelated signatures from a prior different
geometry...)
Right now I'm actually wondering more about your storage, I guess.
-Eric
* Re: Performance impact of mkfs.xfs vs mkfs.xfs -f
2015-08-26 17:48 ` Shrinand Javadekar
2015-08-26 18:44 ` Eric Sandeen
@ 2015-08-26 19:04 ` Eric Sandeen
1 sibling, 0 replies; 11+ messages in thread
From: Eric Sandeen @ 2015-08-26 19:04 UTC (permalink / raw)
To: Shrinand Javadekar, Dave Chinner; +Cc: xfs
On 8/26/15 12:48 PM, Shrinand Javadekar wrote:
>>>>> Any ideas why this might be happening?
>>>>
>>>> With the paucity of information you've provided, nope!
>>>
>>> Apologies. What more information can I provide?
>>
>> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> [Shri] I mentioned this in my first email. Here's the information:
> http://oss.sgi.com/archives/xfs/2015-06/msg00108.html
Just as an aside:
A 3-month-old email under a different subject doesn't really help
us in this thread. ;)
You're the only one who knows if this is the same machine, whether you've
upgraded since, etc. So we had to ask...
For anyone reading:
If you're having a problem, please, please always start with as much information
about your environment as you can provide, as described in the xfs.org URL
above. It saves many email round-trips, and helps us help you better.
Thanks,
-Eric