public inbox for linux-xfs@vger.kernel.org
* Any way to slow down fragmentation ?
@ 2015-10-13 21:54 Cédric Lemarchand
  2015-10-13 22:04 ` Eric Sandeen
  0 siblings, 1 reply; 5+ messages in thread
From: Cédric Lemarchand @ 2015-10-13 21:54 UTC (permalink / raw)
  To: xfs

I think I currently have very bad fragmentation values, which
unfortunately cause a 3x/4x performance drop. A defrag is currently
running, but it is really, really slow, to the point that I would need
to defrag the partition constantly, which is not optimal. Approximately
500 GB are written sequentially every day, and almost 10-12 TB are
rewritten randomly every week due to backup file rotations.

The partition has been formatted with default options, over LVM (one
VG / one LV).

Here are some questions:

- are there mkfs.xfs or mount options that could reduce fragmentation
over time?
- the backup software writes blocks of ~4 MB; similarly, are there
options to optimize the different layers (LVM & XFS)? The underlying FS
could handle a 1 MB block size; should I set this value for XFS too? Do
I need to play with "su" and "sw" as stated in the FAQ?

I admit that there are so many options that I am a bit lost.

Thanks,

Cédric

--
Some information: the VM runs Debian Jessie; the underlying storage is
software RAID (ZFS).


df -k
Filesystem            1K-blocks        Used   Available Use% Mounted on
/dev/mapper/VG2-LV1 53685000192 40921853928 12763146264  77% /vrepo1

xfs_db -r /dev/VG2/LV1 -c frag
actual 4222, ideal 137, fragmentation factor 96.76%

xfs_info /dev/VG2/LV1
meta-data=/dev/mapper/VG2-LV1    isize=256    agcount=50,
agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=13421771776, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

ls -lh
total 26T
-rw-rw-rw- 1 root root  20M Oct 13 23:45 SG DAILY BACKUP.vbm
-rw-r--r-- 1 root root 550G Sep 20 09:08 SG DAILY
BACKUP2015-09-04T200116.vrb
-rw-r--r-- 1 root root 363G Oct 12 00:25 SG DAILY
BACKUP2015-09-07T200129.vrb
-rw-r--r-- 1 root root 156G Sep 12 22:40 SG DAILY
BACKUP2015-09-08T200100.vrb
-rw-r--r-- 1 root root 777G Oct 12 00:20 SG DAILY
BACKUP2015-09-09T200100.vrb
-rw-r--r-- 1 root root 472G Sep 19 09:11 SG DAILY
BACKUP2015-09-10T200113.vrb
-rw-r--r-- 1 root root 617G Sep 19 17:15 SG DAILY
BACKUP2015-09-11T200105.vrb
-rw-r--r-- 1 root root 484G Sep 20 01:14 SG DAILY
BACKUP2015-09-14T200056.vrb
-rw-r--r-- 1 root root 454G Sep 20 15:45 SG DAILY
BACKUP2015-09-15T200119.vrb
-rw-r--r-- 1 root root 374G Sep 20 15:48 SG DAILY
BACKUP2015-09-16T200101.vrb
-rw-r--r-- 1 root root 465G Sep 26 22:50 SG DAILY
BACKUP2015-09-17T200105.vrb
-rw-r--r-- 1 root root 626G Sep 27 08:43 SG DAILY
BACKUP2015-09-18T200110.vrb
-rw-r--r-- 1 root root 533G Sep 27 17:25 SG DAILY
BACKUP2015-09-21T200101.vrb
-rw-r--r-- 1 root root 459G Sep 28 02:36 SG DAILY
BACKUP2015-09-22T200059.vrb
-rw-r--r-- 1 root root 460G Sep 28 11:32 SG DAILY
BACKUP2015-09-23T200111.vrb
-rw-r--r-- 1 root root 516G Oct 12 00:27 SG DAILY
BACKUP2015-09-24T200058.vrb
-rw-r--r-- 1 root root 593G Oct  3 20:05 SG DAILY
BACKUP2015-09-25T200104.vrb
-rw-r--r-- 1 root root 482G Oct 12 00:20 SG DAILY
BACKUP2015-09-28T200108.vrb
-rw-r--r-- 1 root root 466G Oct 12 00:26 SG DAILY
BACKUP2015-09-29T200115.vrb
-rw-r--r-- 1 root root 481G Oct  4 23:41 SG DAILY
BACKUP2015-09-30T200109.vrb
-rw-r--r-- 1 root root 548G Oct 12 00:26 SG DAILY
BACKUP2015-10-01T200115.vrb
-rw-r--r-- 1 root root 703G Oct 11 07:59 SG DAILY
BACKUP2015-10-02T200055.vrb
-rw-r--r-- 1 root root 409G Oct 11 04:05 SG DAILY
BACKUP2015-10-05T200106.vrb
-rw-r--r-- 1 root root 384G Oct 11 10:14 SG DAILY
BACKUP2015-10-06T200059.vrb
-rw-r--r-- 1 root root 335G Oct 11 19:49 SG DAILY
BACKUP2015-10-07T104621.vrb
-rw-r--r-- 1 root root 552G Oct 12 00:27 SG DAILY
BACKUP2015-10-07T200123.vrb
-rw-r--r-- 1 root root  90G Oct 12 00:27 SG DAILY
BACKUP2015-10-08T200113.vrb
-rw-r--r-- 1 root root  13T Oct 12 00:27 SG DAILY
BACKUP2015-10-09T200112.vbk
-rw-r--r-- 1 root root 620G Oct 13 20:01 SG DAILY
BACKUP2015-10-12T200108.vib
-rw-r--r-- 1 root root 424G Oct 13 23:46 SG DAILY
BACKUP2015-10-13T200136.vib

pvdisplay /dev/sdc
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               VG2
  PV Size               50.00 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              13107199
  Free PE               0
  Allocated PE          13107199
  PV UUID               nkbLG0-fUNx-StT7-htil-UksF-GC7i-amDIA9


vgdisplay VG2
  --- Volume group ---
  VG Name               VG2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               50.00 TiB
  PE Size               4.00 MiB
  Total PE              13107199
  Alloc PE / Size       13107199 / 50.00 TiB
  Free  PE / Size       0 / 0
  VG UUID               sjZjgR-M58f-Shrg-jxKu-TwBq-qL3X-BMXYEn

lvdisplay /dev/VG2/LV1
  --- Logical volume ---
  LV Path                /dev/VG2/LV1
  LV Name                LV1
  VG Name                VG2
  LV UUID                rElcn9-cmsH-K3P5-nKOB-GcV0-fDf9-gozdgT
  LV Write Access        read/write
  LV Creation host, time SG-VREPO1.ixblue.corp, 2015-08-09 10:40:53 +0200
  LV Status              available
  # open                 2
  LV Size                50.00 TiB
  Current LE             13107199
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Any way to slow down fragmentation ?
  2015-10-13 21:54 Any way to slow down fragmentation ? Cédric Lemarchand
@ 2015-10-13 22:04 ` Eric Sandeen
  2015-10-14 18:29   ` Cédric Lemarchand
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2015-10-13 22:04 UTC (permalink / raw)
  To: xfs



On 10/13/15 4:54 PM, Cédric Lemarchand wrote:
> I think I currently have very bad fragmentation values, which
> unfortunately cause a 3x/4x performance drop. A defrag is currently
> running, but it is really, really slow, to the point that I would need
> to defrag the partition constantly, which is not optimal. Approximately
> 500 GB are written sequentially every day, and almost 10-12 TB are
> rewritten randomly every week due to backup file rotations.

Does anything besides the xfs_db "frag" command make you think that
fragmentation is a problem?  See below...

> The partition has been formatted with default options, over LVM (one
> VG / one LV).
> 
> Here are some questions:
> 
> - are there mkfs.xfs or mount options that could reduce fragmentation
> over time?
> - the backup software writes blocks of ~4 MB; similarly, are there
> options to optimize the different layers (LVM & XFS)? The underlying FS
> could handle a 1 MB block size; should I set this value for XFS too? Do
> I need to play with "su" and "sw" as stated in the FAQ?
> 
> I admit that there are so many options that I am a bit lost.
> 
> Thanks,
> 
> Cédric
> 
> --
> Some information: the VM runs Debian Jessie; the underlying storage is
> software RAID (ZFS).
> 
> 
> df -k
> Filesystem            1K-blocks        Used   Available Use% Mounted on
> /dev/mapper/VG2-LV1 53685000192 40921853928 12763146264  77% /vrepo1
> 
> xfs_db -r /dev/VG2/LV1 -c frag
> actual 4222, ideal 137, fragmentation factor 96.76%

http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25._Is_that_bad.3F

So in 137 files, you have 4222 extents, or an average of
about 30 extents per file.

Or put another way, you have 39026 gigabytes used, in
4222 extents, for an average of 9 gigabytes per extent.

Those don't sound like problematic numbers.
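Those averages, and the "frag factor" itself, can be recomputed from the
df and xfs_db output quoted above (a quick sanity check using integer
shell arithmetic; the factor is just (actual - ideal) / actual, which
approaches 100% for any file set averaging more than a few extents per
file):

```shell
actual=4222           # extents, from "xfs_db -c frag"
ideal=137             # files, from "xfs_db -c frag"
used_kb=40921853928   # Used column from "df -k"

# fragmentation factor = (actual - ideal) / actual
echo "factor:       $(( 100 * (actual - ideal) / actual ))%"
echo "extents/file: $(( actual / ideal ))"
echo "GiB/extent:   $(( used_kb / 1024 / 1024 / actual ))"
```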

xfs_bmap on an individual file will show you its mapping.
But for files of several hundred gigs, having several
very large extents really is not a problem.
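For instance, one way to get a per-file extent count is to count the
mapping lines xfs_bmap prints, one per extent. The sample output below is
illustrative only (made-up block ranges, not from this filesystem):

```shell
# xfs_bmap emits one "N: [startoff..endoff]: startblock..endblock"
# line per extent; counting those lines gives the extent count.
bmap='0: [0..2097151]: 96..2097247
1: [2097152..4194303]: 10485760..12582911
2: [4194304..6291455]: 20971520..23068671'

echo "$bmap" | grep -c '^[0-9]'
```

Against a real file the extent lines are indented below a leading
filename line, so something like
`xfs_bmap /vrepo1/somefile | grep -c '\['` does the same job.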

I think the xfs_db frag command may be misleading you about
where the problem lies.

Of course it's possible that all but one of your files is
well laid out, and that last file is horribly, horribly
fragmented.  But the top-level numbers don't tell us whether
that might be the case.

-Eric


* Re: Any way to slow down fragmentation ?
  2015-10-13 22:04 ` Eric Sandeen
@ 2015-10-14 18:29   ` Cédric Lemarchand
  2015-10-14 18:36     ` Eric Sandeen
  0 siblings, 1 reply; 5+ messages in thread
From: Cédric Lemarchand @ 2015-10-14 18:29 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs



Well... it seems I missed the most important part of the FAQ, thanks for pointing it out. As you suspected, playing with xfs_bmap shows that the 13 TB file is heavily fragmented; xfs_fsr is now working on it.

Any hints about sector size? Regarding the workload, my feeling is that using 4k could not hurt here.

Thanks,

Cédric

-- 
Cédric Lemarchand
IT Infrastructure Manager
iXBlue
34 rue de la Croix de Fer
78100 St Germain en Laye
France
Tel. +33 1 30 08 88 88
Mob. +33 6 37 23 40 93
Fax +33 1 30 08 88 00
www.ixblue.com <http://www.ixblue.com/>


* Re: Any way to slow down fragmentation ?
  2015-10-14 18:29   ` Cédric Lemarchand
@ 2015-10-14 18:36     ` Eric Sandeen
  2015-10-14 21:34       ` Dave Chinner
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Sandeen @ 2015-10-14 18:36 UTC (permalink / raw)
  To: Cédric Lemarchand; +Cc: xfs



On 10/14/15 1:29 PM, Cédric Lemarchand wrote:
> Well... it seems I missed the most important part of the FAQ, thanks
> for pointing it out. As you suspected, playing with xfs_bmap shows
> that the 13 TB file is heavily fragmented; xfs_fsr is now working on it.

How much was "a lot"?

A 13 TB file can have "a lot" of *very* large extents.

> Any hints about sector size? Regarding the workload, my feeling is
> that using 4k could not hurt here.

There's no real reason to change from the defaults mkfs.xfs
detects and uses for sector size.

Certainly sector size has nothing at all to do with fragmentation.

-Eric



* Re: Any way to slow down fragmentation ?
  2015-10-14 18:36     ` Eric Sandeen
@ 2015-10-14 21:34       ` Dave Chinner
  0 siblings, 0 replies; 5+ messages in thread
From: Dave Chinner @ 2015-10-14 21:34 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: Cédric Lemarchand, xfs

On Wed, Oct 14, 2015 at 01:36:48PM -0500, Eric Sandeen wrote:
> 
> 
> On 10/14/15 1:29 PM, Cédric Lemarchand wrote:
> > Well .. it seems I missed the most important part of the FAQ, thank
> > for pointing it. As you stated, playing with xfs_bmap shows that the
> > 13TB file is fragmented a lot, xfs_fsr is now working on it.
> 
> how much was "a lot?"
> 
> a 13TB file can have "a lot" of *very* large extents.

With the maximum extent size being 8 GB, a file that large is going to
see "a lot" of extents even if it isn't fragmented. It will also be
spread out over multiple AGs (each 1 TB max in size), and that makes
it *appear* worse than it really is. Indeed, the best case is that
the BMBT will have ~1700 extent records in it for a file that
size, so it may appear to be fragmented when it really isn't.
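That figure follows directly from the sizes involved (an XFS extent can
cover at most ~2^21 filesystem blocks, i.e. 8 GiB at the 4 KiB block size
shown in the xfs_info output above):

```shell
file_tb=13        # size of the big .vbk file from the ls output
max_extent_gb=8   # maximum extent length: ~2^21 blocks * 4 KiB

# Minimum possible extent count, even with zero fragmentation:
echo "minimum extents: $(( file_tb * 1024 / max_extent_gb ))"
```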

Fragmentation is not measured by "having lots of extents in a file".
The extent layout of a file needs to be measured against the pattern
and size of the IOs the application does to that file - the file is
not fragmented if the size and/or packing of extents is optimal for
the access patterns of the application, regardless of the number of
extents in the file...
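That last point can be made concrete with a toy model (an illustration,
not a measurement): with the ~4 MB application writes mentioned upthread,
what matters is how often an IO crosses an extent boundary, since only
those IOs can force an extra seek:

```shell
io_mb=4   # application IO size mentioned upthread

# Hypothetical average extent sizes; larger extents mean fewer
# boundary crossings per IO, so less seek overhead.
for extent_mb in 8 1024 8192; do
  echo "avg extent ${extent_mb} MB: ~1 boundary crossing every $(( extent_mb / io_mb )) IOs"
done
```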

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


end of thread, other threads:[~2015-10-14 21:35 UTC | newest]

Thread overview: 5+ messages
2015-10-13 21:54 Any way to slow down fragmentation ? Cédric Lemarchand
2015-10-13 22:04 ` Eric Sandeen
2015-10-14 18:29   ` Cédric Lemarchand
2015-10-14 18:36     ` Eric Sandeen
2015-10-14 21:34       ` Dave Chinner
