public inbox for linux-xfs@vger.kernel.org
From: "Cédric Lemarchand" <cedric.lemarchand@ixblue.com>
To: xfs@oss.sgi.com
Subject: Any way to slow down fragmentation ?
Date: Tue, 13 Oct 2015 23:54:37 +0200	[thread overview]
Message-ID: <561D7D9D.4030409@ixblue.com> (raw)

I think I have very bad fragmentation values, which unfortunately cause a
3x-4x performance drop. A defrag is currently running, but it is really,
really slow, to the point that I would need to defragment the partition
constantly, which is not optimal. Approximately 500 GB are written
sequentially every day, and almost 10-12 TB of random writes happen every
week due to backup file rotations.

The partition was formatted with the default options, on top of LVM (one
VG, one LV).

Here are some questions:

- Are there mkfs.xfs or mount options that could reduce fragmentation
over time?
- The backup software writes with a block size of ~4 MB; as with the
previous question, are there options to optimize the different layers
(LVM & XFS)? The underlying storage can handle a 1 MB block size; should
I set this value for XFS too? Do I need to play with "su" and "sw" as
stated in the FAQ?
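
To make the su/sw question concrete, here is the kind of invocation I had
in mind. The geometry (1 MiB stripe unit across 8 data disks) is only a
guess for illustration, not our real ZFS layout:

```shell
#!/bin/sh
# Hypothetical geometry, for illustration only: a 1 MiB stripe unit
# across 8 data-bearing disks. Real values depend on the ZFS layout.
SU_KB=1024       # stripe unit in KiB (1 MiB)
DATA_DISKS=8     # number of data-bearing disks (assumption)

# Print the candidate mkfs.xfs invocation instead of running it,
# so the su/sw values can be reviewed first.
echo "mkfs.xfs -d su=${SU_KB}k,sw=${DATA_DISKS} /dev/VG2/LV1"

# An extent size hint on the mounted directory might also encourage
# larger contiguous allocations for the big backup files:
echo "xfs_io -c 'extsize 1g' /vrepo1"
```

Would su/sw of that form, or an extent size hint like the above, be the
right direction?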

I admit that there are so many options that I am a bit lost.

Thanks,

Cédric

--
Some information: VM running Debian Jessie; the underlying storage is
software RAID (ZFS).


df -k
Filesystem            1K-blocks        Used   Available Use% Mounted on
/dev/mapper/VG2-LV1 53685000192 40921853928 12763146264  77% /vrepo1

xfs_db -r /dev/VG2/LV1 -c frag
actual 4222, ideal 137, fragmentation factor 96.76%
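
If I read the xfs_db output correctly, the factor is derived from the
actual versus ideal extent counts roughly like this (my own
reconstruction, not taken from the XFS documentation):

```shell
#!/bin/sh
# Values reported by "xfs_db -r /dev/VG2/LV1 -c frag" above.
actual=4222
ideal=137

# fragmentation factor = (actual - ideal) / actual, as a percentage
awk -v a="$actual" -v i="$ideal" \
    'BEGIN { printf "%.2f%%\n", (a - i) / a * 100 }'
# prints 96.76%, matching the xfs_db output
```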

xfs_info /dev/VG2/LV1
meta-data=/dev/mapper/VG2-LV1    isize=256    agcount=50, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=13421771776, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

ls -lh
total 26T
-rw-rw-rw- 1 root root  20M Oct 13 23:45 SG DAILY BACKUP.vbm
-rw-r--r-- 1 root root 550G Sep 20 09:08 SG DAILY BACKUP2015-09-04T200116.vrb
-rw-r--r-- 1 root root 363G Oct 12 00:25 SG DAILY BACKUP2015-09-07T200129.vrb
-rw-r--r-- 1 root root 156G Sep 12 22:40 SG DAILY BACKUP2015-09-08T200100.vrb
-rw-r--r-- 1 root root 777G Oct 12 00:20 SG DAILY BACKUP2015-09-09T200100.vrb
-rw-r--r-- 1 root root 472G Sep 19 09:11 SG DAILY BACKUP2015-09-10T200113.vrb
-rw-r--r-- 1 root root 617G Sep 19 17:15 SG DAILY BACKUP2015-09-11T200105.vrb
-rw-r--r-- 1 root root 484G Sep 20 01:14 SG DAILY BACKUP2015-09-14T200056.vrb
-rw-r--r-- 1 root root 454G Sep 20 15:45 SG DAILY BACKUP2015-09-15T200119.vrb
-rw-r--r-- 1 root root 374G Sep 20 15:48 SG DAILY BACKUP2015-09-16T200101.vrb
-rw-r--r-- 1 root root 465G Sep 26 22:50 SG DAILY BACKUP2015-09-17T200105.vrb
-rw-r--r-- 1 root root 626G Sep 27 08:43 SG DAILY BACKUP2015-09-18T200110.vrb
-rw-r--r-- 1 root root 533G Sep 27 17:25 SG DAILY BACKUP2015-09-21T200101.vrb
-rw-r--r-- 1 root root 459G Sep 28 02:36 SG DAILY BACKUP2015-09-22T200059.vrb
-rw-r--r-- 1 root root 460G Sep 28 11:32 SG DAILY BACKUP2015-09-23T200111.vrb
-rw-r--r-- 1 root root 516G Oct 12 00:27 SG DAILY BACKUP2015-09-24T200058.vrb
-rw-r--r-- 1 root root 593G Oct  3 20:05 SG DAILY BACKUP2015-09-25T200104.vrb
-rw-r--r-- 1 root root 482G Oct 12 00:20 SG DAILY BACKUP2015-09-28T200108.vrb
-rw-r--r-- 1 root root 466G Oct 12 00:26 SG DAILY BACKUP2015-09-29T200115.vrb
-rw-r--r-- 1 root root 481G Oct  4 23:41 SG DAILY BACKUP2015-09-30T200109.vrb
-rw-r--r-- 1 root root 548G Oct 12 00:26 SG DAILY BACKUP2015-10-01T200115.vrb
-rw-r--r-- 1 root root 703G Oct 11 07:59 SG DAILY BACKUP2015-10-02T200055.vrb
-rw-r--r-- 1 root root 409G Oct 11 04:05 SG DAILY BACKUP2015-10-05T200106.vrb
-rw-r--r-- 1 root root 384G Oct 11 10:14 SG DAILY BACKUP2015-10-06T200059.vrb
-rw-r--r-- 1 root root 335G Oct 11 19:49 SG DAILY BACKUP2015-10-07T104621.vrb
-rw-r--r-- 1 root root 552G Oct 12 00:27 SG DAILY BACKUP2015-10-07T200123.vrb
-rw-r--r-- 1 root root  90G Oct 12 00:27 SG DAILY BACKUP2015-10-08T200113.vrb
-rw-r--r-- 1 root root  13T Oct 12 00:27 SG DAILY BACKUP2015-10-09T200112.vbk
-rw-r--r-- 1 root root 620G Oct 13 20:01 SG DAILY BACKUP2015-10-12T200108.vib
-rw-r--r-- 1 root root 424G Oct 13 23:46 SG DAILY BACKUP2015-10-13T200136.vib

pvdisplay /dev/sdc
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               VG2
  PV Size               50.00 TiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              13107199
  Free PE               0
  Allocated PE          13107199
  PV UUID               nkbLG0-fUNx-StT7-htil-UksF-GC7i-amDIA9


vgdisplay VG2
  --- Volume group ---
  VG Name               VG2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  12
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               50.00 TiB
  PE Size               4.00 MiB
  Total PE              13107199
  Alloc PE / Size       13107199 / 50.00 TiB
  Free  PE / Size       0 / 0
  VG UUID               sjZjgR-M58f-Shrg-jxKu-TwBq-qL3X-BMXYEn

lvdisplay /dev/VG2/LV1
  --- Logical volume ---
  LV Path                /dev/VG2/LV1
  LV Name                LV1
  VG Name                VG2
  LV UUID                rElcn9-cmsH-K3P5-nKOB-GcV0-fDf9-gozdgT
  LV Write Access        read/write
  LV Creation host, time SG-VREPO1.ixblue.corp, 2015-08-09 10:40:53 +0200
  LV Status              available
  # open                 2
  LV Size                50.00 TiB
  Current LE             13107199
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 5+ messages
2015-10-13 21:54 Cédric Lemarchand [this message]
2015-10-13 22:04 ` Any way to slow down fragmentation ? Eric Sandeen
2015-10-14 18:29   ` Cédric Lemarchand
2015-10-14 18:36     ` Eric Sandeen
2015-10-14 21:34       ` Dave Chinner
