From: Eric Sandeen <sandeen@sandeen.net>
To: xfs@oss.sgi.com
Subject: Re: Any way to slow down fragmentation ?
Date: Tue, 13 Oct 2015 17:04:59 -0500
Message-ID: <561D800B.90307@sandeen.net>
In-Reply-To: <561D7D9D.4030409@ixblue.com>



On 10/13/15 4:54 PM, Cédric Lemarchand wrote:
> I think I currently have very bad fragmentation values, which
> unfortunately cause a 3x-4x performance drop. A defrag is running
> right now, but it is really slow, to the point that I would need to
> defrag the partition constantly, which is not optimal. Approximately
> 500 GB are written sequentially every day, and almost 10-12 TB of
> random writes happen every week due to backup file rotation.

Does anything besides the xfs_db "frag" command make you think that
fragmentation is a problem?  See below...

> The partition has been formatted with default options, over LVM (one
> VG/one LV).
> 
> Here are some questions:
> 
> - are there mkfs.xfs or mount options that could reduce fragmentation
> over time?
> - the backup software writes in blocks of ~4 MB; as with the previous
> question, are there any options to optimize the different layers (LVM
> & XFS)? The underlying FS could handle a 1 MB block size; should I set
> this value for XFS too? Do I need to play with "su" and "sw" as stated
> in the FAQ?
> 
> I admit that there are so many options that I am a bit lost.
> 
> Thanks,
> 
> Cédric
> 
> --
> Some information: VM running Debian Jessie; the underlying storage is
> software RAID (ZFS).
> 
> 
> df -k
> Filesystem            1K-blocks        Used   Available Use% Mounted on
> /dev/mapper/VG2-LV1 53685000192 40921853928 12763146264  77% /vrepo1
> 
> xfs_db -r /dev/VG2/LV1 -c frag
> actual 4222, ideal 137, fragmentation factor 96.76%

http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25._Is_that_bad.3F

So in 137 files, you have 4222 extents, or an average of
about 30 extents per file.

Or put another way, you have 39026 gigabytes used, in
4222 extents, for an average of 9 gigabytes per extent.

Those don't sound like problematic numbers.
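
(For reference, the factor xfs_db prints is just (actual - ideal) /
actual, per the FAQ entry above.  A quick awk check of the arithmetic,
using the numbers from your df and xfs_db output:

awk 'BEGIN { printf "%.2f%%\n", (4222 - 137) / 4222 * 100 }'
# -> 96.76%
awk 'BEGIN { printf "%.1f extents/file, %.1f GB/extent\n",
             4222 / 137, 40921853928 / 1048576 / 4222 }'
# -> 30.8 extents/file, 9.2 GB/extent

Note how the percentage races toward 100% as soon as files average more
than a handful of extents, even when every extent is gigabytes long.)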

xfs_bmap on an individual file will show you its mapping.
But for files of several hundred gigs, having several
very large extents really is not a problem.
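For example (the filename here is made up, substitute one of your
backup files; -v prints each extent with its block range and length):

xfs_bmap -v /vrepo1/some-backup-file

The number of extent lines, and their sizes, tell you whether that one
file is actually fragmented: a few dozen multi-gigabyte extents is
fine, tens of thousands of small ones is not.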

I think the xfs_db frag command may be misleading you about
where the problem lies.

Of course it's possible that all but one of your files is
well laid out, and that last file is horribly, horribly
fragmented.  But the top-level numbers don't tell us whether
that might be the case.

-Eric

Thread overview: 5+ messages
2015-10-13 21:54 Any way to slow down fragmentation ? Cédric Lemarchand
2015-10-13 22:04 ` Eric Sandeen [this message]
2015-10-14 18:29   ` Cédric Lemarchand
2015-10-14 18:36     ` Eric Sandeen
2015-10-14 21:34       ` Dave Chinner
