From: Dave Chinner <david@fromorbit.com>
To: Richard Ems <richard.ems@cape-horn-eng.com>
Cc: xfs@oss.sgi.com
Subject: Re: cannot defrag volume, fragmentation factor 21.73%
Date: Tue, 19 Oct 2010 10:10:09 +1100 [thread overview]
Message-ID: <20101018231009.GL29677@dastard> (raw)
In-Reply-To: <4CBC3910.70806@cape-horn-eng.com>
On Mon, Oct 18, 2010 at 02:09:52PM +0200, Richard Ems wrote:
> Hi all,
>
> this is on openSUSE 11.3.
>
> # uname -a
> Linux fs1 2.6.34.7-0.3-default #1 SMP 2010-09-20 15:27:38 +0200
> x86_64 x86_64 x86_64 GNU/Linux
>
> # echo frag | xfs_db -r /dev/disk/by-label/data1
> xfs_db> actual 6451844, ideal 5050129, fragmentation factor 21.73%
>
> # xfs_db -V
> xfs_db version 3.1.2
>
> # xfs_fsr -V
> xfs_fsr version 3.1.2
>
> # df -h /dev/sdb1
> Filesystem Size Used Avail Use% Mounted on
> /dev/sdb1 17T 13T 4.3T 75% /data_1
>
> The volume is new, 12TB were rsync'ed from another volume, some new
> files came after the sync.
>
> I ran xfs_fsr several times, but the 21.73% factor stays there.
> There were some busy or modified files on which I ran xfs_fsr
> again later, but these were small files and the 21.73% is still
> there.
Understand your numbers. What frag reports is how many extents there
are vs a perfect layout. It does not tell you how badly fragmented
your filesystem is. Extent-based filesystems can have
"fragmentation" like you see reported above, but not suffer at all
because the extents are large enough not to affect IO throughput.
e.g. If I have a 100GB file in 100x1GB extents, frag would report an
ideal of 17 extents and measure 100. That would give a frag factor
of 83%. Now, is that filesystem fragmented? Theoretically yes.
Practically, no.
Why? Because extents of 1GB are more than large enough for any IO to
that file to reach full throughput. Therefore, while the file layout is
not perfect, the "fragmentation" has no impact on performance and
therefore the filesystem should not be considered fragmented.
So, for 13TB of data, having 20% of your files with two extents
rather than one is not a problem unless that causes your application
measurable performance issues...
IOWs, trying to reduce fragmentation without understanding what the
numbers tell you about the layout of your filesystem can be
counterproductive. Especially as running xfs_fsr when you don't
really need to can have other side-effects that affect the long-term
aging characteristics of the filesystem (e.g. causing premature free
space fragmentation).
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 9+ messages
2010-10-18 12:09 cannot defrag volume, fragmentation factor 21.73% Richard Ems
2010-10-18 12:39 ` Michael Monnerie
2010-10-18 13:46 ` Richard Ems
2010-10-18 17:58 ` Michael Monnerie
2010-10-18 20:16 ` Stan Hoeppner
2010-10-18 23:10 ` Dave Chinner [this message]
2010-10-19 9:37 ` Richard Ems
2010-10-22 10:12 ` Richard Ems
2010-10-22 21:02 ` Michael Monnerie