public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Markus Trippelsdorf <markus@trippelsdorf.de>, xfs@oss.sgi.com
Subject: Re: Performance decrease over time
Date: Fri, 02 Aug 2013 03:14:04 -0500	[thread overview]
Message-ID: <51FB6A4C.5040103@hardwarefreak.com> (raw)
In-Reply-To: <20130802022518.GZ7118@dastard>

On 8/1/2013 9:25 PM, Dave Chinner wrote:
...

> So really, the numbers only reflect a difference in layout of the
> files being tested. And using small direct IO means that the
> filesystem will tend to fill small free spaces close to the
> inode first, and so will fragment the file based on the locality of
> fragmented free space to the owner inode. In the case of the new
> filesystem, there is only large, contiguous free space near the
> inode....
...
>> What can be
>> done (as a user) to mitigate this effect? 
> 
> Buy faster disks ;)
> 
> Seriously, all filesystems age and get significantly slower as they
> get used. XFS is not really designed for single spindles - its
> algorithms are designed to spread data out over the entire device
> and so be able to make use of many, many spindles that make up the
> device. The behaviour it has works extremely well for this sort of
> large scale scenario, but it's close to the worst case aging
> behaviour for a single, very slow spindle like you are using.  Hence
> once the filesystem is over the "we have pristine, contiguous
> freespace" hump on your hardware, it's all downhill and there's not
> much you can do about it....

Wouldn't the inode32 allocator yield somewhat better results with this
direct IO workload on Markus' single slow spindle?  It shouldn't
fragment free space quite as badly in the first place, nor suffer from
trying to use many small fragments surrounding the inode, as in the
case above.
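For reference, the kind of fragmentation being discussed here can be
measured with the stock xfsprogs tools; the device and file names below
are placeholders, not anything Markus posted:

```shell
# Summarize free space fragmentation by extent size (read-only;
# run against a quiesced or unmounted device)
xfs_db -r -c "freesp -s" /dev/sdX

# Show the extent map of a suspect file; many small extents
# indicate the "fill small holes near the inode" behaviour
# Dave describes above
xfs_bmap -v /path/to/testfile
```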

Whether inode32 would be beneficial to his real workload(s) I don't
know.  I tend to think it would make at least a small positive
difference.  However, given that XFS is trying to move away from
inode32 altogether, I can see why you wouldn't mention it, even if it
might yield some improvement in this case.
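In case it helps with experimentation, inode32 is just a mount option
(device and mount point below are placeholders; on older kernels it
may require a full unmount/mount rather than a remount):

```shell
# Mount with the inode32 allocator: inodes are kept in the low
# AGs and data extents tend to be placed in higher AGs, away
# from the inodes, rather than filling nearby free space
mount -o inode32 /dev/sdX /mnt/point
```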

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 5+ messages
2013-08-01 20:21 Performance decrease over time Markus Trippelsdorf
2013-08-02  2:25 ` Dave Chinner
2013-08-02  8:14   ` Stan Hoeppner [this message]
2013-08-02 22:30     ` Dave Chinner
2013-08-02 23:00       ` aurfalien
