public inbox for linux-xfs@vger.kernel.org
From: Hans-Peter Jansen <hpj@urpla.net>
To: xfs@oss.sgi.com
Cc: Dave Hall <kdhall@binghamton.edu>, stan@hardwarefreak.com
Subject: Re: xfs_fsr, sunit, and swidth
Date: Tue, 02 Apr 2013 12:34:53 +0200	[thread overview]
Message-ID: <1938112.G9K3FbV7Ck@xrated> (raw)
In-Reply-To: <20130331012231.GJ6369@dastard>

On Sunday, March 31, 2013 12:22:31 Dave Chinner wrote:
> On Fri, Mar 29, 2013 at 03:59:46PM -0400, Dave Hall wrote:
> > Dave, Stan,
> > 
> > Here is the link for perf top -U:  http://pastebin.com/JYLXYWki.
> > The ag report is at http://pastebin.com/VzziSa4L.  Interestingly,
> > the backups ran fast a couple times this week.  Once under 9 hours.
> > Today it looks like it's running long again.
> 
>     12.38%  [xfs]     [k] xfs_btree_get_rec
>     11.65%  [xfs]     [k] _xfs_buf_find
>     11.29%  [xfs]     [k] xfs_btree_increment
>      7.88%  [xfs]     [k] xfs_inobt_get_rec
>      5.40%  [kernel]  [k] intel_idle
>      4.13%  [xfs]     [k] xfs_btree_get_block
>      4.09%  [xfs]     [k] xfs_dialloc
>      3.21%  [xfs]     [k] xfs_btree_readahead
>      2.00%  [xfs]     [k] xfs_btree_rec_offset
>      1.50%  [xfs]     [k] xfs_btree_rec_addr
> 
> Inode allocation searches, looking for an inode near to the parent
> directory.
> 
> What this indicates is that you have lots of sparsely allocated inode
> chunks on disk. i.e. each 64 inode chunk has some free inodes in it,
> and some used inodes. This is likely due to random removal of inodes
> as you delete old backups and link counts drop to zero. Because we
> only index inodes on "allocated chunks", finding a chunk that has a
> free inode can be like finding a needle in a haystack. There are
> heuristics used to stop searches from consuming too much CPU, but it
> still can be quite slow when you repeatedly hit those paths....
> 
> I don't have an answer that will magically speed things up for
> you right now...
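To make the cost concrete, here is a tiny standalone simulation (my own illustration, not XFS code): chunk records are walked in order looking for one with a free inode, and the average walk length explodes as aging shrinks the fraction of chunks that still have a free slot.

```python
import random

def expected_scan(nchunks, frac_with_free, trials=1000, seed=1):
    """Average number of inode-chunk records a linear search visits,
    starting from a random position, before hitting one that still
    has a free inode. Toy model of the inobt walk, not real XFS code."""
    rng = random.Random(seed)
    has_free = [rng.random() < frac_with_free for _ in range(nchunks)]
    if not any(has_free):
        return nchunks
    total = 0
    for _ in range(trials):
        i = rng.randrange(nchunks)
        steps = 0
        while not has_free[i]:
            steps += 1
            i = (i + 1) % nchunks  # keep walking to the next record
        total += steps
    return total / trials

# Young fs: half the chunks have room -> searches finish almost at once.
# Aged fs: one chunk in a thousand has room -> hundreds of records scanned.
# expected_scan(100_000, 0.5) vs expected_scan(100_000, 0.001)
```

The numbers that `perf top` shows (time dominated by xfs_btree_get_rec / xfs_btree_increment) are consistent with exactly this kind of long record walk.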

Hmm, unfortunately this access pattern is pretty common; at least all "cp -al 
& rsync" based backup solutions will suffer from it after a while. I noticed 
that the "removing old backups" part also takes *ages* in this scenario. 

I had to manually remove parts of a backup (subtrees with a few million 
ordinary files, massively hardlinked as usual), which took 4-5 hours per 
run on a Hitachi Ultrastar 7K4000 drive. Removing all 8 subtrees ultimately 
took one and a half days and freed about 500 GB. Oh well.

The question is: is it (logically) possible to reorganize the fragmented inode 
allocation space with a specialized tool (yet to be implemented) that lays out 
the allocation space in a way that matches XFS's original "expectations", or 
would that violate some deeper FS logic I'm not aware of? 
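Whether such a reorganization is logically possible can at least be modelled. A hypothetical tool would compute a repacking plan over the per-chunk used-inode counts; the sketch below (my own toy model, not a real XFS tool, with made-up names) shows that easy half. The hard half is that relocating an inode renumbers it, so every directory entry, i.e. every hardlink, pointing at it would have to be rewritten.

```python
CHUNK = 64  # inodes per XFS inode chunk

def repack_plan(chunks):
    """chunks: per-chunk used-inode counts. Greedily drain the emptiest
    chunks into the fullest and return (inodes_moved, chunks_freed).
    Purely a logical model of what a reorganizer would compute."""
    used = list(chunks)
    order = sorted(range(len(used)), key=lambda i: used[i])
    lo, hi, moved = 0, len(order) - 1, 0
    while lo < hi:
        src, dst = order[lo], order[hi]
        room = CHUNK - used[dst]
        if room == 0:
            hi -= 1          # destination chunk is full, try the next one
            continue
        n = min(used[src], room)
        used[src] -= n       # move n inodes out of the sparse chunk...
        used[dst] += n       # ...into the nearly-full one
        moved += n
        if used[src] == 0:
            lo += 1          # source chunk fully drained
    freed = sum(u == 0 for u in used) - sum(c == 0 for c in chunks)
    return moved, freed

# Ten chunks at 60/64 used plus ten sparse chunks with only 4 used each:
# moving 40 inodes frees all ten sparse chunks for reuse.
```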

I have to mention that I haven't run any tests with other filesystems, as 
playing games with backups ranks very low on my scale of sensible experiments, 
but experience has shown that XFS usually sucks less than its alternatives, 
even when the access patterns don't match its primary optimization domain.

Hence it makes sense to implement such a tool, again aiming for "least 
sucking".

Cheers,
Pete

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 32+ messages
2013-03-13 18:11 xfs_fsr, sunit, and swidth Dave Hall
2013-03-13 23:57 ` Dave Chinner
2013-03-14  0:03 ` Stan Hoeppner
     [not found]   ` <514153ED.3000405@binghamton.edu>
2013-03-14 12:26     ` Stan Hoeppner
2013-03-14 12:55       ` Stan Hoeppner
2013-03-14 14:59         ` Dave Hall
2013-03-14 18:07           ` Stefan Ring
2013-03-15  5:14           ` Stan Hoeppner
2013-03-15 11:45             ` Dave Chinner
2013-03-16  4:47               ` Stan Hoeppner
2013-03-16  7:21                 ` Dave Chinner
2013-03-16 11:45                   ` Stan Hoeppner
2013-03-25 17:00                   ` Dave Hall
2013-03-27 21:16                     ` Stan Hoeppner
2013-03-29 19:59                       ` Dave Hall
2013-03-31  1:22                         ` Dave Chinner
2013-04-02 10:34                           ` Hans-Peter Jansen [this message]
2013-04-03 14:25                           ` Dave Hall
2013-04-12 17:25                             ` Dave Hall
2013-04-13  0:45                               ` Dave Chinner
2013-04-13  0:51                               ` Stan Hoeppner
2013-04-15 20:35                                 ` Dave Hall
2013-04-16  1:45                                   ` Stan Hoeppner
2013-04-16 16:18                                   ` Dave Chinner
2015-02-22 23:35                                     ` XFS/LVM/Multipath on a single RAID volume Dave Hall
2015-02-23 11:18                                       ` Emmanuel Florac
2015-02-24 22:04                                         ` Dave Hall
2015-02-24 22:33                                           ` Dave Chinner
     [not found]                                             ` <54ED01BC.6080302@binghamton.edu>
2015-02-24 23:33                                               ` Dave Chinner
2015-02-25 11:49                                             ` Emmanuel Florac
2015-02-25 11:21                                           ` Emmanuel Florac
2013-03-28  1:38                     ` xfs_fsr, sunit, and swidth Dave Chinner
