From: Dave Hall <kdhall@binghamton.edu>
To: stan@hardwarefreak.com
Cc: "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: xfs_fsr, sunit, and swidth
Date: Fri, 12 Apr 2013 13:25:22 -0400
Message-ID: <51684382.50008@binghamton.edu>
In-Reply-To: <515C3BF3.60601@binghamton.edu>
Stan,
Did this post get lost in the shuffle? Looking back at it, I think it
could have been a bit unclear. What I need to do anyway is have a second,
off-site copy of my backup data. So I'm going to be building a second
array. To preserve the hard link structure of the source array during
the copy, I'd have to run a sequence of cp -al / rsync calls that mimics
what rsnapshot did to get me to where I am right now. (Note
that I could also potentially use rsync --link-dest.)
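(Roughly what I have in mind; the snapshot names and mount points below
are made up, and I'm assuming the usual rsnapshot daily.N layout:)
  # Seed the new array with the oldest snapshot, preserving hard links.
  rsync -aH /backup/daily.6/ /mnt/newarray/daily.6/
  # Then, for each newer snapshot, either mimic what rsnapshot does:
  cp -al /mnt/newarray/daily.6 /mnt/newarray/daily.5
  rsync -a --delete /backup/daily.5/ /mnt/newarray/daily.5/
  # ...or let rsync do the hard-linking against the previous copy:
  rsync -aH --delete --link-dest=/mnt/newarray/daily.6 \
      /backup/daily.5/ /mnt/newarray/daily.5/
  # ...and repeat down to daily.0.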
So the question is: how would the target XFS file system fare as far as
my inode fragmentation situation is concerned? I'm hoping that since the
target would be a fresh file system, and since during the 'copy' phase
I'd only be adding inodes, the inode allocation would be more compact
and orderly than what I have on the source array. What do you think?
Thanks.
-Dave
Dave Hall
Binghamton University
kdhall@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 04/03/2013 10:25 AM, Dave Hall wrote:
> So, assuming entropy has reached critical mass and that there is no
> easy fix for this physical file system, what would happen if I
> replicated this data to a new disk array? When I say 'replicate', I'm
> not talking about xfs_dump. I'm talking about running a series of cp
> -al/rsync operations (or maybe rsync with --link-dest) that will
> precisely reproduce the linked data on my current array. All of the
> inodes would be re-allocated. There wouldn't be any (or at least not
> many) deletes.
>
> I am hoping that if I do this the inode fragmentation will be
> significantly reduced on the target as compared to the source. Of
> course over time it may re-fragment, but with two arrays I can always
> wipe one and reload it.
>
> -Dave
>
> Dave Hall
> Binghamton University
> kdhall@binghamton.edu
> 607-760-2328 (Cell)
> 607-777-4641 (Office)
>
>
> On 03/30/2013 09:22 PM, Dave Chinner wrote:
>> On Fri, Mar 29, 2013 at 03:59:46PM -0400, Dave Hall wrote:
>>> Dave, Stan,
>>>
>>> Here is the link for perf top -U: http://pastebin.com/JYLXYWki.
>>> The ag report is at http://pastebin.com/VzziSa4L. Interestingly,
>>> the backups ran fast a couple times this week. Once under 9 hours.
>>> Today it looks like it's running long again.
>> 12.38% [xfs] [k] xfs_btree_get_rec
>> 11.65% [xfs] [k] _xfs_buf_find
>> 11.29% [xfs] [k] xfs_btree_increment
>> 7.88% [xfs] [k] xfs_inobt_get_rec
>> 5.40% [kernel] [k] intel_idle
>> 4.13% [xfs] [k] xfs_btree_get_block
>> 4.09% [xfs] [k] xfs_dialloc
>> 3.21% [xfs] [k] xfs_btree_readahead
>> 2.00% [xfs] [k] xfs_btree_rec_offset
>> 1.50% [xfs] [k] xfs_btree_rec_addr
>>
>> Inode allocation searches, looking for an inode near to the parent
>> directory.
>>
>> What this indicates is that you have lots of sparsely allocated inode
>> chunks on disk. i.e. each 64-inode chunk has some free inodes in it,
>> and some used inodes. This is likely due to random removal of inodes
>> as you delete old backups and link counts drop to zero. Because we
>> only index inodes on "allocated chunks", finding a chunk that has a
>> free inode can be like finding a needle in a haystack. There are
>> heuristics used to stop searches from consuming too much CPU, but it
>> still can be quite slow when you repeatedly hit those paths....
>>
>> I don't have an answer that will magically speed things up for
>> you right now...
>>
>> Cheers,
>>
>> Dave.
Thread overview: 32+ messages
2013-03-13 18:11 xfs_fsr, sunit, and swidth Dave Hall
2013-03-13 23:57 ` Dave Chinner
2013-03-14 0:03 ` Stan Hoeppner
[not found] ` <514153ED.3000405@binghamton.edu>
2013-03-14 12:26 ` Stan Hoeppner
2013-03-14 12:55 ` Stan Hoeppner
2013-03-14 14:59 ` Dave Hall
2013-03-14 18:07 ` Stefan Ring
2013-03-15 5:14 ` Stan Hoeppner
2013-03-15 11:45 ` Dave Chinner
2013-03-16 4:47 ` Stan Hoeppner
2013-03-16 7:21 ` Dave Chinner
2013-03-16 11:45 ` Stan Hoeppner
2013-03-25 17:00 ` Dave Hall
2013-03-27 21:16 ` Stan Hoeppner
2013-03-29 19:59 ` Dave Hall
2013-03-31 1:22 ` Dave Chinner
2013-04-02 10:34 ` Hans-Peter Jansen
2013-04-03 14:25 ` Dave Hall
2013-04-12 17:25 ` Dave Hall [this message]
2013-04-13 0:45 ` Dave Chinner
2013-04-13 0:51 ` Stan Hoeppner
2013-04-15 20:35 ` Dave Hall
2013-04-16 1:45 ` Stan Hoeppner
2013-04-16 16:18 ` Dave Chinner
2015-02-22 23:35 ` XFS/LVM/Multipath on a single RAID volume Dave Hall
2015-02-23 11:18 ` Emmanuel Florac
2015-02-24 22:04 ` Dave Hall
2015-02-24 22:33 ` Dave Chinner
[not found] ` <54ED01BC.6080302@binghamton.edu>
2015-02-24 23:33 ` Dave Chinner
2015-02-25 11:49 ` Emmanuel Florac
2015-02-25 11:21 ` Emmanuel Florac
2013-03-28 1:38 ` xfs_fsr, sunit, and swidth Dave Chinner