public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: xfs@oss.sgi.com, Ronnie Tartar <rtartar@host2max.com>
Subject: Re: Issues and new to the group
Date: Thu, 26 Sep 2013 18:46:10 -0500	[thread overview]
Message-ID: <5244C742.3080003@hardwarefreak.com> (raw)
In-Reply-To: <52444355.50904@sandeen.net>

On 9/26/2013 9:23 AM, Eric Sandeen wrote:
> On 9/26/13 8:30 AM, Ronnie Tartar wrote:
>> Stan, looks like I have directory fragmentation problem.
>>
>> xfs_db> frag -d
>> actual 65057, ideal 4680, fragmentation factor 92.81%
>>
>> What is the best way to fix this?
> 
> http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25._Is_that_bad.3F
> 
> We should just get rid of that command, TBH.
> 
> So your dirs are in an average of 65057/4680 or about 14 fragments each.
> Really not that bad, in the scope of things.
> 
> I'd imagine that this could be more of your problem:
> 
>> The
>> folders are image folders that have anywhere between 5 to 10 million images
>> in each folder.
> 
> at 10 million entries in a dir, you're going to start slowing down on inserts
> due to btree management.  But that probably doesn't account for multiple seconds for
> a single file.
> 
> So really, it's not clear *what* is slow.
> 
>> It takes about 2.5 to 3.5 seconds to write a single file.
> 
> strace with timing would be a very basic way to get a sense of what is slow;
> is it the file open/create?  How big is the file, are you doing buffered or
> direct IO?
> 
> On a more modern OS you could do some of the tracing suggested in
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
> 
> but some sort of profiling (oprofile, perhaps) might tell you where time is being spent in the kernel.
> 
> When you say suddenly started, was it after a kernel upgrade or other change?
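One way to bound the create cost Eric is asking about, independent of strace, is to time the open/create syscall directly from a small probe. This is an illustrative sketch (not from the original mail); run it against a test directory on the affected filesystem, not the production image directories:

```python
import os
import tempfile
import time

def time_creates(dirpath, n=100):
    """Time open(O_CREAT)+close for n new files to isolate create latency.

    Returns (worst, average) latency in seconds. On a healthy filesystem
    both should be far below the 2.5-3.5 s per file reported in the thread.
    """
    samples = []
    for i in range(n):
        path = os.path.join(dirpath, f"probe_{i}.tmp")
        t0 = time.perf_counter()
        fd = os.open(path, os.O_CREAT | os.O_WRONLY)
        os.close(fd)
        samples.append(time.perf_counter() - t0)
        os.unlink(path)  # clean up so the probe doesn't grow the directory
    return max(samples), sum(samples) / len(samples)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        worst, avg = time_creates(d)
        print(f"worst: {worst * 1000:.3f} ms, average: {avg * 1000:.3f} ms")
```

If the probe is fast in a fresh directory but slow in one of the 5-10 million entry directories, that points at directory btree insert cost; if it is slow everywhere, look at the device or free space instead.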

Eric is an expert on this, much more knowledgeable than I am.  And somehow
I missed the 5-10 million files per directory.  You may have multiple
issues here adding up to large delays.  In addition to the steps Eric
recommends, it can't hurt to go ahead and take a look at the free space
map.  Depending on how the filesystem has aged, this could be a factor,
e.g. if it was 90%+ full at one time and then had lots of files deleted.

# xfs_db -r -c freesp /dev/[device]
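The freesp command prints a histogram of free extents by size; heavily aged filesystems show most free space in small extents. A rough way to summarize that output (a sketch, assuming the usual five-column from/to/extents/blocks/pct layout; the sample numbers below are hypothetical, not from this thread):

```python
def parse_freesp(text):
    """Parse xfs_db 'freesp' histogram lines into (from, to, extents, blocks) rows.

    Assumes the usual five-column output: from, to, extents, blocks, pct.
    Header and blank lines are skipped.
    """
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[0].isdigit():
            rows.append(tuple(int(p) for p in parts[:4]))
    return rows

# Hypothetical output excerpt; real numbers come from your filesystem.
sample = """\
   from      to extents  blocks    pct
      1       1   40000   40000  12.50
      2       3   30000   70000  21.88
      4       7   20000  100000  31.25
      8      15    5000  110000  34.38
"""

rows = parse_freesp(sample)
total = sum(r[3] for r in rows)
small = sum(r[3] for r in rows if r[1] <= 3)  # blocks in extents of <=3 blocks
print(f"{small / total:.0%} of free space is in extents of 3 blocks or fewer")
```

A large share of free space sitting in tiny extents means new files get allocated from scattered fragments, which compounds the per-create cost on top of the big-directory btree overhead.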

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

