public inbox for linux-xfs@vger.kernel.org
From: Brian Foster <bfoster@redhat.com>
To: Carsten Aulbert <Carsten.Aulbert@aei.mpg.de>
Cc: xfs@oss.sgi.com
Subject: Re: extremely slow file creation/deletion after xfs ran full
Date: Mon, 12 Jan 2015 11:37:50 -0500	[thread overview]
Message-ID: <20150112163749.GE25944@bfoster.bfoster> (raw)
In-Reply-To: <54B3F19D.6030307@aei.mpg.de>

On Mon, Jan 12, 2015 at 05:09:01PM +0100, Carsten Aulbert wrote:
> Hi Brian
> 
> On 01/12/2015 04:52 PM, Brian Foster wrote:
> > I can't see any symbols associated with the perf output. I suspect
> > because I'm not running on your kernel. It might be better to run 'perf
> > report -g' and copy/paste the stack trace for some of the larger
> > consumers.
> > 
> 
> Sorry, I rarely need to use perf and of course forgot that the
> intermediate output is tightly coupled to the running kernel. Attaching
> the output of perf report -g here.

It does look like we're spending most of the time down in
xfs_dialloc_ag(), which is the algorithm that searches the inode btree
of a particular AG for a free inode when we know that existing records
have some free.
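To illustrate the search described above, here is a simplified,
hypothetical model (not kernel code; record layout and names are invented
for illustration) of walking an AG's inode btree records looking for one
with free inodes. Each record covers a chunk of inodes and tracks a
freecount plus a free bitmask; when most records are fully allocated, the
allocator can burn a lot of time skipping past them:

```python
# Simplified, hypothetical model of searching an AG's inode btree for a
# free inode. Each record covers a chunk of 64 inodes; (start_ino,
# freecount, free_mask) mimics an inobt record. Without the free-inode
# btree (finobt), the walk may visit many fully-allocated records.

def find_free_inode(records):
    """records: list of (start_ino, freecount, free_mask) tuples,
    sorted by start_ino."""
    for start_ino, freecount, free_mask in records:
        if freecount == 0:
            continue  # fully allocated chunk, keep searching
        # pick the lowest set bit in the free mask
        bit = (free_mask & -free_mask).bit_length() - 1
        return start_ino + bit
    return None  # no free inode in this AG

# Example: two full chunks, then one with the inode at offset 3 free
recs = [(0, 0, 0), (64, 0, 0), (128, 1, 0b1000)]
print(find_free_inode(recs))  # -> 131
```

The real on-disk record format differs (it also carries a hole mask on
sparse-inode filesystems), but the shape of the search is the same.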

> > 
> > Sounds good. FWIW, something like the following should tell us how many
> > free inodes are available in each ag, and thus whether we have to search
> > for free inodes in existing records rather than allocate new ones:
> > 
> > for i in $(seq 0 15); do
> > 	xfs_db -c "agi $i" -c "p freecount" <dev>
> > done
> > 
> Another metric :)
> 
> freecount = 53795884
> freecount = 251
> freecount = 45
> freecount = 381
> freecount = 11009
> freecount = 6748
> freecount = 663
> freecount = 595
> freecount = 693
> freecount = 9089
> freecount = 37122
> freecount = 2657
> freecount = 60497
> freecount = 1790275
> freecount = 54544
> 
> That looks... not really uniform to me.
> 

No, but it does show that there are a bunch of free inodes scattered
throughout the existing records in most of the AGs. The finobt should
definitely help avoid the allocation latency when this occurs.
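A rough sketch of why the finobt helps here (hypothetical data, not
kernel code): it indexes only the inobt records that still have free
inodes, so allocation can take the first indexed record instead of
scanning every record in the AG:

```python
# Hypothetical illustration: (start_ino, freecount) per inode chunk.
inobt = [(0, 0), (64, 0), (128, 1), (192, 0), (256, 2)]

# Without finobt: linear walk over all records until one has free inodes
scanned = 0
for start, freecount in inobt:
    scanned += 1
    if freecount:
        break
print(scanned)  # records visited: 3

# With finobt: only records with freecount > 0 are indexed, so the
# first entry is immediately usable
finobt = [(s, f) for s, f in inobt if f]
print(finobt[0][0])  # first candidate chunk: 128
```

With only a handful of records the difference is trivial, but with ~53m
free inodes scattered across an AG's records the avoided scanning adds up.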

It is interesting that you have so many more free inodes in ag 0 (~53m,
as opposed to hundreds or thousands in the others). What does 'p count'
show for each ag? Was this fs grown to its current size over time?

Brian

> Cheers
> 
> Carsten
> 
> 



_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

  reply	other threads:[~2015-01-12 16:38 UTC|newest]

Thread overview: 14+ messages
2015-01-12  8:36 extremely slow file creation/deletion after xfs ran full Carsten Aulbert
2015-01-12 12:44 ` Brian Foster
2015-01-12 13:30 ` Carsten Aulbert
2015-01-12 15:52   ` Brian Foster
2015-01-12 16:09     ` Carsten Aulbert
2015-01-12 16:37       ` Brian Foster [this message]
2015-01-12 17:33         ` Carsten Aulbert
2015-01-13 20:06           ` Stan Hoeppner
2015-01-13 20:13             ` Carsten Aulbert
2015-01-13 20:43               ` Stan Hoeppner
2015-01-14  6:07                 ` Carsten Aulbert
2015-01-13 20:33           ` Dave Chinner
2015-01-14  6:12             ` Carsten Aulbert
2015-01-16 15:35               ` Carlos Maiolino
