public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Mark Lord <kernel@teksavvy.com>
Cc: Linux Kernel <linux-kernel@vger.kernel.org>,
	xfs@oss.sgi.com, Christoph Hellwig <hch@infradead.org>,
	Justin Piszcz <jpiszcz@lucidpixels.com>,
	Alex Elder <aelder@sgi.com>,
	Stan Hoeppner <stan@hardwarefreak.com>
Subject: Re: xfs: very slow after mount, very slow at umount
Date: Sat, 29 Jan 2011 10:58:47 +1100	[thread overview]
Message-ID: <20110128235847.GY21311@dastard> (raw)
In-Reply-To: <4D42D39E.4080304@teksavvy.com>

On Fri, Jan 28, 2011 at 09:33:02AM -0500, Mark Lord wrote:
> On 11-01-28 02:31 AM, Dave Chinner wrote:
> >
> > A simple google search turns up discussions like this:
> > 
> > http://oss.sgi.com/archives/xfs/2009-01/msg01161.html
> 
> "in the long term we still expect fragmentation to degrade the performance of
> XFS file systems"

"so we intend to add an on-line file system defragmentation utility
to optimize the file system in the future"

You are quoting from the wrong link - that's from the 1996
whitepaper.  And sure, at the time that was written, nobody had any
real experience with long term aging of XFS filesystems so it was
still a guess at that point. XFS has had that online defragmentation
utility since 1998, IIRC, even though in most cases it is
unnecessary to use it.

> Other than that, no hints there about how changing agcount affects things.

If the reason given in the whitepaper for multiple AGs (i.e. they
are for increasing the concurrency of allocation) doesn't help you
understand why you'd want to increase the number of AGs in the
filesystem, then you haven't really thought about what you read.

As it is, from the same google search that found the above link
as #1 hit, this was #6:

http://oss.sgi.com/archives/xfs/2010-11/msg00497.html

| > AG count has a
| > direct relationship to the storage hardware, not the number of CPUs
| >  (cores) in the system
|
| Actually, I used 16 AGs because it's twice the number of CPU cores
| and I want to make sure that CPU parallel workloads (e.g. make -j 8)
| don't serialise on AG locks during allocation. IOWs, I laid it out
| that way precisely because of the number of CPUs in the system...
| 
| And to point out the not-so-obvious, this is the _default layout_
| that mkfs.xfs in the debian squeeze installer came up with. IOWs,
| mkfs.xfs did exactly what I wanted without me having to tweak
| _anything_.
| 
[...]
| 
| In that case, you are right. Single spindle SRDs go backwards in
| performance pretty quickly once you go over 4 AGs...

It seems to me that you haven't really done much looking for
information; there's lots of relevant advice in xfs mailing list
archives...

(and before you ask - SRD == Spinning Rust Disk)
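To make the AG-lock point concrete: the serialisation being described can be sketched with a toy model (hypothetical Python, not XFS code) in which each AG is a mutex and every allocation must hold exactly one AG lock for the duration of its critical section. With fewer AGs than allocating threads the workload serialises; with at least as many AGs as threads, allocations proceed in parallel:

```python
# Toy model of AG-lock contention during allocation.
# Hypothetical illustration only -- this is not how XFS picks AGs,
# just a demonstration of why agcount bounds allocation concurrency.
import threading
import time

def run_workload(ag_count: int, workers: int, allocs_per_worker: int = 50) -> float:
    """Time `workers` threads each doing `allocs_per_worker` allocations,
    where every allocation must hold one of `ag_count` AG locks."""
    ag_locks = [threading.Lock() for _ in range(ag_count)]

    def worker(wid: int) -> None:
        for i in range(allocs_per_worker):
            # Spread allocations across AGs by worker id (a simple rotor).
            lock = ag_locks[(wid + i) % ag_count]
            with lock:
                time.sleep(0.001)  # stand-in for allocator critical section

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # 8 concurrent allocators (think "make -j 8") on 1 AG vs 16 AGs.
    print(f"agcount=1 : {run_workload(ag_count=1,  workers=8):.3f}s")
    print(f"agcount=16: {run_workload(ag_count=16, workers=8):.3f}s")
```

On a typical run the single-AG case takes several times as long as the 16-AG case, because every allocation queues on the one lock; that is exactly the serialisation the "twice the number of CPU cores" layout quoted above is meant to avoid.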

> > Configuring XFS filesystems for optimal performance has always been
> > a black art because it requires you to understand your storage, your
> > application workload(s) and XFS from the ground up.  Most people
> > can't even tick one of those boxes, let alone all three....
> 
> Well, I've got 2/3 of those down just fine, thanks.
> But it's the "XFS" part that is still the "black art" part,
> because so little is written about *how* it works
> (as opposed to how it is laid out on disk).

If you want to know exactly how it works, there's plenty of code to
read. I know you're going to call that a cop-out, but I've got more
important things to do than document 20,000 lines of allocation
code just for you.

In a world of infinite resources then everything would be documented
just the way you want, but we don't have infinite resources so it
remains documented by the code that implements it.  However, if you
want to go and understand it and document it all for us, then we'll
happily take the patches. :)

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
