public inbox for linux-xfs@vger.kernel.org
From: David Chinner <dgc@sgi.com>
To: Andi Kleen <ak@suse.de>
Cc: David Chinner <dgc@sgi.com>, xfs@oss.sgi.com
Subject: Re: XFS thread inflation in 2.6.23rc
Date: Thu, 9 Aug 2007 21:03:22 +1000	[thread overview]
Message-ID: <20070809110322.GA12413810@sgi.com> (raw)
In-Reply-To: <200708081526.06860.ak@suse.de>

On Wed, Aug 08, 2007 at 03:26:06PM +0200, Andi Kleen wrote:
> On Wednesday 08 August 2007 15:14:04 David Chinner wrote:
> 
> > Memory allocation failure + dirty transaction == filesystem shutdown.
> 
> You mean if the workqueue creation would fail? 

Yeah.

> Surely not having the MRU cache is not a catastrophe and would
> allow the transaction to commit anyways? 

Right now the only errors that come from filestream association
are fatal errors. i.e. they should force a shutdown if the transaction
is dirty. One of these errors is already an ENOMEM.

Now we'd have a case where we have an "error that is not an error"
and would need to handle it specially. That gets complex, and
complexity is something I try to avoid if at all possible.

> The other alternative would be to start it when a directory with 
> the flag is first seen. That should be before any transactions.

On lookup? No thanks.

On first create? Possible, but it's still painful and it introduces
overhead into every single create operation.

> > > > Besides, what's the point of having nice constructs like dedicated
> > > > workqueues
> > > It's a resource that shouldn't be overused.
> > A workqueue + thread uses, what, 10-15k of memory? That's the cost of about
> > 10 cached inodes. It is insignificant...
> 
> A little bloat here and a little bloat there and soon we're talking
> about serious memory. 
>
> e.g. on a dual core box in a standard configuration we're going towards
> ~50 kernel threads out of the box now and that's just too much IMNSHO.

My idle desktop machine has 180 processes running on it, 60 of them
kernel threads. It doesn't concern me one bit.

Andi, you are complaining to the wrong person about thread counts. I
live in the world of excessive parallelism and multithreaded I/O. I
regularly see 4-8p boxes running hundreds to thousands of I/O
threads on a single filesystem. These are the workloads XFS is
designed for and we optimise for. A single extra thread is noise....

> > Hmmm. I guess you are really not going to like the patch I
> > have that moves the AIL pushing to a new thread to solve
> > some of the scalability issues in the transaction subsystem......
> 
> Per CPU or single?

Single.

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

Thread overview: 9+ messages
2007-08-08 10:40 XFS thread inflation in 2.6.23rc Andi Kleen
2007-08-08 12:13 ` David Chinner
2007-08-08 12:22   ` Andi Kleen
2007-08-08 13:14     ` David Chinner
2007-08-08 13:26       ` Andi Kleen
2007-08-09 11:03         ` David Chinner [this message]
2007-08-10 19:34           ` Eric Sandeen
2007-08-10 23:49             ` David Chinner
2007-08-11  1:21               ` Eric Sandeen
