From: Paul Jackson <pj@sgi.com>
To: Andi Kleen <ak@suse.de>
Cc: clameter@engr.sgi.com, dgc@sgi.com, steiner@sgi.com,
Simon.Derr@bull.net, linux-kernel@vger.kernel.org,
clameter@sgi.com
Subject: Re: [PATCH 01/02] cpuset memory spread slab cache filesys
Date: Wed, 1 Mar 2006 13:19:10 -0800
Message-ID: <20060301131910.beb949be.pj@sgi.com>
In-Reply-To: <200603012159.42273.ak@suse.de>
> > No - having a single cpuset is the fastest path. All tasks
> > are in that root cpuset in that case, and all nodes allowed.
>
> Faster than no cpuset?
If CONFIG_CPUSETS is enabled (which I expect is likely to become the
norm for most distros -- though you would know better than I) then:
There is no such case as "no cpuset" !!
The minimal, fastest case is one root cpuset holding all tasks.
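A quick way to see that on such a kernel every task is already in a
cpuset: /proc/<pid>/cpuset names the cpuset a task lives in, and on a
box where no one has set anything up it reads simply "/", the root
cpuset.  A little sketch (the proc path is the one documented in
Documentation/cpusets.txt):

    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        FILE *f = fopen("/proc/self/cpuset", "r");

        if (!f) {
            perror("fopen /proc/self/cpuset");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("this task's cpuset: %s", buf);  /* "/" here */
        fclose(f);
        return 0;
    }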
> If something is a good default it shouldn't need user space
> configuration at all imho. Only the "weird" cases should.
So are you just saying we got the default backwards?
Well ... I left the default for memory spreading of these inode slab
caches as it was: not spread (preferring node-local allocation).
I did that because I was not aware that this default should be changed
for most systems.  I tend to leave defaults as they are, unless I have
good reason to change them.
But for the SGI systems I care about, I'd prefer the default to be
spreading them.
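To spell out what "spread" versus "node local" means when one of these
inode slabs grows, the policy is roughly this (just a sketch of the
idea -- pick_slab_node() is a made-up name for illustration;
cpuset_do_slab_mem_spread() and cpuset_mem_spread_node() are the
helpers this series adds, though the real call sites in the patch look
a bit different):

    /*
     * Which node should back a new slab page for one of these inode
     * caches?  With spreading off (the current default) we stay node
     * local; with memory_spread_slab turned on in the task's cpuset,
     * the node is picked round-robin over that task's mems_allowed.
     */
    static int pick_slab_node(void)
    {
        if (cpuset_do_slab_mem_spread())
            return cpuset_mem_spread_node();
        return numa_node_id();
    }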
If you think it would be better to change this default, now that the
mechanism is in place to support spreading these slabs, then I could
certainly go along with that.
Then your systems would not have to do anything in user space, unless
they wanted to disable spreading these slabs (which of course they
could easily do using cpusets ;).
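For the record, the user space side is just one file write per cpuset.
Assuming the cpuset filesystem is mounted at /dev/cpuset (the
convention in Documentation/cpusets.txt) and the control file is named
memory_spread_slab as in this patch, turning spreading off (or back on)
for the root cpuset is no more than:

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        /* "on" enables spreading; anything else goes back to node
         * local.  Applies to tasks in the root cpuset. */
        const char *val = (argc > 1 && !strcmp(argv[1], "on")) ? "1" : "0";
        FILE *f = fopen("/dev/cpuset/memory_spread_slab", "w");

        if (!f) {
            perror("memory_spread_slab (cpuset fs mounted?)");
            return 1;
        }
        fputs(val, f);
        fclose(f);
        return 0;
    }

Or, of course, just an echo from a shell once the cpuset fs is mounted.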
Should we change the default to enable this spreading?
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401
Thread overview: 26+ messages
2006-02-27 7:02 [PATCH 01/02] cpuset memory spread slab cache filesys Paul Jackson
2006-02-27 7:02 ` [PATCH 02/02] cpuset memory spread slab cache format Paul Jackson
2006-02-27 19:34 ` [PATCH 01/02] cpuset memory spread slab cache filesys Andi Kleen
2006-02-27 20:16 ` Paul Jackson
2006-02-27 20:36 ` Christoph Lameter
2006-02-27 20:49 ` Andi Kleen
2006-02-27 20:56 ` Christoph Lameter
2006-02-27 21:02 ` Andi Kleen
2006-02-27 22:14 ` Christoph Lameter
2006-02-27 22:39 ` Andi Kleen
2006-02-27 23:13 ` Christoph Lameter
2006-02-28 1:56 ` Paul Jackson
2006-02-28 17:13 ` Andi Kleen
2006-03-01 18:27 ` Paul Jackson
2006-03-01 18:34 ` Andi Kleen
2006-03-01 18:38 ` Christoph Lameter
2006-03-01 18:58 ` Paul Jackson
2006-03-01 19:21 ` Andi Kleen
2006-03-01 20:53 ` Paul Jackson
2006-03-01 20:59 ` Andi Kleen
2006-03-01 21:19 ` Paul Jackson [this message]
2006-03-01 21:21 ` Andi Kleen
2006-03-01 22:20 ` Christoph Lameter
2006-03-01 22:52 ` Paul Jackson
2006-03-02 1:57 ` Andi Kleen
2006-03-02 14:38 ` Christoph Lameter