From: Paul Jackson <pj@sgi.com>
To: Andrew Morton <akpm@osdl.org>
Cc: dgc@sgi.com, steiner@sgi.com, Simon.Derr@bull.net, ak@suse.de,
linux-kernel@vger.kernel.org, clameter@sgi.com
Subject: Re: [PATCH 1/5] cpuset memory spread basic implementation
Date: Mon, 6 Feb 2006 01:32:27 -0800 [thread overview]
Message-ID: <20060206013227.2407cf8c.pj@sgi.com> (raw)
In-Reply-To: <20060205230816.4ae6b6e2.akpm@osdl.org>
Andrew wrote:
> Well I agree.
Good.
> And I think that the only way we'll get peak performance for
> an acceptably broad range of applications is to provide many fine-grained
> controls and the appropriate documentation and instrumentation to help
> developers and administrators use those controls.
>
> We're all on the same page here. I'm questioning whether slab and
> pagecache should be inextricably lumped together though.
They certainly don't need to be lumped. I just don't go about
creating additional mechanism or apparatus until I smell the need.
(Well, sometimes I do -- too much fun. ;)
When Andrew Morton, who has far more history with this code than I,
recommends such additional mechanism, that's all the smelling I need.
How fine grained would you recommend, Andrew?
Is page vs slab cache the appropriate level of granularity?
> Is it possible to integrate the slab and pagecache allocation policies more
> cleanly into a process's mempolicy? Right now, MPOL_* are disjoint.
>
> (Why is the spreading policy part of cpusets at all? Shouldn't it be part
> of the mempolicy layer?)
The NUMA mempolicy code handles per-task, task internal memory placement
policy, and the cpuset code handles cpuset-wide cpu and memory placement
policy.
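To make that split concrete, here is a sketch of the cpuset side, using
the memory_spread_page and memory_spread_slab flags as proposed in this
patch series. This is illustrative only: it assumes the cpuset
pseudo-filesystem is mounted at /dev/cpuset, and the cpuset name
"bigjob" and the cpu/node ranges are made up for the example.

```shell
# Illustrative only: assumes the cpuset filesystem is mounted at
# /dev/cpuset and that the memory_spread_page / memory_spread_slab
# flags from this patch series are present.

mount -t cpuset none /dev/cpuset     # mount the cpuset filesystem
mkdir /dev/cpuset/bigjob             # create a cpuset for the job
echo 0-3 > /dev/cpuset/bigjob/cpus   # CPUs this job may use
echo 0-1 > /dev/cpuset/bigjob/mems   # memory nodes this job may use

# Ask the kernel to spread this job's page cache and slab cache
# allocations evenly over the job's memory nodes, instead of
# placing them node-local:
echo 1 > /dev/cpuset/bigjob/memory_spread_page
echo 1 > /dev/cpuset/bigjob/memory_spread_slab

echo $$ > /dev/cpuset/bigjob/tasks   # move this shell into the cpuset
```

The point is that the administrator or batch scheduler flips these
per-cpuset flags; the tasks in the job need not be changed at all.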
In actual usage, spreading the kernel caches of a job is very much a
decision that is made per-job(*), by the system administrator or batch
scheduler, not by the application coder. The application may well
be -very- aware of the placement of its data pages in user address
space, and to manage this will use calls such as mbind and
set_mempolicy, in addition to using node-local placement (arranging to
fault in each page from a thread running on the node that is to receive
that page). The application has no interest in micromanaging the
kernel's placement of page and slab caches, other than choosing between
node-local and cpuset spread strategies.
(*) Actually, made per-cpuset, not per-job. But where this matters,
that tends to be the same thing.
--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.925.600.0401