From: Andi Kleen <ak@suse.de>
To: Ingo Molnar <mingo@elte.hu>
Cc: steiner@sgi.com, Paul Jackson <pj@sgi.com>,
clameter@engr.sgi.com, akpm@osdl.org, dgc@sgi.com,
Simon.Derr@bull.net, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/5] cpuset memory spread basic implementation
Date: Tue, 7 Feb 2006 15:23:19 +0100
Message-ID: <200602071523.20174.ak@suse.de>
In-Reply-To: <20060207141141.GA14706@elte.hu>
On Tuesday 07 February 2006 15:11, Ingo Molnar wrote:
>
> * Andi Kleen <ak@suse.de> wrote:
>
> > I meant it's not that big an issue if it's remote, but it's bad if it
> > fills up the local node.
>
> are you sure this is not some older VM issue?
Unless you implement page migration for all caches, it's still there.
The only way to get rid of caches on a node currently is to throw them
away. And refetching them from disk is quite costly.
> Unless it's a fundamental
> property of NUMA systems, it would be bad to factor in some VM artifact
> into the caching design.
Why not? It has to work with the real existing VM, not some imaginary perfect
one.
> > Basically you have to consider the frequency of access:
> >
> > Mapped memory is very frequently accessed. For it memory placement is
> > really important. Optimizing it at the cost of everything else is a
> > good default strategy
> >
> > File cache is much less frequently accessed (most programs buffer
> > read/write well) and when it is accessed it is using functions that
> > are relatively latency tolerant (kernel memcpy). So memory placement
> > is much less important here.
> >
> > And d/inode are also very infrequently accessed compared to local
> > memory, so the occasionally additional latency is better than
> > competing too much with local memory allocation.
>
> Most pagecache pages are clean,
... unless you've just written a lot of data.
> and it's easy and fast to zap a clean
> page when a new anonymous page needs space. So i dont really see why the
> pagecache is such a big issue - it should in essence be invisible to the
> rest of the VM. (barring the extreme case of lots of dirty pages in the
> pagecache) What am i missing?
The d/icaches, for one, don't work this way. Do a find / and watch the results
on your local node.
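On a current kernel the effect is easy to observe; here is a rough sketch
(reading SReclaimable, which after a tree walk is dominated by dentries and
inodes — the per-node sysfs path is an assumption about the topology, with a
/proc/meminfo fallback for non-NUMA setups):

```shell
# Print reclaimable slab (mostly dentry/inode caches) before and after
# a filesystem walk; fall back to the global counter if node0 sysfs
# topology is not present.
show() {
    grep SReclaimable /sys/devices/system/node/node0/meminfo 2>/dev/null \
        || grep SReclaimable /proc/meminfo
}
show
find /etc -xdev > /dev/null 2>&1   # populate the d/i caches
show
```

The second reading grows with the number of directory entries touched, and
that growth lands on the node doing the walk.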
And in practice your assumption that everything in the page cache is clean and
nice is also often not true.
> > > i also mentioned software-based clusters in the previous mail, so it's
> > > not only about big systems. Caching attributes are very much relevant
> > > there. Tightly integrated clusters can be considered NUMA systems with a
> > > NUMA factor of 1000 or so (or worse).
> >
> > To be honest I don't think systems with such a NUMA factor will ever
> > work well in the general case. So I wouldn't recommend considering
> > them much if at all in your design thoughts. The result would likely
> > not be a good balanced design.
>
> loosely coupled clusters do seem to work quite well, since the
> overwhelming majority of computing jobs tend to deal with easily
> partitionable workloads.
Yes, but with message passing and without any kind of shared memory.
> Making clusters more seamless via software
> (a'ka OpenMosix) is still quite tempting i think.
Ok we agree on that then. Great.
-Andi
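For reference, the policy being debated in this thread was exposed as per-cpuset
flags; the configuration sketch below shows how they might be toggled on a
legacy (v1) cpuset hierarchy. The mount point, CPU/node ranges, and cpuset name
are assumptions, and the commands need root:

```shell
# Illustrative only: make tasks in the "batch" cpuset spread page cache
# and slab (d/i cache) allocations evenly over the cpuset's nodes
# instead of preferring the faulting node.
mkdir -p /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset
mkdir -p /dev/cpuset/batch
echo 0-3 > /dev/cpuset/batch/cpuset.cpus                # allowed CPUs (assumed)
echo 0-3 > /dev/cpuset/batch/cpuset.mems                # allowed memory nodes
echo 1   > /dev/cpuset/batch/cpuset.memory_spread_page  # spread page cache
echo 1   > /dev/cpuset/batch/cpuset.memory_spread_slab  # spread d/i caches
echo $$  > /dev/cpuset/batch/tasks                      # move this shell in
```

Both cpuset.cpus and cpuset.mems must be populated before a task can be moved
into the cpuset.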