From: ebiederm@xmission.com (Eric W. Biederman)
To: Andi Kleen <ak@suse.de>
Cc: Linus Torvalds <torvalds@transmeta.com>,
Andrew Morton <akpm@zip.com.au>,
linux-kernel@vger.kernel.org,
Michael Hohnbaum <hohnbaum@us.ibm.com>,
Martin Bligh <mjbligh@us.ibm.com>,
Paul McKenney <Paul.McKenney@us.ibm.com>
Subject: Re: [patch] Simple Topology API
Date: 14 Jul 2002 20:34:51 -0600
Message-ID: <m1k7nxpvlg.fsf@frodo.biederman.org>
In-Reply-To: <20020714214334.A16892@wotan.suse.de>
Andi Kleen <ak@suse.de> writes:
>
> At least on Hammer the latency difference is small enough that
> caring about the overall bandwidth makes more sense.
I agree. I will have to look closer, but unless there is more
juice in HyperTransport than I have seen, it is going to become
one of the architectural bottlenecks of the Hammer.
Currently you get 1600MB/s in a single direction. Not too bad.
But when the memory controllers get out to dual-channel DDR-II 400,
the local bandwidth to that memory is 6400MB/s, while the bandwidth
to remote memory is 1600MB/s, or 3200MB/s if reads are as common as
writes.
So I suspect bandwidth-intensive applications will really benefit
from local memory optimization on the Hammer. I can buy that the
latency is negligible; the fact that the links don't appear to scale
in bandwidth as well as the connection to memory may be the bigger
issue.
> > And then you associate that zone-list with the process, and use that
> > zone-list for all process allocations.
>
> That's the basic idea sure for normal allocations from applications
> that do not care much about NUMA.
>
> But "numa aware" applications want to do other things like:
> - put some memory area into every node (e.g. for the numa equivalent of
> per CPU data in the kernel)
> - "stripe" a shared memory segment over all available memory subsystems
> (e.g. to use memory bandwidth fully if you know your interconnect can
> take it; that's e.g. the case on the Hammer)
The latter I can quite believe. Even dual-channel PC2100 can
exceed your interprocessor bandwidth.
And yes I have measured 2000MB/s memory copy with an Athlon MP and
PC2100 memory.
Eric