From: Mark Gross <mgross@linux.intel.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Balbir Singh <balbir@in.ibm.com>, Mel Gorman <mel@skynet.ie>,
npiggin@suse.de, clameter@engr.sgi.com, mingo@elte.hu,
jschopp@austin.ibm.com, arjan@infradead.org, mbligh@mbligh.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: The performance and behaviour of the anti-fragmentation related patches
Date: Fri, 2 Mar 2007 10:45:29 -0800
Message-ID: <20070302184529.GA8761@linux.intel.com>
In-Reply-To: <Pine.LNX.4.64.0703020903190.3953@woody.linux-foundation.org>
On Fri, Mar 02, 2007 at 09:16:17AM -0800, Linus Torvalds wrote:
>
>
> On Fri, 2 Mar 2007, Mark Gross wrote:
> > >
> > > Yes, the same issues exist for other DRAM forms too, but to a *much*
> > > smaller degree.
> >
> > DDR3-1333 may be better than FBDIMM's but don't count on it being much
> > better.
>
> Hey, fair enough. But it's not a problem (and it doesn't have a solution)
> today. I'm not sure it's going to have a solution tomorrow either.
>
> > > Also, IN PRACTICE you're never ever going to see this anyway. Almost
> > > everybody wants bank interleaving, because it's a huge performance win on
> > > many loads. That, in turn, means that your memory will be spread out over
> > > multiple DIMM's even for a single page, much less any bigger area.
> >
> > 4-way interleave across banks on systems may not be as common as you may
> > think for future chip sets. 2-way interleave across DIMMs within a bank
> > will stay.
>
> .. and think about a realistic future.
>
> EVERYBODY will do on-die memory controllers. Yes, Intel doesn't do it
> today, but in the one- to two-year timeframe even Intel will.
True.
>
> What does that mean? It means that in bigger systems, you will no longer
> even *have* 8 or 16 banks where turning off a few banks makes sense.
> You'll quite often have just a few DIMM's per die, because that's what you
> want for latency. Then you'll have CSI or HT or another interconnect.
>
> And with a few DIMM's per die, you're back where even just 2-way
> interleaving basically means that in order to turn off your DIMM, you
> probably need to remove HALF the memory for that CPU.
I think there will be more than just 2 DIMMs per CPU socket on systems
that care about this type of capability.
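As a rough illustration of the interleaving point, here is a toy model
(the 64-byte granule and the simple alternating order are assumptions
for the sketch; real memory controllers differ): with 2-way interleave,
consecutive granules alternate between the two DIMMs, so even a single
4 KiB page is backed by both, and neither DIMM can be idled without
first vacating the page.

#include <stdio.h>
#include <stdint.h>

/* Toy 2-way interleave model: consecutive 64-byte granules alternate
 * between DIMM 0 and DIMM 1.  Granule size and ordering are assumed
 * for illustration only. */
#define GRANULE_SHIFT 6   /* 64-byte interleave granule (assumed) */
#define WAYS          2   /* 2-way interleave across DIMMs */

static unsigned int dimm_of(uint64_t phys_addr)
{
	return (phys_addr >> GRANULE_SHIFT) % WAYS;
}

int main(void)
{
	/* Walk one 4 KiB page: the granules alternate across both DIMMs,
	 * so neither DIMM can be powered off without moving the page. */
	for (uint64_t addr = 0; addr < 4096; addr += 64)
		printf("addr %#6llx -> DIMM %u\n",
		       (unsigned long long)addr, dimm_of(addr));
	return 0;
}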
>
> In other words: TURNING OFF DIMM's IS A BEDTIME STORY FOR DIMWITTED
> CHILDREN.
It's very true that taking advantage of the first incarnations of this
type of thing will be limited to specific workloads you personally don't
care about, but it has applications and customers.
BTW, I hope we aren't talking past each other; there are low-power states
where the RAM contents are preserved.
>
> There are maybe a couple machines IN EXISTENCE TODAY that can do it. But
> nobody actually does it in practice, and nobody even knows if it's going
> to be viable (yes, DRAM takes energy, but trying to keep memory free will
> likely waste power *too*, and I doubt anybody has any real idea of how
> much any of this would actually help in practice).
>
> And I don't think that will change. See above. The future is *not* moving
> towards more and more DIMMS. Quite the reverse. On workstations, we are
> currently in the "one or two DIMM's per die". Do you really think that
> will change? Hell no. And in big servers, pretty much everybody agrees
> that we will move towards that, rather than away from it.
>
> So:
> - forget about turning DIMM's off. There is *no* actual data supporting
> the notion that it's a good idea today, and I seriously doubt you can
> really argue that it will be a good idea in five or ten years. It's a
> hardware hack for a hardware problem, and the problems are way too
> complex for us to solve in time for the solution to be relevant.
>
> - aim for NUMA memory allocation and turning off whole *nodes*. That's
> much more likely to be productive in the longer timeframe. And yes, we
> may well want to do memory compaction for that too, but I suspect that
> the issues are going to be different (ie the way to do it is to simply
> prefer certain nodes for certain allocations, and then try to keep the
> jobs that you know can be idle on other nodes)
We are doing the NUMA approach.
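For what it's worth, a minimal user-space sketch of that kind of policy
using libnuma looks like the following (the node number and buffer size
are arbitrary choices for illustration, not our actual setup): keep a
workload's allocations on one node so the other nodes stay unreferenced
and could, in principle, be left idle.

#include <numa.h>     /* libnuma; link with -lnuma */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t len = 16UL << 20;        /* 16 MiB example buffer */

	if (numa_available() < 0) {
		fprintf(stderr, "NUMA not supported on this system\n");
		return 1;
	}

	/* Allocate the buffer's pages on node 0 only (illustrative). */
	void *buf = numa_alloc_onnode(len, 0);
	if (!buf) {
		fprintf(stderr, "numa_alloc_onnode failed\n");
		return 1;
	}

	/* ... use buf: all pages are backed by node 0's memory ... */

	numa_free(buf, len);
	return 0;
}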
>
> Do you actually have real data supporting the notion that turning DIMM's
> off will be reasonable and worthwhile?
>
Yes, we have data from our internal and external customers showing that
this stuff is worthwhile for specific workloads that some people care
about. However, you need to understand that it is, by definition, marketing data.
--mgross