From: Dave Hansen <haveblue@us.ibm.com>
To: linux-ia64@vger.kernel.org
Subject: RE: [patch 0/4] ia64 SPARSEMEM
Date: Tue, 31 May 2005 21:58:22 +0000
Message-ID: <1117576702.20180.70.camel@localhost>
In-Reply-To: <20050523175031.GC2783@localhost.localdomain>
On Tue, 2005-05-31 at 14:41 -0700, Luck, Tony wrote:
> >* It has good tlb behavior.
> True ... definitely better than VIRTUAL_MEM_MAP. But what effect does
> this have on system level performance?
It slightly improves performance on everything I've run it on, at least
compared to discontigmem. That means a few ppc64 configurations, x86
summit, and NUMAQ.
> >* It is faster and has a lower icache footprint than existing
> > discontigmem implementations.
> Did I miss some benchmark results?
I've posted them a few times. The gain is somewhere in the 1-2% range on
NUMAQ; nothing substantial.
I can dig the results up again, but they're going to mean close to
nothing on your hardware. I'd suggest running it yourself, and seeing
exactly how it behaves.
> >* On a theoretical 16TB ppc64 system with 16MB sections, the overhead of
> > the mem_section[] table is 8MB.
> Back to the "somewhat sparse" arguments of point #1. In fact this theoretical
> system isn't "sparse" at all!
Well, the overhead is still 8MB, even if the system only has 32MB:
16MB@0 and 16MB@(1TB-16MB). That's pretty sparse.
In any case, I agree that the current code isn't optimal across all ia64
platforms. But, I don't think we're seriously tied to that single, flat
array. It's just the easiest way to do it for now.
> >Also, nothing seriously confines us to a flat array of mem_sections,
> >that's just the only implementation right now. The pagetables that are
> >walked in the TLB miss handler (for vmem_map[]) could just as easily be
> >a set of two-level mem_section tables that are walked in software. That
> >just adds an extra load to the pfn_to_page() path. Plus, if somebody
> ^^^^^^^^^^^^^^^^^^^^^^^
> >does this, all sparsemem architectures can benefit.
>
> What would the performance impacts of this extra load be? pfn_to_page()
> appears to be a pretty common operation.
On a normal, x86 flatmem system there's a single load to do
pfn_to_page() from *mem_map. With today's sparsemem, that goes to two
loads (page->flags and mem_section[section]). I haven't been able to
measure the effect of this extra load on any macro-benchmarks.
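To make the load counts concrete, here's a userspace toy model of the three
lookup schemes: flat mem_map (one dependent load), today's single-level
mem_section[] (two), and the hypothetical two-level table (three). The
structure and names are illustrative only; the real kernel encodes more into
section_mem_map and page->flags than this shows:

```c
#include <stddef.h>

struct page { unsigned long flags; };   /* stub */

#define SECTION_SHIFT      4            /* toy: 16 pages per section */
#define PAGES_PER_SECTION  (1UL << SECTION_SHIFT)
#define NR_SECTIONS        8
#define ROOT_SHIFT         2            /* toy: 4 sections per root entry */
#define SECTIONS_PER_ROOT  (1UL << ROOT_SHIFT)

static struct page mem_map[NR_SECTIONS * PAGES_PER_SECTION];

/* Flatmem: index straight off the mem_map base. */
static struct page *flat_pfn_to_page(unsigned long pfn)
{
    return &mem_map[pfn];
}

/* Sparsemem today: one extra load to fetch the section's map pointer. */
struct mem_section { struct page *section_mem_map; };
static struct mem_section mem_section_tbl[NR_SECTIONS];

static struct page *sparse_pfn_to_page(unsigned long pfn)
{
    struct mem_section *sec = &mem_section_tbl[pfn >> SECTION_SHIFT];
    return &sec->section_mem_map[pfn & (PAGES_PER_SECTION - 1)];
}

/* The two-level variant from the quote: one more load on the path. */
static struct mem_section *mem_section_root[NR_SECTIONS / SECTIONS_PER_ROOT];

static struct page *two_level_pfn_to_page(unsigned long pfn)
{
    unsigned long snr = pfn >> SECTION_SHIFT;
    struct mem_section *sec =
        &mem_section_root[snr >> ROOT_SHIFT][snr & (SECTIONS_PER_ROOT - 1)];
    return &sec->section_mem_map[pfn & (PAGES_PER_SECTION - 1)];
}

/* Populate the tables so all three agree (illustration only). */
static void init_tables(void)
{
    for (unsigned long s = 0; s < NR_SECTIONS; s++)
        mem_section_tbl[s].section_mem_map = &mem_map[s * PAGES_PER_SECTION];
    for (unsigned long r = 0; r < NR_SECTIONS / SECTIONS_PER_ROOT; r++)
        mem_section_root[r] = &mem_section_tbl[r * SECTIONS_PER_ROOT];
}
```

What differs between the three is just the number of dependent loads before
the final struct page dereference; each extra level costs one more.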
-- Dave
Thread overview: 26+ messages
2005-05-23 17:50 [patch 0/4] ia64 SPARSEMEM Bob Picco
2005-05-24 3:29 ` David Mosberger
2005-05-24 14:33 ` Bob Picco
2005-05-24 16:27 ` Bob Picco
2005-05-26 0:32 ` Luck, Tony
2005-05-26 20:09 ` David Mosberger
2005-05-26 20:54 ` Bob Picco
2005-05-26 21:02 ` Dave Hansen
2005-05-26 21:34 ` Luck, Tony
2005-05-26 21:44 ` Jack Steiner
2005-05-26 21:51 ` Bob Picco
2005-05-26 22:03 ` Luck, Tony
2005-05-26 22:04 ` Bob Picco
2005-05-27 5:14 ` Yasunori Goto
2005-05-27 10:35 ` Bob Picco
2005-05-27 16:23 ` David Mosberger
2005-05-27 22:04 ` Jack Steiner
2005-05-30 0:18 ` KAMEZAWA Hiroyuki
2005-05-31 17:55 ` Luck, Tony
2005-05-31 18:14 ` Dave Hansen
2005-05-31 18:15 ` Jack Steiner
2005-05-31 21:41 ` Luck, Tony
2005-05-31 21:58 ` Dave Hansen [this message]
2005-06-01 1:37 ` Bob Picco
2005-06-01 9:14 ` Andy Whitcroft
2005-06-01 22:48 ` David Mosberger