From: Matt Porter <mporter@kernel.crashing.org>
To: Wolfgang Grandegger <wolfgang.grandegger@bluewin.ch>
Cc: linuxppc-embedded@lists.linuxppc.org
Subject: Re: 405 TLB miss reduction
Date: Wed, 10 Dec 2003 09:03:27 -0700
Message-ID: <20031210090327.A18009@home.com>
In-Reply-To: <3FBD5348000912F8@mssbzhh-int.msg.bluewin.ch>; from wolfgang.grandegger@bluewin.ch on Wed, Dec 10, 2003 at 03:43:41PM +0100


On Wed, Dec 10, 2003 at 03:43:41PM +0100, Wolfgang Grandegger wrote:
>
> Hello,
>
> we are suffering from TLB misses on a 405GP processor, eating up to
> 10% of the CPU power when running our (rather big) application. We
> can regain a few percent by using the kernel option CONFIG_PIN_TLB
> but we are thinking about further kernel modifications to reduce
> TLB misses. What comes to mind is:
>
>  - using a kernel PAGE_SIZE of 8KB (instead of 4KB).
>  - using large-page TLB entries.
>
> Has anybody already investigated the effort or benefit of such
> changes, or does anybody know of other (simple) measures (apart
> from replacing the hardware)?

David Gibson and Paul M. implemented large-TLB kernel lowmem
support in 2.5/2.6 for 405.  It allows large TLB entries to be
loaded on kernel lowmem TLB misses.  This is better than
CONFIG_PIN_TLB since it covers all of your kernel lowmem system
memory rather than the fixed amount of memory that CONFIG_PIN_TLB
pins.
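
Roughly, the idea on a kernel lowmem miss is to install one big
entry covering the faulting address instead of a single 4KB page.
An untested sketch in C (illustrative only: write_tlb_entry() and
the TLB_* constants are made-up names, and the real code is the
assembler miss handlers in head_4xx.S):

    #define LARGE_PAGE_SIZE  (16 << 20)     /* 16MB, one of the 405 TLB sizes */
    #define LARGE_PAGE_MASK  (~(LARGE_PAGE_SIZE - 1))

    static void install_lowmem_entry(unsigned long ea)
    {
            unsigned long va = ea & LARGE_PAGE_MASK;
            unsigned long pa = va - KERNELBASE;  /* linear map: va = pa + KERNELBASE */

            /* one entry now covers 16MB of lowmem, so further
             * accesses in that region don't miss again */
            write_tlb_entry(va | TLB_SIZE_16M | TLB_VALID,
                            pa | TLB_EX | TLB_WR);
    }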

I've been thinking about enabling a variant of Andi Kleen's patch
to allow modules to be loaded into kernel lowmem space instead of
vmalloc space (to avoid the performance penalty of modular drivers).
This takes advantage of the large kernel lowmem 405 support above;
on 440, all kernel lowmem is already covered by a pinned TLB entry
for architectural reasons.
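
The module part could be as simple as overriding the arch's
module_alloc()/module_free() pair so module text and data come out
of the linear mapping.  Untested sketch of the idea (not Andi's
actual patch):

    #include <linux/slab.h>
    #include <linux/module.h>

    /* modules land in kmalloc (lowmem) space, covered by the
     * large/pinned kernel TLB entries, instead of vmalloc space */
    void *module_alloc(unsigned long size)
    {
            if (size == 0)
                    return NULL;
            return kmalloc(size, GFP_KERNEL);
    }

    void module_free(struct module *mod, void *region)
    {
            kfree(region);
    }

The catch is that large physically contiguous allocations can fail
on a fragmented system, which is presumably part of why modules go
to vmalloc space in the first place.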

I've also been thinking about dynamically using large TLB/PTE mappings
for ioremap on 405/440.
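
For ioremap, the check is basically an alignment test: if the
physical base and size line up on a large page boundary, build one
big mapping, otherwise fall back to 4KB PTEs.  Hand-waving sketch,
reusing LARGE_PAGE_SIZE from above (map_large_page() is a made-up
placeholder for the 40x/44x TLB setup):

    void __iomem *ioremap_large(unsigned long phys, unsigned long size)
    {
            /* both base and size must be large-page aligned */
            if ((phys | size) & (LARGE_PAGE_SIZE - 1))
                    return ioremap(phys, size);   /* plain 4KB PTEs */

            return map_large_page(phys, size);    /* one TLB entry per 16MB */
    }

This would help drivers that map big frame buffers or PCI windows
and then touch them constantly.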

In 2.6, there is hugetlb userspace infrastructure that could be enabled
for the large page sizes on 4xx.
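
From userspace that would look like the usual hugetlbfs dance:
mmap a file that lives on a mounted hugetlbfs (the mount point
below is just an example).  Sketch:

    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/mnt/huge/scratch", O_CREAT | O_RDWR, 0600);
            if (fd < 0)
                    return 1;

            /* length must be a multiple of the huge page size */
            void *p = mmap(0, 16 << 20, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            close(fd);
            return p == MAP_FAILED;
    }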

Allowing a compile time choice of default page size would also be useful.
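
That's mostly a matter of making PAGE_SHIFT configurable in
asm/page.h and auditing the places that assume 4KB.  Something
like this (the CONFIG_* names are invented here):

    #if defined(CONFIG_PPC_PAGE_SIZE_16K)
    #define PAGE_SHIFT      14
    #elif defined(CONFIG_PPC_PAGE_SIZE_8K)
    #define PAGE_SHIFT      13
    #else
    #define PAGE_SHIFT      12              /* 4KB default */
    #endif

    #define PAGE_SIZE       (1UL << PAGE_SHIFT)
    #define PAGE_MASK       (~(PAGE_SIZE - 1))

Bigger pages mean fewer misses for the same working set, at the
cost of more wasted memory per allocation.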

Basically, all of these can provide a performance advantage; which
ones actually help depends on what your embedded application is
doing.

-Matt

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/

Thread overview: 14+ messages
2003-12-10 14:43 405 TLB miss reduction Wolfgang Grandegger
2003-12-10 16:03 ` Matt Porter [this message]
2003-12-11  9:06   ` Wolfgang Grandegger
2003-12-11 15:46     ` Dan Malek
2003-12-11 17:45     ` Matt Porter
2003-12-12  9:50       ` Wolfgang Grandegger
2003-12-15 11:26       ` Joakim Tjernlund
2003-12-10 17:08 ` Dan Malek
2003-12-11 10:37   ` Wolfgang Grandegger
2003-12-11 16:48     ` Jon Masters
2003-12-11 16:56       ` Wolfgang Grandegger
2003-12-11 17:06         ` Jon Masters
2003-12-11 17:36           ` Matt Porter
2003-12-11 16:44 ` Jon Masters
