From: Josh Boyer <jwboyer@linux.vnet.ibm.com>
To: Grant Erickson <gerickson@nuovations.com>
Cc: linuxppc-dev@ozlabs.org
Subject: Re: [PATCH] PPC40x: Limit Allocable RAM During Early Mapping
Date: Thu, 30 Oct 2008 10:03:53 -0400	[thread overview]
Message-ID: <20081030100353.240e795b@zod.rchland.ibm.com> (raw)
In-Reply-To: <1225316474-29035-1-git-send-email-gerickson@nuovations.com>

On Wed, 29 Oct 2008 14:41:14 -0700
Grant Erickson <gerickson@nuovations.com> wrote:

> If the size of RAM is not an exact power of two, we may not have
> covered RAM in its entirety with large 16 MiB and 4 MiB
> pages. Consequently, restrict the top end of RAM currently allocable
> by updating '__initial_memory_limit_addr' so that calls to the LMB to
> allocate PTEs for "tail" coverage with normal-sized pages (or for
> other reasons) do not attempt to allocate outside the allowed range.
> 
> Signed-off-by: Grant Erickson <gerickson@nuovations.com>
> ---
> 
> This bug was discovered in the course of working on CONFIG_LOGBUFFER support
> (see http://ozlabs.org/pipermail/linuxppc-dev/2008-October/064685.html).
> However, the bug is triggered quite easily, independent of that feature,
> by passing a 'mem=' kernel command line option that limits memory to a
> size that is not an exact power of two.
> 
> For example, on the AMCC PowerPC 405EXr "Haleakala" board with 256 MiB
> of RAM, mmu_mapin_ram() normally covers RAM with precisely sixteen
> 16 MiB large pages. However, if a memory limit of 256 MiB - 20 KiB (as
> might be the case for CONFIG_LOGBUFFER) is imposed with
> "mem=268414976", then large pages cover only (16 MiB * 15) + (4 MiB *
> 3) = 252 MiB, leaving a 4 MiB - 20 KiB "tail" to be covered with
> normal 4 KiB pages via map_page().
> 
> Unfortunately, if __initial_memory_limit_addr is not updated from its
> initial value of 0x10000000 (256 MiB) to reflect what was actually
> mapped via mmu_mapin_ram(), the following happens during the "tail"
> mapping when the first PTE is allocated at 0xFFFA000 (rather than the
> desired 0xFBFF000):
> 
>     mapin_ram
>         mmu_mapin_ram
>         map_page
>             pte_alloc_kernel
>                 pte_alloc_one_kernel
>                     early_get_page
>                         lmb_alloc_base
>                     clear_page
>                         clear_pages
>                             dcbz    0,page  <-- BOOM!
> 
> a non-recoverable page fault.
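
As a quick sanity check of the coverage arithmetic in Grant's example,
here is a standalone sketch of the same greedy large-page fitting
(illustrative only, not kernel code):

    #include <stdio.h>

    #define KiB 1024UL
    #define MiB (1024UL * KiB)

    int main(void)
    {
        unsigned long s = 256 * MiB - 20 * KiB; /* mem=268414976 */
        unsigned long n16, n4;

        n16 = s / (16 * MiB);   /* 16 MiB large pages first */
        s -= n16 * 16 * MiB;
        n4 = s / (4 * MiB);     /* then 4 MiB large pages */
        s -= n4 * 4 * MiB;

        /* prints: 15 x 16 MiB + 3 x 4 MiB, tail 4076 KiB */
        printf("%lu x 16 MiB + %lu x 4 MiB, tail %lu KiB\n",
               n16, n4, s / KiB);
        return 0;
    }

The 4076 KiB tail is exactly the 4 MiB - 20 KiB that map_page() must
cover with normal 4 KiB pages.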
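Why the PTE page lands at 0xFFFA000 is visible in early_get_page();
the following is abridged from memory of the 2.6.28-era
arch/powerpc/mm/pgtable_32.c, so treat the details as approximate:

    /* arch/powerpc/mm/pgtable_32.c (abridged) */
    static void __init *early_get_page(void)
    {
        if (init_bootmem_done)
            return alloc_bootmem_pages(PAGE_SIZE);

        /* lmb_alloc_base() hands back the highest free block below
         * the given ceiling; a stale ceiling of 0x10000000 yields
         * the top page at 0xFFFA000, above the 0xFC00000 boundary
         * actually covered by large pages. */
        return __va(lmb_alloc_base(PAGE_SIZE, PAGE_SIZE,
                                   __initial_memory_limit_addr));
    }

clear_page() then runs dcbz on the virtual alias of that unmapped
page, hence the fault.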

Nice catch.  I was looking to see if 44x had the same problem, but I
don't think it does because we simply over-map DRAM there.  Does that
seem correct to you, or am I missing something on 44x that would cause
this same problem?

josh
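
For readers following along, the change Grant describes boils down to
clamping the early allocation ceiling to what mmu_mapin_ram() actually
mapped. A minimal sketch against the 2.6.28-era
arch/powerpc/mm/40x_mmu.c (the 16 MiB and 4 MiB mapping loops are
elided; names follow that file, but verify against the actual patch):

    unsigned long __init mmu_mapin_ram(void)
    {
        unsigned long v = KERNELBASE;
        phys_addr_t p = 0;
        unsigned long s = total_lowmem;
        unsigned long mapped;

        /* ... cover as much of lowmem as possible with 16 MiB and
         * then 4 MiB large pages, advancing v/p and shrinking s ... */

        mapped = total_lowmem - s;

        /* Restrict early LMB allocations to the region actually
         * covered above, so "tail" PTE pages are allocated from
         * mapped memory. */
        __initial_memory_limit_addr = memstart_addr + mapped;

        return mapped;
    }

With the ceiling lowered to 0xFC00000 in the example above, the first
tail PTE comes from 0xFBFF000, which dcbz can safely touch.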

