* Kernel space question?
@ 1999-10-29 13:05 Wohlgemuth, Jason
From: Wohlgemuth, Jason @ 1999-10-29 13:05 UTC (permalink / raw)
  To: 'linuxppc-dev@lists.linuxppc.org'


Just a quick question... When I was working to get Linux running on a custom
MPC860 board, it was necessary to move the IMMR to 0xF??????? in order to
avoid a kernel panic after the VM was running.  I understand the kernel
claims a certain range of memory as kernel space, and that when the IMMR was
at a lower address it could not be directly accessed.  Could someone explain
the exact size of kernel space and which range of addresses can safely be
accessed outside it?

Thanks in advance,
Jason



* Re: Kernel space question?
From: Dan Malek @ 1999-10-29 17:07 UTC (permalink / raw)
  To: Wohlgemuth, Jason
  Cc: 'linuxppc-dev@lists.linuxppc.org', linuxppc-embedded


Wohlgemuth, Jason wrote:


> Just a quick question... When I was working to get Linux running on a custom
> MPC860, it was necessary to move the IMMR to 0xF???????

This is the way it works on the 8xx, and it is somewhat board-specific...

A portion of the kernel virtual space is mapped 1:1 (virtual equal to
physical) early in the boot phase.  Any physical address mapped this way
must therefore fall within the kernel virtual space, and further in a
place that doesn't conflict with other kernel virtual addresses.
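
(As a concrete sketch of what the 1:1 mapping means in practice -- the
helper name below is hypothetical, not an actual kernel interface:)

    /* Hypothetical early-boot helper: map a region so that its kernel
     * virtual address equals its physical address.  Because the mapping
     * is 1:1, the physical address itself must not collide with the
     * kernel's own virtual range (0xcxxxxxxx) or the user range below
     * 0x80000000.
     */
    extern void early_identity_map(unsigned long phys, unsigned long size);

    static void map_early_io(void)
    {
            /* E.g., assuming the board has relocated the IMMR to
             * 0xF0000000; the size is a placeholder -- check the CPU
             * manual for the real internal map size. */
            early_identity_map(0xF0000000, 0x10000);
    }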

The user application virtual space consumes the first 2 Gbytes (0x00000000
up to 0x80000000).  The kernel virtual text starts at 0xc0000000, with
data following.  There is a "protection hole" between the end of kernel
data and the start of the kernel's dynamically allocated space, but that
space is still within 0xcxxxxxxx.

Obviously the kernel can't map any physical addresses 1:1 in these
ranges.
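
(Summarized as the usual PPC constants -- the values come from the text
above, but the macro names are just the conventional ones and may differ
a bit between kernel versions:)

    /* Kernel virtual layout described above */
    #define TASK_SIZE   0x80000000UL   /* user space: 0x00000000 - 0x7fffffff */
    #define KERNELBASE  0xc0000000UL   /* kernel text, then data */
    /* ... "protection hole" ... */
    /* ... dynamically allocated kernel mappings, still within 0xcxxxxxxx ... */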

Most of the 8xx boards cluster (some more than others :-) the board's
physical resources, like boot ROM, extra flash ROM, NVRAM and the IMMR,
into the top of the physical address space.  That made it easy (because
of the 8xx 8 Mbyte "page" option) to map all of this at 0xfxxxxxxx, and
it was also efficient, because you could cover all of this space with
minimal page table overhead.
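
(For reference, the IMMR relocation itself is just a write to the IMMR
special register, normally done by the boot rom or very early kernel
code.  A rough sketch -- the SPR number and the mtspr() wrapper are my
assumptions here, so check the 860 user's manual for your part:)

    #include <asm/processor.h>      /* for the mtspr() macro */

    #define SPRN_IMMR   638         /* IMMR special purpose register, 8xx */

    static void relocate_immr(void)
    {
            /* Only the upper (ISB) bits select the base of the internal
             * map; the low half of IMMR is read-only part/mask revision
             * information. */
            mtspr(SPRN_IMMR, 0xF0000000);
    }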

Once the kernel maps the minimum required address space (at least
the IMMR) early in the boot phase, the remainder of the mappings
come from the dynamically allocated space.
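
(Concretely, that dynamic path is just ioremap(); a minimal sketch, with
a placeholder physical base and size:)

    #include <asm/io.h>

    /* Map a device register block after the VM is up and running.
     * Returns the kernel virtual address, or NULL on failure.  The
     * physical base and size are placeholders for whatever the board
     * actually uses.
     */
    static volatile unsigned char *map_board_regs(void)
    {
            return (volatile unsigned char *) ioremap(0xF0000000, 0x10000);
    }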

Because of the flexibility of the 8xx memory controller and interface
units, my long-term plan is to remove the variety of board configurations
and simply program all 8xx processors to use the same address space.
During the early days it was less confusing to keep the typical board
address space, so that when you grabbed the user manuals the Linux map
matched the documentation.  Times change....



	-- Dan

