linuxppc-dev.lists.ozlabs.org archive mirror
* RE: [Fwd: Memory layout question]
@ 2004-05-25 14:17 Heater, Daniel (GE Infrastructure)
  2004-05-26  6:21 ` Oliver Korpilla
  0 siblings, 1 reply; 6+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-05-25 14:17 UTC (permalink / raw)
  To: Oliver Korpilla; +Cc: linuxppc-embedded


We're setting up a 2700 here to test with, so that will help.

> It's not done and over with virt_to_bus() etc.
>
> What we basically got here is a PCI configuration and
> portability issue.
>
> On the MVME2100, e.g., the PCI host bridge grabs I/O resource
> 0x80000000 - 0xFFFFFFFF.
>
> In the VMIC driver it requests an I/O memory resource, and a
> region of the I/O memory is awarded.
>
> In order to request a region being mapped by the PCI host bridge, one
> would have to request a region of the PCI host bridge resource, not
> the I/O resource.

Can the pci_lo_bound and pci_hi_bound module parameters be used to
limit the range of memory resources requested to those available to
the PCI bridge that the Universe chip lives behind?
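
If so, the usage I'm picturing is something like the sketch below.
The parameter names are the ones the driver already has, but how they
feed into the allocation is my guess, grab_vme_window and vme_window
are made-up names, and the allocate_resource() prototype has shifted
a bit between kernel versions, so treat it as a sketch rather than
working code:

#include <linux/module.h>
#include <linux/ioport.h>

/*
 * Hypothetical defaults: the PCI memory window the host bridge
 * actually forwards, overridable at module load time.
 */
static unsigned long pci_lo_bound = 0x80000000;
static unsigned long pci_hi_bound = 0xffffffff;
module_param(pci_lo_bound, ulong, 0444);
module_param(pci_hi_bound, ulong, 0444);

static struct resource vme_window = {
	.name  = "universe-vme-window",
	.flags = IORESOURCE_MEM,
};

static int grab_vme_window(unsigned long size)
{
	/*
	 * Constrain the search to [pci_lo_bound, pci_hi_bound] so the
	 * region lands inside the bridge's PCI memory aperture rather
	 * than anywhere in iomem_resource.
	 */
	return allocate_resource(&iomem_resource, &vme_window, size,
				 pci_lo_bound, pci_hi_bound,
				 0x10000, NULL, NULL);
}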

>
> As far as I can deduce from looking at kernel/resource.c,
> allocate_resource(), find_resource() and __request_resource() have no
> recursion, so one cannot request an appropriate region from the
> iomem_resource.
>
> I guess to do it portably, PCI functions may be needed, though I'm
> still looking at it.
>
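
That matches my reading: allocate_resource() only searches the
immediate children of the root you hand it. One way around that
(untested here, and the struct pci_bus layout has changed across
kernel versions, so this is 2.6-era pseudocode with types;
alloc_from_bridge and vme_img are made-up names) would be to start
from the Universe's own struct pci_dev and allocate out of the bridge
windows routed to its bus:

#include <linux/pci.h>
#include <linux/ioport.h>
#include <linux/errno.h>

static struct resource vme_img = {
	.name  = "universe-image",
	.flags = IORESOURCE_MEM,
};

static int alloc_from_bridge(struct pci_dev *pdev, unsigned long size)
{
	int i;

	/*
	 * pdev->bus->resource[] holds the windows routed to the bus
	 * the Universe sits on (the array size differs between kernel
	 * versions; 4 was the old 2.6 value).
	 */
	for (i = 0; i < 4; i++) {
		struct resource *root = pdev->bus->resource[i];

		if (!root || !(root->flags & IORESOURCE_MEM))
			continue;

		/* Search only inside this bridge window. */
		if (allocate_resource(root, &vme_img, size,
				      root->start, root->end,
				      0x1000, NULL, NULL) == 0)
			return 0;
	}
	return -ENOMEM;
}

That would keep the request inside whatever window the firmware
actually programmed into the host bridge, which I think is the
portable behavior you're after.
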
> From my current knowledge, the driver may have 3 issues:
> 1) How to request a "safe" range of PCI addresses.

The pci_lo_bound and pci_hi_bound module parameters may help.

> 2) How to map those PCI addresses safely to virtual (kernel) and bus
> (PCI device) addresses.
> 3) Using the safer readb/readw ... etc. calls, or stuff like
> memcpy_io, to portably access the VME bus, perhaps in read() and
> write() implementations, perhaps deprecating the not-so-portable
> dereferencing of a pointer.

Issue 3 gets confusing (as endian issues always do). On VMIC hardware,
there is custom byte-swapping hardware to bridge the big-endian VMEbus
to the little-endian PCI bus. The Universe chip also has some internal
byte-swapping hardware. I'm not sure that the read[bwl]/write[bwl]
calls would do the correct thing given the existing byte-swapping
hardware. (I'm not sure they would do the wrong thing either :-/)
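
To make the question concrete, the PPC accessors already differ among
themselves in whether they swap; how that combines with the VMIC
byte-lane hardware and the Universe's internal swapper is exactly what
would need testing on real hardware. A rough illustration
(read_vme_word and regs are made-up names, and regs is assumed to be
an ioremap()ed VME image):

#include <linux/types.h>
#include <asm/io.h>

static u32 read_vme_word(void __iomem *regs)
{
	u32 le, raw, be;

	/*
	 * readl() assumes a little-endian device: on big-endian PPC it
	 * byte-swaps the value in addition to ordering the access.
	 */
	le = readl(regs + 0x10);

	/* __raw_readl() is a plain 32-bit load: no swap, no barrier. */
	raw = __raw_readl(regs + 0x10);

	/*
	 * in_be32() is the PPC-specific big-endian accessor, which may
	 * be the natural match for a big-endian VME target - unless
	 * the byte-lane swapping hardware has already done the job.
	 */
	be = in_be32(regs + 0x10);

	/* Only one of these is "right" for a given swap configuration. */
	return be;
}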

> 1) and 2) are non-issues on the x86, because of the PCI and memory
> layout. So all these 3 issues are about portability.
>
> I'm looking into this, starting with 2) currently.
>
> Maybe the driver would be easier to port and maintain if the
> Universe were treated like a "proper" PCI device right from the
> start. I'm not experienced enough to say something about that right
> now.

Unfortunately, that's the design of the Tundra Universe chip.
I don't think there is any way for us to correct that.

Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.

** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/

* RE: [Fwd: Memory layout question]
@ 2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
  2004-05-25 13:56 ` Oliver Korpilla
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-05-18 15:25 UTC (permalink / raw)
  To: linuxppc-embedded; +Cc: okorpil


> I'm currently trying out the VMIC Tundra Universe II driver, a
> PCI-VME bus bridge quite common in embedded devices (from VMIC and
> Motorola). VMIC manufactures mostly x86 boards, but I am trying to
> use the driver on an MVME5500 with a Motorola 7455 PPC.
> There seems to be a memory problem involved, as follows:

-- snip --

> Actually, the addresses returned by allocate_resource seem to come from
> system memory, because ioremap_nocache logs a debug statement
> that is only triggered if the remapping is below the high_memory bound.
> (And there already seems to be a virtual address associated with it -
> physical 0x40000000 is RAM at 0xc<whatever> - kernel space, I guess.)
>
> But the address returned is 0x40000000 !! Isn't that the 2nd
> GB of address space? My board only has 512 MB of memory, and is only
> upgradable to 1 GB, so shouldn't an address starting at 0x40000000
> never physically be a memory address, and never be below the
> high_memory bound?

Can you post a copy of /proc/iomem, and /proc/vme/master, and any
relevant portions of the output from dmesg if available?

> Can I even dereference an I/O memory pointer on the PowerPC?
> (It can be done on x86) I know, I know, I _should_ use readb and
> friends, but can it be done? Or _must_ I strictly use the macros
> because it won't work the other way round?

That's the way I do it on x86. I haven't tried PPC.
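
For completeness, the "readb and friends" route on the kernel side
(or memcpy_fromio() for bulk copies) would look roughly like the
sketch below in a read() implementation. window, window_len and
vme_read are made-up names, and none of this has been tried on PPC
here:

#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/fs.h>
#include <asm/io.h>
#include <asm/uaccess.h>

/* Hypothetical mapping of one Universe PCI image. */
static void __iomem *window;
static unsigned long window_len;

static ssize_t vme_read(struct file *file, char __user *buf,
			size_t count, loff_t *ppos)
{
	u8 tmp[64];
	size_t chunk = min(count, sizeof(tmp));

	if (*ppos >= window_len)
		return 0;
	if (*ppos + chunk > window_len)
		chunk = window_len - *ppos;

	/*
	 * Bounce through memcpy_fromio() so every bus access goes
	 * through the proper I/O accessors, then copy to user space.
	 */
	memcpy_fromio(tmp, window + *ppos, chunk);
	if (copy_to_user(buf, tmp, chunk))
		return -EFAULT;

	*ppos += chunk;
	return chunk;
}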

BTW, please copy me directly on replies to this thread. I am not
subscribed to this list.

Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.


** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/

