* RE: [Fwd: Memory layout question]
@ 2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
2004-05-25 13:56 ` Oliver Korpilla
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-05-18 15:25 UTC (permalink / raw)
To: linuxppc-embedded; +Cc: okorpil
> I'm currently trying out the VMIC Tundra Universe II driver, a
> PCI-VME bus bridge quite common in embedded devices (from VMIC and
> Motorola). VMIC manufactures mostly x86 boards, but I try to use
> the driver on a MVME5500 with 7455 Motorola PPC.
> There seems to be a memory problem involved, as follows:
-- snip --
> Actually the addresses returned by allocate_resource seem to come from
> system memory, because ioremap_nocache logs a debug statement
> that is only triggered if the remapping is below the high_memory
> bound.
> (And there already seems to be a virtual address associated with it -
> physical 0x40000000 is RAM 0xc<whatever> - kernel space, I guess)
>
> But the address returned is 0x40000000!  Isn't that the 2nd
> GB of address space? My board only has 512 MB of memory, and is only
> upgradable to 1 GB, so shouldn't an address starting at 0x40000000
> never be a physical memory address and never be below the
> high_memory bound?
Can you post a copy of /proc/iomem, and /proc/vme/master, and any
relevant portions of the output from dmesg if available?
> Can I even dereference an I/O memory pointer on the PowerPC?
> (It can be done on x86) I know, I know, I _should_ use readb and
> friends, but can it be done? Or _must_ I strictly use the macros
> because it won't work the other way round?
That's the way I do it on x86. I haven't tried PPC.
BTW, please copy me directly on replies to this thread. I am not
subscribed to this list.
Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
@ 2004-05-25 13:56 ` Oliver Korpilla
2004-05-26 8:37 ` Oliver Korpilla
2004-05-26 11:56 ` Oliver Korpilla
2 siblings, 0 replies; 6+ messages in thread
From: Oliver Korpilla @ 2004-05-25 13:56 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
It's not simply done and over with virt_to_bus() etc.
What we've basically got here is a PCI configuration and portability issue.
On the MVME2100, for example, the PCI host bridge grabs the I/O memory
resource 0x80000000 - 0xFFFFFFFF.
The VMIC driver requests an I/O memory resource, and a region of I/O
memory is awarded.
In order to request a region that is mapped by the PCI host bridge, one
would have to request a region of the PCI host bridge resource, not the
I/O memory resource.
As far as I can deduce from looking at kernel/resource.c,
allocate_resource(), find_resource() and __request_resource() have no
recursion, so one cannot request an appropriate region from
iomem_resource.
I guess that to do it portably the PCI functions may be needed, though
I'm still looking at it.
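To make that concrete, a minimal sketch (the size, the alignment and the
bridge_res pointer to the host bridge's resource are made up for
illustration - this is not the driver's actual code):

#include <linux/ioport.h>

/*
 * Sketch only.  "bridge_res" stands for the PCI host bridge's resource
 * entry, however the platform code exposes it; size and alignment are
 * arbitrary example values.
 */
static int sketch_alloc_vme_window(struct resource *bridge_res,
                                   struct resource *vme_win)
{
        /*
         * What the driver (as far as I can tell) does today:
         *
         *   allocate_resource(&iomem_resource, vme_win,
         *                     0x01000000, 0, ~0, 0x10000, NULL, NULL);
         *
         * This only searches the direct children of iomem_resource, so
         * on PPC it can hand back a range that is not inside the host
         * bridge window at all.
         */

        /* What would be needed instead: allocate from the bridge's resource. */
        return allocate_resource(bridge_res, vme_win,
                                 0x01000000,    /* size: 16 MB (example) */
                                 0, ~0,         /* min, max */
                                 0x10000,       /* alignment (example) */
                                 NULL, NULL);
}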
From my current knowledge, the driver may have 3 issues:
1) How to request a "safe" range of PCI addresses.
2) How to map those PCI addresses safely to virtual (kernel) and bus
(PCI device) addresses.
3) Using the safer readb/readw ... etc. calls, or stuff like memcpy_io
to portably access the VME bus, perhaps in read() and write()
implementations, perhaps deprecating the not-so-portable dereferencing
of a pointer.
1) and 2) are non-issues on x86 because of its PCI and memory
layout. So all three issues are about portability.
I'm looking into this, starting with 2) currently.
Maybe the driver would be easier to port and maintain if the Universe
were treated like a "proper" PCI device right from the start. I'm not
experienced enough to say more about that right now.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
* RE: [Fwd: Memory layout question]
@ 2004-05-25 14:17 Heater, Daniel (GE Infrastructure)
2004-05-26 6:21 ` Oliver Korpilla
0 siblings, 1 reply; 6+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-05-25 14:17 UTC (permalink / raw)
To: Oliver Korpilla; +Cc: linuxppc-embedded
We're setting up a 2700 here to test with, so that will help.
> It's not done and over with virt_to_bus() etc.
>
> What we basically got here is a PCI configuration and
> portability issue.
>
> On the MVME2100, e.g., the PCI host bridge grabs I/O resource
> 0x80000000
> - 0xFFFFFFFF.
>
> On the VMIC driver it requests an I/O memory resource, and a
> region on
> the I/O memory is awarded.
>
> In order to request a region being mapped by the PCI host bridge, one
> would have to request a region of the PCI host bridge
> resource, not the
> I/O resource.
Can the pci_lo_bound and pci_hi_bound module parameters be used to
limit the range of memory resources requested to those available to
the PCI bridge that the Universe chip lives behind?
>
> As far as I can deduce from looking at kernel/resource.c
> allocate_resource(), find_resource() and __request_resource() have no
> recursion, so one cannot request an appropriate region from the
> iomem_resource.
>
> I guess to do it portably PCI functions may be needed, though
> I'm still
> looking at it.
>
> From my current knowledge, the driver may have 3 issues:
> 1) How to request a "safe" range of PCI addresses.
The pci_lo_bound and pci_hi_bound module parameters may help.
> 2) How to map those PCI addresses safely to virtual (kernel) and bus
> (PCI device) addresses.
> 3) Using the safer readb/readw ... etc. calls, or stuff like
> memcpy_io
> to portably access the VME bus, perhaps in read() and write()
> implementations, perhaps deprecating the not-so-portable
> dereferencing
> of a pointer.
Issue 3 gets confusing (as endian issues always do). On VMIC hardware,
there is custom byte-swapping hardware to deal with the big-endian VMEbus
to little-endian PCI bus. The Universe chip also has some internal
byte-swapping hardware. I'm not sure that the read[bwl]/write[bwl] calls
would do the correct thing given the existing byte-swapping hardware.
(I'm not sure they would do the wrong thing either :-/)
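Just to make the PPC access difference concrete (independent of our
swapping hardware), a little debug sketch - the mapped address is
assumed to come from ioremap_nocache(), and which result is "right"
depends on how the Universe and the VMIC swapping logic are configured:

#include <linux/kernel.h>
#include <asm/io.h>

/* Debug sketch only: compare the two access styles on one 32-bit
 * location of a mapped window. */
static void compare_access_styles(void *addr)
{
        /* Raw dereference: native big-endian load on PPC, no I/O
         * ordering barrier -- the style that happens to work on x86. */
        unsigned int raw = *(volatile unsigned int *)addr;

        /* readl(): little-endian load plus a barrier, so little-endian
         * PCI data arrives in CPU byte order. */
        unsigned int cooked = readl(addr);

        printk(KERN_INFO "raw: %08x  readl: %08x\n", raw, cooked);
}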
> 1) and 2) are non-issues on the x86, because of the PCI and memory
> layout. So all these 3 issues are about portability.
>
> I'm looking into this, starting with 2) currently.
>
> Maybe the driver would be easier to port and maintain, if the
> universe
> gets treated like a "proper" PCI device right from the start. I'm not
> experienced enough to say something about that right now.
Unfortunately, that's the design of the Tundra Universe chip.
I don't think there is any way for us to correct that.
Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
* Re: [Fwd: Memory layout question]
2004-05-25 14:17 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
@ 2004-05-26 6:21 ` Oliver Korpilla
0 siblings, 0 replies; 6+ messages in thread
From: Oliver Korpilla @ 2004-05-26 6:21 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Heater, Daniel (GE Infrastructure) wrote:
>We're setting up a 2700 here to test with, so that will help.
>
>
>
Have you run into similar issues with the VMIVME7050?
>>In order to request a region being mapped by the PCI host bridge, one
>>would have to request a region of the PCI host bridge
>>resource, not the
>>I/O resource.
>>
>>
>
>Can the pci_lo_bound and pci_hi_bound module parameter be used to
>limit the range of memory resources requested to those available to
>the PCI bridge that the Universe chip lives behind.
>
>
>
Actually I tried that out - it's the only way to even load the driver on
an MVME2100 without interfering with the Tulip Ethernet driver. But even
though both parameters define sane bounds to allocate I/O memory from,
if set correctly, the allocation request always fails because of the
different layout of the resource tree.
On x86: I/O memory -> addresses relevant for the Tundra.
On PPC: I/O memory -> PCI host bridge -> addresses relevant for the Tundra.
Since allocate_resource() does not traverse the tree, but instead tries
to allocate a resource as a child of the resource given (here:
iomem_resource), this will always fail: all the PCI addresses can only
be allocated as children of "PCI host bridge", not as children of I/O
memory.
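A quick way to confirm that layout from inside a module is to walk the
first two levels of the iomem tree - a debug sketch only, printing
roughly what /proc/iomem shows:

#include <linux/kernel.h>
#include <linux/ioport.h>

/* Debug sketch: print the top two levels of the I/O memory resource
 * tree.  On the MVME2100 the usable PCI ranges show up one level down,
 * under the host bridge entry, which is exactly why allocating direct
 * children of iomem_resource cannot reach them. */
static void dump_iomem_tree(void)
{
        struct resource *r, *c;

        for (r = iomem_resource.child; r; r = r->sibling) {
                printk(KERN_INFO "%08lx-%08lx : %s\n",
                       r->start, r->end, r->name ? r->name : "<unnamed>");
                for (c = r->child; c; c = c->sibling)
                        printk(KERN_INFO "  %08lx-%08lx : %s\n",
                               c->start, c->end, c->name ? c->name : "<unnamed>");
        }
}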
>>From my current knowledge, the driver may have 3 issues:
>>1) How to request a "safe" range of PCI addresses.
>>
>>
>
>The pci_lo_bound and pci_hi_bound module parameters may help.
>
>
>
See above.
>>3) Using the safer readb/readw ... etc. calls, or stuff like
>>memcpy_io
>>to portably access the VME bus, perhaps in read() and write()
>>implementations, perhaps deprecating the not-so-portable
>>dereferencing
>>of a pointer.
>>
>>
>
>Issue 3 gets confusing, (as endian issues always do). On VMIC hardware,
>there is custom byte swapping hardware to deal with the big-endian VMEbus
>to little-endian PCI bus. The Universe chip also has some internal byte
>swapping hardware. I'm not sure that the read[bwl]/write[bwl] calls
>would do the correct thing considering the existing byte swapping hardware.
>(I'm not sure it would do the wrong thing either :-/)
>
>
>
Well, I'm not too sure either: they'd at least byte-swap between the CPU
and the PCI bus, because the two are of different endianness on the PPC.
Generally speaking, since we can have both Intel and PowerPC boards on
the VME, I guess this will always be an issue: either you configure the
hardware on the VME, or you have to work some magic in software. But
dereferencing a pointer into I/O memory is simply not safe on every
architecture or platform. Maybe, all in all, read() and write() with
memcpy_io() would be more portable and robust, and the pointer stuff
could be kept for x86 only.
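As a rough illustration of that last point, a read() built on
memcpy_fromio() could look like the sketch below. The
window_base/window_size bookkeeping, the buffer size and the error
handling are all invented; it only shows the access style:

#include <linux/fs.h>
#include <asm/io.h>
#include <asm/uaccess.h>

/* Hypothetical bookkeeping for one mapped VME window. */
static void *window_base;           /* from ioremap_nocache() */
static unsigned long window_size;

/* Sketch of a read() that avoids dereferencing the I/O pointer. */
static ssize_t sketch_vme_read(struct file *file, char *buf,
                               size_t count, loff_t *ppos)
{
        char tmp[256];

        if (*ppos >= window_size)
                return 0;
        if (count > window_size - *ppos)
                count = window_size - *ppos;
        if (count > sizeof(tmp))
                count = sizeof(tmp);

        /* Portable I/O-memory copy instead of a pointer dereference. */
        memcpy_fromio(tmp, (char *)window_base + *ppos, count);

        if (copy_to_user(buf, tmp, count))
                return -EFAULT;

        *ppos += count;
        return count;
}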
>>Maybe the driver would be easier to port and maintain, if the
>>universe
>>gets treated like a "proper" PCI device right from the start. I'm not
>>experienced enough to say something about that right now.
>>
>>
>
>Unfortunately, that's the design of the Tundra Universe chip.
>I don't think there is any way for us to correct that.
>
>
I see. But I meant not the memory window mechanism, but the data
structures of the driver. If we're only trying to handle PCI stuff, why
not flesh it out as a PCI driver? The data structures of PCI drivers,
like pci_dev, could be used instead of our own generic handle. It may be
that we need the PCI functions to do everything portably, so adapting
the interface to that of other PCI devices may become necessary to
satisfy the interfaces of PCI-related calls. I'm still looking into this.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
2004-05-25 13:56 ` Oliver Korpilla
@ 2004-05-26 8:37 ` Oliver Korpilla
2004-05-26 11:56 ` Oliver Korpilla
2 siblings, 0 replies; 6+ messages in thread
From: Oliver Korpilla @ 2004-05-26 8:37 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Maybe the following logic for getting proper PCI addresses would work:
1.) Use stuff like pci_module_init() to find and register the Universe
device (using standard PCI functions to find the Tundra device).
2.) Determine its PCI bus from the pci_dev structure.
3.) Use the resource[] vector associated with that bus to request a set
of PCI addresses on that bus.
This should be portable and clean, and always return a set of proper PCI
addresses usable by the Tundra Universe, at least AFAIK from looking at
include/linux/pci.h and drivers/pci/pci.c.
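For steps 1.) and 2.), roughly this kind of skeleton (a sketch only -
the module name is made up, and the Tundra IDs are just what I'd expect
from include/linux/pci_ids.h):

#include <linux/module.h>
#include <linux/init.h>
#include <linux/pci.h>

/* Sketch for steps 1.) and 2.): register as a PCI driver and remember
 * the pci_dev, so that its bus (and the bus resource[] vector) is
 * available later on. */
static struct pci_dev *universe_pci_dev;

static struct pci_device_id universe_ids[] = {
        { PCI_VENDOR_ID_TUNDRA, PCI_DEVICE_ID_TUNDRA_CA91C042,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
        { 0, }
};
MODULE_DEVICE_TABLE(pci, universe_ids);

static int universe_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
        int err = pci_enable_device(dev);
        if (err)
                return err;
        universe_pci_dev = dev;         /* step 2: the bus is dev->bus */
        return 0;
}

static void universe_remove(struct pci_dev *dev)
{
        universe_pci_dev = NULL;
        pci_disable_device(dev);
}

static struct pci_driver universe_driver = {
        .name     = "vme_universe",     /* name is an assumption */
        .id_table = universe_ids,
        .probe    = universe_probe,
        .remove   = universe_remove,
};

static int __init universe_init(void)
{
        return pci_module_init(&universe_driver);       /* step 1 */
}

static void __exit universe_exit(void)
{
        pci_unregister_driver(&universe_driver);
}

module_init(universe_init);
module_exit(universe_exit);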
Since you already use the PCI configuration word mechanism and device
IDs, this shouldn't be too hard to adapt to.
What do you think?
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
2004-05-25 13:56 ` Oliver Korpilla
2004-05-26 8:37 ` Oliver Korpilla
@ 2004-05-26 11:56 ` Oliver Korpilla
2 siblings, 0 replies; 6+ messages in thread
From: Oliver Korpilla @ 2004-05-26 11:56 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Below I've sketched out a function that may give an address from the bus
the Tundra Universe is on - it is designed to be used where normally the
allocate_resource() call is used (e.g. in create_slsi_window()).
With kind regards,
Oliver Korpilla
/* Try getting a resource (range of PCI addresses) from the PCI bus we're on. */
static int allocate_pci_resource(unsigned long size, unsigned long align,
                                 struct resource *new_resource)
{
        /* Determine the bus the Tundra is on. */
        struct pci_bus *bus = universe_pci_dev->bus;
        int i;

        for (i = 0; i < 4; i++) {
                int retval;
                /* Get one of the bus address ranges. */
                struct resource *r = bus->resource[i];

                /* Check whether that resource "exists". */
                if (!r)
                        continue;

                /* Skip resources that are not I/O memory (e.g. I/O ports). */
                if (!(r->flags & IORESOURCE_MEM))
                        continue;

#ifdef DEBUG
                /* Print the name of the resource for debugging. */
                if (r->name)
                        printk(KERN_INFO "Checking bus resource with name \"%s\".\n",
                               r->name);
                printk(KERN_INFO "resource.start: %08lX, resource.end: %08lX.\n",
                       r->start, r->end);
#endif
                /* Try to allocate a new sub-resource from this one,
                   given the proper size and alignment. */
                retval = allocate_resource(r, new_resource, size, 0, ~0,
                                           align, NULL, NULL);

                /* If this allocation fails, try the next resource
                   (and give a debug message). */
                if (retval < 0) {
#ifdef DEBUG
                        if (r->name)
                                printk(KERN_INFO
                                       "Failed allocating from bus resource with name \"%s\".\n",
                                       r->name);
                        else
                                printk(KERN_INFO
                                       "Failed allocating from bus resource with number %d.\n",
                                       i);
#endif
                        continue;
                }

                /* If the allocation succeeds, return what allocate_resource() returned. */
                return retval;
        }

        /* Return busy if no resource could be successfully allocated. */
        return -EBUSY;
}
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/