* RE: [Fwd: Memory layout question]
@ 2004-05-18 15:25 Heater, Daniel (GE Infrastructure)
2004-05-19 6:51 ` Differing PCI layouts trigger porting driver problem [Was: " Oliver Korpilla
` (5 more replies)
0 siblings, 6 replies; 17+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-05-18 15:25 UTC (permalink / raw)
To: linuxppc-embedded; +Cc: okorpil
> I'm currently trying out the VMIC Tundra Universe II driver, a
> PCI-VME bus bridge quite common in embedded devices (from VMIC and
> Motorola). VMIC manufactures mostly x86 boards, but I try to use
> the driver on a MVME5500 with 7455 Motorola PPC.
> There seems to be a memory problem involved, as follows:
-- snip --
> Actually the addresses returned by allocate_resource seem to come from
> system memory, because ioremap_nocache logs a debug statement
> that is only triggered if the remapping is below the high_memory
> bound.
> (And there already seems to be a virtual address associated with it -
> physical 0x40000000 is RAM 0xc<whatever> - kernel space, I guess)
>
> But the address returned is 0x40000000 !! Isn't that the 2nd
> GB of address space? My board only has 512 MB of storage, and is only
> upgradable to 1GB, so shouldn't an address starting at 0x40000000
> physically never be a memory address and never be below the
> high_memory bound?
Can you post a copy of /proc/iomem, and /proc/vme/master, and any
relevant portions of the output from dmesg if available?
> Can I even dereference an I/O memory pointer on the PowerPC?
> (It can be done on x86) I know, I know, I _should_ use readb and
> friends, but can it be done? Or _must_ I strictly use the macros
> because it won't work the other way round?
That's the way I do it on x86. I haven't tried PPC.
BTW, please copy me directly on replies to this thread. I am not
subscribed to this list.
Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Differing PCI layouts trigger porting driver problem [Was: Memory layout question]
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
@ 2004-05-19 6:51 ` Oliver Korpilla
2004-05-25 13:56 ` [Fwd: " Oliver Korpilla
` (4 subsequent siblings)
5 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-05-19 6:51 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello, Daniel!
I think I erred quite a bit when trying to identify the error - the
problem is indeed a memory layout issue, but a PCI-related one.
The driver works on x86 because PCI bus addresses and CPU physical
addresses match - the Universe Master Window Base Register (LSI_BSx) is
given the start address of the address range to map (the set of
addresses on the PCI bus the Tundra Universe chip will try to answer
to). So far, the driver hands the Universe the CPU physical address.
But the Tundra Universe is a PCI device. All the addresses it sees need
to be bus addresses, not CPU physical ones. This is - if I figured it
out correctly - because of the host-to-PCI-bridge mapping. All physical
addresses dereferenced by the CPU will correctly translate to their
corresponding mapped PCI bus addresses because of the host bridge, but
if you want the Universe chip to catch those addresses, it needs to have
the bus address stored in its registers (the address matching the
original physical address after the host-bridge translation).
So I hope this can actually be resolved by converting the address to a bus
address and storing that instead. Doing so should keep the module portable,
because the mapping resolves to the identity on x86, or at least I
think so.
I guess the correct new order would be:
allocate_resource(&iomem_resource, ...) to obtain a range of physical
addresses, ioremap_nocache() on that physical start address to obtain virtual
addresses for the ->virt pointer in the master window structure (now
done in vme_master_window_map(), I think), and then virt_to_bus() to
obtain the address to store in the master window base register. Then the
driver should work on (nearly) arbitrary setups for both x86 and PPC.
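A minimal, untested sketch of that order (2.4-era calls; the structure and
field names here are made up just to make the sequence concrete, they are not
the driver's own):

#include <linux/ioport.h>
#include <asm/io.h>

/* Purely illustrative bookkeeping for one master window. */
struct sketch_window {
        struct resource resource;   /* CPU physical address range            */
        void *virt;                 /* kernel virtual base (the ->virt pointer) */
        unsigned long bus_base;     /* value for the LSI_BSx register         */
};

static int sketch_map_master_window(struct sketch_window *w,
                                    unsigned long size, unsigned long align)
{
        int rval;

        /* 1) reserve a range of CPU physical addresses */
        rval = allocate_resource(&iomem_resource, &w->resource, size,
                                 0, ~0UL, align, NULL, NULL);
        if (rval)
                return rval;

        /* 2) remap it to obtain kernel virtual addresses */
        w->virt = ioremap_nocache(w->resource.start, size);
        if (!w->virt)
                return -ENOMEM;

        /* 3) convert to a bus address for the master window base register */
        w->bus_base = virt_to_bus(w->virt);
        return 0;
}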
Of course I have to try that out first (I'm still at home right now), and will
let you know whether it produces any results. Did VMIC run into similar problems
when trying to run the driver module on the VMIVME7050 (750FX/GX IBM
PowerPC board)? Is that why it is not yet on the list of supported boards?
Thanks for the reply!
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
2004-05-19 6:51 ` Differing PCI layouts trigger porting driver problem [Was: " Oliver Korpilla
@ 2004-05-25 13:56 ` Oliver Korpilla
2004-05-26 8:37 ` Oliver Korpilla
` (3 subsequent siblings)
5 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-05-25 13:56 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
It's not done and over with virt_to_bus() and friends.
What we basically have here is a PCI configuration and portability issue.
On the MVME2100, for example, the PCI host bridge claims the I/O memory
resource 0x80000000 - 0xFFFFFFFF.
The VMIC driver, however, requests an I/O memory resource, and a region
of I/O memory elsewhere is awarded.
In order to request a region that is actually mapped by the PCI host bridge,
one would have to request a region of the PCI host bridge's resource, not of
the I/O memory resource itself.
As far as I can deduce from looking at kernel/resource.c,
allocate_resource(), find_resource() and __request_resource() do not
recurse, so one cannot request an appropriate region from
iomem_resource.
I guess that to do this portably, PCI functions may be needed, though I'm
still looking at it.
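To illustrate with the 2.4 calls involved (window->resource, pci_lo_bound,
pci_hi_bound and resolution as in the driver today):

/* What the driver does now: this only searches the gaps between the direct
   children of iomem_resource, so it cannot hand out a region inside the
   host bridge's 0x80000000 - 0xFFFFFFFF window. */
rval = allocate_resource(&iomem_resource, &window->resource, size,
                         pci_lo_bound, pci_hi_bound, resolution, NULL, NULL);

/* What seems to be needed: allocate from a resource of the PCI bus the
   Universe sits on, i.e. one of bus->resource[0..3]. */
rval = allocate_resource(bus->resource[i], &window->resource, size,
                         pci_lo_bound, pci_hi_bound, resolution, NULL, NULL);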
From my current knowledge, the driver may have 3 issues:
1) How to request a "safe" range of PCI addresses.
2) How to map those PCI addresses safely to virtual (kernel) and bus
(PCI device) addresses.
3) Using the safer readb/readw ... etc. calls, or something like
memcpy_fromio()/memcpy_toio(), to portably access the VME bus, perhaps in
read() and write() implementations, perhaps deprecating the not-so-portable
dereferencing of a raw pointer.
1) and 2) are non-issues on x86 because of its PCI and memory
layout, so all three issues are really about portability.
I'm looking into this, starting with 2) currently.
Maybe the driver would be easier to port and maintain if the Universe
were treated like a "proper" PCI device right from the start. I'm not
experienced enough to say much about that right now.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
2004-05-19 6:51 ` Differing PCI layouts trigger porting driver problem [Was: " Oliver Korpilla
2004-05-25 13:56 ` [Fwd: " Oliver Korpilla
@ 2004-05-26 8:37 ` Oliver Korpilla
2004-05-26 11:56 ` Oliver Korpilla
` (2 subsequent siblings)
5 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-05-26 8:37 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Maybe the following logic for getting proper PCI addresses would work:
1.) Use stuff like pci_module_init() to find and register the Universe
device (using standard PCI functions to find the Tundra device).
2.) Determine its PCI bus from the pci_dev structure.
3.) Use the resource[] vector associated with that bus to request a set
of PCI addresses on that bus.
This should be portable and clean, and always return a set of proper PCI
addresses usable by the Tundra Universe, at least AFAIK from looking at
include/linux/pci.h and drivers/pci/pci.c.
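An untested sketch of steps 1.) and 2.) in 2.4 style (the Universe IDs are the
CA91C042 entries from include/linux/pci_ids.h; the driver name is made up, and
error handling and remove() are trimmed):

#include <linux/pci.h>
#include <linux/init.h>

static struct pci_dev *universe_pci_dev;

/* 1.) match the Tundra Universe (CA91C042) by its PCI IDs */
static struct pci_device_id universe_ids[] = {
        { PCI_VENDOR_ID_TUNDRA, PCI_DEVICE_ID_TUNDRA_CA91C042,
          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 },
        { 0, }
};

static int __devinit universe_probe(struct pci_dev *dev,
                                    const struct pci_device_id *id)
{
        if (pci_enable_device(dev))
                return -EIO;
        universe_pci_dev = dev;
        /* 2.) the bus - and its resource[] vector for step 3.) - is
           simply universe_pci_dev->bus */
        return 0;
}

static struct pci_driver universe_driver = {
        .name     = "vme_universe",
        .id_table = universe_ids,
        .probe    = universe_probe,
};

static int __init universe_init(void)
{
        return pci_module_init(&universe_driver);
}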
Since you already use the PCI configuration word mechanism and the device
IDs, this shouldn't be too hard to adapt to.
What do you think?
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [Fwd: Memory layout question]
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
` (2 preceding siblings ...)
2004-05-26 8:37 ` Oliver Korpilla
@ 2004-05-26 11:56 ` Oliver Korpilla
2004-06-02 7:42 ` Successful master window access Oliver Korpilla
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
5 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-05-26 11:56 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Below I sketched out a function that should give an address range from the bus
the Tundra Universe is on - it is designed to be used where the plain
allocate_resource() call is used now (e.g. in create_slsi_window()).
With kind regards,
Oliver Korpilla
extern struct pci_dev *universe_pci_dev;

/* Try getting a resource (range of PCI addresses) from the PCI bus we're on */
static int allocate_pci_resource(unsigned long size, unsigned long align,
                                 struct resource *new_resource)
{
        /* Determine the bus the Tundra is on */
        struct pci_bus *bus = universe_pci_dev->bus;
        int i;

        for (i = 0; i < 4; i++) {
                int retval;
                /* Get one of the bus address ranges */
                struct resource *r = bus->resource[i];

                /* Check if that resource "exists" */
                if (!r)
                        continue;

                /* If the resource is not I/O memory (e.g. I/O ports) */
                if (!(r->flags & IORESOURCE_MEM))
                        continue;

#ifdef DEBUG
                /* Print out the name of the resource for debugging */
                if (r->name)
                        printk(KERN_INFO "Checking bus resource with name \"%s\".\n",
                               r->name);
                printk(KERN_INFO "resource.start: %08lX, resource.end: %08lX.\n",
                       r->start, r->end);
#endif

                /* Try to allocate a new sub-resource from this one,
                   given the proper size and alignment */
                retval = allocate_resource(r, new_resource, size, 0, ~0,
                                           align, NULL, NULL);

                /* If this allocation fails, try the next resource
                   (and give a debug message) */
                if (retval < 0) {
#ifdef DEBUG
                        if (r->name)
                                printk(KERN_INFO
                                       "Failed allocating from bus resource with name \"%s\".\n",
                                       r->name);
                        else
                                printk(KERN_INFO
                                       "Failed allocating from bus resource with number %d.\n",
                                       i);
#endif
                        continue;
                }
                /* If this allocation succeeds, return what allocate_resource() returned */
                else
                        return retval;
        }

        /* Return -EBUSY if no resource could be successfully allocated */
        return -EBUSY;
}
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Successful master window access
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
` (3 preceding siblings ...)
2004-05-26 11:56 ` Oliver Korpilla
@ 2004-06-02 7:42 ` Oliver Korpilla
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
5 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-02 7:42 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
While vme_dma_read() and vme_dma_write() worked already, even on the PowerPC,
vme_peek() and vme_poke() (both based on PCI master windows) did not work.
Today I successfully made accesses using those functions with my modified module,
giving me the desired results:
1st test: MVME2100 (PPC) master - MVME162 (68k) slave
vme_dma_read -A <VME address @ slave board> -d VME_D32
vme_peek -A <VME address @ slave board> -d VME_D32
Produced the same result for both accesses.
2nd test: MVME2100 (PPC) master - MVME162 (68k) slave
vme_dma_write -A <VME address @ slave board> -d VME_D32 0xDEADFACE
vme_peek -A <VME address @ slave board> -d VME_D32
Produced the written value on reading.
It seems - though I still have to verify this - that doing two things did the trick:
1.) Acquiring the PCI window address range from the address ranges of the PCI bus
the bridge sits on.
2.) Writing a helper function that converts the physical address to a PCI bus
address (a no-op on x86, an actual conversion on PPC) - sketched below. Only this
converted address may be used when writing to the Universe window registers.
It seems this may be a solution that should work for both platforms.
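In rough outline, such a helper could look like this (simplified - PCI_MEM_OFFSET
is only a placeholder here for however the board's host bridge setup exposes the
offset between CPU physical and PCI bus memory addresses, not an existing kernel
define):

/* Convert a CPU physical address into the PCI bus address the Universe
   must see in its window registers. */
static unsigned long vme_phys_to_bus(unsigned long phys)
{
#ifdef CONFIG_PPC32
        /* on this PPC board the host bridge applies a constant offset
           to memory-space accesses */
        return phys - PCI_MEM_OFFSET;   /* placeholder, see above */
#else
        /* x86: PCI bus addresses and physical addresses match */
        return phys;
#endif
}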
Somehow I'm very excited and happy! :)
I will go clean up that code, test it more thoroughly (with tracers enabled),
focus on slave windows afterwards, and maybe even get to test it on a VMIVME7698
- to verify it is still correct on Intel.
When I have something more substantial than the current hack, I'll submit it to
you, of course.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* VME driver patch for PowerPC
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
` (4 preceding siblings ...)
2004-06-02 7:42 ` Successful master window access Oliver Korpilla
@ 2004-06-07 15:30 ` Oliver Korpilla
2004-06-08 9:05 ` VME driver patch for PowerPC [Continued] Oliver Korpilla
` (3 more replies)
5 siblings, 4 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-07 15:30 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
Could you check whether the patch to your driver (version 7433-3.2 of your
Linux support) still compiles on an Intel platform and works as intended?
The problem is, this is not complete:
Directly dereferencing the obtained memory pointer leads to "bad" PCI
accesses - 8 long words in a row - and writing to the window triggers a read
(again 8 long words) followed by a failed write.
But using the standard readl() and writel() on the ioremap_nocache'd addresses
works just fine, with correct and proper accesses (except that they swap the
byte order between PCI little endian and host big endian).
So dereferencing the pointer gives bad results, as does memcpy(), but the
kernel macros work fine.
Either this is still a cache problem - and if so, I may or may not find and
solve it - or I have to implement read() and write() calls that use the
appropriate macros; that would work on both platforms.
The patch follows below (it's not really much - minimally different from what
I've already sent you - but it's the minimal change that produces PCI accesses,
and correct ones, when used with readl()/writel()).
With kind regards,
Oliver Korpilla
diff -urN vme_universe/module/vme_master.c vme_universe-new/module/vme_master.c
--- vme_universe/module/vme_master.c 2004-06-07 17:16:13.000000000 +0200
+++ vme_universe-new/module/vme_master.c 2004-06-07 17:13:43.000000000 +0200
@@ -163,6 +163,65 @@
MODULE_PARM(master_window6, "3-4i");
MODULE_PARM(master_window7, "3-4i");
+extern struct pci_dev *universe_pci_dev;
+
+/* Try getting a resource (range of PCI addresses) from the PCI bus we're on */
+static int allocate_pci_resource(unsigned long size, unsigned long align,
+ struct resource *new_resource) {
+ /* Determine the bus the Tundra is on */
+ struct pci_bus *bus = universe_pci_dev->bus;
+ int i;
+
+ for (i=0; i<4; i++) {
+ int retval;
+ struct resource *r = bus->resource[i];
+
+ /* Check if that resource exists */
+ if (!r)
+ continue;
+
+ /* If the resource is not I/O memory (e.g. I/O ports) */
+ if (! (r->flags & IORESOURCE_MEM))
+ continue;
+
+#ifdef DEBUG
+ /* Print out name of resource for debugging */
+ if (r->name)
+ printk(KERN_INFO "Checking bus resource with name \"%s\".\n", r->name);
+ printk(KERN_INFO "resource.start: %08lX, resource.end: %08lX.\n",
+ r->start, r->end);
+#endif
+
+ /* Try to allocate a new sub-resource from this
+ given the proper size and alignment*/
+ retval = allocate_resource(r, new_resource, size,
+ pci_lo_bound, pci_hi_bound,
+ align, NULL, NULL);
+
+ /* If this allocation fails, try with next resource
+ (and give debug message) */
+ if (retval < 0) {
+
+#ifdef DEBUG
+ if (r->name)
+ printk(KERN_INFO
+ "Failed allocating from bus resource with name \"%s\".\n",
+ r->name);
+ else
+ printk(KERN_INFO
+ "Failed allocating from bus resource with number %d.\n", i);
+#endif
+
+ continue;
+ }
+ /* If this allocation succeeds, return what allocate_resource() returned */
+ else
+ return retval;
+ }
+
+ /* return busy if no resource could be successfully allocated */
+ return -EBUSY;
+}
/*============================================================================
* Hook for display proc page info
@@ -428,9 +487,8 @@
return rval;
}
} else {
- rval = allocate_resource(&iomem_resource, &window->resource,
- size, pci_lo_bound, pci_hi_bound,
- resolution, NULL, NULL);
+ rval = allocate_pci_resource(size, resolution, &window->resource);
+
if (rval) {
window->resource.start = 0;
window->resource.end = 0;
@@ -622,9 +680,8 @@
/* Allocate a 64MB window with 64kb resolution
*/
- rval = allocate_resource(&iomem_resource, &slsi_window.resource,
- 0x4000000, pci_lo_bound, pci_hi_bound,
- 0x10000, NULL, NULL);
+ rval = allocate_pci_resource(0x4000000, 0x10000, &slsi_window.resource);
+
if (rval) {
printk(KERN_WARNING "VME: Unable to allocate memory for SLSI "
"window\n");
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* VME driver patch for PowerPC [Continued]
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
@ 2004-06-08 9:05 ` Oliver Korpilla
2004-06-08 9:59 ` VME driver change suggestion Oliver Korpilla
` (2 subsequent siblings)
3 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-08 9:05 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
I modified your vmemcpy() function and put it into the driver's master module as
below (message continues after the code):
/*===========================================================================
 * Copy data from I/O memory using the width specified
 * Returns: 0 or -EINVAL
 */
int vmemcpy_fromio(void *dest, const void *src, int nelem, int dw)
{
        int ii;

        /*
         * Depending on data width use byte, word or long word accesses.
         */
        switch (dw) {
        case VME_D8:
        {
                const uint8_t *s = src;
                uint8_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        *d = readb(s);
        }
        break;
        case VME_D16:
        {
                const uint16_t *s = src;
                uint16_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        *d = readw(s);
        }
        break;
        case VME_D32:
        {
                const uint32_t *s = src;
                uint32_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        *d = readl(s);
        }
        break;
        default:
                return -EINVAL;
        }
        return 0;
}
/*===========================================================================
 * Copy data to I/O memory using the width specified
 * Returns: 0 or -EINVAL
 */
int vmemcpy_toio(void *dest, const void *src, int nelem, int dw)
{
        int ii;

        /*
         * Depending on data width use byte, word or long word accesses.
         */
        switch (dw) {
        case VME_D8:
        {
                const uint8_t *s = src;
                uint8_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        writeb(*s, d);
        }
        break;
        case VME_D16:
        {
                const uint16_t *s = src;
                uint16_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        writew(*s, d);
        }
        break;
        case VME_D32:
        {
                const uint32_t *s = src;
                uint32_t *d = dest;
                for (ii = 0; ii < nelem; ++ii, ++s, ++d)
                        writel(*s, d);
        }
        break;
        default:
                return -EINVAL;
        }
        return 0;
}
I left out VME_D64 because there are no directly corresponding readll() and
writell() in include/asm-ppc/io.h.
I tested it by reading 4 bytes with VME data width VME_D8, writing 4 bytes, and
reading 4 bytes again, giving me the following VME tracer results (message
continues after the trace):
| TIME BUS ADDRESS DATA R/W SIZE STAT IRQ* IACK* AM EX
| rel. LEVEL 7654321 OC IO
-----+---------------------------------------------------------------------
=> TRIG| 0.00 us 3 01000000 ....DE.. R UBYTE OK ------- 1 1 0D 1
1| 1.04 us 3 01000001 ......AD R LBYTE OK ------- 1 1 0D 1
2| 1.08 us 3 01000002 ....FA.. R UBYTE OK ------- 1 1 0D 1
3| 1.08 us 3 01000003 ......CE R LBYTE OK ------- 1 1 0D 1
4| 50.892 ms 3 01000000 ....AF.. W UBYTE OK ------- 1 1 0D 1
5| 0.40 us 3 01000001 ......FE W LBYTE OK ------- 1 1 0D 1
6| 0.44 us 3 01000002 ....AF.. W UBYTE OK ------- 1 1 0D 1
7| 0.44 us 3 01000003 ......FE W LBYTE OK ------- 1 1 0D 1
8| 38.440 ms 3 01000000 ....AF.. R UBYTE OK ------- 1 1 0D 1
9| 1.24 us 3 01000001 ......FE R LBYTE OK ------- 1 1 0D 1
10| 1.08 us 3 01000002 ....AF.. R UBYTE OK ------- 1 1 0D 1
11| 1.08 us 3 01000003 ......FE R LBYTE OK ------- 1 1 0D 1
So, this corresponds to reading successfully 0xDEADFACE bytewise from the bus,
writing 0xAFFEAFFE back, and reading that value again successfully
(vmemcpy_fromio - vmemcpy_toio - vmemcpy_fromio), with a data width of VME_D8.
I guess it should be possible to implement read() and write() in terms of these
functions, though I'm not entirely sure how to select the master window from
user space.
Any ideas on read() and write()?
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* VME driver change suggestion
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
2004-06-08 9:05 ` VME driver patch for PowerPC [Continued] Oliver Korpilla
@ 2004-06-08 9:59 ` Oliver Korpilla
2004-06-09 11:25 ` VME driver patch for PowerPC Oliver Korpilla
2004-06-09 12:59 ` Oliver Korpilla
3 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-08 9:59 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
To make the driver truly portable, it could be changed as follows:
1.) Add devices for the windows, similarly to the VMELinux.org driver:
Master windows:
* /dev/m0 c 221 0
* /dev/m1 c 221 1
* /dev/m2 c 221 2
* /dev/m3 c 221 3
* /dev/m4 c 221 4
* /dev/m5 c 221 5
* /dev/m6 c 221 6
* /dev/m7 c 221 7
Control window:
* /dev/ctl c 221 8
Slave windows:
* /dev/s0 c 221 9
* /dev/s1 c 221 10
* /dev/s2 c 221 11
* /dev/s3 c 221 12
* /dev/s4 c 221 13
* /dev/s5 c 221 14
* /dev/s6 c 221 15
* /dev/s7 c 221 16
(You've already partially matched that scheme with your ctl device)
2.) Implement read(), write() and llseek() for master and slave windows to
read/write values from/to the VME bus. Each would identify the master/slave
window being read or written from the minor device number.
3.) Add the ability to request specific windows back into the ioctl interface,
so each mapping can be associated with the proper device.
4.) Include boundary checking in the read/write/llseek implementations against
the window bounds to avoid bad accesses "out of window".
read() and write() would use the vmemcpy() functions I introduced earlier when
accessing master windows; a rough sketch follows below.
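A rough, untested outline of what such a read() could look like for a master
window (windows[] and struct vme_master_window stand in for whatever the driver
really keeps per window; the data width is fixed to VME_D8 here for brevity):

#include <linux/fs.h>
#include <linux/slab.h>
#include <linux/kdev_t.h>
#include <asm/uaccess.h>

static ssize_t vme_master_read(struct file *file, char *buf,
                               size_t count, loff_t *ppos)
{
        /* master windows would be minors 0-7 in the scheme above */
        int minor = MINOR(file->f_dentry->d_inode->i_rdev);
        struct vme_master_window *window = &windows[minor];  /* assumed table */
        void *kbuf;
        int rval;

        /* 4.) boundary check against the window size */
        if (*ppos + count > window->size)
                return -EINVAL;

        kbuf = kmalloc(count, GFP_KERNEL);
        if (!kbuf)
                return -ENOMEM;

        /* 2.) copy from the VME bus through the master window using the
           vmemcpy_fromio() helper from the earlier mail */
        rval = vmemcpy_fromio(kbuf, window->virt + *ppos, count, VME_D8);
        if (rval == 0 && copy_to_user(buf, kbuf, count))
                rval = -EFAULT;

        kfree(kbuf);
        if (rval)
                return rval;

        *ppos += count;
        return count;
}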
Any ideas and suggestions towards this?
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: VME driver patch for PowerPC
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
2004-06-08 9:05 ` VME driver patch for PowerPC [Continued] Oliver Korpilla
2004-06-08 9:59 ` VME driver change suggestion Oliver Korpilla
@ 2004-06-09 11:25 ` Oliver Korpilla
2004-06-09 12:59 ` Oliver Korpilla
3 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-09 11:25 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
I tried dereferencing the pointer in kernel space like this:
unsigned long *virtaddr = NULL;
// [...]
// After the Universe registers were written in
// __create_master_window()
virtaddr = (unsigned long *) ioremap_nocache(window->phys_base, window->size);
printk(KERN_INFO "Dereferenced pointer 0x%08lX.\n", *virtaddr);
Guess what that produced: a single-beat transaction delivering the data within
the expected time constraints, without a cache burst or any other "bad stuff".
So the kernel-space pages are fine, correctly set to cache-inhibited and guarded
(no reordering of accesses).
reordering of accesses).
The culprit could be the vme_mmap_phys() function, because it introduces another
mapping of pages, and with mmap you cannot control caching behaviour.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: VME driver patch for PowerPC
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
` (2 preceding siblings ...)
2004-06-09 11:25 ` VME driver patch for PowerPC Oliver Korpilla
@ 2004-06-09 12:59 ` Oliver Korpilla
2004-06-09 13:14 ` Complete " Oliver Korpilla
3 siblings, 1 reply; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-09 12:59 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Hello!
Adding the patch below produced correct behaviour for vme_peek/vme_poke, with the
correct data width (tested VME_D8, VME_D16 and VME_D32) and without cache bursts.
It simply sets the cache-inhibited and guarded bits before remapping the pages
(these flags are PowerPC-specific, and the pci_mmap_page_range() function is
sadly not an exported kernel symbol).
Would you again be so kind as to run a "still works" test on an Intel board?
It looks like a hack, but I guess it simply gets the job done... no need for a
larger rewrite here.
It may take some time until I can again lay hands on an MVME5500 (PPC 7455,
1 GHz) to verify this on another PPC board, though.
With kind regards,
Oliver Korpilla
Index: module/vme_main.c
===================================================================
--- module/vme_main.c (revision 5)
+++ module/vme_main.c (revision 6)
@@ -191,14 +191,18 @@
*/
int vme_mmap(struct file *file_ptr, struct vm_area_struct *vma)
{
-
DPRINTF("Attempting to map %#lx bytes of memory at "
"physical address %#lx\n", vma->vm_end - vma->vm_start,
vma->vm_pgoff << PAGE_SHIFT);
+#ifdef CONFIG_PPC32
+ vma->vm_page_prot.pgprot |= _PAGE_NO_CACHE | _PAGE_GUARDED;
+ DPRINTF("PowerPC protection flags set.\n");
+#endif
+
/* Don't swap these pages out
*/
- vma->vm_flags |= VM_RESERVED;
+ vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,3) || defined RH9BRAINDAMAGE
return remap_page_range(vma, vma->vm_start, vma->vm_pgoff << PAGE_SHIFT,
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Complete VME driver patch for PowerPC
2004-06-09 12:59 ` Oliver Korpilla
@ 2004-06-09 13:14 ` Oliver Korpilla
0 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-09 13:14 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
This should be the complete patch for vmisft-7433-3.2 to work on PPCs.
With kind regards,
Oliver Korpilla
Index: module/vme_master.c
===================================================================
--- module/vme_master.c (revision 1)
+++ module/vme_master.c (revision 8)
@@ -164,6 +164,66 @@
MODULE_PARM(master_window7, "3-4i");
+extern struct pci_dev *universe_pci_dev;
+
+/* Try getting a resource (range of PCI addresses) from the PCI bus we're on */
+static int allocate_pci_resource(unsigned long size, unsigned long align,
+ struct resource *new_resource) {
+ /* Determine the bus the Tundra is on */
+ struct pci_bus *bus = universe_pci_dev->bus;
+ int i;
+
+ for (i=0; i<4; i++) {
+ int retval;
+ struct resource *r = bus->resource[i];
+
+ /* Check if that resource exists */
+ if (!r)
+ continue;
+
+ /* If the resource is not I/O memory (e.g. I/O ports) */
+ if (! (r->flags & IORESOURCE_MEM))
+ continue;
+
+#ifdef DEBUG
+ /* Print out name of resource for debugging */
+ if (r->name)
+ printk(KERN_INFO "Checking bus resource with name \"%s\".\n", r->name);
+ printk(KERN_INFO "resource.start: %08lX, resource.end: %08lX.\n",
+ r->start, r->end);
+#endif
+
+ /* Try to allocate a new sub-resource from this
+ given the proper size and alignment*/
+ retval = allocate_resource(r, new_resource, size,
+ pci_lo_bound, pci_hi_bound,
+ align, NULL, NULL);
+
+ /* If this allocation fails, try with next resource
+ (and give debug message) */
+ if (retval < 0) {
+
+#ifdef DEBUG
+ if (r->name)
+ printk(KERN_INFO
+ "Failed allocating from bus resource with name \"%s\".\n",
+ r->name);
+ else
+ printk(KERN_INFO
+ "Failed allocating from bus resource with number %d.\n", i);
+#endif
+
+ continue;
+ }
+ /* If this allocation succeeds, return what allocate_resource() returned */
+ else
+ return retval;
+ }
+
+ /* return busy if no resource could be successfully allocated */
+ return -EBUSY;
+}
+
/*============================================================================
* Hook for display proc page info
* WARNING: If the amount of data displayed exceeds a page, then we need to
@@ -428,9 +488,8 @@
return rval;
}
} else {
- rval = allocate_resource(&iomem_resource, &window->resource,
- size, pci_lo_bound, pci_hi_bound,
- resolution, NULL, NULL);
+ rval = allocate_pci_resource(size, resolution, &window->resource);
+
if (rval) {
window->resource.start = 0;
window->resource.end = 0;
@@ -622,9 +681,8 @@
/* Allocate a 64MB window with 64kb resolution
*/
- rval = allocate_resource(&iomem_resource, &slsi_window.resource,
- 0x4000000, pci_lo_bound, pci_hi_bound,
- 0x10000, NULL, NULL);
+ rval = allocate_pci_resource(0x4000000, 0x10000, &slsi_window.resource);
+
if (rval) {
printk(KERN_WARNING "VME: Unable to allocate memory for SLSI "
"window\n");
Index: module/vme_main.c
===================================================================
--- module/vme_main.c (revision 1)
+++ module/vme_main.c (revision 8)
@@ -191,14 +191,18 @@
*/
int vme_mmap(struct file *file_ptr, struct vm_area_struct *vma)
{
-
DPRINTF("Attempting to map %#lx bytes of memory at "
"physical address %#lx\n", vma->vm_end - vma->vm_start,
vma->vm_pgoff << PAGE_SHIFT);
+#ifdef CONFIG_PPC32
+ vma->vm_page_prot.pgprot |= _PAGE_NO_CACHE | _PAGE_GUARDED;
+ DPRINTF("PowerPC protection flags set.\n");
+#endif
+
/* Don't swap these pages out
*/
- vma->vm_flags |= VM_RESERVED;
+ vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,3) || defined RH9BRAINDAMAGE
return remap_page_range(vma, vma->vm_start, vma->vm_pgoff << PAGE_SHIFT,
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: VME driver patch for PowerPC
@ 2004-06-09 2:55 Heater, Daniel (GE Infrastructure)
2004-06-09 6:40 ` Oliver Korpilla
0 siblings, 1 reply; 17+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-06-09 2:55 UTC (permalink / raw)
To: Oliver Korpilla; +Cc: linuxppc-embedded
> Could you check, whether the patch to your driver (version
> 7433-3.2 of your
> Linux support) still compiles on an Intel platform, and works
> as intended?
Yep. I only did a quick test, but it appears to work fine on x86.
I've merged your patch up with the code base for the next release.
I think I have an idea for the read/write question, but I need to
look at it a little closer. I'll get back to you in a bit.
Thanks,
Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: VME driver patch for PowerPC
2004-06-09 2:55 Heater, Daniel (GE Infrastructure)
@ 2004-06-09 6:40 ` Oliver Korpilla
0 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-09 6:40 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Heater, Daniel (GE Infrastructure) wrote:
>>Could you check, whether the patch to your driver (version
>>7433-3.2 of your
>>Linux support) still compiles on an Intel platform, and works
>>as intended?
>>
>>
>
>Yep. I only did a quick test, but it appears to work fine on x86.
>I've merged your patch up with the code base for the next release.
>
>
>
Great!
I don't know about the x86 bus organization, but maybe it limits the
range of available addresses too strongly (to a single bus, the one the
Universe is on). Comparing how much window space you can map with the
old version and the new one on x86 could prove useful.
>I think I have an idea for the read/write question, but I need to
>look at it a little closer. I'll get back to you in a bit.
>
>
With a little bit of "luck" maybe there's a way around it, and I'm
currently looking into it:
readb(), readw() and readl() work fine (and their writeb/w/l
counterparts do as well), but pointer dereferencing does not. Every
pointer dereference triggers an 8-long-word read from the bus first.
I asked myself what could turn a single memory dereference into a long
read? The cache. Not surprisingly, the cache line length of my current
development board (MVME2100) is 8 long words for cache burst
transactions. So I get the wanted single-beat transactions when using
readx()/writex() - as documented for cache-inhibited, guarded or
write-through memory, or when the cache is disabled - but I get cache
bursts on plain pointer dereferences.
I will take a closer look at the properties of the pages I'm using in
kernel space and in user space, and the page tables, and maybe that
fixes it.
BTW, everywhere you use pci_alloc_consistent() (DMA, slave windows), it
ports perfectly fine.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: VME driver patch for PowerPC
@ 2004-06-09 13:59 Heater, Daniel (GE Infrastructure)
2004-06-09 14:29 ` Oliver Korpilla
0 siblings, 1 reply; 17+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-06-09 13:59 UTC (permalink / raw)
To: Oliver Korpilla; +Cc: linuxppc-embedded
> +#ifdef CONFIG_PPC32
> + vma->vm_page_prot.pgprot |= _PAGE_NO_CACHE | _PAGE_GUARDED;
> + DPRINTF("PowerPC protection flags set.\n");
> +#endif
Cool. I was just about to suggest _PAGE_NO_CACHE | _PAGE_GUARDED.
> /* Don't swap these pages out
> */
> - vma->vm_flags |= VM_RESERVED;
> + vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM;
I'm trying to understand this change. VM_IO looks like it needs to
be there to prevent deadlocks on core dumps.
http://www.uwsg.iu.edu/hypermail/linux/kernel/0202.0/1309.html
and if I'm interpreting some older mailing list postings correctly,
VM_RESERVED is a replacement for VM_LOCKED | VM_SHM but VM_RESERVED
may yield some performance advantages. Thus, in later kernels you
only see VM_RESERVED and not VM_LOCKED | VM_SHM.
Maybe since this is an out of tree driver, it should have
> + vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM | VM_RESERVED;
to handle older kernels and still get the advantages of VM_RESERVED
on newer kernels.
What do you think? Am I interpreting this correctly?
Daniel.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: VME driver patch for PowerPC
2004-06-09 13:59 Heater, Daniel (GE Infrastructure)
@ 2004-06-09 14:29 ` Oliver Korpilla
0 siblings, 0 replies; 17+ messages in thread
From: Oliver Korpilla @ 2004-06-09 14:29 UTC (permalink / raw)
To: Heater, Daniel (GE Infrastructure); +Cc: linuxppc-embedded
Heater, Daniel (GE Infrastructure) wrote:
>> /* Don't swap these pages out
>> */
>>- vma->vm_flags |= VM_RESERVED;
>>+ vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM;
>
>
> I'm trying to understand this change. VM_IO looks like it needs to
> be there to prevent deadlocks on core dumps.
> http://www.uwsg.iu.edu/hypermail/linux/kernel/0202.0/1309.html
>
> and if I'm interpreting some older mailing list postings correctly,
> VM_RESERVED is a replacement for VM_LOCKED | VM_SHM but VM_RESERVED
> may yield some performance advantages. Thus, in later kernels you
> only see VM_RESERVED and not VM_LOCKED | VM_SHM.
>
> Maybe since this is an out of tree driver, it should have
>
>>+ vma->vm_flags |= VM_LOCKED | VM_IO | VM_SHM | VM_RESERVED;
>
>
> to handle older kernels and still get the advantages of VM_RESERVED
> on newer kernels.
>
> What do you think? Am I interpreting this correctly?
>
To be frank, I "modelled" this after the flag configuration in
pci_mmap_page_range() in the PowerPC tree of kernel 2.4.21 (where I got the page
protection changes, too).
If VM_RESERVED is somewhat of an alias, it should prove okay. Looking into my
"documentation" yielded no quick results on how to interpret the flags.
I could simply test whether both variants work.
With kind regards,
Oliver Korpilla
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: VME driver patch for PowerPC
@ 2004-06-09 20:01 Heater, Daniel (GE Infrastructure)
0 siblings, 0 replies; 17+ messages in thread
From: Heater, Daniel (GE Infrastructure) @ 2004-06-09 20:01 UTC (permalink / raw)
To: okorpil; +Cc: linuxppc-embedded
> >>Could you check, whether the patch to your driver (version
> >>7433-3.2 of your
> >>Linux support) still compiles on an Intel platform, and works
> >>as intended?
> >>
> >Yep. I only did a quick test, but it appears to work fine on x86.
> >I've merged your patch up with the code base for the next release.
> >
> Great!
>
> I don't know about x86 bus organization, but maybe it limits
> the address
> range for available addresses too strongly (on a single bus,
> where the
> Universe is on). Comparing how much window space you can map with the
> old version and the new one on x86 could prove useful.
You're dead-on right. I cannot allocate as much space. In fact,
on a VMIVME-7750 I'm only able to allocate 768 KB now vs. the
~2 MB I could access before.
> Adding the patch below activated correct behaviour for vme_peek/poke,
> with correct data width (tested VME_D8, VME_D16 and VME_D32), and
> without cache bursts.
>
> It simply sets the cache-inhibited and guarded bits before
> remapping the pages (these flags are PowerPC-specific, and the
> pci_mmap_page_range() function is sadly no exported kernel symbol).
>
> Would you again be so kind to run some "still works"-test on an
> Intel board?
That part still works.
Thanks,
Daniel L. Heater
Software Development, Embedded Systems
GE Fanuc Automation Americas, Inc.
VMIC, Inc.
** Sent via the linuxppc-embedded mail list. See http://lists.linuxppc.org/
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads: [~2004-06-09 20:01 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-05-18 15:25 [Fwd: Memory layout question] Heater, Daniel (GE Infrastructure)
2004-05-19 6:51 ` Differing PCI layouts trigger porting driver problem [Was: " Oliver Korpilla
2004-05-25 13:56 ` [Fwd: " Oliver Korpilla
2004-05-26 8:37 ` Oliver Korpilla
2004-05-26 11:56 ` Oliver Korpilla
2004-06-02 7:42 ` Successful master window access Oliver Korpilla
2004-06-07 15:30 ` VME driver patch for PowerPC Oliver Korpilla
2004-06-08 9:05 ` VME driver patch for PowerPC [Continued] Oliver Korpilla
2004-06-08 9:59 ` VME driver change suggestion Oliver Korpilla
2004-06-09 11:25 ` VME driver patch for PowerPC Oliver Korpilla
2004-06-09 12:59 ` Oliver Korpilla
2004-06-09 13:14 ` Complete " Oliver Korpilla
-- strict thread matches above, loose matches on Subject: below --
2004-06-09 2:55 Heater, Daniel (GE Infrastructure)
2004-06-09 6:40 ` Oliver Korpilla
2004-06-09 13:59 Heater, Daniel (GE Infrastructure)
2004-06-09 14:29 ` Oliver Korpilla
2004-06-09 20:01 Heater, Daniel (GE Infrastructure)