linuxppc-dev.lists.ozlabs.org archive mirror
* Help!
@ 1999-06-03  1:51 Jeremy Welling
  0 siblings, 0 replies; 8+ messages in thread
From: Jeremy Welling @ 1999-06-03  1:51 UTC (permalink / raw)
  To: linuxppc-dev


I've got a G3 minitower.  I can't get Linux to acknowledge the Ultra
SCSI drive.  I tried to bypass the built-in SCSI by plugging the drive into
a PCI SCSI card, but that didn't help.  I've also tried kernels 2.2.1 through
2.2.7.  I started by partitioning the internal drive with the correct
partitions.  When I boot Linux through BootX and go into the Red Hat
installer, it can't find the disk.  If I drop out to the Red Hat messages,
they say "/proc/scsi/scsi: Attached devices: none".  The kernel messages
report detecting "scsi0 : MESH" and "scsi : 1 host", and no other
controllers.  If anyone has any input, it would be greatly appreciated.

Thanks!
Jeremy Welling

[[ This message was sent via the linuxppc-dev mailing list.  Replies are ]]
[[ not  forced  back  to the list, so be sure to Cc linuxppc-dev if your ]]
[[ reply is of general interest. Please check http://lists.linuxppc.org/ ]]
[[ and http://www.linuxppc.org/ for useful information before posting.   ]]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Help!
@ 2004-10-26  0:57 soar.wu
  2004-10-26 14:13 ` Help! Mark Chambers
  0 siblings, 1 reply; 8+ messages in thread
From: soar.wu @ 2004-10-26  0:57 UTC (permalink / raw)
  To: linuxppc-embedded


[-- Attachment #1.1: Type: text/plain, Size: 2172 bytes --]

Hi,

I am very sorry to disturb you, but I have a problem.

We use a PowerPC chip, though it is not an IBM chip; it is a Motorola MPC8260.
We have an HPI (host port interface) connected to the local bus of the MPC8260.

First we write data to the HPI,
then we read the data back from the HPI into SDRAM (this SDRAM is connected to the 60x bus).
Finally we read the data from the SDRAM and compare it with the data that was written to the HPI.

But we find the data is different: there is a "hop".
After we read the data back from the SDRAM, we find a value has been skipped.
For example, the data written to the HPI is ordered 1,2,3,4,5,6,7,8,
but the data read back from the HPI and stored to SDRAM is 1,2,3,4,6,7,8,..

Then I modified the source code as follows (C source interleaved with the compiled disassembly):
void hpi_ul_memcpy_dsp2h(void *dest,U32 src_dsp_memory_addr, U32 count)
{
..............
for(i = 0; i < len ; i+=4)
0x9ee940 +0x074: li r0, 0x0 (0)
0x9ee944 +0x078: stw r0, 0x18(r31)
0x9ee948 +0x07c: lwz r0, 0x18(r31)
0x9ee94c +0x080: lwz r9, 0x1c(r31)
0x9ee950 +0x084: cmpl crf1, 0, r0, r9
0x9ee954 +0x088: bc 0xc, 0x4, hpi_ul_memcpy_dsp2h + 0x90
0x9ee958 +0x08c: b hpi_ul_memcpy_dsp2h + 0xcc
{
#if 0 /* the old source code */
*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
p_cur++;
#else /* the new source code */
tmpReadRst = READ_UL_HPI_REG(UL_HPIDA_ADDR);
0x9ee95c +0x090: lis r9, 0x5200 (20992)
0x9ee960 +0x094: ori r9, r9, 0x8
0x9ee964 +0x098: lwz r0, 0x0(r9)
0x9ee968 +0x09c: stw r0, 0x24(r31)
*p_cur = tmpReadRst;
0x9ee96c +0x0a0: lwz r9, 0x14(r31)
0x9ee970 +0x0a4: lwz r0, 0x24(r31)
0x9ee974 +0x0a8: stw r0, 0x0(r9)
p_cur++;
0x9ee978 +0x0ac: lwz r9, 0x14(r31)
0x9ee97c +0x0b0: addi r0, r9, 0x4 (4)
0x9ee980 +0x0b4: or r9, r0, r0
0x9ee984 +0x0b8: stw r9, 0x14(r31)
#endif
}
..........
}
With this change, the problem disappears.

But I do not know the reason.
I only know that, for example, data loaded into a register by a load instruction
is only valid two clock cycles after the instruction executes.

Can you tell me the reason?

I look forward to your reply.
Best Regards,
Soar Wu

Node B, 3G Department
UTStarcom (China) Inc.
Tel:(86-0755)26952899-4610
Email:soar.wu@utstar.com



[-- Attachment #1.2: Type: text/html, Size: 4389 bytes --]

[-- Attachment #2: Clear Day Bkgrd.JPG --]
[-- Type: image/jpeg, Size: 5675 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help!
  2004-10-26  0:57 Help! soar.wu
@ 2004-10-26 14:13 ` Mark Chambers
  2004-10-27  3:47   ` Help! soar.wu
  0 siblings, 1 reply; 8+ messages in thread
From: Mark Chambers @ 2004-10-26 14:13 UTC (permalink / raw)
  To: soar.wu, linuxppc-embedded

> First we write data to the HPI,
> then we read the data back from the HPI into SDRAM (this SDRAM is connected
> to the 60x bus).
> Finally we read the data from the SDRAM and compare it with the data that
> was written to the HPI.

Can you show the entire source for hpi_ul_memcpy_dsp2h()?

Also, you say 'HPI' - are you connected to a T.I. DSP?  Which one?

Mark Chambers

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Help!
  2004-10-26 14:13 ` Help! Mark Chambers
@ 2004-10-27  3:47   ` soar.wu
  2004-10-27  3:48     ` Help! soar.wu
  0 siblings, 1 reply; 8+ messages in thread
From: soar.wu @ 2004-10-27  3:47 UTC (permalink / raw)
  To: Mark Chambers, linuxppc-embedded

[-- Attachment #1: Type: text/plain, Size: 2480 bytes --]


The attached file contains the hpi_ul_memcpy_dsp2h() function.

Yes, the HPI is connected to a T.I. DSP, a 6416.

Regarding progress on the HPI problem,
we now have the following workarounds (a consolidated sketch follows the list):

1. If we use a temporary variable to hold the value read from the HPI, and then store that to SDRAM, it is OK; there is no hop.
     tmpReadRst = READ_UL_HPI_REG(UL_HPIDA_ADDR);
    *p_cur = tmpReadRst;
     p_cur++;

2. If we do not modify the source code but compile it with -O3 optimization, it is OK.
	*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
	p_cur++;

3. If we add a sync instruction to the source code, it is OK.
	*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
            __asm__("  eieio; sync");
	p_cur++;

4. If we modify the BSP to change the memory mapping attributes,
from:
PHYS_MEM_DESC sysPhysMemDesc [] =
{
...
    /*all the other small chip*/
    {
    (void *) 0x50000000,
    (void *) 0x50000000,
    0x08000000,     
    VM_STATE_MASK_VALID | VM_STATE_MASK_WRITABLE | VM_STATE_MASK_CACHEABLE,
    VM_STATE_VALID      | VM_STATE_WRITABLE      | VM_STATE_CACHEABLE_NOT
    },
...
to:
PHYS_MEM_DESC sysPhysMemDesc [] =
{
...
    /*all the other small chip*/
    {
    (void *) 0x50000000,
    (void *) 0x50000000,
    0x08000000,     
    VM_STATE_MASK_VALID | VM_STATE_MASK_WRITABLE | VM_STATE_MASK_CACHEABLE | VM_STATE_MASK_GUARDED,
    VM_STATE_VALID      | VM_STATE_WRITABLE      | VM_STATE_CACHEABLE_NOT | VM_STATE_GUARDED
    },
...

That is, we added VM_STATE_MASK_GUARDED and VM_STATE_GUARDED,
while still using the old source code:
	*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
	p_cur++;

Now we find there is no hop; it is OK.


5. If we use memory allocated by cacheDmaMalloc() as the destination, the hops
still occur, but there are fewer of them than when we use malloc().
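
For reference, here is a minimal sketch of what workarounds 1 and 3 amount to in one place (not our real code): the 0x52000008 data-register address and the U32 type come from the posted code and disassembly, while hpi_read_data() and hpi_copy_words() are hypothetical helpers standing in for READ_UL_HPI_REG() and the copy loop inside hpi_ul_memcpy_dsp2h().

typedef unsigned long U32;                 /* assumed: 32-bit on this target */

#define UL_HPIDA_ADDR 0x52000008UL         /* HPI data register (per the disassembly) */

static U32 hpi_read_data(void)
{
    /* volatile forces the compiler to issue exactly one real load here */
    volatile U32 *reg = (volatile U32 *)UL_HPIDA_ADDR;
    U32 val = *reg;                        /* plays the role of tmpReadRst */

    /* order the I/O load against the following store to SDRAM (workaround 3) */
    __asm__ __volatile__("eieio; sync" : : : "memory");
    return val;
}

static void hpi_copy_words(U32 *dest, U32 nwords)
{
    U32 i;
    for (i = 0; i < nwords; i++)
        dest[i] = hpi_read_data();         /* one auto-increment HPI read per word */
}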

Can you tell me the reason?

I look forward to your reply.

Best Regards,
Soar Wu

-----Original Message-----
From: Mark Chambers [mailto:markc@mail.com]
Sent: 26 October 2004 22:13
To: soar.wu; linuxppc-embedded@ozlabs.org
Subject: Re: Help!


> First we write data to the HPI,
> then we read the data back from the HPI into SDRAM (this SDRAM is connected
> to the 60x bus).
> Finally we read the data from the SDRAM and compare it with the data that
> was written to the HPI.

Can you show the entire source for hpi_ul_memcpy_dsp2h()?

Also, you say 'HPI' - are you connected to a T.I. DSP?  Which one?

Mark Chambers

[-- Attachment #2: dsp2h.txt --]
[-- Type: text/plain, Size: 4031 bytes --]

void hpi_ul_memcpy_dsp2h(void *dest,U32 src_dsp_memory_addr, U32 count)
{
	0x9ee8bc       hpi_ul_memcpy_dsp2h:    stwu        r1, 0xffd0(r1)
	0x9ee8c0       +0x004:                 mfspr       r0, LR
	0x9ee8c4       +0x008:                 stw         r31, 0x2c(r1)
	0x9ee8c8       +0x00c:                 stw         r0, 0x34(r1)
	0x9ee8cc       +0x010:                 or          r31, r1, r1
	0x9ee8d0       +0x014:                 stw         r3, 0x8(r31)
	0x9ee8d4       +0x018:                 stw         r4, 0xc(r31)
	0x9ee8d8       +0x01c:                 stw         r5, 0x10(r31)
	U32 *p_cur = dest;
	0x9ee8dc       +0x020:                 lwz         r0, 0x8(r31)
	0x9ee8e0       +0x024:                 stw         r0, 0x14(r31)
	U32  i,len;
	U32 scraddr = src_dsp_memory_addr;
	0x9ee8e4       +0x028:                 lwz         r0, 0xc(r31)
	0x9ee8e8       +0x02c:                 stw         r0, 0x20(r31)
	
	len = DWORD_ALIGN(count);
	0x9ee8ec       +0x030:                 lwz         r9, 0x10(r31)
	0x9ee8f0       +0x034:                 addi        r0, r9, 0x3 (3)
	0x9ee8f4       +0x038:                 rlwinm      r9, r0, 0x1e, 2, 31
	0x9ee8f8       +0x03c:                 or          r0, r9, r9
	0x9ee8fc       +0x040:                 rlwinm      r9, r0, 0x2, 0, 29
	0x9ee900       +0x044:                 stw         r9, 0x1c(r31)

	HPI_UL_LOCK();	
	0x9ee904       +0x048:                 lis         r9, 0x9f (159)
	0x9ee908       +0x04c:                 addi        r11, r9, 0x6d8 (1752)
	0x9ee90c       +0x050:                 lwz         r3, 0x0(r11)
	0x9ee910       +0x054:                 li          r4, 0xffff (-1)
	0x9ee914       +0x058:                 bl          semTake

	

	WRITE_UL_HPI_REG(UL_HPIA_ADDR,scraddr);
	0x9ee918       +0x05c:                 lis         r9, 0x5200 (20992)
	0x9ee91c       +0x060:                 ori         r9, r9, 0x4
	0x9ee920       +0x064:                 lwz         r0, 0x20(r31)
	0x9ee924       +0x068:                 stw         r0, 0x0(r9)
	for(i = 0; i < len ; i+=4)
	0x9ee928       +0x06c:                 li          r0, 0x0 (0)
	0x9ee92c       +0x070:                 stw         r0, 0x18(r31)
	0x9ee930       +0x074:                 lwz         r0, 0x18(r31)
	0x9ee934       +0x078:                 lwz         r9, 0x1c(r31)
	0x9ee938       +0x07c:                 cmpl        crf1, 0, r0, r9
	0x9ee93c       +0x080:                 bc          0xc, 0x4, hpi_ul_memcpy_dsp2h + 0x88
	0x9ee940       +0x084:                 b           hpi_ul_memcpy_dsp2h + 0xbc
	{
		*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
	0x9ee944       +0x088:                 lwz         r9, 0x14(r31)
	0x9ee948       +0x08c:                 lis         r11, 0x5200 (20992)
	0x9ee94c       +0x090:                 ori         r11, r11, 0x8
	0x9ee950       +0x094:                 lwz         r0, 0x0(r11)
	0x9ee954       +0x098:                 stw         r0, 0x0(r9)
		p_cur++;
	0x9ee958       +0x09c:                 lwz         r9, 0x14(r31)
	0x9ee95c       +0x0a0:                 addi        r0, r9, 0x4 (4)
	0x9ee960       +0x0a4:                 or          r9, r0, r0
	0x9ee964       +0x0a8:                 stw         r9, 0x14(r31)
	}

	HPI_UL_UNLOCK();	
	0x9ee978       +0x0bc:                 lis         r9, 0x9f (159)
	0x9ee97c       +0x0c0:                 addi        r11, r9, 0x6d8 (1752)
	0x9ee980       +0x0c4:                 lwz         r3, 0x0(r11)
	0x9ee984       +0x0c8:                 bl          semGive

	return;
	0x9ee988       +0x0cc:                 b           hpi_ul_memcpy_dsp2h + 0xd0
}
	0x9ee98c       +0x0d0:                 lwz         r11, 0x0(r1)
	0x9ee990       +0x0d4:                 lwz         r0, 0x4(r11)
	0x9ee994       +0x0d8:                 mtspr       LR, r0
	0x9ee998       +0x0dc:                 lwz         r31, 0xfffc(r11)
	0x9ee99c       +0x0e0:                 or          r1, r11, r11
	0x9ee9a0       +0x0e4:                 blr         

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Help!
  2004-10-27  3:47   ` Help! soar.wu
@ 2004-10-27  3:48     ` soar.wu
  2004-10-27  3:49       ` Help! soar.wu
  2004-10-27 12:20       ` Help! Mark Chambers
  0 siblings, 2 replies; 8+ messages in thread
From: soar.wu @ 2004-10-27  3:48 UTC (permalink / raw)
  To: linuxppc-embedded


^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Help!
  2004-10-27  3:48     ` Help! soar.wu
@ 2004-10-27  3:49       ` soar.wu
  2004-10-27 12:20       ` Help! Mark Chambers
  1 sibling, 0 replies; 8+ messages in thread
From: soar.wu @ 2004-10-27  3:49 UTC (permalink / raw)
  To: linuxppc-embedded

[-- Attachment #1: Type: text/plain, Size: 2851 bytes --]

Sorry, I forgot the attached file.

-----Original Message-----
From: soar.wu [mailto:soar.wu@utstar.com]
Sent: 27 October 2004 11:49
To: linuxppc-embedded@ozlabs.org
Subject: RE: Help!





[-- Attachment #2: dsp2h.txt --]
[-- Type: text/plain, Size: 4031 bytes --]

void hpi_ul_memcpy_dsp2h(void *dest,U32 src_dsp_memory_addr, U32 count)
{
	0x9ee8bc       hpi_ul_memcpy_dsp2h:    stwu        r1, 0xffd0(r1)
	0x9ee8c0       +0x004:                 mfspr       r0, LR
	0x9ee8c4       +0x008:                 stw         r31, 0x2c(r1)
	0x9ee8c8       +0x00c:                 stw         r0, 0x34(r1)
	0x9ee8cc       +0x010:                 or          r31, r1, r1
	0x9ee8d0       +0x014:                 stw         r3, 0x8(r31)
	0x9ee8d4       +0x018:                 stw         r4, 0xc(r31)
	0x9ee8d8       +0x01c:                 stw         r5, 0x10(r31)
	U32 *p_cur = dest;
	0x9ee8dc       +0x020:                 lwz         r0, 0x8(r31)
	0x9ee8e0       +0x024:                 stw         r0, 0x14(r31)
	U32  i,len;
	U32 scraddr = src_dsp_memory_addr;
	0x9ee8e4       +0x028:                 lwz         r0, 0xc(r31)
	0x9ee8e8       +0x02c:                 stw         r0, 0x20(r31)
	
	len = DWORD_ALIGN(count);
	0x9ee8ec       +0x030:                 lwz         r9, 0x10(r31)
	0x9ee8f0       +0x034:                 addi        r0, r9, 0x3 (3)
	0x9ee8f4       +0x038:                 rlwinm      r9, r0, 0x1e, 2, 31
	0x9ee8f8       +0x03c:                 or          r0, r9, r9
	0x9ee8fc       +0x040:                 rlwinm      r9, r0, 0x2, 0, 29
	0x9ee900       +0x044:                 stw         r9, 0x1c(r31)

	HPI_UL_LOCK();	
	0x9ee904       +0x048:                 lis         r9, 0x9f (159)
	0x9ee908       +0x04c:                 addi        r11, r9, 0x6d8 (1752)
	0x9ee90c       +0x050:                 lwz         r3, 0x0(r11)
	0x9ee910       +0x054:                 li          r4, 0xffff (-1)
	0x9ee914       +0x058:                 bl          semTake

	

	WRITE_UL_HPI_REG(UL_HPIA_ADDR,scraddr);
	0x9ee918       +0x05c:                 lis         r9, 0x5200 (20992)
	0x9ee91c       +0x060:                 ori         r9, r9, 0x4
	0x9ee920       +0x064:                 lwz         r0, 0x20(r31)
	0x9ee924       +0x068:                 stw         r0, 0x0(r9)
	for(i = 0; i < len ; i+=4)
	0x9ee928       +0x06c:                 li          r0, 0x0 (0)
	0x9ee92c       +0x070:                 stw         r0, 0x18(r31)
	0x9ee930       +0x074:                 lwz         r0, 0x18(r31)
	0x9ee934       +0x078:                 lwz         r9, 0x1c(r31)
	0x9ee938       +0x07c:                 cmpl        crf1, 0, r0, r9
	0x9ee93c       +0x080:                 bc          0xc, 0x4, hpi_ul_memcpy_dsp2h + 0x88
	0x9ee940       +0x084:                 b           hpi_ul_memcpy_dsp2h + 0xbc
	{
		*p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
	0x9ee944       +0x088:                 lwz         r9, 0x14(r31)
	0x9ee948       +0x08c:                 lis         r11, 0x5200 (20992)
	0x9ee94c       +0x090:                 ori         r11, r11, 0x8
	0x9ee950       +0x094:                 lwz         r0, 0x0(r11)
	0x9ee954       +0x098:                 stw         r0, 0x0(r9)
		p_cur++;
	0x9ee958       +0x09c:                 lwz         r9, 0x14(r31)
	0x9ee95c       +0x0a0:                 addi        r0, r9, 0x4 (4)
	0x9ee960       +0x0a4:                 or          r9, r0, r0
	0x9ee964       +0x0a8:                 stw         r9, 0x14(r31)
	}

	HPI_UL_UNLOCK();	
	0x9ee978       +0x0bc:                 lis         r9, 0x9f (159)
	0x9ee97c       +0x0c0:                 addi        r11, r9, 0x6d8 (1752)
	0x9ee980       +0x0c4:                 lwz         r3, 0x0(r11)
	0x9ee984       +0x0c8:                 bl          semGive

	return;
	0x9ee988       +0x0cc:                 b           hpi_ul_memcpy_dsp2h + 0xd0
}
	0x9ee98c       +0x0d0:                 lwz         r11, 0x0(r1)
	0x9ee990       +0x0d4:                 lwz         r0, 0x4(r11)
	0x9ee994       +0x0d8:                 mtspr       LR, r0
	0x9ee998       +0x0dc:                 lwz         r31, 0xfffc(r11)
	0x9ee99c       +0x0e0:                 or          r1, r11, r11
	0x9ee9a0       +0x0e4:                 blr         

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Help!
  2004-10-27  3:48     ` Help! soar.wu
  2004-10-27  3:49       ` Help! soar.wu
@ 2004-10-27 12:20       ` Mark Chambers
  2004-10-27 13:08         ` Help! soar.wu
  1 sibling, 1 reply; 8+ messages in thread
From: Mark Chambers @ 2004-10-27 12:20 UTC (permalink / raw)
  To: soar.wu; +Cc: linuxppc-embedded

> Yes, the 'HPI' - are connected to a T.I. DSP, 6416 type.
>
> About the HPI problem progress.
> now we have the following resolve method:
>
> 1, If we use a temp variable to store the read out data, then store the
data to SDRAM, it is OK, there is no hop.
>      tmpReadRst = READ_UL_HPI_REG(UL_HPIDA_ADDR);
>     *p_cur = tmpReadRst;
>      p_cur++;
>
> 2, If we donot modify the source codes, but we use the optimization O3 to
compile the source codes, it is OK.
> *p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
> p_cur++;
>
> 3, If we add a sync instruction to the source  codes, it is OK.
> *p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
>             __asm__("  eieio; sync");
> p_cur++;
>
> 4, If we modify the BSP, update the memory operation option,
> from :
> PHYS_MEM_DESC sysPhysMemDesc [] =
> {
> ...
>     /*all the other small chip*/
>     {
>     (void *) 0x50000000,
>     (void *) 0x50000000,
>     0x08000000,
>     VM_STATE_MASK_VALID | VM_STATE_MASK_WRITABLE |
VM_STATE_MASK_CACHEABLE,
>     VM_STATE_VALID      | VM_STATE_WRITABLE      | VM_STATE_CACHEABLE_NOT
>     },
> ...
> to:
> PHYS_MEM_DESC sysPhysMemDesc [] =
> {
> ...
>     /*all the other small chip*/
>     {
>     (void *) 0x50000000,
>     (void *) 0x50000000,
>     0x08000000,
>     VM_STATE_MASK_VALID | VM_STATE_MASK_WRITABLE | VM_STATE_MASK_CACHEABLE
| VM_STATE_MASK_GUARDED,
>     VM_STATE_VALID      | VM_STATE_WRITABLE      | VM_STATE_CACHEABLE_NOT
| VM_STATE_GUARDED
>     },
> ...
>
> We added the option VM_STATE_VM_MASK_GUARDED and VM_STATE_GUARDED,
> still use the old source codes:
> *p_cur = READ_UL_HPI_REG(UL_HPIDA_ADDR);
> p_cur++;
>
> Now We find there is no hop. It is OK
>
>
> 5, If we use the memory which is allocated by function cacheDmaMalloc(),
> the hop still exist, but the number of  hop is little than when we use
malloc().
>
> Do you tell me the reason??
>

sysPhysMemDesc? This is VxWorks, not Linux, right?  Well, the same general
points can be made, but this list is not the right one for VxWorks issues.

1) Even though the 8260 is cache-coherent you should make sure the MMU is
set up so the HPI space is non-cached.
2) In C code make sure the HPI space is declared as 'volatile' so the
compiler doesn't optimize out references to the address.
3) Remember that the HPI interface on the DSP is DMA driven and goes through
a FIFO, so that writes to DSP memory have a delay before they actually
appear in DSP memory.  And don't forget to look at cache issues on the DSP
side.  Your problem may actually be on the DSP side and your various fixes
are just introducing enough delay for things to complete on the DSP side.
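
As an illustration of point 2, here is a minimal sketch (an assumption, not the poster's actual headers) of volatile-qualified accessors for the two HPI registers visible in the posted disassembly; with the window also mapped cache-inhibited and guarded per point 1, neither the compiler nor the CPU can reorder or elide these accesses:

/* Hypothetical register definitions for illustration only; the addresses are
 * taken from the posted disassembly (lis 0x5200 / ori 0x4 and ori 0x8). */
#define UL_HPIA_ADDR   0x52000004UL    /* HPI address register */
#define UL_HPIDA_ADDR  0x52000008UL    /* HPI data register, auto-incrementing */

/* Route every access through a volatile pointer so the compiler cannot
 * cache, merge, or drop reads and writes to the HPI window. */
#define HPI_REG(addr)             (*(volatile unsigned long *)(addr))
#define READ_UL_HPI_REG(addr)     (HPI_REG(addr))
#define WRITE_UL_HPI_REG(addr, v) ((void)(HPI_REG(addr) = (unsigned long)(v)))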

Good luck,
Mark Chambers

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: Help!
  2004-10-27 12:20       ` Help! Mark Chambers
@ 2004-10-27 13:08         ` soar.wu
  0 siblings, 0 replies; 8+ messages in thread
From: soar.wu @ 2004-10-27 13:08 UTC (permalink / raw)
  To: linuxppc-embedded

Dear Mark,

Thank you for your detailed reply.

I will continue to check it.

Thank you.

Soar Wu

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2004-10-27 13:07 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
1999-06-03  1:51 Help! Jeremy Welling
  -- strict thread matches above, loose matches on Subject: below --
2004-10-26  0:57 Help! soar.wu
2004-10-26 14:13 ` Help! Mark Chambers
2004-10-27  3:47   ` Help! soar.wu
2004-10-27  3:48     ` Help! soar.wu
2004-10-27  3:49       ` Help! soar.wu
2004-10-27 12:20       ` Help! Mark Chambers
2004-10-27 13:08         ` Help! soar.wu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).