* [PATCH 1/2] pci-phb: check for the 32-bit overflow @ 2015-04-22 10:57 Nikunj A Dadhania 2015-04-23 13:43 ` Thomas Huth 0 siblings, 1 reply; 8+ messages in thread From: Nikunj A Dadhania @ 2015-04-22 10:57 UTC (permalink / raw) To: linuxppc-dev; +Cc: aik, thuth, nikunj, david With the addition of 64-bit BARs and the increase in the MMIO address space, the code was hitting the 32-bit limit. The memory of PCI devices behind bridges was not accessible, so their drivers failed. Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> --- board-qemu/slof/pci-phb.fs | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs index 529772f..e307d95 100644 --- a/board-qemu/slof/pci-phb.fs +++ b/board-qemu/slof/pci-phb.fs @@ -258,7 +258,8 @@ setup-puid decode-64 2 / dup >r \ Decode and calc size/2 pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address dup pci-next-mmio ! \ which is the same as MMIO base - r> + pci-max-mmio ! \ calc max MMIO address + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and + \ check the 32-bit boundary ENDOF 3000000 OF \ 64-bit memory space? decode-64 pci-next-mem64 ! -- 1.8.3.1 ^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-22 10:57 [PATCH 1/2] pci-phb: check for the 32-bit overflow Nikunj A Dadhania @ 2015-04-23 13:43 ` Thomas Huth 2015-04-24 3:52 ` Nikunj A Dadhania 0 siblings, 1 reply; 8+ messages in thread From: Thomas Huth @ 2015-04-23 13:43 UTC (permalink / raw) To: Nikunj A Dadhania; +Cc: aik, linuxppc-dev, david Am Wed, 22 Apr 2015 16:27:19 +0530 schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: > With the addition of 64-bit BARS and increase in the mmio address > space, the code was hitting this limit. The memory of pci devices > across the bridges were not accessible due to which the drivers > failed. > > Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> > --- > board-qemu/slof/pci-phb.fs | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs > index 529772f..e307d95 100644 > --- a/board-qemu/slof/pci-phb.fs > +++ b/board-qemu/slof/pci-phb.fs > @@ -258,7 +258,8 @@ setup-puid > decode-64 2 / dup >r \ Decode and calc size/2 > pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address Could pci-max-mem overflow, too? > dup pci-next-mmio ! \ which is the same as MMIO base > - r> + pci-max-mmio ! \ calc max MMIO address > + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and > + \ check the 32-bit boundary Thomas ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-23 13:43 ` Thomas Huth @ 2015-04-24 3:52 ` Nikunj A Dadhania 2015-04-24 10:56 ` Thomas Huth 0 siblings, 1 reply; 8+ messages in thread From: Nikunj A Dadhania @ 2015-04-24 3:52 UTC (permalink / raw) To: Thomas Huth; +Cc: aik, linuxppc-dev, david Hi Thomas, Thomas Huth <thuth@redhat.com> writes: > Am Wed, 22 Apr 2015 16:27:19 +0530 > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: > >> With the addition of 64-bit BARS and increase in the mmio address >> space, the code was hitting this limit. The memory of pci devices >> across the bridges were not accessible due to which the drivers >> failed. >> >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> >> --- >> board-qemu/slof/pci-phb.fs | 3 ++- >> 1 file changed, 2 insertions(+), 1 deletion(-) >> >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs >> index 529772f..e307d95 100644 >> --- a/board-qemu/slof/pci-phb.fs >> +++ b/board-qemu/slof/pci-phb.fs >> @@ -258,7 +258,8 @@ setup-puid >> decode-64 2 / dup >r \ Decode and calc size/2 >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address > > Could pci-max-mem overflow, too? It should not; only the boundary was an issue. QEMU sends base and size; base + size can go up to the uint32 maximum. So when, for example, base was 0xC000.0000 and size was 0x4000.0000, we added up base + size and set pci-max-mmio to 0x1.0000.0000, which would get programmed into the bridge registers: 0xC000 as the lower limit and 0x0000 as the upper limit. So no MMIO accesses were going across the bridge. In my testing, I have found one more issue with translate-my-address: it does not take care of 64-bit addresses. I have a patch working for SLOF, but it's breaking guest kernel boot. > >> dup pci-next-mmio ! \ which is the same as MMIO base >> - r> + pci-max-mmio ! \ calc max MMIO address >> + r> + FFFFFFFF min pci-max-mmio ! 
\ calc max MMIO address and >> + \ check the 32-bit boundary > > Thomas Regards, Nikunj ^ permalink raw reply [flat|nested] 8+ messages in thread
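[The arithmetic in Nikunj's example can be sketched outside of Forth. A hypothetical Python model of the clamp the patch adds, for illustration only — SLOF itself is Forth, and the base/size values are the ones from the example above:]

```python
# Model of the MMIO window calculation discussed above.  base + size can
# reach 0x1.0000.0000, which no longer fits the bridge's 32-bit limit
# registers; the patch clamps it with "FFFFFFFF min".
UINT32_MAX = 0xFFFFFFFF

def pci_max_mmio(base: int, size: int) -> int:
    """Clamped end of the MMIO window (Forth: r> + FFFFFFFF min)."""
    return min(base + size, UINT32_MAX)

# Example from the thread: base 0xC000.0000, size 0x4000.0000.
print(hex(pci_max_mmio(0xC0000000, 0x40000000)))  # 0xffffffff, not 0x100000000
```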
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-24 3:52 ` Nikunj A Dadhania @ 2015-04-24 10:56 ` Thomas Huth 2015-04-24 11:06 ` Thomas Huth 2015-04-27 4:58 ` Nikunj A Dadhania 0 siblings, 2 replies; 8+ messages in thread From: Thomas Huth @ 2015-04-24 10:56 UTC (permalink / raw) To: Nikunj A Dadhania; +Cc: aik, linuxppc-dev, david On Fri, 24 Apr 2015 09:22:33 +0530 Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote: > > Hi Thomas, > > Thomas Huth <thuth@redhat.com> writes: > > Am Wed, 22 Apr 2015 16:27:19 +0530 > > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: > > > >> With the addition of 64-bit BARS and increase in the mmio address > >> space, the code was hitting this limit. The memory of pci devices > >> across the bridges were not accessible due to which the drivers > >> failed. > >> > >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> > >> --- > >> board-qemu/slof/pci-phb.fs | 3 ++- > >> 1 file changed, 2 insertions(+), 1 deletion(-) > >> > >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs > >> index 529772f..e307d95 100644 > >> --- a/board-qemu/slof/pci-phb.fs > >> +++ b/board-qemu/slof/pci-phb.fs > >> @@ -258,7 +258,8 @@ setup-puid > >> decode-64 2 / dup >r \ Decode and calc size/2 > >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address > > > > Could pci-max-mem overflow, too? > > Should not, its only the boundary that was an issue. > > Qemu sends base and size, base + size can be till uint32 max. So for > example base was 0xC000.0000 and size was 0x4000.0000, we add up base + > size and put pci-max-mmio as 0x1.0000.0000, which would get programmend > in the bridge bars: lower limit as 0xC000 and 0x0000 as upper > limit. And no mmio access were going across the bridge. > > In my testing, I have found one more issue with translate-my-address, > it does not take care of 64-bit addresses. I have a patch working for > SLOF, but its breaking the guest kernel booting. 
> > > > >> dup pci-next-mmio ! \ which is the same as MMIO base > >> - r> + pci-max-mmio ! \ calc max MMIO address > >> + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and > >> + \ check the 32-bit boundary Ok, thanks a lot for the example! I think your patch likely works in practice, but after staring at the code for a while, I think the real bug is slightly different. If I get the code above right, pci-max-mmio is normally set to the first address that is _not_ part of the mmio window anymore, right. Now have a look at pci-bridge-set-mmio-base in pci-scan.fs: : pci-bridge-set-mmio-base ( addr -- ) pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge 10 rshift \ mmio-base reg is only the upper 16 bits pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) swap 20 + rtas-config-l! \ and write it into the bridge ; Seems like the pci-max-mmio, i.e. the first address that is not in the window anymore, is programmed into the memory limit register here - but according to the pci-to-pci bridge specification, it should be the last address of the window instead. So I think the correct fix would be to decrease the pci-max-mmio value in pci-bridge-set-mmio-base by 1- before programming it into the limit register (note: in pci-bridge-set-mmio-limit you can find a "1-" already, so I think this also should be done in pci-bridge-set-mmio-base, too) So if you've got some spare minutes, could you please check whether that would fix the issue, too? Thomas ^ permalink raw reply [flat|nested] 8+ messages in thread
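[Thomas's off-by-one point can be illustrated with a rough Python sketch — a hypothetical model of the 16-bit limit field, not the Forth code itself. Programming the first address past the window truncates to the wrong value once the window ends at 0x1.0000.0000, while decrementing first (the "1-") yields the intended inclusive limit:]

```python
# The bridge memory-limit register holds only the upper 16 bits of the
# *last* address inside the window (per the PCI-to-PCI bridge spec).
def mmio_limit_field(window_end: int) -> int:
    # Current (buggy) behaviour: mask the exclusive end directly.
    return window_end & 0xFFFF0000

def mmio_limit_field_fixed(window_end: int) -> int:
    # Suggested fix ("1-"): use the last address of the window instead.
    return (window_end - 1) & 0xFFFF0000

end = 0x100000000  # pci-max-mmio once the window grows past 32 bits
print(hex(mmio_limit_field(end)))        # 0x0 -> window effectively closed
print(hex(mmio_limit_field_fixed(end)))  # 0xffff0000 -> intended upper limit
```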
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-24 10:56 ` Thomas Huth @ 2015-04-24 11:06 ` Thomas Huth 2015-04-27 5:00 ` Nikunj A Dadhania 2015-04-27 5:41 ` Nikunj A Dadhania 2015-04-27 4:58 ` Nikunj A Dadhania 1 sibling, 2 replies; 8+ messages in thread From: Thomas Huth @ 2015-04-24 11:06 UTC (permalink / raw) To: Nikunj A Dadhania; +Cc: aik, linuxppc-dev, david On Fri, 24 Apr 2015 12:56:57 +0200 Thomas Huth <thuth@redhat.com> wrote: > On Fri, 24 Apr 2015 09:22:33 +0530 > Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote: > > > > > Hi Thomas, > > > > Thomas Huth <thuth@redhat.com> writes: > > > Am Wed, 22 Apr 2015 16:27:19 +0530 > > > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: > > > > > >> With the addition of 64-bit BARS and increase in the mmio address > > >> space, the code was hitting this limit. The memory of pci devices > > >> across the bridges were not accessible due to which the drivers > > >> failed. > > >> > > >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> > > >> --- > > >> board-qemu/slof/pci-phb.fs | 3 ++- > > >> 1 file changed, 2 insertions(+), 1 deletion(-) > > >> > > >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs > > >> index 529772f..e307d95 100644 > > >> --- a/board-qemu/slof/pci-phb.fs > > >> +++ b/board-qemu/slof/pci-phb.fs > > >> @@ -258,7 +258,8 @@ setup-puid > > >> decode-64 2 / dup >r \ Decode and calc size/2 > > >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address > > > > > > Could pci-max-mem overflow, too? > > > > Should not, its only the boundary that was an issue. > > > > Qemu sends base and size, base + size can be till uint32 max. So for > > example base was 0xC000.0000 and size was 0x4000.0000, we add up base + > > size and put pci-max-mmio as 0x1.0000.0000, which would get programmend > > in the bridge bars: lower limit as 0xC000 and 0x0000 as upper > > limit. And no mmio access were going across the bridge. 
> > > > In my testing, I have found one more issue with translate-my-address, > > it does not take care of 64-bit addresses. I have a patch working for > > SLOF, but its breaking the guest kernel booting. > > > > > > > >> dup pci-next-mmio ! \ which is the same as MMIO base > > >> - r> + pci-max-mmio ! \ calc max MMIO address > > >> + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and > > >> + \ check the 32-bit boundary > > Ok, thanks a lot for the example! I think your patch likely works in > practice, but after staring at the code for a while, I think the real > bug is slightly different. If I get the code above right, pci-max-mmio > is normally set to the first address that is _not_ part of the mmio > window anymore, right. Now have a look at pci-bridge-set-mmio-base in > pci-scan.fs: > > : pci-bridge-set-mmio-base ( addr -- ) > pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary > dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge > 10 rshift \ mmio-base reg is only the upper 16 bits > pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) > swap 20 + rtas-config-l! \ and write it into the bridge > ; > > Seems like the pci-max-mmio, i.e. the first address that is not in the > window anymore, is programmed into the memory limit register here - but > according to the pci-to-pci bridge specification, it should be the last > address of the window instead. > > So I think the correct fix would be to decrease the pci-max-mmio > value in pci-bridge-set-mmio-base by 1- before programming it into the > limit register (note: in pci-bridge-set-mmio-limit you can find a "1-" > already, so I think this also should be done in > pci-bridge-set-mmio-base, too) > > So if you've got some spare minutes, could you please check whether that > would fix the issue, too? By the way, if I'm right, pci-bridge-set-mem-base seems to suffer from the same problem, too. 
Thomas ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-24 11:06 ` Thomas Huth @ 2015-04-27 5:00 ` Nikunj A Dadhania 2015-04-27 5:41 ` Nikunj A Dadhania 1 sibling, 0 replies; 8+ messages in thread From: Nikunj A Dadhania @ 2015-04-27 5:00 UTC (permalink / raw) To: Thomas Huth; +Cc: aik, linuxppc-dev, david Thomas Huth <thuth@redhat.com> writes: > On Fri, 24 Apr 2015 12:56:57 +0200 > Thomas Huth <thuth@redhat.com> wrote: > >> On Fri, 24 Apr 2015 09:22:33 +0530 >> Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote: >> >> > >> > Hi Thomas, >> > >> > Thomas Huth <thuth@redhat.com> writes: >> > > Am Wed, 22 Apr 2015 16:27:19 +0530 >> > > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: >> > > >> > >> With the addition of 64-bit BARS and increase in the mmio address >> > >> space, the code was hitting this limit. The memory of pci devices >> > >> across the bridges were not accessible due to which the drivers >> > >> failed. >> > >> >> > >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> >> > >> --- >> > >> board-qemu/slof/pci-phb.fs | 3 ++- >> > >> 1 file changed, 2 insertions(+), 1 deletion(-) >> > >> >> > >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs >> > >> index 529772f..e307d95 100644 >> > >> --- a/board-qemu/slof/pci-phb.fs >> > >> +++ b/board-qemu/slof/pci-phb.fs >> > >> @@ -258,7 +258,8 @@ setup-puid >> > >> decode-64 2 / dup >r \ Decode and calc size/2 >> > >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address >> > > >> > > Could pci-max-mem overflow, too? >> > >> > Should not, its only the boundary that was an issue. >> > >> > Qemu sends base and size, base + size can be till uint32 max. So for >> > example base was 0xC000.0000 and size was 0x4000.0000, we add up base + >> > size and put pci-max-mmio as 0x1.0000.0000, which would get programmend >> > in the bridge bars: lower limit as 0xC000 and 0x0000 as upper >> > limit. And no mmio access were going across the bridge. 
>> > >> > In my testing, I have found one more issue with translate-my-address, >> > it does not take care of 64-bit addresses. I have a patch working for >> > SLOF, but its breaking the guest kernel booting. >> > >> > > >> > >> dup pci-next-mmio ! \ which is the same as MMIO base >> > >> - r> + pci-max-mmio ! \ calc max MMIO address >> > >> + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and >> > >> + \ check the 32-bit boundary >> >> Ok, thanks a lot for the example! I think your patch likely works in >> practice, but after staring at the code for a while, I think the real >> bug is slightly different. If I get the code above right, pci-max-mmio >> is normally set to the first address that is _not_ part of the mmio >> window anymore, right. Now have a look at pci-bridge-set-mmio-base in >> pci-scan.fs: >> >> : pci-bridge-set-mmio-base ( addr -- ) >> pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary >> dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge >> 10 rshift \ mmio-base reg is only the upper 16 bits >> pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) >> swap 20 + rtas-config-l! \ and write it into the bridge >> ; >> >> Seems like the pci-max-mmio, i.e. the first address that is not in the >> window anymore, is programmed into the memory limit register here - but >> according to the pci-to-pci bridge specification, it should be the last >> address of the window instead. >> >> So I think the correct fix would be to decrease the pci-max-mmio >> value in pci-bridge-set-mmio-base by 1- before programming it into the >> limit register (note: in pci-bridge-set-mmio-limit you can find a "1-" >> already, so I think this also should be done in >> pci-bridge-set-mmio-base, too) >> >> So if you've got some spare minutes, could you please check whether that >> would fix the issue, too? > > By the way, if I'm right, pci-bridge-set-mem-base seems to suffer from > the same problem, too. 
Both have the same issue, so I fixed them as below: diff --git a/slof/fs/pci-scan.fs b/slof/fs/pci-scan.fs index 15d0c8e..a552a74 100644 --- a/slof/fs/pci-scan.fs +++ b/slof/fs/pci-scan.fs @@ -87,7 +87,7 @@ here 100 allot CONSTANT pci-device-vec pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge 10 rshift \ mmio-base reg is only the upper 16 bits - pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) + pci-max-mmio @ 1- FFFF0000 and or \ and Insert mmio Limit (set it to max) swap 20 + rtas-config-l! \ and write it into the bridge ; @@ -116,7 +116,7 @@ here 100 allot CONSTANT pci-device-vec 2 pick 2C + rtas-config-l! \ | and set the Limit THEN \ FI 10 rshift \ keep upper 16 bits - pci-max-mem @ FFFF0000 and or \ and Insert mmem Limit (set it to max) + pci-max-mem @ 1- FFFF0000 and or \ and Insert mmem Limit (set it to max) swap 24 + rtas-config-l! \ and write it into the bridge ; ^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-24 11:06 ` Thomas Huth 2015-04-27 5:00 ` Nikunj A Dadhania @ 2015-04-27 5:41 ` Nikunj A Dadhania 1 sibling, 0 replies; 8+ messages in thread From: Nikunj A Dadhania @ 2015-04-27 5:41 UTC (permalink / raw) To: Thomas Huth; +Cc: aik, linuxppc-dev, david Thomas Huth <thuth@redhat.com> writes: > On Fri, 24 Apr 2015 12:56:57 +0200 > Thomas Huth <thuth@redhat.com> wrote: > >> On Fri, 24 Apr 2015 09:22:33 +0530 >> Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote: >> >> > >> > Hi Thomas, >> > >> > Thomas Huth <thuth@redhat.com> writes: >> > > Am Wed, 22 Apr 2015 16:27:19 +0530 >> > > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: >> > > >> > >> With the addition of 64-bit BARS and increase in the mmio address >> > >> space, the code was hitting this limit. The memory of pci devices >> > >> across the bridges were not accessible due to which the drivers >> > >> failed. >> > >> >> > >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> >> > >> --- >> > >> board-qemu/slof/pci-phb.fs | 3 ++- >> > >> 1 file changed, 2 insertions(+), 1 deletion(-) >> > >> >> > >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs >> > >> index 529772f..e307d95 100644 >> > >> --- a/board-qemu/slof/pci-phb.fs >> > >> +++ b/board-qemu/slof/pci-phb.fs >> > >> @@ -258,7 +258,8 @@ setup-puid >> > >> decode-64 2 / dup >r \ Decode and calc size/2 >> > >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address >> > > >> > > Could pci-max-mem overflow, too? >> > >> > Should not, its only the boundary that was an issue. >> > >> > Qemu sends base and size, base + size can be till uint32 max. So for >> > example base was 0xC000.0000 and size was 0x4000.0000, we add up base + >> > size and put pci-max-mmio as 0x1.0000.0000, which would get programmend >> > in the bridge bars: lower limit as 0xC000 and 0x0000 as upper >> > limit. And no mmio access were going across the bridge. 
>> > >> > In my testing, I have found one more issue with translate-my-address, >> > it does not take care of 64-bit addresses. I have a patch working for >> > SLOF, but its breaking the guest kernel booting. >> > >> > > >> > >> dup pci-next-mmio ! \ which is the same as MMIO base >> > >> - r> + pci-max-mmio ! \ calc max MMIO address >> > >> + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and >> > >> + \ check the 32-bit boundary >> >> Ok, thanks a lot for the example! I think your patch likely works in >> practice, but after staring at the code for a while, I think the real >> bug is slightly different. If I get the code above right, pci-max-mmio >> is normally set to the first address that is _not_ part of the mmio >> window anymore, right. Now have a look at pci-bridge-set-mmio-base in >> pci-scan.fs: >> >> : pci-bridge-set-mmio-base ( addr -- ) >> pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary >> dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge >> 10 rshift \ mmio-base reg is only the upper 16 bits >> pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) >> swap 20 + rtas-config-l! \ and write it into the bridge >> ; >> >> Seems like the pci-max-mmio, i.e. the first address that is not in the >> window anymore, is programmed into the memory limit register here - but >> according to the pci-to-pci bridge specification, it should be the last >> address of the window instead. >> >> So I think the correct fix would be to decrease the pci-max-mmio >> value in pci-bridge-set-mmio-base by 1- before programming it into the >> limit register (note: in pci-bridge-set-mmio-limit you can find a "1-" >> already, so I think this also should be done in >> pci-bridge-set-mmio-base, too) >> >> So if you've got some spare minutes, could you please check whether that >> would fix the issue, too? > > By the way, if I'm right, pci-bridge-set-mem-base seems to suffer from > the same problem, too. 
And pci-bridge-set-io-base as well. Regards Nikunj ^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH 1/2] pci-phb: check for the 32-bit overflow 2015-04-24 10:56 ` Thomas Huth 2015-04-24 11:06 ` Thomas Huth @ 2015-04-27 4:58 ` Nikunj A Dadhania 1 sibling, 0 replies; 8+ messages in thread From: Nikunj A Dadhania @ 2015-04-27 4:58 UTC (permalink / raw) To: Thomas Huth; +Cc: aik, linuxppc-dev, david Thomas Huth <thuth@redhat.com> writes: > On Fri, 24 Apr 2015 09:22:33 +0530 > Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> wrote: > >> >> Hi Thomas, >> >> Thomas Huth <thuth@redhat.com> writes: >> > Am Wed, 22 Apr 2015 16:27:19 +0530 >> > schrieb Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>: >> > >> >> With the addition of 64-bit BARS and increase in the mmio address >> >> space, the code was hitting this limit. The memory of pci devices >> >> across the bridges were not accessible due to which the drivers >> >> failed. >> >> >> >> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> >> >> --- >> >> board-qemu/slof/pci-phb.fs | 3 ++- >> >> 1 file changed, 2 insertions(+), 1 deletion(-) >> >> >> >> diff --git a/board-qemu/slof/pci-phb.fs b/board-qemu/slof/pci-phb.fs >> >> index 529772f..e307d95 100644 >> >> --- a/board-qemu/slof/pci-phb.fs >> >> +++ b/board-qemu/slof/pci-phb.fs >> >> @@ -258,7 +258,8 @@ setup-puid >> >> decode-64 2 / dup >r \ Decode and calc size/2 >> >> pci-next-mem @ + dup pci-max-mem ! \ and calc max mem address >> > >> > Could pci-max-mem overflow, too? >> >> Should not, its only the boundary that was an issue. >> >> Qemu sends base and size, base + size can be till uint32 max. So for >> example base was 0xC000.0000 and size was 0x4000.0000, we add up base + >> size and put pci-max-mmio as 0x1.0000.0000, which would get programmend >> in the bridge bars: lower limit as 0xC000 and 0x0000 as upper >> limit. And no mmio access were going across the bridge. >> >> In my testing, I have found one more issue with translate-my-address, >> it does not take care of 64-bit addresses. 
I have a patch working for >> SLOF, but its breaking the guest kernel booting. >> >> > >> >> dup pci-next-mmio ! \ which is the same as MMIO base >> >> - r> + pci-max-mmio ! \ calc max MMIO address >> >> + r> + FFFFFFFF min pci-max-mmio ! \ calc max MMIO address and >> >> + \ check the 32-bit boundary > > Ok, thanks a lot for the example! I think your patch likely works in > practice, but after staring at the code for a while, I think the real > bug is slightly different. If I get the code above right, pci-max-mmio > is normally set to the first address that is _not_ part of the mmio > window anymore, right. Now have a look at pci-bridge-set-mmio-base in > pci-scan.fs: > > : pci-bridge-set-mmio-base ( addr -- ) > pci-next-mmio @ 100000 #aligned \ read the current Value and align to 1MB boundary > dup 100000 + pci-next-mmio ! \ and write back with 1MB for bridge > 10 rshift \ mmio-base reg is only the upper 16 bits > pci-max-mmio @ FFFF0000 and or \ and Insert mmio Limit (set it to max) > swap 20 + rtas-config-l! \ and write it into the bridge > ; > > Seems like the pci-max-mmio, i.e. the first address that is not in the > window anymore, is programmed into the memory limit register here - but > according to the pci-to-pci bridge specification, it should be the last > address of the window instead. > > So I think the correct fix would be to decrease the pci-max-mmio > value in pci-bridge-set-mmio-base by 1- before programming it into the > limit register (note: in pci-bridge-set-mmio-limit you can find a "1-" > already, so I think this also should be done in > pci-bridge-set-mmio-base, too) Yes, you are right. Because this was done during probing, with the correct values set later on, nobody noticed it as an issue. We only found it once we had an address that went over 32 bits. > > So if you've got some spare minutes, could you please check whether that > would fix the issue, too? I have tried this fix and it works fine; I will change this and repost. 
Regards, Nikunj ^ permalink raw reply [flat|nested] 8+ messages in thread