* Re: [PATCH 00/11] of: dma-ranges fixes and improvements
  [not found] ` <20190930082055.GA21971@infradead.org>
@ 2019-09-30  8:56 ` Thierry Reding
  2019-09-30  9:55   ` Robin Murphy
  0 siblings, 1 reply; 3+ messages in thread

From: Thierry Reding @ 2019-09-30  8:56 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Arnd Bergmann, Rob Herring, DTML, Linux ARM,
    linux-kernel@vger.kernel.org, linux-pci, Nicolas Saenz Julienne,
    Robin Murphy, Florian Fainelli, Stefan Wahren, Frank Rowand,
    Marek Vasut, Geert Uytterhoeven, Simon Horman, Lorenzo Pieralisi,
    Oza Pawandeep, linux-tegra

On Mon, Sep 30, 2019 at 01:20:55AM -0700, Christoph Hellwig wrote:
> On Sun, Sep 29, 2019 at 01:16:20PM +0200, Arnd Bergmann wrote:
> > On a semi-related note, Thierry asked about one aspect of the dma-ranges
> > property recently, which is the behavior of dma_set_mask() and related
> > functions when a driver sets a mask that is larger than the memory
> > area in the bus-ranges but smaller than the available physical RAM.
> > As I understood Thierry's problem and the current code, the generic
> > dma_set_mask() will either reject the new mask entirely or override
> > the mask set by of_dma_configure, but it fails to set a correct mask
> > within the limitations of the parent bus in this case.
>
> These days dma_set_mask will only reject a mask if it is too small
> to be supported by the hardware. Larger than required masks are now
> always accepted.

Summarizing why this came up: the memory subsystem on Tegra194 has a
mechanism controlled by bit 39 of physical addresses. This is used to
support two variants of sector ordering for block linear formats. The
GPU uses a slightly different ordering than other MSS clients, so the
drivers have to set this bit depending on who they interoperate with.

I was running into this while adding IOMMU support for the Ethernet
controller on Tegra194.
The controller has a HW feature register that reports how many address
bits it supports. This is 40 for Tegra194, corresponding to the number
of address bits to the MSS. Without IOMMU support that's not a problem
because there are no systems with 40 bits of system memory. However, if
we enable IOMMU support, the DMA/IOMMU code will allocate from the top
of a 48-bit (constrained to 40 bits via the DMA mask) input address
space. This causes bit 39 to be set, which in turn will make the MSS
reorder sectors and break network communications.

Since this reordering takes place at the MSS level, it applies to all
MSS clients. Most of these clients always want bit 39 to be 0, whereas
the clients that can and want to make use of the reordering always want
bit 39 to be under their control, so that they can decide in a
fine-grained way when to set it.

This means that effectively all MSS clients can only address 39 bits, so
instead of hard-coding that for each driver I thought it'd make sense to
have a central place to configure this, so that all devices are
restricted to 39-bit addressing by default. However, with the current
DMA API implementation this causes a problem because the default 39-bit
DMA mask would get overwritten by the driver (as in the example of the
Ethernet controller setting a 40-bit DMA mask because that's what the
hardware supports).

I realize that this is somewhat exotic. On one hand it is correct for a
driver to say that the hardware supports 40-bit addressing (i.e. the
Ethernet controller can address bit 39), but from a system integration
point of view, using bit 39 is wrong except in a very restricted set of
cases.

If I understand correctly, describing this with a dma-ranges property is
the right thing to do, but it wouldn't work with the current
implementation because drivers can still override a lower DMA mask with
a higher one.
Thierry
* Re: [PATCH 00/11] of: dma-ranges fixes and improvements
  2019-09-30  8:56 ` [PATCH 00/11] of: dma-ranges fixes and improvements Thierry Reding
@ 2019-09-30  9:55 ` Robin Murphy
  2019-09-30 13:35   ` Thierry Reding
  0 siblings, 1 reply; 3+ messages in thread

From: Robin Murphy @ 2019-09-30  9:55 UTC (permalink / raw)
To: Thierry Reding, Christoph Hellwig
Cc: Arnd Bergmann, Rob Herring, DTML, Linux ARM,
    linux-kernel@vger.kernel.org, linux-pci, Nicolas Saenz Julienne,
    Florian Fainelli, Stefan Wahren, Frank Rowand, Marek Vasut,
    Geert Uytterhoeven, Simon Horman, Lorenzo Pieralisi, Oza Pawandeep,
    linux-tegra

On 2019-09-30 9:56 am, Thierry Reding wrote:
> On Mon, Sep 30, 2019 at 01:20:55AM -0700, Christoph Hellwig wrote:
>> On Sun, Sep 29, 2019 at 01:16:20PM +0200, Arnd Bergmann wrote:
>>> On a semi-related note, Thierry asked about one aspect of the dma-ranges
>>> property recently, which is the behavior of dma_set_mask() and related
>>> functions when a driver sets a mask that is larger than the memory
>>> area in the bus-ranges but smaller than the available physical RAM.
>>> As I understood Thierry's problem and the current code, the generic
>>> dma_set_mask() will either reject the new mask entirely or override
>>> the mask set by of_dma_configure, but it fails to set a correct mask
>>> within the limitations of the parent bus in this case.
>>
>> These days dma_set_mask will only reject a mask if it is too small
>> to be supported by the hardware. Larger than required masks are now
>> always accepted.
>
> Summarizing why this came up: the memory subsystem on Tegra194 has a
> mechanism controlled by bit 39 of physical addresses. This is used to
> support two variants of sector ordering for block linear formats. The
> GPU uses a slightly different ordering than other MSS clients, so the
> drivers have to set this bit depending on who they interoperate with.
>
> I was running into this while adding IOMMU support for the Ethernet
> controller on Tegra194.
> The controller has a HW feature register that reports how many address
> bits it supports. This is 40 for Tegra194, corresponding to the number
> of address bits to the MSS. Without IOMMU support that's not a problem
> because there are no systems with 40 bits of system memory. However,
> if we enable IOMMU support, the DMA/IOMMU code will allocate from the
> top of a 48-bit (constrained to 40 bits via the DMA mask) input
> address space. This causes bit 39 to be set, which in turn will make
> the MSS reorder sectors and break network communications.
>
> Since this reordering takes place at the MSS level, it applies to all
> MSS clients. Most of these clients always want bit 39 to be 0, whereas
> the clients that can and want to make use of the reordering always
> want bit 39 to be under their control, so that they can decide in a
> fine-grained way when to set it.
>
> This means that effectively all MSS clients can only address 39 bits,
> so instead of hard-coding that for each driver I thought it'd make
> sense to have a central place to configure this, so that all devices
> are restricted to 39-bit addressing by default. However, with the
> current DMA API implementation this causes a problem because the
> default 39-bit DMA mask would get overwritten by the driver (as in the
> example of the Ethernet controller setting a 40-bit DMA mask because
> that's what the hardware supports).
>
> I realize that this is somewhat exotic. On one hand it is correct for
> a driver to say that the hardware supports 40-bit addressing (i.e. the
> Ethernet controller can address bit 39), but from a system integration
> point of view, using bit 39 is wrong except in a very restricted set
> of cases.
>
> If I understand correctly, describing this with a dma-ranges property
> is the right thing to do, but it wouldn't work with the current
> implementation because drivers can still override a lower DMA mask
> with a higher one.
But that sounds like exactly the situation for which we introduced
bus_dma_mask. If "dma-ranges" is found, then we should initialise that
to reflect the limitation. Drivers may subsequently set a larger mask
based on what the device is natively capable of, but the DMA API
internals should quietly clamp that down to the bus mask wherever it
matters.

Since that change, the initial value of dma_mask and coherent_dma_mask
doesn't really matter much, as we expect drivers to reset them anyway
(and in general they have to be able to enlarge them from a 32-bit
default value).

As far as I'm aware this has been working fine (albeit in equivalent
ACPI form) for at least one SoC with 64-bit device masks, a 48-bit
IOMMU, and a 44-bit interconnect in between - indeed if I avoid
distraction long enough to set up the big new box under my desk, the
sending of future emails will depend on it ;)

Robin.
* Re: [PATCH 00/11] of: dma-ranges fixes and improvements
  2019-09-30  9:55 ` Robin Murphy
@ 2019-09-30 13:35 ` Thierry Reding
  0 siblings, 0 replies; 3+ messages in thread

From: Thierry Reding @ 2019-09-30 13:35 UTC (permalink / raw)
To: Robin Murphy
Cc: Christoph Hellwig, Arnd Bergmann, Rob Herring, DTML, Linux ARM,
    linux-kernel@vger.kernel.org, linux-pci, Nicolas Saenz Julienne,
    Florian Fainelli, Stefan Wahren, Frank Rowand, Marek Vasut,
    Geert Uytterhoeven, Simon Horman, Lorenzo Pieralisi, Oza Pawandeep,
    linux-tegra

On Mon, Sep 30, 2019 at 10:55:15AM +0100, Robin Murphy wrote:
> On 2019-09-30 9:56 am, Thierry Reding wrote:
> > On Mon, Sep 30, 2019 at 01:20:55AM -0700, Christoph Hellwig wrote:
> > > On Sun, Sep 29, 2019 at 01:16:20PM +0200, Arnd Bergmann wrote:
> > > > On a semi-related note, Thierry asked about one aspect of the dma-ranges
> > > > property recently, which is the behavior of dma_set_mask() and related
> > > > functions when a driver sets a mask that is larger than the memory
> > > > area in the bus-ranges but smaller than the available physical RAM.
> > > > As I understood Thierry's problem and the current code, the generic
> > > > dma_set_mask() will either reject the new mask entirely or override
> > > > the mask set by of_dma_configure, but it fails to set a correct mask
> > > > within the limitations of the parent bus in this case.
> > >
> > > These days dma_set_mask will only reject a mask if it is too small
> > > to be supported by the hardware. Larger than required masks are now
> > > always accepted.
> >
> > Summarizing why this came up: the memory subsystem on Tegra194 has a
> > mechanism controlled by bit 39 of physical addresses. This is used to
> > support two variants of sector ordering for block linear formats. The
> > GPU uses a slightly different ordering than other MSS clients, so the
> > drivers have to set this bit depending on who they interoperate with.
> >
> > I was running into this while adding IOMMU support for the Ethernet
> > controller on Tegra194. The controller has a HW feature register
> > that reports how many address bits it supports. This is 40 for
> > Tegra194, corresponding to the number of address bits to the MSS.
> > Without IOMMU support that's not a problem because there are no
> > systems with 40 bits of system memory. However, if we enable IOMMU
> > support, the DMA/IOMMU code will allocate from the top of a 48-bit
> > (constrained to 40 bits via the DMA mask) input address space. This
> > causes bit 39 to be set, which in turn will make the MSS reorder
> > sectors and break network communications.
> >
> > Since this reordering takes place at the MSS level, it applies to
> > all MSS clients. Most of these clients always want bit 39 to be 0,
> > whereas the clients that can and want to make use of the reordering
> > always want bit 39 to be under their control, so that they can
> > decide in a fine-grained way when to set it.
> >
> > This means that effectively all MSS clients can only address 39
> > bits, so instead of hard-coding that for each driver I thought it'd
> > make sense to have a central place to configure this, so that all
> > devices are restricted to 39-bit addressing by default. However,
> > with the current DMA API implementation this causes a problem
> > because the default 39-bit DMA mask would get overwritten by the
> > driver (as in the example of the Ethernet controller setting a
> > 40-bit DMA mask because that's what the hardware supports).
> >
> > I realize that this is somewhat exotic. On one hand it is correct
> > for a driver to say that the hardware supports 40-bit addressing
> > (i.e. the Ethernet controller can address bit 39), but from a system
> > integration point of view, using bit 39 is wrong except in a very
> > restricted set of cases.
> >
> > If I understand correctly, describing this with a dma-ranges
> > property is the right thing to do, but it wouldn't work with the
> > current implementation because drivers can still override a lower
> > DMA mask with a higher one.
>
> But that sounds like exactly the situation for which we introduced
> bus_dma_mask. If "dma-ranges" is found, then we should initialise
> that to reflect the limitation. Drivers may subsequently set a larger
> mask based on what the device is natively capable of, but the DMA API
> internals should quietly clamp that down to the bus mask wherever it
> matters.
>
> Since that change, the initial value of dma_mask and coherent_dma_mask
> doesn't really matter much, as we expect drivers to reset them anyway
> (and in general they have to be able to enlarge them from a 32-bit
> default value).
>
> As far as I'm aware this has been working fine (albeit in equivalent
> ACPI form) for at least one SoC with 64-bit device masks, a 48-bit
> IOMMU, and a 44-bit interconnect in between - indeed if I avoid
> distraction long enough to set up the big new box under my desk, the
> sending of future emails will depend on it ;)

After applying this series it does indeed seem to be working. The only
thing I had to do was add a dma-ranges property to the DMA parent. I
ended up doing that via an interconnects property because the default
DMA parent on Tegra194 is /cbb, which uses #address-cells = <1> and
#size-cells = <1> and therefore can't actually translate anything
beyond 32 bits of system memory. So I basically ended up making the
memory controller an interconnect provider, increasing to
#address-cells = <2> and #size-cells = <2> again and then using a
dma-ranges property like this:

	dma-ranges = <0x0 0x0 0x0 0x80 0x0>;

to specify that only 39 bits should be used for addressing, leaving the
special bit 39 up to the driver to set as required.
Coincidentally, making the memory controller an interconnect provider
is something that I was planning to do anyway in order to support
memory frequency scaling, so this all actually fits together pretty
elegantly.

Thierry