From: swarren@wwwdotorg.org (Stephen Warren)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC] Describing arbitrary bus mastering relationships in DT
Date: Mon, 12 May 2014 12:29:16 -0600
Message-ID: <537112FC.1040204@wwwdotorg.org>
In-Reply-To: <25333129.urqEa0mCI8@wuerfel>

On 05/12/2014 12:10 PM, Arnd Bergmann wrote:
> On Monday 12 May 2014 10:19:16 Stephen Warren wrote:
>> On 05/09/2014 04:56 AM, Dave Martin wrote:
>>> On Fri, May 02, 2014 at 09:06:43PM +0200, Arnd Bergmann wrote:
>>>> On Friday 02 May 2014 12:50:17 Stephen Warren wrote:
>> ...
>>>>> Now, perhaps there are devices which themselves control whether
>>>>> transactions are sent to the IOMMU or direct to RAM, but I'm not
>>>>> familiar with them. Is the GPU in that category, since it has its own
>>>>> GMMU, albeit chained into the SMMU IIRC?
>>>>
>>>> Devices with a built-in IOMMU such as most GPUs are also easy enough
>>>> to handle: There is no reason to actually show the IOMMU in DT and
>>>> we can just treat the GPU as a black box.
>>>
>>> It's impossible for such a built-in IOMMU to be shared with other
>>> devices, so that's probably reasonable.
>>
>> I don't believe that's true.
>>
>> For example, on Tegra, the CPU (and likely anything that can bus-master
>> the relevant bus) can send transactions into the GPU, which can then
>> turn them around towards RAM, and those likely then go through the MMU
>> inside the GPU.
>>
>> IIRC, the current Nouveau support for Tegra even makes use of that
>> feature, although I think that's a temporary thing that we're hoping to
>> get rid of once the Tegra support in Nouveau gets more mature.
> 
> But the important point here is that you wouldn't use the dma-mapping
> API to manage this. First of all, the CPU is special anyway, but also
> if you do a device-to-device DMA into the GPU address space and that
> ends up being redirected to memory through the IOMMU, you still wouldn't
> manage the I/O page tables through the interfaces of the device doing the
> DMA, but through some private interface of the GPU.

Why not? If something wants to DMA to a memory region, then irrespective
of whether the GPU's MMU (or any other MMU) sits between the master's
transactions and RAM, surely the driver should always use the DMA mapping
API to set that up? Anything else just means using custom APIs, and
isn't the whole point of the DMA mapping API to provide a standard
interface for exactly that purpose?
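
For illustration, here is a minimal, hypothetical sketch of the kind of
dma-mapping API usage being referred to above. The function and variable
names are invented, and this is not taken from the Tegra or Nouveau
drivers; it only shows the standard driver-facing calls (dma_map_single()
and friends):

/*
 * Minimal sketch (illustrative only): a driver maps a buffer for DMA
 * through the generic dma-mapping API. Whether the device's transactions
 * then pass through an SMMU, another device's MMU, or go straight to
 * RAM is hidden behind this interface; the driver code does not change.
 */
#include <linux/dma-mapping.h>

static int example_start_dma(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma_handle;

	/* Map the buffer for device-to-memory transfers. */
	dma_handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;

	/*
	 * Program dma_handle into the device's DMA engine here.
	 * The address the device sees is whatever the platform's
	 * dma_map_ops chose: an IOMMU-translated I/O virtual address
	 * or a plain physical/bus address.
	 */

	/* ... later, once the transfer has completed: */
	dma_unmap_single(dev, dma_handle, len, DMA_FROM_DEVICE);
	return 0;
}

The argument above is that this driver-facing interface should not have
to change merely because the path to RAM happens to pass through the
GPU's MMU; requiring a GPU-private interface instead would defeat the
purpose of having a standard DMA API.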


Thread overview: 29+ messages
2014-05-01 17:32 [RFC] Describing arbitrary bus mastering relationships in DT Dave Martin
2014-05-02 11:05 ` Thierry Reding
2014-05-02 12:32   ` Arnd Bergmann
2014-05-02 13:23     ` Thierry Reding
2014-05-02 15:19       ` Arnd Bergmann
2014-05-02 17:43         ` Dave Martin
2014-05-05 15:14           ` Arnd Bergmann
2014-05-09 10:33             ` Dave Martin
2014-05-09 11:15               ` Arnd Bergmann
2014-05-09 14:59               ` Grant Grundler
2014-05-02 18:55         ` Stephen Warren
2014-05-02 19:02           ` Arnd Bergmann
2014-05-09 10:45             ` Dave Martin
2014-05-02 18:50       ` Stephen Warren
2014-05-02 19:06         ` Arnd Bergmann
2014-05-09 10:56           ` Dave Martin
2014-05-12 16:19             ` Stephen Warren
2014-05-12 18:10               ` Arnd Bergmann
2014-05-12 18:29                 ` Stephen Warren [this message]
2014-05-12 19:53                   ` Arnd Bergmann
2014-05-12 20:02                   ` Grant Grundler
2014-05-02 16:19   ` Dave Martin
2014-05-02 16:14 ` Arnd Bergmann
2014-05-02 17:31   ` Dave Martin
2014-05-02 18:17     ` Jason Gunthorpe
2014-05-09 14:16       ` Dave Martin
2014-05-09 17:10         ` Jason Gunthorpe
2014-05-02 20:36     ` Arnd Bergmann
2014-05-09 13:26       ` Dave Martin
