linux-arm-kernel.lists.infradead.org archive mirror
From: thierry.reding@gmail.com (Thierry Reding)
To: linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v3 7/7] arm: dma-mapping: plumb our iommu mapping ops into arch_setup_dma_ops
Date: Thu, 25 Sep 2014 08:40:23 +0200	[thread overview]
Message-ID: <20140925064022.GC12423@ulmo> (raw)
In-Reply-To: <20140924163338.GF16244@arm.com>

On Wed, Sep 24, 2014 at 05:33:38PM +0100, Will Deacon wrote:
> On Tue, Sep 23, 2014 at 08:14:01AM +0100, Thierry Reding wrote:
> > On Mon, Sep 22, 2014 at 06:43:37PM +0100, Will Deacon wrote:
> > > Yup. In this case, the iommu_dma_mapping passed to arch_setup_dma_ops
> > > contains a domain and an allocator for each IOMMU instance in the system.
> > > It would then be up to the architecture how it makes use of those, but
> > > the most obvious thing to do would be to attach devices mastering through
> > > an IOMMU instance to that per-instance domain.
> > > 
> > > The other use-case is isolation (one domain per device), which I guess
> > > matches what the ARM code is doing at the moment.
> > 
> > I think there are two cases here. You can have a composite device that
> > wants to manage a single domain (using its own allocator) for a set of
> > hardware devices. At the same time a set of devices (think 2D and 3D
> > engines) could want to use multiple domains for process separation.
> > In that case I'd expect a logical DRM device to allocate one domain per
> > process and then associate the 2D and 3D engines with that same domain
> > on process switch.
> 
> Sure, but that's well outside of what the dma-mapping API is going to setup
> as a default domain. These specialist setups are certainly possible, but I
> think they should be driven by, for example, the DRM code as opposed to
> being in the core dma-mapping code.

I completely agree that these special cases should be driven by the
drivers that need them. However the problem here is that the current
patch will already attach the device to an IOMMU domain by default.

So I think what we're going to need is a way to prevent the default
attachment to DMA/IOMMU. Or alternatively not associate devices with
IOMMU domains by default but let drivers explicitly make the decision.
Either of those two alternatives would require driver-specific
knowledge, which would be another strong argument against doing the
whole IOMMU initialization at device creation time.

> > What I proposed a while back was to leave it up to the IOMMU driver to
> > choose an allocator for the device. Or rather, choose whether to use a
> > custom allocator or the DMA/IOMMU integration allocator. The way this
> > worked was to keep a list of devices in the IOMMU driver. Devices in
> > this list would be added to a domain reserved for DMA/IOMMU integration.
> > Those would typically be devices such as SD/MMC, audio, ... devices that
> > are in-kernel and need no per-process separation. By default devices
> > wouldn't be added to a domain, so devices forming a composite DRM device
> > would be able to manage their own domain.
> 
> I'd like to have as little of this as possible in the IOMMU drivers, as we
> should leave those to deal with the IOMMU hardware and not domain
> management. Having subsystems manage their own dma ops is an extension to
> the dma-mapping API.

It's not an extension, really. It's more that both need to be able to
coexist. For some devices you may want to create an IOMMU domain and
hook it up with the DMA mapping functions, for others you don't and
handle mapping to IOVA space explicitly.

There is another issue with the approach you propose. I'm not sure if
Tegra is special in this case (I'd expect not), but what we do is make
an IOMMU domain correspond to an address space. Address spaces are a
pretty limited resource (earlier generations have 4, newer have 128)
and each address space can be up to 4 GiB. So I've always envisioned
that we should be using a single IOMMU domain for devices that don't
expose direct buffer access to userspace (SATA, PCIe, audio, SD/MMC,
USB, ...). All of those would typically need only a small number of
small buffers, so using a separate address space for each seems like a
big waste.

Doing so would leave a large number of address spaces available for
things like a GPU driver to keep per-process address spaces for
isolation.

I don't see how we'd be able to do that with the approach that you
propose in this series since it assumes that each device will be
associated with a separate domain.

Thierry

