From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
To: linux-sh@vger.kernel.org
Subject: Re: [PATCH 1/2] dmaengine: rcar-dmac: add iommu support for slave transfers
Date: Wed, 03 Feb 2016 12:04:06 +0000
Message-ID: <2811183.V0AnxIFt71@avalon>
In-Reply-To: <1452478667-30966-2-git-send-email-niklas.soderlund+renesas@ragnatech.se>

Hi Vinod,

On Monday 25 January 2016 00:38:33 Laurent Pinchart wrote:
> On Monday 18 January 2016 19:06:29 Vinod Koul wrote:
> > On Thu, Jan 14, 2016 at 03:59:40PM +0200, Laurent Pinchart wrote:
> > > On Thursday 14 January 2016 09:22:25 Vinod Koul wrote:
> > >> On Thu, Jan 14, 2016 at 01:13:20AM +0200, Laurent Pinchart wrote:
> > >>> On Wednesday 13 January 2016 14:55:50 Niklas Söderlund wrote:
> > >>>> * Vinod Koul <vinod.koul@intel.com> [2016-01-13 19:06:01 +0530]:
> > >>>>> On Mon, Jan 11, 2016 at 03:17:46AM +0100, Niklas Söderlund wrote:
> > >>>>>> Enable slave transfers to devices behind IPMMUs by mapping the
> > >>>>>> slave addresses using the dma-mapping API.
> > >>>>>> 
> > >>>>>> Signed-off-by: Niklas Söderlund
> > >>>>>> <niklas.soderlund+renesas@ragnatech.se>
> > >>>>>> ---
> > >>>>>> 
> > >>>>>>  drivers/dma/sh/rcar-dmac.c | 64 +++++++++++++++++++++++++++++++---
> > >>>>>>  1 file changed, 60 insertions(+), 4 deletions(-)
> > >>>>>> 
> > >>>>>> diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
> > >>>>>> index 7820d07..da94809 100644
> > >>>>>> --- a/drivers/dma/sh/rcar-dmac.c
> > >>>>>> +++ b/drivers/dma/sh/rcar-dmac.c
> > >>>>>> @@ -13,6 +13,7 @@
> > >>>>>>  #include <linux/dma-mapping.h>
> > >>>>>>  #include <linux/dmaengine.h>
> > >>>>>>  #include <linux/interrupt.h>
> > >>>>>> +#include <linux/iommu.h>
> > >>>>>>  #include <linux/list.h>
> > >>>>>>  #include <linux/module.h>
> > >>>>>>  #include <linux/mutex.h>
> > >>>>>> @@ -1101,6 +1102,24 @@ rcar_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
> > >>>>>>  	return desc;
> > >>>>>>  }
> > >>>>>> 
> > >>>>>> +static dma_addr_t __rcar_dmac_dma_map(struct dma_chan *chan, phys_addr_t addr,
> > >>>>>> +		size_t size, enum dma_data_direction dir)
> > >>>>>> +{
> > >>>>>> +	struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
> > >>>>>> +	struct page *page = phys_to_page(addr);
> > >>>>>> +	size_t offset = addr - page_to_phys(page);
> > >>>>>> +	dma_addr_t map = dma_map_page(chan->device->dev, page, offset, size,
> > >>>>>> +			dir);
> > >>>>> 
> > >>>>> Hmmmm, dmaengine APIs for slave cases expect that the client has
> > >>>>> already mapped and provided an address which the dmaengine
> > >>>>> understands. So doing this in the driver here does not sound good
> > >>>>> to me.
> > >>>> 
> > >>>> It was my understanding that clients do not do this mapping and in
> > >>>> fact are expected not to. Is this not what Linus Walleij is trying to
> > >>>> address in '[PATCH] dmaengine: use phys_addr_t for slave
> > >>>> configuration'?
> > >>> 
> > >>> There's a problem somewhere and we need to fix it. Clients currently
> > >>> pass physical addresses and the DMA engine API expects a DMA address.
> > >>> There are only two ways to fix that: either modify the API to expect a
> > >>> phys_addr_t, or modify the clients to provide a dma_addr_t.
> > >> 
> > >> Okay, I am in two minds about this. Doing phys_addr_t seems okay, but
> > >> somehow I feel we should rather pass dma_addr_t and have the dmaengine
> > >> driver get the right DMA address to use, and thus fix the clients.
> > >> That may be the right thing to do here, thoughts...?
> > > 
> > > Given that there should be more clients than DMA engine drivers, and
> > > given that knowledge of what has to be done to map a physical address to
> > > a DMA address accessible by the DMA engine should not be included in
> > > client drivers (in most cases I assume using the DMA mapping API will be
> > > enough, but details may vary), I believe it makes more sense to pass a
> > > phys_addr_t and let the DMA engine drivers handle it.
> > > 
> > > There's another issue I just remembered. Consider the following cases.
> > > 
> > > 1. DMA engine channel that has an optional IOMMU covering both the src
> > > and dst side. In that case mapping can be performed by the client or DMA
> > > engine driver, the DMA mapping API will handle the IOMMU behind the
> > > scenes.
> > > 
> > > 2. DMA engine channel that has an optional IOMMU on the memory side and
> > > no support for IOMMU on the slave (in the sense of the register in front
> > > of the client's FIFO) side. In that case a client mapping buffers on
> > > both the src and dst side would set an IOMMU mapped address for the
> > > slave side, which wouldn't work. If the DMA engine driver were to
> > > perform the mapping then it could skip it on the slave side, knowing
> > > that the slave side has no IOMMU.
> > > 
> > > 3. DMA engine channel that has independently optional IOMMUs on both
> > > sides. This can't be supported today as we have a single struct device
> > > per channel and thus can't configure the IOMMU independently on the two
> > > sides.
> > > 
> > > It's getting messy :-)
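
To illustrate case 2 concretely, a DMA engine driver that performs the
mapping could simply skip the IOMMU-less slave side, along these lines (a
rough sketch only, using a hypothetical per-channel "slave_has_iommu"
flag, not tested code):

	/* Memory side: the DMA mapping API handles the optional IOMMU. */
	mem_addr = dma_map_page(chan->device->dev, page, offset, size, dir);

	/*
	 * Slave side: when no IOMMU sits in front of the slave register,
	 * the physical address is programmed as-is. "slave_has_iommu" is
	 * a hypothetical flag for the sake of the example.
	 */
	if (rchan->slave_has_iommu)
		slave_addr = __rcar_dmac_dma_map(chan, slave_phys, size, dir);
	else
		slave_addr = (dma_addr_t)slave_phys;
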
> > 
> > Yes, I do agree on that, but the problem is that today none of the slave
> > drivers expect or do the mapping; changing that will cause issues...
> > 
> > And how many really have an IOMMU behind them? Few out of the large set
> > we have...
> 
> Today neither the DMA engine drivers nor the client drivers do the mapping,
> so we have an issue anyway. The question is on which side to solve it. If
> I understand correctly you fear that mapping the address in the DMA engine
> drivers would cause issues with client drivers that don't expect that
> behaviour, but I don't really see where the issue is. Could you please
> elaborate?

Ping. I don't think we're very far from finding an agreement on this topic. If 
you prefer we could discuss it on IRC, it can be faster than e-mail.
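
To make the comparison concrete, with the phys_addr_t approach the DMA
engine driver maps the slave address itself, using its own struct device,
roughly as in Niklas' patch quoted above (a sketch only, error handling
omitted):

	static dma_addr_t __rcar_dmac_dma_map(struct dma_chan *chan,
					      phys_addr_t addr, size_t size,
					      enum dma_data_direction dir)
	{
		struct page *page = phys_to_page(addr);
		size_t offset = addr - page_to_phys(page);

		/* Map with the DMA engine device, not the client device. */
		return dma_map_page(chan->device->dev, page, offset, size,
				    dir);
	}
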

> >>> The assumption from the API was always that the client should perform the
> >>> mapping...
> >>> 
> >>>> The struct device used to map buffers through the DMA mapping API needs
> >>>> to be the DMA engine struct device, not the client struct device. As
> >>>> the client is not expected to have access to the DMA engine device I
> >>>> would argue that DMA engines should perform the mapping and the API
> >>>> should take a phys_addr_t.
> >>> 
> >>> That is not the right assumption. Once the client gets a channel, it
> >>> has access to the dmaengine device and should use that to map. Yes,
> >>> the key is to map using the dmaengine device and not the client
> >>> device. You can use chan->device->dev.
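
For comparison, the alternative you suggest would put code along these
lines in every client driver (a sketch; it reuses the phys_to_page()
trick from Niklas' patch and assumes the slave FIFO address is
page-backed, which MMIO regions generally are not):

	/*
	 * Hypothetical client code: map the slave FIFO address with the
	 * DMA engine's struct device, then hand the resulting dma_addr_t
	 * to the DMA engine through dma_slave_config.
	 */
	struct page *page = phys_to_page(fifo_phys);
	size_t offset = fifo_phys - page_to_phys(page);
	struct dma_slave_config cfg = {
		.direction = DMA_DEV_TO_MEM,
		.src_addr = dma_map_page(chan->device->dev, page, offset,
					 4, DMA_FROM_DEVICE),
		.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
	};
	int ret = dmaengine_slave_config(chan, &cfg);
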
> >> 
> >> Right, that's required by the DMA engine API even when not using slave
> >> transfers. Which raises an interesting consistency issue in the API, I
> >> agree about that.
> >> 
> >>>> Vinod, unless you have reasons to do it otherwise, can we get your ack
> >>>> on this approach and start hammering at the code? The problem has
> >>>> remained known and unfixed for too long, we need to move on.

-- 
Regards,

Laurent Pinchart


Thread overview: 20+ messages
2016-01-11  2:17 [PATCH 1/2] dmaengine: rcar-dmac: add iommu support for slave transfers Niklas Söderlund
2016-01-11  2:37 ` kbuild test robot
2016-01-11  7:55 ` Geert Uytterhoeven
2016-01-11 18:59 ` Niklas Söderlund
2016-01-13 13:48 ` Vinod Koul
2016-01-13 13:55 ` Niklas Söderlund
2016-01-13 23:13 ` Laurent Pinchart
2016-01-13 23:15 ` Laurent Pinchart
2016-01-13 23:37 ` Laurent Pinchart
2016-01-14  3:48 ` Vinod Koul
2016-01-14 13:59 ` Laurent Pinchart
2016-01-14 21:37 ` Niklas Söderlund
2016-01-14 23:27 ` Laurent Pinchart
2016-01-18 13:48 ` Vinod Koul
2016-01-24 22:38 ` Laurent Pinchart
2016-02-03 12:04 ` Laurent Pinchart [this message]
2016-02-08  3:47 ` Vinod Koul
2016-02-10 23:51 ` Laurent Pinchart
2016-02-10 23:59 ` Laurent Pinchart
2016-02-15 17:41 ` Vinod Koul
