From: hdoyu@nvidia.com (Hiroshi Doyu)
To: linux-arm-kernel@lists.infradead.org
Subject: [Linaro-mm-sig] [RFC 2/3] ARM: dma-mapping: Pass DMA attrs as IOMMU prot
Date: Thu, 20 Jun 2013 10:24:39 +0200
Message-ID: <20130620.112439.1330557591655135630.hdoyu@nvidia.com>
In-Reply-To: <CAMcxFTRnnCnPnvMi89r7rUsN4xeYzZGqy9s5v5ukm_fJ_d_RGg@mail.gmail.com>
Hi Nishanth,
Nishanth Peethambaran <nishanth.p@gmail.com> wrote @ Thu, 20 Jun 2013 10:07:00 +0200:
> It would be better to define a prot flag bit in iommu API and convert
> the attrs to prot flag bit in dma-mapping aPI before calling the iommu
> API.
That's the 1st option.
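Roughly, that would mean a small helper on the dma-mapping side; a
minimal sketch, assuming DMA_ATTR_READ_ONLY from this series and
reusing the existing IOMMU_READ/IOMMU_WRITE prot bits (the helper name
below is made up, and a dedicated read-only prot bit as you suggest
would work the same way):

static int __dma_attrs_to_iommu_prot(struct dma_attrs *attrs)
{
	int prot = IOMMU_READ | IOMMU_WRITE;

	/* Translate the DMA-layer attr into an IOMMU-layer prot bit. */
	if (dma_get_attr(DMA_ATTR_READ_ONLY, attrs))
		prot &= ~IOMMU_WRITE;

	return prot;
}

Then ____iommu_create_mapping() would pass
__dma_attrs_to_iommu_prot(attrs) to iommu_map() instead of the attrs
pointer itself.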
> On Thu, Jun 20, 2013 at 11:19 AM, Hiroshi Doyu <hdoyu@nvidia.com> wrote:
....
> > @@ -1280,7 +1281,7 @@ ____iommu_create_mapping(struct device *dev, dma_addr_t *req,
> > break;
> >
> > len = (j - i) << PAGE_SHIFT;
> > - ret = iommu_map(mapping->domain, iova, phys, len, 0);
> > + ret = iommu_map(mapping->domain, iova, phys, len, (int)attrs);
>
> Use dma_get_attr and translate the READ_ONLY attr to a new READ_ONLY
> prot flag bit which needs to be defined in iommu.h
Both DMA_ATTR_READ_ONLY and IOMMU_READ are just logical bits in their
respective layers, and eventually they get converted into H/W-dependent
bits.
If IOMMU is considered a specific case of DMA, sharing dma_attrs
between the IOMMU and DMA layers may not be so bad. IIRC, the ARM
dma-mapping API was implemented based on this concept(?).
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d8f98b1..161a1b0 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -755,7 +755,7 @@ int iommu_domain_has_cap(struct iommu_domain *domain,
EXPORT_SYMBOL_GPL(iommu_domain_has_cap);
 int iommu_map(struct iommu_domain *domain, unsigned long iova,
-	      phys_addr_t paddr, size_t size, int prot)
+	      phys_addr_t paddr, size_t size, struct dma_attrs *attrs)
{
unsigned long orig_iova = iova;
unsigned int min_pagesz;
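With that prototype, an IOMMU driver's ->map() callback could then look
at the attrs directly when it builds its page table entry. A rough
sketch only; the foo_* names and FOO_PTE_* bits below are made up for
illustration, and the actual conversion is of course H/W dependent:

static int foo_smmu_map(struct iommu_domain *domain, unsigned long iova,
			phys_addr_t paddr, size_t size,
			struct dma_attrs *attrs)
{
	u32 pteval = FOO_PTE_READABLE | FOO_PTE_WRITABLE;

	/* Here the logical DMA attr becomes a H/W-dependent PTE bit. */
	if (dma_get_attr(DMA_ATTR_READ_ONLY, attrs))
		pteval &= ~FOO_PTE_WRITABLE;

	return foo_set_pte(domain, iova, paddr, size, pteval);
}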