linux-arm-kernel.lists.infradead.org archive mirror
From: yong.wu@mediatek.com (Yong Wu)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 2/4] iommu: Implement common IOMMU ops for DMA mapping
Date: Tue, 7 Jul 2015 15:37:58 +0800	[thread overview]
Message-ID: <1436254678.18552.7.camel@mhfsdcap03> (raw)
In-Reply-To: <5596BBD3.1040708@arm.com>

On Fri, 2015-07-03 at 17:44 +0100, Robin Murphy wrote:
> On 03/07/15 10:27, Yong Wu wrote:
> [...]
> >> +/**
> >> + * iommu_dma_alloc - Allocate and map a buffer contiguous in IOVA space
> >> + * @dev: Device to allocate memory for. Must be a real device
> >> + *	 attached to an iommu_dma_domain
> >> + * @size: Size of buffer in bytes
> >> + * @gfp: Allocation flags
> >> + * @prot: IOMMU mapping flags
> >> + * @coherent: Which dma_mask to base IOVA allocation on
> >> + * @handle: Out argument for allocated DMA handle
> >> + * @flush_page: Arch callback to flush a single page from caches as
> >> + *		necessary. May be NULL for coherent allocations
> >> + *
> >> + * If @size is less than PAGE_SIZE, then a full CPU page will be allocated,
> >> + * but an IOMMU which supports smaller pages might not map the whole thing.
> >> + * For now, the buffer is unconditionally zeroed for compatibility
> >> + *
> >> + * Return: Array of struct page pointers describing the buffer,
> >> + *	   or NULL on failure.
> >> + */
> >> +struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> >> +		int prot, bool coherent, dma_addr_t *handle,
> >> +		void (*flush_page)(const void *, phys_addr_t))
> >> +{
> >> +	struct iommu_dma_domain *dom = arch_get_dma_domain(dev);
> >> +	struct iova_domain *iovad = dom->iovad;
> >> +	struct iova *iova;
> >> +	struct page **pages;
> >> +	struct sg_table sgt;
> >> +	struct sg_mapping_iter miter;
> >> +	dma_addr_t dma_addr;
> >> +	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> >> +
> >> +	*handle = DMA_ERROR_CODE;
> >
> > Hi Robin,
> >
> >    Compared with the previous DMA code, you have deleted this line here:
> >
> >            size = PAGE_ALIGN(size);
> >
> >    Do you expect the caller to make sure the size is page-aligned?
> >
> >    The same applies to __iommu_free_attrs.
> 
> Yeah, there is an oversight here due to some back-and-forth refactoring 
> - both the page allocation and the IOVA allocation do get suitably 
> aligned as they should, but the segments of the temporary scatterlist 
> don't. Somehow I managed not to hit an iommu_map_sg failure due to that 
> until some time after this posting (it seems most of the drivers I test 
> with only make page-sized allocations...)
> 
> I've currently got the fixup below waiting for the next posting.
> 
> Robin.

Hi Robin,
    Could you let us know the status and ETA of the next version of the DMA
    series? We are preparing the next mtk-iommu patch for our project, and we
    would be happy to help test the new version once you send it out.
    Thanks.

> 
> >> +
> >> +	/* IOMMU can map any pages, so highmem can also be used here */
> >> +	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
> >> +	pages = __iommu_dma_alloc_pages(count, gfp);
> >> +	if (!pages)
> >> +		return NULL;
> >> +
> >> +	iova = __alloc_iova(dev, size, coherent);
> >> +	if (!iova)
> >> +		goto out_free_pages;
> >> +
> 
> +	size = iova_align(iovad, size);
> 
> >> +	if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
> >> +		goto out_free_iova;
> >> +
> >> +	dma_addr = iova_dma_addr(iovad, iova);
> >> +	if (iommu_map_sg(dom->domain, dma_addr, sgt.sgl, sgt.orig_nents, prot)
> >> +			< size)
> >> +		goto out_free_sg;
> >> +
> >> +	/* Using the non-flushing flag since we're doing our own */
> >> +	sg_miter_start(&miter, sgt.sgl, sgt.orig_nents, SG_MITER_FROM_SG);
> >> +	while (sg_miter_next(&miter)) {
> >> +		memset(miter.addr, 0, PAGE_SIZE);
> >> +		if (flush_page)
> >> +			flush_page(miter.addr, page_to_phys(miter.page));
> >> +	}
> >> +	sg_miter_stop(&miter);
> >> +	sg_free_table(&sgt);
> >> +
> >> +	*handle = dma_addr;
> >> +	return pages;
> >> +
> >> +out_free_sg:
> >> +	sg_free_table(&sgt);
> >> +out_free_iova:
> >> +	__free_iova(iovad, iova);
> >> +out_free_pages:
> >> +	__iommu_dma_free_pages(pages, count);
> >> +	return NULL;
> >> +}
> >> +
> > [...]
> >> +
> >> +#endif	/* __KERNEL__ */
> >> +#endif	/* __DMA_IOMMU_H */
> >
> >
> 
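For context, here is a minimal sketch of the allocation-size handling with the
quoted fixup applied. It follows the v2 code quoted above with the error paths
elided, and is only meant to illustrate where the iova_align() call slots in,
not the final upstream version:

	struct iova_domain *iovad = dom->iovad;
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;

	/* The page and IOVA allocations are already suitably aligned... */
	pages = __iommu_dma_alloc_pages(count, gfp);
	iova = __alloc_iova(dev, size, coherent);

	/*
	 * ...but the temporary scatterlist must cover the granule-aligned
	 * size as well, otherwise iommu_map_sg() maps less than 'size' and
	 * the check below fails.
	 */
	size = iova_align(iovad, size);
	if (sg_alloc_table_from_pages(&sgt, pages, count, 0, size, GFP_KERNEL))
		goto out_free_iova;

	dma_addr = iova_dma_addr(iovad, iova);
	if (iommu_map_sg(dom->domain, dma_addr, sgt.sgl, sgt.orig_nents, prot)
			< size)
		goto out_free_sg;

In other words, the intent appears to be that the alignment is handled inside
the DMA layer itself rather than being pushed out to callers of
iommu_dma_alloc().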


Thread overview: 12+ messages
2015-06-11 15:54 [PATCH v2 0/4] arm64: IOMMU-backed DMA mapping Robin Murphy
2015-06-11 15:54 ` [PATCH v2 1/4] iommu/iova: Avoid over-allocating when size-aligned Robin Murphy
2015-06-11 15:54 ` [PATCH v2 2/4] iommu: Implement common IOMMU ops for DMA mapping Robin Murphy
2015-06-18 15:00   ` Yong Wu
2015-06-18 16:54     ` Robin Murphy
2015-06-19  1:06       ` Yong Wu
2015-07-03  9:27   ` Yong Wu
2015-07-03 16:44     ` Robin Murphy
2015-07-07  7:37       ` Yong Wu [this message]
2015-06-11 15:54 ` [PATCH v2 3/4] arm64: Add IOMMU dma_ops Robin Murphy
2015-06-18 15:00   ` Yong Wu
2015-06-11 15:54 ` [PATCH v2 4/4] arm64: Hook up " Robin Murphy
