* [PATCH] iommu: Decouple iommu_map_sg from CPU page size
From: Robin Murphy @ 2014-11-25 17:50 UTC
To: linux-arm-kernel
If the IOMMU supports pages smaller than the CPU page size, segments
which lie at offsets within the CPU page may be mapped based on the
finer-grained IOMMU page boundaries. This minimises the amount of
non-buffer memory between the CPU page boundary and the start of the
segment which must be mapped and therefore exposed to the device, and
brings the default iommu_map_sg implementation in line with
iommu_map/unmap with respect to alignment.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
---
Hi Joerg,
I noticed this whilst wiring up DMA mapping to this new API - on arm64
we anticipate running 4k IOMMU pages with 64k CPU pages, in which case
the alignment check ends up being unnecessarily strict.
Regards,
Robin.
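
For illustration, a minimal userspace sketch of the relaxed check follows. The bitmap value and the 64k/4k split are assumptions matching the scenario above, not kernel code, and __builtin_ctzl() stands in for the kernel's __ffs():

#include <stdio.h>

int main(void)
{
	/* Hypothetical bitmap: hardware supports 4K, 2M and 1G pages */
	unsigned long pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30);

	/*
	 * The lowest set bit gives the smallest supported page size;
	 * __builtin_ctzl() plays the role of the kernel's __ffs() here.
	 */
	unsigned long min_pagesz = 1UL << __builtin_ctzl(pgsize_bitmap);

	/*
	 * A segment 4K into a 64K CPU page: rejected by the old
	 * (s->offset != 0) check, accepted by the new one because it
	 * sits on an IOMMU page boundary.
	 */
	unsigned long offset = 0x1000;

	printf("min_pagesz = %#lx\n", min_pagesz);	/* 0x1000 */
	printf("offset %#lx aligned: %s\n", offset,
	       (offset & (min_pagesz - 1)) ? "no" : "yes");
	return 0;
}
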
drivers/iommu/iommu.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 08c53c5..5c4101a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1129,14 +1129,24 @@ size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
 {
 	struct scatterlist *s;
 	size_t mapped = 0;
-	unsigned int i;
+	unsigned int i, min_pagesz;
 	int ret;
 
+	if (unlikely(domain->ops->pgsize_bitmap == 0UL))
+		return 0;
+
+	min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+
 	for_each_sg(sg, s, nents, i) {
-		phys_addr_t phys = page_to_phys(sg_page(s));
+		phys_addr_t phys = page_to_phys(sg_page(s)) + s->offset;
 
-		/* We are mapping on page boundarys, so offset must be 0 */
-		if (s->offset)
+		/*
+		 * We are mapping on IOMMU page boundaries, so offset within
+		 * the page must be 0. However, the IOMMU may support pages
+		 * smaller than PAGE_SIZE, so s->offset may still represent
+		 * an offset of that boundary within the CPU page.
+		 */
+		if (!IS_ALIGNED(s->offset, min_pagesz))
 			goto out_err;
 
 		ret = iommu_map(domain, iova + mapped, phys, s->length, prot);
--
1.9.1
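
To make the before/after behaviour concrete, here is a small standalone model of the two alignment checks. The addresses and page sizes are assumed values for the arm64 scenario above, not the kernel implementation:

#include <assert.h>
#include <stdbool.h>

#define MIN_IOMMU_PAGE	0x1000UL	/* 4K IOMMU pages */

/* Old behaviour: any offset within the CPU page fails the check */
static bool old_offset_ok(unsigned long offset)
{
	return offset == 0;
}

/* New behaviour: the offset only has to be IOMMU-page aligned */
static bool new_offset_ok(unsigned long offset)
{
	return (offset & (MIN_IOMMU_PAGE - 1)) == 0;
}

int main(void)
{
	unsigned long page_phys = 0x80000000UL;	/* 64K-aligned CPU page */
	unsigned long offset = 0x2000UL;	/* segment starts 8K in */

	assert(!old_offset_ok(offset));	/* previously: whole sg list rejected */
	assert(new_offset_ok(offset));	/* now: mapping proceeds */

	/*
	 * As in the patch, the physical address handed to iommu_map()
	 * now includes the offset within the CPU page.
	 */
	unsigned long phys = page_phys + offset;	/* 0x80002000 */
	assert((phys & (MIN_IOMMU_PAGE - 1)) == 0);
	return 0;
}

The segment is rejected outright by the old check, while the new one maps it from the 4K boundary it already sits on, so only the IOMMU pages actually covering the buffer are exposed to the device.
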
* [PATCH] iommu: Decouple iommu_map_sg from CPU page size
From: Joerg Roedel @ 2014-12-02 12:05 UTC
To: linux-arm-kernel
On Tue, Nov 25, 2014 at 05:50:55PM +0000, Robin Murphy wrote:
> If the IOMMU supports pages smaller than the CPU page size, segments
> which lie at offsets within the CPU page may be mapped based on the
> finer-grained IOMMU page boundaries. This minimises the amount of
> non-buffer memory between the CPU page boundary and the start of the
> segment which must be mapped and therefore exposed to the device, and
> brings the default iommu_map_sg implementation in line with
> iommu_map/unmap with respect to alignment.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---
>
> Hi Joerg,
>
> I noticed this whilst wiring up DMA mapping to this new API - on arm64
> we anticipate running 4k IOMMU pages with 64k CPU pages, in which case
> the alignment check ends up being unnecessarily strict.
Applied to the core branch, thanks.
Joerg