From mboxrd@z Thu Jan  1 00:00:00 1970
From: Joerg Roedel
To: iommu@lists.linux-foundation.org
Cc: Joerg Roedel, Will Deacon, linux-kernel@vger.kernel.org
Subject: [PATCH 04/13] iommu/dma: Use synchronized interface of the IOMMU-API
Date: Thu, 17 Aug 2017 14:56:27 +0200
Message-ID: <1502974596-23835-5-git-send-email-joro@8bytes.org>
References: <1502974596-23835-1-git-send-email-joro@8bytes.org>
In-Reply-To: <1502974596-23835-1-git-send-email-joro@8bytes.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
List-Id: iommu@lists.linux-foundation.org

From: Joerg Roedel

The map and unmap functions of the IOMMU-API changed their semantics:
they no longer guarantee that the hardware TLBs are synchronized with
the page-table updates they make.

To make the conversion easier, new synchronized functions have been
introduced which give these guarantees again, until the code is
converted to use the new TLB-flush interface of the IOMMU-API, which
allows certain optimizations.

But for now, just convert this code to use the synchronized functions
so that it behaves as before.

Cc: Robin Murphy
Cc: Will Deacon
Cc: Nate Watterson
Cc: Eric Auger
Cc: Mitchel Humpherys
Signed-off-by: Joerg Roedel
---
 drivers/iommu/dma-iommu.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9d1cebe..38c41a2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -417,7 +417,7 @@ static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr,
 	dma_addr -= iova_off;
 	size = iova_align(iovad, size + iova_off);
 
-	WARN_ON(iommu_unmap(domain, dma_addr, size) != size);
+	WARN_ON(iommu_unmap_sync(domain, dma_addr, size) != size);
 	iommu_dma_free_iova(cookie, dma_addr, size);
 }
 
@@ -572,7 +572,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 		sg_miter_stop(&miter);
 	}
 
-	if (iommu_map_sg(domain, iova, sgt.sgl, sgt.orig_nents, prot)
+	if (iommu_map_sg_sync(domain, iova, sgt.sgl, sgt.orig_nents, prot)
 			< size)
 		goto out_free_sg;
 
@@ -631,7 +631,7 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
 	if (!iova)
 		return IOMMU_MAPPING_ERROR;
 
-	if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
+	if (iommu_map_sync(domain, iova, phys - iova_off, size, prot)) {
 		iommu_dma_free_iova(cookie, iova, size);
 		return IOMMU_MAPPING_ERROR;
 	}
@@ -791,7 +791,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	 * We'll leave any physical concatenation to the IOMMU driver's
 	 * implementation - it knows better than we do.
 	 */
-	if (iommu_map_sg(domain, iova, sg, nents, prot) < iova_len)
+	if (iommu_map_sg_sync(domain, iova, sg, nents, prot) < iova_len)
 		goto out_free_iova;
 
 	return __finalise_sg(dev, sg, nents, iova);
-- 
2.7.4
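
For context on the _sync variants used above: they are meant to restore
the old guarantee that, once the call returns, the hardware IOTLB no
longer references the affected range.  A rough caller-side sketch of
what iommu_unmap_sync() is expected to be equivalent to is shown below;
the explicit flush call iommu_tlb_sync() is an assumption about the new
split TLB-flush interface and is named here only for illustration, not
taken from this patch.

/*
 * Illustrative sketch only -- not part of the patch.  Unmap a range
 * and make the change visible to the device before returning, so the
 * IOVA can safely be handed back to the allocator.
 */
static size_t unmap_and_flush(struct iommu_domain *domain,
			      unsigned long iova, size_t size)
{
	size_t unmapped;

	/* Tear down the page-table entries; stale IOTLB entries may remain. */
	unmapped = iommu_unmap(domain, iova, size);

	/* Assumed explicit flush primitive of the new interface. */
	iommu_tlb_sync(domain);

	return unmapped;
}

The per-call flush is what makes the converted dma-iommu code behave as
before; the later conversion to the split interface would allow several
unmaps to be batched under a single flush.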