From mboxrd@z Thu Jan  1 00:00:00 1970
From: Lu Baolu
To: David Woodhouse, Joerg Roedel
Cc: ashok.raj@intel.com, jacob.jun.pan@intel.com, alan.cox@intel.com,
	kevin.tian@intel.com, mika.westerberg@linux.intel.com,
	pengfei.xu@intel.com, Konrad Rzeszutek Wilk, Christoph Hellwig,
	Marek Szyprowski, Robin Murphy, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, Lu Baolu, Jacob Pan
Subject: [PATCH v3 09/10] iommu/vt-d: Add dma sync ops for untrusted devices
Date: Sun, 21 Apr 2019 09:17:18 +0800
Message-Id: <20190421011719.14909-10-baolu.lu@linux.intel.com>
In-Reply-To: <20190421011719.14909-1-baolu.lu@linux.intel.com>
References: <20190421011719.14909-1-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-kernel@vger.kernel.org

This adds the dma sync ops for dma buffers used by any untrusted
device. We need to sync such buffers because they might have been
mapped with bounce pages.

Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Signed-off-by: Lu Baolu
Tested-by: Xu Pengfei
Tested-by: Mika Westerberg
---
 drivers/iommu/Kconfig       |  1 +
 drivers/iommu/intel-iommu.c | 96 +++++++++++++++++++++++++++++++++----
 2 files changed, 88 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index b918c22ca25b..f3191ec29e45 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -194,6 +194,7 @@ config INTEL_IOMMU
 	select IOMMU_IOVA
 	select NEED_DMA_MAP_STATE
 	select DMAR_TABLE
+	select IOMMU_BOUNCE_PAGE
 	help
 	  DMA remapping (DMAR) devices support enables independent address
 	  translations for Direct Memory Access (DMA) from devices.
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 0d80f26b8a72..ed941ec9b9d5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3683,16 +3683,94 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
 	return nelems;
 }
 
+static void
+intel_sync_single_for_cpu(struct device *dev, dma_addr_t addr,
+			  size_t size, enum dma_data_direction dir)
+{
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	iommu_bounce_sync_single(dev, addr, size, dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_single_for_device(struct device *dev, dma_addr_t addr,
+			     size_t size, enum dma_data_direction dir)
+{
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	iommu_bounce_sync_single(dev, addr, size, dir, SYNC_FOR_DEVICE);
+}
+
+static void
+intel_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
+		      int nelems, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	for_each_sg(sglist, sg, nelems, i)
+		iommu_bounce_sync_single(dev, sg_dma_address(sg),
+					 sg_dma_len(sg), dir, SYNC_FOR_CPU);
+}
+
+static void
+intel_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
+			 int nelems, enum dma_data_direction dir)
+{
+	struct scatterlist *sg;
+	int i;
+
+	if (WARN_ON(dir == DMA_NONE))
+		return;
+
+	if (!device_needs_bounce(dev))
+		return;
+
+	if (iommu_no_mapping(dev))
+		return;
+
+	for_each_sg(sglist, sg, nelems, i)
+		iommu_bounce_sync_single(dev, sg_dma_address(sg),
+					 sg_dma_len(sg), dir, SYNC_FOR_DEVICE);
+}
+
 static const struct dma_map_ops intel_dma_ops = {
-	.alloc = intel_alloc_coherent,
-	.free = intel_free_coherent,
-	.map_sg = intel_map_sg,
-	.unmap_sg = intel_unmap_sg,
-	.map_page = intel_map_page,
-	.unmap_page = intel_unmap_page,
-	.map_resource = intel_map_resource,
-	.unmap_resource = intel_unmap_page,
-	.dma_supported = dma_direct_supported,
+	.alloc			= intel_alloc_coherent,
+	.free			= intel_free_coherent,
+	.map_sg			= intel_map_sg,
+	.unmap_sg		= intel_unmap_sg,
+	.map_page		= intel_map_page,
+	.unmap_page		= intel_unmap_page,
+	.sync_single_for_cpu	= intel_sync_single_for_cpu,
+	.sync_single_for_device	= intel_sync_single_for_device,
+	.sync_sg_for_cpu	= intel_sync_sg_for_cpu,
+	.sync_sg_for_device	= intel_sync_sg_for_device,
+	.map_resource		= intel_map_resource,
+	.unmap_resource		= intel_unmap_page,
+	.dma_supported		= dma_direct_supported,
 };
 
 static inline int iommu_domain_cache_init(void)
-- 
2.17.1