From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky
To: Marek Szyprowski, Jens Axboe, Christoph Hellwig, Keith Busch
Cc: Jake Edge, Jonathan Corbet, Jason Gunthorpe, Zhu Yanjun,
	Robin Murphy, Joerg Roedel, Will Deacon, Sagi Grimberg,
	Bjorn Helgaas, Logan Gunthorpe, Yishai Hadas, Shameer Kolothum,
	Kevin Tian, Alex Williamson, Jérôme Glisse, Andrew Morton,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	iommu@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-pci@vger.kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
	Niklas Schnelle, Chuck Lever, Luis Chamberlain, Matthew Wilcox,
	Dan Williams, Kanchan Joshi, Chaitanya Kulkarni, Leon Romanovsky
Subject: [PATCH v10 06/24] iommu/dma: Factor out a iommu_dma_map_swiotlb helper
Date: Mon, 28 Apr 2025 12:22:12 +0300
X-Mailer: git-send-email 2.49.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Christoph Hellwig

Split the iommu logic from iommu_dma_map_page into a separate helper.
This not only keeps the code neatly separated, but also allows it to be
reused by another caller.

Signed-off-by: Christoph Hellwig
Tested-by: Jens Axboe
Reviewed-by: Luis Chamberlain
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 73 ++++++++++++++++++++++-----------------
 1 file changed, 41 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d3211a8d755e..d7684024c439 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1138,6 +1138,43 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 		arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
 }
 
+static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	struct iommu_domain *domain = iommu_get_dma_domain(dev);
+	struct iova_domain *iovad = &domain->iova_cookie->iovad;
+
+	if (!is_swiotlb_active(dev)) {
+		dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
+		return (phys_addr_t)DMA_MAPPING_ERROR;
+	}
+
+	trace_swiotlb_bounced(dev, phys, size);
+
+	phys = swiotlb_tbl_map_single(dev, phys, size, iova_mask(iovad), dir,
+			attrs);
+
+	/*
+	 * Untrusted devices should not see padding areas with random leftover
+	 * kernel data, so zero the pre- and post-padding.
+	 * swiotlb_tbl_map_single() has initialized the bounce buffer proper to
+	 * the contents of the original memory buffer.
+	 */
+	if (phys != (phys_addr_t)DMA_MAPPING_ERROR && dev_is_untrusted(dev)) {
+		size_t start, virt = (size_t)phys_to_virt(phys);
+
+		/* Pre-padding */
+		start = iova_align_down(iovad, virt);
+		memset((void *)start, 0, virt - start);
+
+		/* Post-padding */
+		start = virt + size;
+		memset((void *)start, 0, iova_align(iovad, start) - start);
+	}
+
+	return phys;
+}
+
 dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
@@ -1151,42 +1188,14 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
 
 	/*
-	 * If both the physical buffer start address and size are
-	 * page aligned, we don't need to use a bounce page.
+	 * If both the physical buffer start address and size are page aligned,
+	 * we don't need to use a bounce page.
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_offset(iovad, phys | size)) {
-		if (!is_swiotlb_active(dev)) {
-			dev_warn_once(dev, "DMA bounce buffers are inactive, unable to map unaligned transaction.\n");
-			return DMA_MAPPING_ERROR;
-		}
-
-		trace_swiotlb_bounced(dev, phys, size);
-
-		phys = swiotlb_tbl_map_single(dev, phys, size,
-					      iova_mask(iovad), dir, attrs);
-
-		if (phys == DMA_MAPPING_ERROR)
+		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
+		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
-
-		/*
-		 * Untrusted devices should not see padding areas with random
-		 * leftover kernel data, so zero the pre- and post-padding.
-		 * swiotlb_tbl_map_single() has initialized the bounce buffer
-		 * proper to the contents of the original memory buffer.
-		 */
-		if (dev_is_untrusted(dev)) {
-			size_t start, virt = (size_t)phys_to_virt(phys);
-
-			/* Pre-padding */
-			start = iova_align_down(iovad, virt);
-			memset((void *)start, 0, virt - start);
-
-			/* Post-padding */
-			start = virt + size;
-			memset((void *)start, 0,
-			       iova_align(iovad, start) - start);
-		}
 	}
 
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-- 
2.49.0
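
P.S. For readers working through the padding logic above: the helper zeroes
the bounce slot from the IOVA-granule-aligned start up to the real data, and
from the end of the real data up to the aligned end, so an untrusted device
never sees leftover kernel bytes. Below is a minimal userspace sketch of just
that arithmetic. It is not part of the patch; the align_up()/align_down()
stand-ins mirror the kernel's iova_align()/iova_align_down() helpers, and the
granule, address, and size values are invented for illustration.

#include <stdio.h>
#include <stddef.h>

/* Stand-ins for iova_mask()/iova_align()/iova_align_down(); an IOVA
 * granule is always a power of two, which the masking relies on. */
static size_t mask_of(size_t granule)  { return granule - 1; }
static size_t align_up(size_t granule, size_t v)
{
	return (v + mask_of(granule)) & ~mask_of(granule);
}
static size_t align_down(size_t granule, size_t v)
{
	return v & ~mask_of(granule);
}

int main(void)
{
	size_t granule = 4096;    /* assumed IOVA granule (often PAGE_SIZE) */
	size_t virt    = 0x10234; /* invented bounce-buffer virtual address */
	size_t size    = 0x800;   /* invented, granule-unaligned, length   */

	/* Pre-padding: zero [aligned-down start, start of real data). */
	size_t pre = align_down(granule, virt);
	printf("pre : memset(%#zx, 0, %zu)\n", pre, virt - pre);

	/* Post-padding: zero [end of real data, aligned-up end). */
	size_t end = virt + size;
	printf("post: memset(%#zx, 0, %zu)\n", end, align_up(granule, end) - end);
	return 0;
}

Running it reports 564 bytes of pre-padding zeroed at 0x10000 and 1484 bytes
of post-padding zeroed at 0x10a34, which is what the two memset() calls in
iommu_dma_map_swiotlb() would do for such a buffer.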