From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson, Matt Turner,
	Thomas Bogendoerfer, "James E.J. Bottomley", Helge Deller,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, Christophe Leroy,
	Geoff Levand, "David S. Miller", Andreas Larsson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v5 04/14] ARM: dma-mapping: Switch to physical address mapping callbacks
Date: Wed, 15 Oct 2025 12:12:50 +0300
Message-ID: <20251015-remove-map-page-v5-4-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Combine the resource and page mapping routines into one function that
handles both flows in the same manner. This conversion allows us to
remove the .map_resource/.unmap_resource callbacks completely.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/arm/mm/dma-mapping.c | 100 +++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 77 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b0310d6762d5..a4c765d24692 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -732,6 +732,9 @@ static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
 	if (attrs & DMA_ATTR_PRIVILEGED)
 		prot |= IOMMU_PRIV;
 
+	if (attrs & DMA_ATTR_MMIO)
+		prot |= IOMMU_MMIO;
+
 	switch (dir) {
 	case DMA_BIDIRECTIONAL:
 		return prot | IOMMU_READ | IOMMU_WRITE;
@@ -1350,25 +1353,27 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 }
 
 /**
- * arm_iommu_map_page
+ * arm_iommu_map_phys
  * @dev: valid struct device pointer
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
+ * @phys: physical address that buffer resides in
  * @size: size of buffer to map
  * @dir: DMA transfer direction
+ * @attrs: DMA mapping attributes
  *
  * IOMMU aware version of arm_dma_map_page()
  */
-static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size, enum dma_data_direction dir,
-	     unsigned long attrs)
+static dma_addr_t arm_iommu_map_phys(struct device *dev, phys_addr_t phys,
+	     size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+	int len = PAGE_ALIGN(size + offset_in_page(phys));
+	phys_addr_t addr = phys & PAGE_MASK;
 	dma_addr_t dma_addr;
-	int ret, prot, len = PAGE_ALIGN(size + offset);
+	int ret, prot;
 
-	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(page_to_phys(page), offset, size, dir);
+	if (!dev->dma_coherent &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+		arch_sync_dma_for_device(phys, size, dir);
 
 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
@@ -1376,12 +1381,11 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 
 	prot = __dma_info_to_prot(dir, attrs);
 
-	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
-			prot, GFP_KERNEL);
+	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
 	if (ret < 0)
 		goto fail;
 
-	return dma_addr + offset;
+	return dma_addr + offset_in_page(phys);
 fail:
 	__free_iova(mapping, dma_addr, len);
 	return DMA_MAPPING_ERROR;
@@ -1393,10 +1397,11 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
  * @handle: DMA address of buffer
  * @size: size of buffer (same as passed to dma_map_page)
  * @dir: DMA transfer direction (same as passed to dma_map_page)
+ * @attrs: DMA mapping attributes
  *
- * IOMMU aware version of arm_dma_unmap_page()
+ * IOMMU aware version of arm_dma_unmap_phys()
  */
-static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
+static void arm_iommu_unmap_phys(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
@@ -1407,7 +1412,8 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 	if (!iova)
 		return;
 
-	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+	if (!dev->dma_coherent &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		phys_addr_t phys = iommu_iova_to_phys(mapping->domain, iova);
 
 		arch_sync_dma_for_cpu(phys + offset, size, dir);
@@ -1417,63 +1423,6 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 	__free_iova(mapping, iova, len);
 }
 
-/**
- * arm_iommu_map_resource - map a device resource for DMA
- * @dev: valid struct device pointer
- * @phys_addr: physical address of resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static dma_addr_t arm_iommu_map_resource(struct device *dev,
-		phys_addr_t phys_addr, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t dma_addr;
-	int ret, prot;
-	phys_addr_t addr = phys_addr & PAGE_MASK;
-	unsigned int offset = phys_addr & ~PAGE_MASK;
-	size_t len = PAGE_ALIGN(size + offset);
-
-	dma_addr = __alloc_iova(mapping, len);
-	if (dma_addr == DMA_MAPPING_ERROR)
-		return dma_addr;
-
-	prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;
-
-	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
-	if (ret < 0)
-		goto fail;
-
-	return dma_addr + offset;
-fail:
-	__free_iova(mapping, dma_addr, len);
-	return DMA_MAPPING_ERROR;
-}
-
-/**
- * arm_iommu_unmap_resource - unmap a device DMA resource
- * @dev: valid struct device pointer
- * @dma_handle: DMA address to resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t iova = dma_handle & PAGE_MASK;
-	unsigned int offset = dma_handle & ~PAGE_MASK;
-	size_t len = PAGE_ALIGN(size + offset);
-
-	if (!iova)
-		return;
-
-	iommu_unmap(mapping->domain, iova, len);
-	__free_iova(mapping, iova, len);
-}
-
 static void arm_iommu_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
@@ -1510,8 +1459,8 @@ static const struct dma_map_ops iommu_ops = {
 	.mmap		= arm_iommu_mmap_attrs,
 	.get_sgtable	= arm_iommu_get_sgtable,
 
-	.map_page		= arm_iommu_map_page,
-	.unmap_page		= arm_iommu_unmap_page,
+	.map_phys		= arm_iommu_map_phys,
+	.unmap_phys		= arm_iommu_unmap_phys,
 
 	.sync_single_for_cpu	= arm_iommu_sync_single_for_cpu,
 	.sync_single_for_device	= arm_iommu_sync_single_for_device,
@@ -1519,9 +1468,6 @@ static const struct dma_map_ops iommu_ops = {
 	.unmap_sg		= arm_iommu_unmap_sg,
 	.sync_sg_for_cpu	= arm_iommu_sync_sg_for_cpu,
 	.sync_sg_for_device	= arm_iommu_sync_sg_for_device,
-
-	.map_resource		= arm_iommu_map_resource,
-	.unmap_resource		= arm_iommu_unmap_resource,
 };
 
 /**
-- 
2.51.0
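
As an illustrative aside (not part of the patch itself), below is a minimal
standalone C sketch of the attribute handling the patch consolidates: the
MMIO attribute both selects device-type protection in __dma_info_to_prot()
and suppresses CPU cache maintenance in arm_iommu_map_phys() and
arm_iommu_unmap_phys(), so one physical-address path can serve both
page-backed memory and MMIO resources. All identifiers here (EX_ATTR_*,
EX_PROT_*, ex_*) are local stand-ins invented for the example; they are not
kernel APIs.

/*
 * Standalone sketch (not kernel code) of the rules visible in the diff
 * above. The EX_ and ex_ names are local stand-ins for the kernel's
 * DMA_ATTR_ and IOMMU_ flags.
 */
#include <stdbool.h>
#include <stdio.h>

#define EX_ATTR_SKIP_CPU_SYNC	(1u << 0)	/* stand-in for DMA_ATTR_SKIP_CPU_SYNC */
#define EX_ATTR_MMIO		(1u << 1)	/* stand-in for DMA_ATTR_MMIO */

#define EX_PROT_READ		(1u << 0)
#define EX_PROT_WRITE		(1u << 1)
#define EX_PROT_MMIO		(1u << 2)	/* stand-in for IOMMU_MMIO */

/* Mirrors __dma_info_to_prot(): the MMIO attribute adds MMIO protection. */
static unsigned int ex_info_to_prot(unsigned int attrs)
{
	unsigned int prot = EX_PROT_READ | EX_PROT_WRITE;

	if (attrs & EX_ATTR_MMIO)
		prot |= EX_PROT_MMIO;
	return prot;
}

/*
 * Mirrors the check in arm_iommu_map_phys()/arm_iommu_unmap_phys(): cache
 * maintenance is skipped for coherent devices, for callers that asked to
 * skip it, and for MMIO mappings.
 */
static bool ex_needs_cpu_sync(bool dev_coherent, unsigned int attrs)
{
	return !dev_coherent &&
	       !(attrs & (EX_ATTR_SKIP_CPU_SYNC | EX_ATTR_MMIO));
}

int main(void)
{
	printf("RAM mapping:  prot=%#x cpu_sync=%d\n",
	       ex_info_to_prot(0), ex_needs_cpu_sync(false, 0));
	printf("MMIO mapping: prot=%#x cpu_sync=%d\n",
	       ex_info_to_prot(EX_ATTR_MMIO),
	       ex_needs_cpu_sync(false, EX_ATTR_MMIO));
	return 0;
}

Running the sketch prints the protection and cache-sync decisions for a RAM
mapping versus an MMIO mapping, which is the distinction that previously
required the separate .map_resource/.unmap_resource callbacks.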