From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Christoph Hellwig, Jonathan Corbet,
	Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
	Christophe Leroy, Robin Murphy, Joerg Roedel, Will Deacon,
	"Michael S. Tsirkin", Jason Wang, Xuan Zhuo, Eugenio Pérez,
	Alexander Potapenko, Marco Elver, Dmitry Vyukov,
	Masami Hiramatsu, Mathieu Desnoyers, Jérôme Glisse,
	Andrew Morton, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	iommu@lists.linux.dev, virtualization@lists.linux.dev,
	kasan-dev@googlegroups.com, linux-trace-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 4/8] dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
Date: Wed, 25 Jun 2025 16:19:01 +0300
Message-ID: <1165abafc7d4bd2eed2cc89480b68111fe6fd13d.1750854543.git.leon@kernel.org>
X-Mailer: git-send-email 2.49.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Leon Romanovsky

Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary.

The functions dma_direct_map_page() and dma_direct_unmap_page() are
renamed to dma_direct_map_phys() and dma_direct_unmap_phys()
respectively, with their calling convention changed from
(struct page *page, unsigned long offset) to (phys_addr_t phys).
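As an illustration only (condensed from the dma_map_page_attrs() hunk
below; variable names are ours), a caller that used to pass a page and
offset now resolves the physical address itself and passes that:

	/* before: page + offset based */
	addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);

	/* after: phys_addr_t based; phys = page_to_phys(page) + offset */
	addr = dma_direct_map_phys(dev, phys, size, dir, attrs);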
Architecture-specific functions arch_dma_map_page_direct() and
arch_dma_unmap_page_direct() are similarly renamed to
arch_dma_map_phys_direct() and arch_dma_unmap_phys_direct().

The is_pci_p2pdma_page() checks are replaced with pfn_valid() checks
using PHYS_PFN(phys). This provides more accurate validation for
non-page-backed memory regions without the need for a "faked"
struct page.
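For reference, a condensed sketch of the swiotlb bounce path after this
change (taken from the dma_direct_map_phys() hunk below):

	if (is_swiotlb_force_bounce(dev)) {
		/* memory that is not page backed cannot be bounced */
		if (!pfn_valid(PHYS_PFN(phys)))
			return DMA_MAPPING_ERROR;
		return swiotlb_map(dev, phys, size, dir, attrs);
	}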
Signed-off-by: Leon Romanovsky
---
 arch/powerpc/kernel/dma-iommu.c |  4 ++--
 include/linux/dma-map-ops.h     |  8 ++++----
 kernel/dma/direct.c             |  6 +++---
 kernel/dma/direct.h             | 13 ++++++-------
 kernel/dma/mapping.c            |  8 ++++----
 5 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index 4d64a5db50f3..0359ab72cd3b 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -14,7 +14,7 @@
 #define can_map_direct(dev, addr) \
 	((dev)->bus_dma_limit >= phys_to_dma((dev), (addr)))
 
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
@@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
 
 #define is_direct_handle(dev, h) ((h) >= (dev)->archdata.dma_offset)
 
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle)
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f48e5fb88bd5..71f5b3025415 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size);
 void arch_dma_clear_uncached(void *addr, size_t size);
 
 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr);
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle);
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr);
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle);
 bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 #else
-#define arch_dma_map_page_direct(d, a)		(false)
-#define arch_dma_unmap_page_direct(d, a)	(false)
+#define arch_dma_map_phys_direct(d, a)		(false)
+#define arch_dma_unmap_phys_direct(d, a)	(false)
 #define arch_dma_map_sg_direct(d, s, n)		(false)
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 24c359d9c879..fa75e3070073 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -453,7 +453,7 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		if (sg_dma_is_bus_address(sg))
 			sg_dma_unmark_bus_address(sg);
 		else
-			dma_direct_unmap_page(dev, sg->dma_address,
+			dma_direct_unmap_phys(dev, sg->dma_address,
 					sg_dma_len(sg), dir, attrs);
 	}
 }
@@ -476,8 +476,8 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 			 */
 			break;
 		case PCI_P2PDMA_MAP_NONE:
-			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
-					sg->offset, sg->length, dir, attrs);
+			sg->dma_address = dma_direct_map_phys(dev, sg_phys(sg),
+					sg->length, dir, attrs);
 			if (sg->dma_address == DMA_MAPPING_ERROR) {
 				ret = -EIO;
 				goto out_unmap;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..10c1ba73c482 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -80,22 +80,21 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_dma_mark_clean(paddr, size);
 }
 
-static inline dma_addr_t dma_direct_map_page(struct device *dev,
-		struct page *page, unsigned long offset, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
+static inline dma_addr_t dma_direct_map_phys(struct device *dev,
+		phys_addr_t phys, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (is_pci_p2pdma_page(page))
+		if (!pfn_valid(PHYS_PFN(phys)))
 			return DMA_MAPPING_ERROR;
 		return swiotlb_map(dev, phys, size, dir, attrs);
 	}
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
 	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_pci_p2pdma_page(page))
+		if (!pfn_valid(PHYS_PFN(phys)))
 			return DMA_MAPPING_ERROR;
 		if (is_swiotlb_active(dev))
 			return swiotlb_map(dev, phys, size, dir, attrs);
@@ -111,7 +110,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	return dma_addr;
 }
 
-static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	phys_addr_t phys = dma_to_phys(dev, addr);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58482536db9b..80481a873340 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -166,8 +166,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, phys + size))
-		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	    arch_dma_map_phys_direct(dev, phys + size))
+		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
@@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	BUG_ON(!valid_dma_direction(dir));
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_unmap_page_direct(dev, addr + size))
-		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+	    arch_dma_unmap_phys_direct(dev, addr + size))
+		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
-- 
2.49.0