From: Leon Romanovsky
To: Marek Szyprowski
Cc: Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko, Alex Gaynor,
    Andrew Morton, Christoph Hellwig, Danilo Krummrich, iommu@lists.linux.dev,
    Jason Wang, Jens Axboe, Joerg Roedel, Jonathan Corbet, Juergen Gross,
    kasan-dev@googlegroups.com, Keith Busch, linux-block@vger.kernel.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan, Masami Hiramatsu,
    Michael Ellerman, "Michael S. Tsirkin", Miguel Ojeda, Robin Murphy,
    rust-for-linux@vger.kernel.org, Sagi Grimberg, Stefano Stabellini,
    Steven Rostedt, virtualization@lists.linux.dev, Will Deacon,
    xen-devel@lists.xenproject.org
Subject: [PATCH v4 00/16] dma-mapping: migrate to physical address-based API
Date: Tue, 19 Aug 2025 20:36:44 +0300
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 00/16] dma-mapping: migrate to physical address-based API Date: Tue, 19 Aug 2025 20:36:44 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20250819_103734_795119_900DABA4 X-CRM114-Status: GOOD ( 21.11 ) X-Mailman-Approved-At: Wed, 20 Aug 2025 20:44:16 -0700 X-BeenThere: linux-nvme@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "Linux-nvme" Errors-To: linux-nvme-bounces+linux-nvme=archiver.kernel.org@lists.infradead.org Changelog: v4: * Fixed kbuild error with mismatch in kmsan function declaration due to rebase error. v3: https://lore.kernel.org/all/cover.1755193625.git.leon@kernel.org * Fixed typo in "cacheable" word * Simplified kmsan patch a lot to be simple argument refactoring v2: https://lore.kernel.org/all/cover.1755153054.git.leon@kernel.org * Used commit messages and cover letter from Jason * Moved setting IOMMU_MMIO flag to dma_info_to_prot function * Micro-optimized the code * Rebased code on v6.17-rc1 v1: https://lore.kernel.org/all/cover.1754292567.git.leon@kernel.org * Added new DMA_ATTR_MMIO attribute to indicate PCI_P2PDMA_MAP_THRU_HOST_BRIDGE path. * Rewrote dma_map_* functions to use thus new attribute v0: https://lore.kernel.org/all/cover.1750854543.git.leon@kernel.org/ ------------------------------------------------------------------------ This series refactors the DMA mapping to use physical addresses as the primary interface instead of page+offset parameters. This change aligns the DMA API with the underlying hardware reality where DMA operations work with physical addresses, not page structures. The series maintains export symbol backward compatibility by keeping the old page-based API as wrapper functions around the new physical address-based implementations. This series refactors the DMA mapping API to provide a phys_addr_t based, and struct-page free, external API that can handle all the mapping cases we want in modern systems: - struct page based cachable DRAM - struct page MEMORY_DEVICE_PCI_P2PDMA PCI peer to peer non-cachable MMIO - struct page-less PCI peer to peer non-cachable MMIO - struct page-less "resource" MMIO Overall this gets much closer to Matthew's long term wish for struct-pageless IO to cachable DRAM. The remaining primary work would be in the mm side to allow kmap_local_pfn()/phys_to_virt() to work on phys_addr_t without a struct page. The general design is to remove struct page usage entirely from the DMA API inner layers. For flows that need to have a KVA for the physical address they can use kmap_local_pfn() or phys_to_virt(). This isolates the struct page requirements to MM code only. Long term all removals of struct page usage are supporting Matthew's memdesc project which seeks to substantially transform how struct page works. Instead make the DMA API internals work on phys_addr_t. Internally there are still dedicated 'page' and 'resource' flows, except they are now distinguished by a new DMA_ATTR_MMIO instead of by callchain. Both flows use the same phys_addr_t. When DMA_ATTR_MMIO is specified things work similar to the existing 'resource' flow. 
Callers are adjusted to set DMA_ATTR_MMIO. Existing 'resource' users must
set it. The existing struct page based MEMORY_DEVICE_PCI_P2PDMA path must
also set it. This corrects some existing bugs where IOMMU mappings for P2P
MMIO were improperly marked IOMMU_CACHE.

Since DMA_ATTR_MMIO is made to work with all the existing DMA map entry
points, particularly dma_iova_link(), this finally allows a way to use the
new DMA API to map PCI P2P MMIO without creating a struct page. The VFIO
DMABUF series demonstrates how this works. This is intended to replace the
incorrect driver use of dma_map_resource() on PCI BAR addresses.

This series covers the core code and modern flows. A follow-up series will
give the same treatment to the legacy dma_ops implementation.

Thanks

Leon Romanovsky (16):
  dma-mapping: introduce new DMA attribute to indicate MMIO memory
  iommu/dma: implement DMA_ATTR_MMIO for dma_iova_link().
  dma-debug: refactor to use physical addresses for page mapping
  dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
  iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
  iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
  dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
  kmsan: convert kmsan_handle_dma to use physical addresses
  dma-mapping: handle MMIO flow in dma_map|unmap_page
  xen: swiotlb: Open code map_resource callback
  dma-mapping: export new dma_*map_phys() interface
  mm/hmm: migrate to physical address-based DMA mapping API
  mm/hmm: properly take MMIO path
  block-dma: migrate to dma_map_phys instead of map_page
  block-dma: properly take MMIO path
  nvme-pci: unmap MMIO pages with appropriate interface

 Documentation/core-api/dma-api.rst        |   4 +-
 Documentation/core-api/dma-attributes.rst |  18 ++++
 arch/powerpc/kernel/dma-iommu.c           |   4 +-
 block/blk-mq-dma.c                        |  15 ++-
 drivers/iommu/dma-iommu.c                 |  61 +++++------
 drivers/nvme/host/pci.c                   |  18 +++-
 drivers/virtio/virtio_ring.c              |   4 +-
 drivers/xen/swiotlb-xen.c                 |  21 +++-
 include/linux/blk-mq-dma.h                |   6 +-
 include/linux/blk_types.h                 |   2 +
 include/linux/dma-direct.h                |   2 -
 include/linux/dma-map-ops.h               |   8 +-
 include/linux/dma-mapping.h               |  33 ++++++
 include/linux/iommu-dma.h                 |  11 +-
 include/linux/kmsan.h                     |   9 +-
 include/trace/events/dma.h                |   9 +-
 kernel/dma/debug.c                        |  71 ++++---------
 kernel/dma/debug.h                        |  37 ++-----
 kernel/dma/direct.c                       |  22 +---
 kernel/dma/direct.h                       |  52 ++++++----
 kernel/dma/mapping.c                      | 117 +++++++++++++---------
 kernel/dma/ops_helpers.c                  |   6 +-
 mm/hmm.c                                  |  19 ++--
 mm/kmsan/hooks.c                          |   5 +-
 rust/kernel/dma.rs                        |   3 +
 tools/virtio/linux/kmsan.h                |   2 +-
 26 files changed, 305 insertions(+), 254 deletions(-)

-- 
2.50.1