From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
	Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
	iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel,
	Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com,
	Keith Busch, linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan,
	Masami Hiramatsu, Michael Ellerman, "Michael S. Tsirkin",
	Miguel Ojeda, Robin Murphy, rust-for-linux@vger.kernel.org,
	Sagi Grimberg, Stefano Stabellini, Steven Rostedt,
	virtualization@lists.linux.dev, Will Deacon,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 08/16] kmsan: convert kmsan_handle_dma to use physical addresses
Date: Thu, 14 Aug 2025 13:13:26 +0300

From: Leon Romanovsky

Convert the KMSAN DMA handling function from a page-based to a physical
address-based interface.

The refactoring changes the kmsan_handle_dma() parameters from
(struct page *page, size_t offset, size_t size) to
(phys_addr_t phys, size_t size). A pfn_valid() check is added so that
KMSAN skips addresses that are not backed by a struct page. As part of
this change, highmem addresses are supported by mapping them with
kmap_local_page(), so both lowmem and highmem regions are handled
properly.

All callers throughout the codebase are updated to use the new
phys_addr_t based interface.

Signed-off-by: Leon Romanovsky
---
 drivers/virtio/virtio_ring.c |  4 ++--
 include/linux/kmsan.h        | 12 +++++++-----
 kernel/dma/mapping.c         |  2 +-
 mm/kmsan/hooks.c             | 36 +++++++++++++++++++++++++++++-------
 tools/virtio/linux/kmsan.h   |  2 +-
 5 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f5062061c408..c147145a6593 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -378,7 +378,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
		 * is initialized by the hardware. Explicitly check/unpoison it
		 * depending on the direction.
		 */
-		kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);
+		kmsan_handle_dma(sg_phys(sg), sg->length, direction);
		*addr = (dma_addr_t)sg_phys(sg);
		return 0;
	}
@@ -3157,7 +3157,7 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr,
	struct vring_virtqueue *vq = to_vvq(_vq);

	if (!vq->use_dma_api) {
-		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
+		kmsan_handle_dma(virt_to_phys(ptr), size, dir);
		return (dma_addr_t)virt_to_phys(ptr);
	}

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index 2b1432cc16d5..6f27b9824ef7 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -182,8 +182,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);

 /**
  * kmsan_handle_dma() - Handle a DMA data transfer.
- * @page: first page of the buffer.
- * @offset: offset of the buffer within the first page.
+ * @phys: physical address of the buffer.
  * @size: buffer size.
  * @dir: one of possible dma_data_direction values.
  *
@@ -191,8 +190,11 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);
  *  * checks the buffer, if it is copied to device;
  *  * initializes the buffer, if it is copied from device;
  *  * does both, if this is a DMA_BIDIRECTIONAL transfer.
+ *
+ * The function handles page lookup internally and supports both lowmem
+ * and highmem addresses.
  */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
		      enum dma_data_direction dir);

 /**
@@ -372,8 +374,8 @@ static inline void kmsan_iounmap_page_range(unsigned long start,
 {
 }

-static inline void kmsan_handle_dma(struct page *page, size_t offset,
-				    size_t size, enum dma_data_direction dir)
+static inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
+				    enum dma_data_direction dir)
 {
 }

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 80481a873340..709405d46b2b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -172,7 +172,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
	else
		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	kmsan_handle_dma(page, offset, size, dir);
+	kmsan_handle_dma(phys, size, dir);
	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 97de3d6194f0..eab7912a3bf0 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -336,25 +336,48 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
 }

 /* Helper function to handle DMA data transfers. */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
		      enum dma_data_direction dir)
 {
	u64 page_offset, to_go, addr;
+	struct page *page;
+	void *kaddr;

-	if (PageHighMem(page))
+	if (!pfn_valid(PHYS_PFN(phys)))
		return;
-	addr = (u64)page_address(page) + offset;
+
+	page = phys_to_page(phys);
+	page_offset = offset_in_page(phys);
+
	/*
	 * The kernel may occasionally give us adjacent DMA pages not belonging
	 * to the same allocation. Process them separately to avoid triggering
	 * internal KMSAN checks.
	 */
	while (size > 0) {
-		page_offset = offset_in_page(addr);
		to_go = min(PAGE_SIZE - page_offset, (u64)size);
+
+		if (PageHighMem(page))
+			/* Handle highmem pages using kmap */
+			kaddr = kmap_local_page(page);
+		else
+			/* Lowmem pages can be accessed directly */
+			kaddr = page_address(page);
+
+		addr = (u64)kaddr + page_offset;
		kmsan_handle_dma_page((void *)addr, to_go, dir);
-		addr += to_go;
+
+		if (PageHighMem(page))
+			kunmap_local(kaddr);
+
+		phys += to_go;
		size -= to_go;
+
+		/* Move to next page if needed */
+		if (size > 0) {
+			page = phys_to_page(phys);
+			page_offset = offset_in_page(phys);
+		}
	}
 }
 EXPORT_SYMBOL_GPL(kmsan_handle_dma);
@@ -366,8 +389,7 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
	int i;

	for_each_sg(sg, item, nents, i)
-		kmsan_handle_dma(sg_page(item), item->offset, item->length,
-				 dir);
+		kmsan_handle_dma(sg_phys(item), item->length, dir);
 }

 /* Functions from kmsan-checks.h follow. */
diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h
index 272b5aa285d5..6cd2e3efd03d 100644
--- a/tools/virtio/linux/kmsan.h
+++ b/tools/virtio/linux/kmsan.h
@@ -4,7 +4,7 @@

 #include

-inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
			     enum dma_data_direction dir)
 {
 }
-- 
2.50.1