From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
	Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
	iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel,
	Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com,
	Keith Busch, linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan,
	Masami Hiramatsu, Michael Ellerman, "Michael S. Tsirkin",
	Miguel Ojeda, Robin Murphy, rust-for-linux@vger.kernel.org,
	Sagi Grimberg, Stefano Stabellini, Steven Rostedt,
	virtualization@lists.linux.dev, Will Deacon,
	xen-devel@lists.xenproject.org
Subject: [PATCH v3 16/16] nvme-pci: unmap MMIO pages with appropriate interface
Date: Thu, 14 Aug 2025 20:54:07 +0300
Message-ID: <16e541279b4b030de54a0a2f1829e601b7923523.1755193625.git.leon@kernel.org>
X-Mailer: git-send-email 2.50.1

From: Leon Romanovsky

The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute.
Such memory needs to be unmapped with the matching unmap interface,
which wasn't possible before the previous patch added the new REQ_MMIO
flag to the block layer.

Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/pci.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2c6d9506b172..f8ecc0e0f576 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-			       iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+			       iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
 	mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
 }
 
@@ -699,15 +703,19 @@ static void nvme_free_sgls(struct request *req)
 	unsigned int sqe_dma_len = le32_to_cpu(iod->cmd.common.dptr.sgl.length);
 	struct nvme_sgl_desc *sg_list = iod->descriptors[0];
 	enum dma_data_direction dir = rq_dma_dir(req);
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	if (iod->nr_descriptors) {
 		unsigned int nr_entries = sqe_dma_len / sizeof(*sg_list), i;
 
 		for (i = 0; i < nr_entries; i++)
-			dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr),
-				       le32_to_cpu(sg_list[i].length), dir);
+			dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr),
+				       le32_to_cpu(sg_list[i].length), dir, attrs);
 	} else {
-		dma_unmap_page(dma_dev, sqe_dma_addr, sqe_dma_len, dir);
+		dma_unmap_phys(dma_dev, sqe_dma_addr, sqe_dma_len, dir, attrs);
 	}
 }
-- 
2.50.1
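
[Editor's note: a minimal sketch of the map/unmap pairing this patch
completes. It is not part of the patch. The example_* helpers and the
bool plumbing on the map side are illustrative assumptions; only
dma_map_phys(), dma_unmap_phys(), DMA_ATTR_MMIO and REQ_MMIO come from
this series.]

/*
 * Sketch: an attribute passed at map time must be repeated at unmap
 * time, so the MMIO-ness of a mapping has to be remembered somewhere
 * across the request's lifetime. In this series that "somewhere" is
 * the REQ_MMIO flag in req->cmd_flags.
 */
#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>

static dma_addr_t example_map(struct device *dev, phys_addr_t phys,
			      size_t len, enum dma_data_direction dir,
			      bool is_mmio)
{
	/* DMA_ATTR_MMIO marks the region as MMIO rather than RAM. */
	unsigned int attrs = is_mmio ? DMA_ATTR_MMIO : 0;

	return dma_map_phys(dev, phys, len, dir, attrs);
}

static void example_unmap(struct device *dev, struct request *req,
			  dma_addr_t addr, size_t len,
			  enum dma_data_direction dir)
{
	unsigned int attrs = 0;

	/*
	 * REQ_MMIO, added to the block layer in the previous patch,
	 * records that the request was mapped as MMIO, so the unmap
	 * side can pass the matching DMA_ATTR_MMIO attribute -- the
	 * same pattern nvme_free_prps() and nvme_free_sgls() follow
	 * in the diff above.
	 */
	if (req->cmd_flags & REQ_MMIO)
		attrs = DMA_ATTR_MMIO;

	dma_unmap_phys(dev, addr, len, dir, attrs);
}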