From: Christoph Hellwig <hch@lst.de>
To: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>,
	Ben Copeland <ben.copeland@linaro.org>,
	linux-kernel@vger.kernel.org, lkft-triage@lists.linaro.org,
	regressions@lists.linux.dev, linux-nvme@lists.infradead.org,
	Dan Carpenter <dan.carpenter@linaro.org>,
	axboe@kernel.dk, sagi@grimberg.me, iommu@lists.linux.dev,
	Leon Romanovsky <leonro@nvidia.com>
Subject: Re: next-20250627: IOMMU DMA warning during NVMe I/O completion after 06cae0e3f61c
Date: Tue, 1 Jul 2025 15:29:36 +0200
Message-ID: <20250701132936.GA18807@lst.de>
In-Reply-To: <aGLyswGYD6Zai_sI@kbusch-mbp>

On Mon, Jun 30, 2025 at 02:25:23PM -0600, Keith Busch wrote:
> I think the PRP handling is broken. At the very least, handling the last
> element is wrong if it appears at the end of the list, so I think we
> need something like this:

Yeah.
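
For background, the NVMe spec chains PRP list pages through their
final entry: when a transfer needs more than one page worth of PRPs,
the last slot of each list page holds the bus address of the next
list page rather than a data pointer, so a teardown walk has to
special-case that slot.  A minimal standalone sketch of the rule
(hypothetical names, not the kernel code):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CTRL_PAGE_SIZE	4096u
#define PRPS_PER_PAGE	(CTRL_PAGE_SIZE / sizeof(uint64_t))	/* 512 */

/* True if slot idx holds a chain pointer rather than a data PRP. */
static bool prp_slot_is_chain(size_t idx, size_t entries_remaining)
{
	return idx == PRPS_PER_PAGE - 1 && entries_remaining > 1;
}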

> But even with that, the PRP setup doesn't match the teardown. We're
> calling dma_unmap_page() on each PRP even if consecutive PRPs came from
> the same DMA mapping segment. So even if the mapping had been coalesced,
> a device that doesn't support SGLs would still take the PRP unmap path.

Yes, that's broken, and I remember fixing it before.  A little digging
shows that my fixes disappeared somewhere between the Oct 30 version of
Leon's dma-split branch and the latest one.  Below is what should
restore it, but at least when forcing my Intel IOMMU down this path it
still has issues with PTEs already set.  So maybe Ben should not try
it quite yet.  I'll try to get to it, but my availability today and
tomorrow is a bit limited.
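
To make the setup/teardown mismatch concrete, here is a standalone
sketch of the coalescing idea the patch below implements (hypothetical
names, not the kernel code): bus addresses that were merged into one
mapping must be merged back into a single unmap call on teardown.

#include <stddef.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Hypothetical stand-in for dma_unmap_page(). */
typedef void (*unmap_fn)(dma_addr_t start, size_t len);

/* Walk n >= 1 (addr, len) pairs, merging physically adjacent runs. */
static void unmap_coalesced(const dma_addr_t *addr, const size_t *len,
			    size_t n, unmap_fn unmap)
{
	dma_addr_t start = addr[0];
	size_t run = len[0];
	size_t i;

	for (i = 1; i < n; i++) {
		if (addr[i] == start + run) {
			run += len[i];		/* contiguous: extend the run */
		} else {
			unmap(start, run);	/* gap: flush the current run */
			start = addr[i];
			run = len[i];
		}
	}
	unmap(start, run);			/* flush the final run */
}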


diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 38be1505dbd9..02bb5cf5db1a 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -678,40 +678,55 @@ static void nvme_free_prps(struct request *req)
 	enum dma_data_direction dir = rq_dma_dir(req);
 	int length = iod->total_len;
 	dma_addr_t dma_addr;
-	int i, desc;
+	int prp_len, i, desc;
 	__le64 *prp_list;
+	dma_addr_t dma_start;
 	u32 dma_len;
 
 	dma_addr = le64_to_cpu(iod->cmd.common.dptr.prp1);
-	dma_len = min_t(u32, length,
-		NVME_CTRL_PAGE_SIZE - (dma_addr & (NVME_CTRL_PAGE_SIZE - 1)));
-	length -= dma_len;
+	prp_len = NVME_CTRL_PAGE_SIZE - (dma_addr & (NVME_CTRL_PAGE_SIZE - 1));
+	prp_len = min(length, prp_len);
+	length -= prp_len;
 	if (!length) {
-		dma_unmap_page(dma_dev, dma_addr, dma_len, dir);
+		dma_unmap_page(dma_dev, dma_addr, prp_len, dir);
 		return;
 	}
 
+	dma_start = dma_addr;
+	dma_len = prp_len;
+	dma_addr = le64_to_cpu(iod->cmd.common.dptr.prp2);
+
 	if (length <= NVME_CTRL_PAGE_SIZE) {
-		dma_unmap_page(dma_dev, dma_addr, dma_len, dir);
-		dma_addr = le64_to_cpu(iod->cmd.common.dptr.prp2);
-		dma_unmap_page(dma_dev, dma_addr, length, dir);
-		return;
+		if (dma_addr != dma_start + dma_len) {
+			dma_unmap_page(dma_dev, dma_start, dma_len, dir);
+			dma_start = dma_addr;
+			dma_len = 0;
+		}
+		dma_len += length;
+		goto done;
 	}
 
 	i = 0;
 	desc = 0;
 	prp_list = iod->descriptors[desc];
 	do {
-		dma_unmap_page(dma_dev, dma_addr, dma_len, dir);
 		if (i == NVME_CTRL_PAGE_SIZE >> 3) {
 			prp_list = iod->descriptors[++desc];
 			i = 0;
 		}
 
 		dma_addr = le64_to_cpu(prp_list[i++]);
-		dma_len = min(length, NVME_CTRL_PAGE_SIZE);
-		length -= dma_len;
+		if (dma_addr != dma_start + dma_len) {
+			dma_unmap_page(dma_dev, dma_start, dma_len, dir);
+			dma_start = dma_addr;
+			dma_len = 0;
+		}
+		prp_len = min(length, NVME_CTRL_PAGE_SIZE);
+		dma_len += prp_len;
+		length -= prp_len;
 	} while (length);
+done:
+	dma_unmap_page(dma_dev, dma_start, dma_len, dir);
 }
 
 static void nvme_free_sgls(struct request *req)



Thread overview: 15+ messages
     [not found] <CGME20250630075218epcas5p1f3d467fffa468cac0ccd012c193e94df@epcas5p1.samsung.com>
2025-06-30  7:50 ` next-20250627: IOMMU DMA warning during NVMe I/O completion after 06cae0e3f61c Ben Copeland
2025-06-30 13:33   ` Christoph Hellwig
2025-06-30 19:51     ` Ben Copeland
2025-07-01 19:43       ` Keith Busch
2025-06-30 20:25     ` Keith Busch
2025-07-01  0:54       ` Keith Busch
2025-07-01 13:05       ` Ben Copeland
2025-07-01 14:20         ` Keith Busch
2025-07-01 13:29       ` Christoph Hellwig [this message]
2025-07-01 15:58         ` Leon Romanovsky
2025-07-01 21:00         ` Keith Busch
2025-07-03  9:30           ` Christoph Hellwig
2025-07-03 14:29             ` Keith Busch
2025-07-03 15:13               ` Ben Copeland
2025-07-03  5:54   ` Kanchan Joshi
