public inbox for linux-kernel@vger.kernel.org
From: Christoph Hellwig <hch@lst.de>
To: Keith Busch <kbusch@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>,
	Christoph Hellwig <hch@lst.de>,
	Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>,
	axboe@kernel.dk, sagi@grimberg.me,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	nitin.rawat@oss.qualcomm.com, Leon Romanovsky <leon@kernel.org>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	iommu@lists.linux.dev
Subject: Re: [PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
Date: Mon, 2 Feb 2026 18:36:24 +0100	[thread overview]
Message-ID: <20260202173624.GA32713@lst.de> (raw)
In-Reply-To: <aYDbNC1z81D3lZ2r@kbusch-mbp>

On Mon, Feb 02, 2026 at 10:13:24AM -0700, Keith Busch wrote:
> > "This function must be called after all mappings that might
> >  need to be unmapped have been performed."
> > 
> > Trying to infer anything from it beforehand is definitely a bug in the
> > caller.
> 
> Well that doesn't really make sense. No matter how many mappings the
> driver has done, there will always be more?

Yeah.  It's more like: if this returns true, it applies to all future
calls, plus the previous one (which might have caused it to start
returning true).  For that, something like the patch below should work
in nvme.  Totally untested as I'm about to head away from the desk to
prepare dinner.


diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..f944b747e1bd 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -816,6 +816,22 @@ static void nvme_unmap_data(struct request *req)
 		nvme_free_descriptors(req);
 }
 
+static bool nvme_pci_alloc_dma_vecs(struct request *req,
+		struct blk_dma_iter *iter)
+{
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+	iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool,
+			GFP_ATOMIC);
+	if (!iod->dma_vecs)
+		return false;
+	iod->dma_vecs[0].addr = iter->addr;
+	iod->dma_vecs[0].len = iter->len;
+	iod->nr_dma_vecs = 1;
+	return true;
+}
+
 static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter)
 {
@@ -826,6 +842,8 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
 	if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
 		return false;
 	if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+		if (!iod->nr_dma_vecs && !nvme_pci_alloc_dma_vecs(req, iter))
+			return false;
 		iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
 		iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
 		iod->nr_dma_vecs++;
@@ -844,13 +862,8 @@ static blk_status_t nvme_pci_setup_data_prp(struct request *req,
 	__le64 *prp_list;
 
 	if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(nvmeq->dev->dev)) {
-		iod->dma_vecs = mempool_alloc(nvmeq->dev->dmavec_mempool,
-				GFP_ATOMIC);
-		if (!iod->dma_vecs)
+		if (!nvme_pci_alloc_dma_vecs(req, iter))
 			return BLK_STS_RESOURCE;
-		iod->dma_vecs[0].addr = iter->addr;
-		iod->dma_vecs[0].len = iter->len;
-		iod->nr_dma_vecs = 1;
 	}
 
 	/*




Thread overview: 20+ messages
2026-02-02 12:57 [PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next Pradeep P V K
2026-02-02 14:35 ` Christoph Hellwig
2026-02-02 15:16   ` Robin Murphy
2026-02-02 15:58     ` Leon Romanovsky
2026-02-02 17:13     ` Keith Busch
2026-02-02 17:36       ` Christoph Hellwig [this message]
2026-02-02 18:59         ` Keith Busch
2026-02-03  5:27           ` Christoph Hellwig
2026-02-03  6:14             ` Keith Busch
2026-02-03  6:23               ` Christoph Hellwig
2026-02-03 14:05             ` Pradeep Pragallapati
2026-02-04 14:04               ` Pradeep Pragallapati
2026-02-04 14:27                 ` Keith Busch
2026-02-03  9:42           ` Leon Romanovsky
2026-02-03 13:50             ` Robin Murphy
2026-02-03 17:41               ` Keith Busch
2026-02-02 17:39       ` Robin Murphy
2026-02-02 15:22   ` Leon Romanovsky
2026-02-02 15:26     ` Robin Murphy
2026-02-02 17:18 ` Keith Busch
