public inbox for linux-kernel@vger.kernel.org
* [PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next
@ 2026-02-02 12:57 Pradeep P V K
  2026-02-02 14:35 ` Christoph Hellwig
  2026-02-02 17:18 ` Keith Busch
  0 siblings, 2 replies; 20+ messages in thread
From: Pradeep P V K @ 2026-02-02 12:57 UTC (permalink / raw)
  To: kbusch, axboe, hch, sagi
  Cc: linux-nvme, linux-kernel, nitin.rawat, Pradeep P V K

Fix a NULL pointer dereference that occurs in nvme_pci_prp_iter_next()
when SWIOTLB bounce buffering becomes active at runtime.

SWIOTLB activation changes the device's DMA mapping requirements after
the driver has already made its allocation decisions, creating a
mismatch between the iod->dma_vecs allocation and access logic.

The problem manifests when:
1. Device initially operates with dma_skip_sync=true
   (coherent DMA assumed)
2. First SWIOTLB mapping occurs due to DMA address limitations,
   memory encryption, or IOMMU bounce buffering requirements
3. SWIOTLB calls dma_reset_need_sync(), permanently setting
   dma_skip_sync=false
4. Subsequent I/Os now have dma_need_unmap()=true, requiring
   iod->dma_vecs

The issue arises from the timing of allocation versus access:
- nvme_pci_setup_data_prp() allocates iod->dma_vecs only when both
  (!dma_use_iova() && dma_need_unmap()) conditions are met
- nvme_pci_prp_iter_next() assumes iod->dma_vecs is valid whenever
  the same conditions are true, without NULL checking
- This creates a race where the device's DMA requirements change
  dynamically after the initial allocation decision, leading to a NULL
  pointer dereference

  Unable to handle kernel NULL pointer dereference at
  virtual address 0000000000000000
  pc : nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
  Call trace:
   nvme_pci_prp_iter_next+0xe4/0x128 [nvme]
   nvme_prep_rq+0x5f4/0xa6c [nvme]
   nvme_queue_rqs+0xa8/0x18c [nvme]
   blk_mq_dispatch_queue_requests.constprop.0+0x108/0x120
   blk_mq_flush_plug_list+0x8c/0x174
   __blk_flush_plug+0xe4/0x140
   blk_finish_plug+0x38/0x4c
   read_pages+0x184/0x288
   page_cache_ra_order+0x1e0/0x3a4
   filemap_fault+0x518/0xa90
   __do_fault+0x3c/0x22c
   __handle_mm_fault+0x10ec/0x19b8
   handle_mm_fault+0xb4/0x294

Fix this by:
1. Initializing iod->dma_vecs to NULL in nvme_prep_rq()
2. Adding a NULL pointer check before accessing iod->dma_vecs in
   nvme_pci_prp_iter_next()
3. Setting iod->dma_vecs to NULL after freeing, as defensive
   programming

Fixes: b8b7570a7ec8 ("nvme-pci: fix dma unmapping when using PRPs and not using the IOVA mapping")
Co-developed-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Nitin Rawat <nitin.rawat@oss.qualcomm.com>
Signed-off-by: Pradeep P V K <pradeep.pragallapati@oss.qualcomm.com>
---
 drivers/nvme/host/pci.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2a52cf46d960..e235654e7ee0 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -720,6 +720,7 @@ static void nvme_free_prps(struct request *req, unsigned int attrs)
 		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
 			       iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
 	mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
+	iod->dma_vecs = NULL;
 }
 
 static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
@@ -825,7 +826,7 @@ static bool nvme_pci_prp_iter_next(struct request *req, struct device *dma_dev,
 		return true;
 	if (!blk_rq_dma_map_iter_next(req, dma_dev, iter))
 		return false;
-	if (!dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
+	if (iod->dma_vecs && !dma_use_iova(&iod->dma_state) && dma_need_unmap(dma_dev)) {
 		iod->dma_vecs[iod->nr_dma_vecs].addr = iter->addr;
 		iod->dma_vecs[iod->nr_dma_vecs].len = iter->len;
 		iod->nr_dma_vecs++;
@@ -1218,6 +1219,7 @@ static blk_status_t nvme_prep_rq(struct request *req)
 	iod->nr_descriptors = 0;
 	iod->total_len = 0;
 	iod->meta_total_len = 0;
+	iod->dma_vecs = NULL;
 
 	ret = nvme_setup_cmd(req->q->queuedata, req);
 	if (ret)
-- 
2.34.1



end of thread, other threads:[~2026-02-04 14:27 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-02 12:57 [PATCH V1] nvme-pci: Fix NULL pointer dereference in nvme_pci_prp_iter_next Pradeep P V K
2026-02-02 14:35 ` Christoph Hellwig
2026-02-02 15:16   ` Robin Murphy
2026-02-02 15:58     ` Leon Romanovsky
2026-02-02 17:13     ` Keith Busch
2026-02-02 17:36       ` Christoph Hellwig
2026-02-02 18:59         ` Keith Busch
2026-02-03  5:27           ` Christoph Hellwig
2026-02-03  6:14             ` Keith Busch
2026-02-03  6:23               ` Christoph Hellwig
2026-02-03 14:05             ` Pradeep Pragallapati
2026-02-04 14:04               ` Pradeep Pragallapati
2026-02-04 14:27                 ` Keith Busch
2026-02-03  9:42           ` Leon Romanovsky
2026-02-03 13:50             ` Robin Murphy
2026-02-03 17:41               ` Keith Busch
2026-02-02 17:39       ` Robin Murphy
2026-02-02 15:22   ` Leon Romanovsky
2026-02-02 15:26     ` Robin Murphy
2026-02-02 17:18 ` Keith Busch
