public inbox for linux-nvme@lists.infradead.org
* [PATCH] nvme-pci: Fix mempool alloc size
@ 2022-12-19 18:59 ` Keith Busch
  2022-12-19 19:07   ` Jens Axboe
                     ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Keith Busch @ 2022-12-19 18:59 UTC (permalink / raw)
  To: linux-nvme, hch; +Cc: sagi, Keith Busch, Jens Axboe

From: Keith Busch <kbusch@kernel.org>

Convert the max size to bytes to match the units of the divisor that
calculates the worst-case number of PRP entries.

The result is used to determine how many PRP Lists are required. The
code was previously rounding this to 1 list, but we can require 2 in the
worst case. In that scenario, the driver would corrupt memory beyond the
size provided by the mempool.

While unlikely to occur (you'd need a 4MB I/O in exactly 127 phys
segments on a queue that doesn't support SGLs), this memory corruption
has been observed by KFENCE.

Cc: Jens Axboe <axboe@kernel.dk>
Fixes: 943e942e6266f ("nvme-pci: limit max IO size and segments to avoid high order allocations")
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
 drivers/nvme/host/pci.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index f0f8027644bbf..fa182fcd4c3e8 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -380,8 +380,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
  */
 static int nvme_pci_npages_prp(void)
 {
-	unsigned nprps = DIV_ROUND_UP(NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE,
-				      NVME_CTRL_PAGE_SIZE);
+	unsigned max_bytes = (NVME_MAX_KB_SZ * 1024) + NVME_CTRL_PAGE_SIZE;
+	unsigned nprps = DIV_ROUND_UP(max_bytes, NVME_CTRL_PAGE_SIZE);
 	return DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8);
 }
 
-- 
2.30.2



^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH] nvme-pci: Fix mempool alloc size
  2022-12-19 18:59 ` [PATCH] nvme-pci: Fix mempool alloc size Keith Busch
@ 2022-12-19 19:07   ` Jens Axboe
  2022-12-20  6:08   ` Chaitanya Kulkarni
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Jens Axboe @ 2022-12-19 19:07 UTC (permalink / raw)
  To: Keith Busch, linux-nvme, hch; +Cc: sagi, Keith Busch

On 12/19/22 11:59 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Convert the max size to bytes to match the units of the divisor that
> calculates the worst-case number of PRP entries.
> 
> The result is used to determine how many PRP Lists are required. The
> code was previously rounding this to 1 list, but we can require 2 in the
> worst case. In that scenario, the driver would corrupt memory beyond the
> size provided by the mempool.
> 
> While unlikely to occur (you'd need a 4MB I/O in exactly 127 phys
> segments on a queue that doesn't support SGLs), this memory corruption
> has been observed by KFENCE.

Good catch!

Reviewed-by: Jens Axboe <axboe@kernel.dk>

-- 
Jens Axboe





* Re: [PATCH] nvme-pci: Fix mempool alloc size
  2022-12-19 18:59 ` [PATCH] nvme-pci: Fix mempool alloc size Keith Busch
  2022-12-19 19:07   ` Jens Axboe
@ 2022-12-20  6:08   ` Chaitanya Kulkarni
  2022-12-21  8:02   ` Kanchan Joshi
  2022-12-21  8:04   ` Christoph Hellwig
  3 siblings, 0 replies; 5+ messages in thread
From: Chaitanya Kulkarni @ 2022-12-20  6:08 UTC (permalink / raw)
  To: Keith Busch
  Cc: sagi@grimberg.me, linux-nvme@lists.infradead.org, Keith Busch,
	hch@lst.de

On 12/19/22 10:59, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
> 
> Convert the max size to bytes to match the units of the divisor that
> calculates the worst-case number of PRP entries.
> 
> The result is used to determine how many PRP Lists are required. The
> code was previously rounding this to 1 list, but we can require 2 in the
> worst case. In that scenario, the driver would corrupt memory beyond the
> size provided by the mempool.
> 
> While unlikely to occur (you'd need a 4MB I/O in exactly 127 phys
> segments on a queue that doesn't support SGLs), this memory corruption
> has been observed by KFENCE.
> 
> Cc: Jens Axboe <axboe@kernel.dk>
> Fixes: 943e942e6266f ("nvme-pci: limit max IO size and segments to avoid high order allocations")
> Signed-off-by: Keith Busch <kbusch@kernel.org>
> ---

hmm, surprising to see that we never caught this until today...

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck



* Re: [PATCH] nvme-pci: Fix mempool alloc size
  2022-12-19 18:59 ` [PATCH] nvme-pci: Fix mempool alloc size Keith Busch
  2022-12-19 19:07   ` Jens Axboe
  2022-12-20  6:08   ` Chaitanya Kulkarni
@ 2022-12-21  8:02   ` Kanchan Joshi
  2022-12-21  8:04   ` Christoph Hellwig
  3 siblings, 0 replies; 5+ messages in thread
From: Kanchan Joshi @ 2022-12-21  8:02 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-nvme, hch, sagi, Keith Busch, Jens Axboe

[-- Attachment #1: Type: text/plain, Size: 1458 bytes --]

On Mon, Dec 19, 2022 at 10:59:06AM -0800, Keith Busch wrote:
>From: Keith Busch <kbusch@kernel.org>
>
>Convert the max size to bytes to match the units of the divisor that
>calculates the worst-case number of PRP entries.
>
>The result is used to determine how many PRP Lists are required. The
>code was previously rounding this to 1 list, but we can require 2 in the
>worst case. In that scenario, the driver would corrupt memory beyond the
>size provided by the mempool.
>
>While unlikely to occur (you'd need a 4MB I/O in exactly 127 phys
>segments on a queue that doesn't support SGLs), this memory corruption
>has been observed by KFENCE.
>
>Cc: Jens Axboe <axboe@kernel.dk>
>Fixes: 943e942e6266f ("nvme-pci: limit max IO size and segments to avoid high order allocations")
>Signed-off-by: Keith Busch <kbusch@kernel.org>
>---
> drivers/nvme/host/pci.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
>index f0f8027644bbf..fa182fcd4c3e8 100644
>--- a/drivers/nvme/host/pci.c
>+++ b/drivers/nvme/host/pci.c
>@@ -380,8 +380,8 @@ static bool nvme_dbbuf_update_and_check_event(u16 value, u32 *dbbuf_db,
>  */
> static int nvme_pci_npages_prp(void)
> {
>-	unsigned nprps = DIV_ROUND_UP(NVME_MAX_KB_SZ + NVME_CTRL_PAGE_SIZE,
>-				      NVME_CTRL_PAGE_SIZE);

A similar calculation is present in apple.c too.
Regardless, this looks good.
Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>






* Re: [PATCH] nvme-pci: Fix mempool alloc size
  2022-12-19 18:59 ` [PATCH] nvme-pci: Fix mempool alloc size Keith Busch
                     ` (2 preceding siblings ...)
  2022-12-21  8:02   ` Kanchan Joshi
@ 2022-12-21  8:04   ` Christoph Hellwig
  3 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2022-12-21  8:04 UTC (permalink / raw)
  To: Keith Busch; +Cc: linux-nvme, hch, sagi, Keith Busch, Jens Axboe

Thanks,

applied to nvme-6.2.



end of thread, other threads:[~2022-12-21  8:07 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CGME20221219195515epcas5p200e1e1f3430ac6f494a66740bfb6a3b3@epcas5p2.samsung.com>
2022-12-19 18:59 ` [PATCH] nvme-pci: Fix mempool alloc size Keith Busch
2022-12-19 19:07   ` Jens Axboe
2022-12-20  6:08   ` Chaitanya Kulkarni
2022-12-21  8:02   ` Kanchan Joshi
2022-12-21  8:04   ` Christoph Hellwig
