public inbox for linux-scsi@vger.kernel.org
 help / color / mirror / Atom feed
From: Keith Busch <kbusch@kernel.org>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Ranjan Kumar <ranjan.kumar@broadcom.com>,
	linux-scsi@vger.kernel.org, martin.petersen@oracle.com,
	sathya.prakash@broadcom.com, chandrakanth.patil@broadcom.com,
	stable@vger.kernel.org, Mira Limbeck <m.limbeck@proxmox.com>
Subject: Re: [PATCH v1] mpt3sas: Limit NVMe request size to 2 MiB
Date: Fri, 10 Apr 2026 10:51:45 -0600	[thread overview]
Message-ID: <adkqob9VZ-G6mrsW@kbusch-mbp> (raw)
In-Reply-To: <eec3bbd3-5503-423b-bb3e-22657026d573@kernel.org>

On Fri, Apr 10, 2026 at 06:11:22AM +0200, Damien Le Moal wrote:
> On 2026/04/09 20:42, Ranjan Kumar wrote:
> 
> >  		if (pcie_device->nvme_mdts)
> > -			lim->max_hw_sectors = pcie_device->nvme_mdts / 512;
> > +			lim->max_hw_sectors = min_t(u32,
> > +					pcie_device->nvme_mdts / 512,
> > +					(SZ_2M / 512) - 8);
> > +		else
> > +			lim->max_hw_sectors = (SZ_2M / 512) - 8;
> 
> I am very confused here: SZ_2M assumes that you have an SSD with a minimum page
> size of 4K, which can fit 4K / 8 = 512 PRP entries, each referencing 4K (one
> page), so a maximum of 2MiB. However, if I am not mistaken, there is nothing in
> nvme specs that forces the MPS field to be 0 (which leads to a page size of 4K).
>
> So this seems incorrect to me, even though that will probably work for the vast
> majority of SSDs out there, some exotic ones will not be correctly supported.
> 
> Keith ? Am I missing something here ?
> 
> Or do we simply not care about SSDs with a minimum page size > 4K having
> their maximum command size truncated ?

Spec doesn't require it, but industry converged on that as always being
the minimum supported page size. The nvme driver rejects any device that
doesn't support 4k pages because they can't be reliably supported on a
lot of archs, even ones with larger page sizes. So it should be a safe
assumption that everyone supports 4k since no one is complaining. :)

On the patch, I initially left the "- 8" in the calculation to account
for page offsets. But it's not necessary because that gets absorbed in
PRP1 within the command, so we'd have at most 512 entries in the PRP
list for a 2M transfer.

  reply	other threads:[~2026-04-10 16:51 UTC|newest]

Thread overview: 4+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2026-04-09 18:42 [PATCH v1] mpt3sas: Limit NVMe request size to 2 MiB Ranjan Kumar
2026-04-10  4:11 ` Damien Le Moal
2026-04-10 16:51   ` Keith Busch [this message]
2026-04-11  5:49     ` Ranjan Kumar

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=adkqob9VZ-G6mrsW@kbusch-mbp \
    --to=kbusch@kernel.org \
    --cc=chandrakanth.patil@broadcom.com \
    --cc=dlemoal@kernel.org \
    --cc=linux-scsi@vger.kernel.org \
    --cc=m.limbeck@proxmox.com \
    --cc=martin.petersen@oracle.com \
    --cc=ranjan.kumar@broadcom.com \
    --cc=sathya.prakash@broadcom.com \
    --cc=stable@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox