From: Christoph Hellwig <hch@lst.de>
To: Keith Busch <kbusch@kernel.org>
Cc: Abhishek Bapat <abhishekbapat@google.com>,
Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
Sagi Grimberg <sagi@grimberg.me>,
Prashant Malani <pmalani@google.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring namespaces
Date: Fri, 18 Oct 2024 07:14:10 +0200 [thread overview]
Message-ID: <20241018051410.GE19831@lst.de> (raw)
In-Reply-To: <ZxE-BE4hLVRR2Zcp@kbusch-mbp.dhcp.thefacebook.com>
On Thu, Oct 17, 2024 at 10:40:36AM -0600, Keith Busch wrote:
> On Wed, Oct 16, 2024 at 09:31:08PM +0000, Abhishek Bapat wrote:
> > max_hw_sectors based on DMA optimized limitation") introduced a
> > limitation on the value of max_hw_sectors_kb, restricting it to 128KiB
> > (MDTS = 5). This restriction was implemented to mitigate lockups
> > encountered in high-core-count AMD servers.
>
> There are other limits that can constrain transfer sizes below the
> device's MDTS. For example, the driver can only preallocate so much
> space for DMA and SGL descriptors, so 8MB is the current max transfer
> sizes the driver can support, and a device's MDTS can be much bigger
> than that.
Yes. Plus the virt boundary for PRPs, and for non-PCIe transports
there are also plenty of other hardware limits due to e.g. FC HBA
and RDMA HCA limits. There's also been some talk of a new PCIe
SGL variant with hard limits.
So I agree that exposing limits on I/O would be very useful, but it's
also kinda non-trivial.
> Anyway, yeah, I guess having a controller generic way to export this
> sounds like a good idea, but I wonder if the nvme driver is the right
> place to do it. The request_queue has all the limits you need to know
> about, but these are only exported if a gendisk is attached to it.
> Maybe we can create a queue subdirectory to the char dev too.
If we want it controller wide to e.g. include the admin queue the
gendisk won't really help unfortunately.
Thread overview: 14+ messages
2024-10-16 21:31 [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring namespaces Abhishek Bapat
2024-10-16 21:54 ` Prashant Malani
2024-10-17 21:09 ` Abhishek Bapat
2024-10-17 16:40 ` Keith Busch
2024-10-17 17:01 ` Caleb Sander
2024-10-17 21:32 ` Abhishek Bapat
2024-10-22 14:53 ` Keith Busch
2024-10-22 15:35 ` Sagi Grimberg
2024-10-22 15:51 ` Keith Busch
2024-10-23 9:47 ` Sagi Grimberg
2024-10-23 5:24 ` Christoph Hellwig
2024-10-23 9:46 ` Sagi Grimberg
2024-10-18 5:14 ` Christoph Hellwig [this message]
2024-10-20 21:25 ` Sagi Grimberg