From: Keith Busch <kbusch@kernel.org>
To: Abhishek Bapat <abhishekbapat@google.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
Sagi Grimberg <sagi@grimberg.me>,
Prashant Malani <pmalani@google.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring namespaces
Date: Tue, 22 Oct 2024 08:53:47 -0600 [thread overview]
Message-ID: <Zxe8e2zS5dA61Jou@kbusch-mbp> (raw)
In-Reply-To: <CAL41Mv4_UjsD1ycpNU1xuQJdGWMf2L-SQYs=LupoM9BKurNXCg@mail.gmail.com>
On Thu, Oct 17, 2024 at 02:32:18PM -0700, Abhishek Bapat wrote:
> On Thu, Oct 17, 2024 at 9:40 AM Keith Busch <kbusch@kernel.org> wrote:
> >
> > On Wed, Oct 16, 2024 at 09:31:08PM +0000, Abhishek Bapat wrote:
> > > max_hw_sectors based on DMA optimized limitation") introduced a
> > > limitation on the value of max_hw_sectors_kb, restricting it to 128KiB
> > > (MDTS = 5). This restriction was implemented to mitigate lockups
> > > encountered in high core-count AMD servers.
> >
> > There are other limits that can constrain transfer sizes below the
> > device's MDTS. For example, the driver can only preallocate so much
> > space for DMA and SGL descriptors, so 8MB is the current max transfer
> > sizes the driver can support, and a device's MDTS can be much bigger
> > than that.
> >
> > Anyway, yeah, I guess having a controller generic way to export this
> > sounds like a good idea, but I wonder if the nvme driver is the right
> > place to do it. The request_queue has all the limits you need to know
> > about, but these are only exported if a gendisk is attached to it.
> > Maybe we can create a queue subdirectory to the char dev too.
>
> Are you suggesting that all the files from the queue subdirectory should
> be included in the char dev (/sys/class/nvme/nvmeX/queue/)? Or that
> just the max_hw_sectors_kb value should be shared within the queue
> subdirectory? And if not the nvme driver, where else can this be done
> from?
You may want to know max_sectors_kb, dma_alignment, nr_requests,
virt_boundary_mask. Maybe some others.
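For reference, those are all attributes the block layer already exposes
today under /sys/block/<disk>/queue/ when a gendisk exists. A quick
sketch of reading whichever of them a queue directory provides (the
attribute names are the standard block-layer sysfs files; the helper
name is just for illustration):

```python
import glob
import os

# Standard block-layer queue attributes mentioned above.
ATTRS = ["max_sectors_kb", "max_hw_sectors_kb", "dma_alignment",
         "nr_requests", "virt_boundary_mask"]

def queue_limits(queue_dir):
    """Return whichever of the attributes exist under one queue/ dir."""
    limits = {}
    for attr in ATTRS:
        path = os.path.join(queue_dir, attr)
        if os.path.isfile(path):
            with open(path) as f:
                limits[attr] = f.read().strip()
    return limits

# Walk every attached gendisk's queue directory (prints nothing on
# systems without block devices).
for q in glob.glob("/sys/block/*/queue"):
    print(os.path.basename(os.path.dirname(q)), queue_limits(q))
```

The point of the proposal is that none of this is reachable when only
the char dev exists and no namespace is attached.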
The request_queue is owned by the block layer, so that seems like an
okay place to export it, but attached to some other device's sysfs
directory instead of a gendisk.
I'm just suggesting this because it doesn't sound like an nvme-specific
problem.
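To put numbers on the limits discussed earlier: MDTS expresses the
device's maximum transfer as 2^MDTS units of the controller's minimum
memory page size, and the driver's descriptor preallocation caps it
further. A rough sketch, assuming a 4KiB minimum page size and the 8MB
driver cap mentioned above:

```python
def max_transfer_kib(mdts, mpsmin_bytes=4096, driver_cap_kib=8 * 1024):
    """Effective max transfer: the MDTS-derived device limit,
    clamped by the driver's preallocated descriptor space."""
    dev_kib = (1 << mdts) * mpsmin_bytes // 1024
    return min(dev_kib, driver_cap_kib)

print(max_transfer_kib(5))   # 128 KiB: the clamp from the patch description
print(max_transfer_kib(12))  # 8192 KiB: the driver cap wins over a large MDTS
```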
Thread overview: 14+ messages
2024-10-16 21:31 [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring namespaces Abhishek Bapat
2024-10-16 21:54 ` Prashant Malani
2024-10-17 21:09 ` Abhishek Bapat
2024-10-17 16:40 ` Keith Busch
2024-10-17 17:01 ` Caleb Sander
2024-10-17 21:32 ` Abhishek Bapat
2024-10-22 14:53 ` Keith Busch [this message]
2024-10-22 15:35 ` Sagi Grimberg
2024-10-22 15:51 ` Keith Busch
2024-10-23 9:47 ` Sagi Grimberg
2024-10-23 5:24 ` Christoph Hellwig
2024-10-23 9:46 ` Sagi Grimberg
2024-10-18 5:14 ` Christoph Hellwig
2024-10-20 21:25 ` Sagi Grimberg