From: Christoph Hellwig <hch@infradead.org>
To: Keith Busch <kbusch@meta.com>
Cc: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	hch@lst.de, axboe@kernel.dk, Keith Busch <kbusch@kernel.org>
Subject: Re: [PATCHv3 2/2] nvme: remove virtual boundary for sgl capable devices
Date: Mon, 25 Aug 2025 06:49:23 -0700	[thread overview]
Message-ID: <aKxp49SdTknkO1fb@infradead.org> (raw)
In-Reply-To: <20250821204420.2267923-3-kbusch@meta.com>

On Thu, Aug 21, 2025 at 01:44:20PM -0700, Keith Busch wrote:
> +	/*
> +	 * The virtual boundary mask is not necessary for PCI controllers that
> +	 * support SGL for DMA. It's only necessary when using PRP. Admin
> +	 * queues only support PRP, and fabrics drivers currently don't report
> +	 * what boundaries they require, so set the virtual boundary for
> +	 * either.
> +	 */
> +	if (!nvme_ctrl_sgl_supported(ctrl) || admin ||
> +	    ctrl->ops->flags & NVME_F_FABRICS)
> +		lim->virt_boundary_mask = NVME_CTRL_PAGE_SIZE - 1;

Fabrics itself never needs the virt boundary.  And for TCP, which is a
software-only transport, I think we can just do away with it.  For
FC I suspect we can do away with it as well, as all the FC HBAs support
proper SGLs.  For RDMA the standard MR methods do require the virtual
boundary, but somewhat recent Mellanox / Nvidia hardware does not.

No need for you to update all these, but I think having the transport
advertise the capability is probably better than a bunch of random
conditions in the core code.
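
For illustration only, a minimal sketch of what such a transport-advertised
capability could look like.  NVME_F_NO_VIRT_BOUNDARY, its bit value, and
nvme_set_virt_boundary() are invented names for this sketch, not existing
kernel symbols:

#include <linux/blkdev.h>	/* struct queue_limits */
#include "nvme.h"		/* struct nvme_ctrl, nvme_ctrl_sgl_supported() */

/*
 * Sketch only: an invented ctrl->ops flag (placeholder bit value) that a
 * transport would set when its data path can handle SG elements crossing
 * the PRP page boundary, so the core no longer hard-codes per-transport
 * checks.
 */
#define NVME_F_NO_VIRT_BOUNDARY	(1 << 3)

static void nvme_set_virt_boundary(struct nvme_ctrl *ctrl,
				   struct queue_limits *lim, bool admin)
{
	/*
	 * Admin queues still use PRP, PRP-only controllers always need the
	 * boundary, and a transport keeps it unless it explicitly opted out.
	 */
	if (admin || !nvme_ctrl_sgl_supported(ctrl) ||
	    !(ctrl->ops->flags & NVME_F_NO_VIRT_BOUNDARY))
		lim->virt_boundary_mask = NVME_CTRL_PAGE_SIZE - 1;
}

Under this sketch nvme-pci could set the flag unconditionally (the SGL check
above still covers PRP-only controllers), nvme-tcp and nvme-fc could set it
per the reasoning above, and nvme-rdma would leave it clear until its MR
path no longer requires the boundary.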




Thread overview: 13+ messages
2025-08-21 20:44 [PATCHv3 0/2] block+nvme: reducing virtual boundary mask reliance Keith Busch
2025-08-21 20:44 ` [PATCHv3 1/2] block: accumulate segment page gaps per bio Keith Busch
2025-08-25 13:46   ` Christoph Hellwig
2025-08-25 14:10     ` Keith Busch
2025-08-26 13:03       ` Christoph Hellwig
2025-08-26 13:47         ` Keith Busch
2025-08-26 13:57           ` Christoph Hellwig
2025-08-26 22:33             ` Keith Busch
2025-08-27  7:37               ` Christoph Hellwig
2025-08-30  1:47                 ` Keith Busch
2025-09-02  5:36                   ` Christoph Hellwig
2025-08-21 20:44 ` [PATCHv3 2/2] nvme: remove virtual boundary for sgl capable devices Keith Busch
2025-08-25 13:49   ` Christoph Hellwig [this message]
