public inbox for linux-nvme@lists.infradead.org
From: Christoph Hellwig <hch@lst.de>
To: Leon Romanovsky <leon@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Keith Busch <kbusch@kernel.org>,
	Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, Gal Pressman <gal@nvidia.com>
Subject: Re: [PATCH 2/2] nvme-pci: use dma_alloc_noncontigous if possible
Date: Tue, 3 Dec 2024 01:33:12 +0100	[thread overview]
Message-ID: <20241203003312.GA6890@lst.de> (raw)
In-Reply-To: <20241202190541.GA2434798@unreal>

On Mon, Dec 02, 2024 at 09:05:41PM +0200, Leon Romanovsky wrote:
> On Fri, Nov 01, 2024 at 05:40:05AM +0100, Christoph Hellwig wrote:
> > Use dma_alloc_noncontiguous to allocate a single IOVA-contiguous segment
> > when backed by an IOMMU.  This allows us to easily use bigger segments
> > and avoids running into segment limits where possible.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  drivers/nvme/host/pci.c | 58 +++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 53 insertions(+), 5 deletions(-)
> 
> <...>
> 
> > +static int nvme_alloc_host_mem_multi(struct nvme_dev *dev, u64 preferred,
> >  		u32 chunk_size)
> >  {
> >  	struct nvme_host_mem_buf_desc *descs;
> > @@ -2049,9 +2086,18 @@ static int nvme_alloc_host_mem(struct nvme_dev *dev, u64 min, u64 preferred)
> >  	u64 hmminds = max_t(u32, dev->ctrl.hmminds * 4096, PAGE_SIZE * 2);
> >  	u64 chunk_size;
> >  
> > +	/*
> > +	 * If there is an IOMMU that can merge pages, try a virtually
> > +	 * non-contiguous allocation for a single segment first.
> > +	 */
> > +	if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev))) {
> > +		if (!nvme_alloc_host_mem_single(dev, preferred))
> > +			return 0;
> > +	}
> 
> We assume that the addition of the lines above is the root cause of the
> following panic during boot. It happens when we try to allocate a
> 61 MiB chunk.

We should not hit this path for dma-direct, only for dma-iommu, which can
do discontiguous allocations.  If we hit it with dma-direct, I got my math
on the boundary check above wrong.  I'll try to figure out what is
wrong, but I'm actually in Japan for meetings, so things will not only
be delayed but also in a weird time zone.

> -       if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev))) {
> +       if (!(PAGE_SIZE & dma_get_merge_boundary(dev->dev)) && preferred < max_chunk) {

That would basically be a revert of the new functionality.



Thread overview: 8+ messages
2024-11-01  4:40 create single-segment HMBs when using IOMMU Christoph Hellwig
2024-11-01  4:40 ` [PATCH 1/2] nvme-pci: fix freeing of the HMB descriptor table Christoph Hellwig
2024-11-01  4:40 ` [PATCH 2/2] nvme-pci: use dma_alloc_noncontigous if possible Christoph Hellwig
2024-12-02 19:05   ` Leon Romanovsky
2024-12-02 19:15     ` Keith Busch
2024-12-02 19:20       ` Leon Romanovsky
2024-12-03  0:33     ` Christoph Hellwig [this message]
2024-11-05 15:59 ` create single-segment HMBs when using IOMMU Keith Busch
