public inbox for linux-nvme@lists.infradead.org
From: Niklas Cassel <cassel@kernel.org>
To: Bob Beckett <bob.beckett@collabora.com>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	kernel@collabora.com, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Gwendal Grignou <gwendal@chromium.org>
Subject: Re: [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk
Date: Mon, 25 Nov 2024 12:01:20 +0100	[thread overview]
Message-ID: <Z0RZAKgUA7jS1U_m@ryzen> (raw)
In-Reply-To: <20241112195053.3939762-1-bob.beckett@collabora.com>

On Tue, Nov 12, 2024 at 07:50:00PM +0000, Bob Beckett wrote:
> From: Robert Beckett <bob.beckett@collabora.com>
> 
> We initially put in a quick fix of limiting the queue depth to 1
> as experimentation showed that it fixed data corruption on 64GB
> steamdecks.
> 
> After further experimentation, it appears that the corruption
> is fixed by aligning the small dma pool segments to 512 bytes.
> Testing via desync image verification shows that it now passes
> thousands of verification loops, where previously
> it never managed above 7.
> 
> Currently it is not known why this fixes the corruption.
> Perhaps it is doing something nasty like using an mmc page
> as a cache for the prp lists (mmc min. page size is 512 bytes)
> and not invalidating properly, so that the dma pool change that
> treats the segment list as a stack ends up handing out a previous
> segment in the same cached page.
> 
> This replaces the previous queue depth limitation, fixing the
> corruption without incurring the 37% performance degradation
> measured in testing with that workaround.
> 
> Fixes: 83bdfcbdbe5d ("nvme-pci: qdepth 1 quirk")
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/nvme/host/nvme.h | 5 +++++
>  drivers/nvme/host/pci.c  | 6 +++---
>  2 files changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 093cb423f536..61bba5513de0 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -173,6 +173,11 @@ enum nvme_quirks {
>  	 * MSI (but not MSI-X) interrupts are broken and never fire.
>  	 */
>  	NVME_QUIRK_BROKEN_MSI			= (1 << 21),
> +
> +	/*
> +	 * Align dma pool segment size to 512 bytes
> +	 */
> +	NVME_QUIRK_DMAPOOL_ALIGN_512		= (1 << 22),
>  };
>  
>  /*
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 4b9fda0b1d9a..6fcd3bb413c4 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -2700,8 +2700,8 @@ static int nvme_setup_prp_pools(struct nvme_dev *dev)
>  		return -ENOMEM;
>  
>  	/* Optimisation for I/Os between 4k and 128k */
> -	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
> -						256, 256, 0);
> +	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev, 256,
> +				(dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512) ? 512 : 256, 0);
>  	if (!dev->prp_small_pool) {
>  		dma_pool_destroy(dev->prp_page_pool);
>  		return -ENOMEM;
> @@ -3449,7 +3449,7 @@ static const struct pci_device_id nvme_id_table[] = {
>  	{ PCI_VDEVICE(REDHAT, 0x0010),	/* Qemu emulated controller */
>  		.driver_data = NVME_QUIRK_BOGUS_NID, },
>  	{ PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
> -		.driver_data = NVME_QUIRK_QDEPTH_ONE },
> +		.driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
>  	{ PCI_DEVICE(0x126f, 0x2262),	/* Silicon Motion generic */
>  		.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
>  				NVME_QUIRK_BOGUS_NID, },
> -- 
> 2.45.2
> 
> 

+CC: Gwendal

Since he sent out a patch to revert the original QD=1 quirk,
claiming that the quirk wasn't needed when using the same
NVMe-to-eMMC bridge with a different eMMC device.

From reading about this problem, it sure sounds like a controller
issue... but if the problem is really specific to certain eMMC
devices, then the quirk should probably be applied only for those
eMMC devices instead.


Kind regards,
Niklas


Thread overview: 24+ messages
2024-11-12 19:50 [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk Bob Beckett
2024-11-12 20:47 ` Keith Busch
2024-11-13  4:31 ` Christoph Hellwig
2024-11-13 18:05   ` Keith Busch
2024-11-13 20:08     ` Robert Beckett
2024-11-14  5:55     ` Christoph Hellwig
2024-11-14 13:10       ` Robert Beckett
2024-11-14 11:38 ` Paweł Anikiel
2024-11-14 12:17   ` Christoph Hellwig
2024-11-14 15:37     ` Keith Busch
2024-11-14 13:24   ` Robert Beckett
2024-11-14 14:13     ` Paweł Anikiel
2024-11-14 16:28       ` Robert Beckett
2024-11-22 19:36         ` Keith Busch
2024-12-09 12:32           ` Robert Beckett
2024-12-09 15:33             ` Paweł Anikiel
2024-12-10 21:36               ` Keith Busch
2024-12-11 10:55                 ` Robert Beckett
2024-12-17 11:18                   ` Paweł Anikiel
2024-11-14 15:46   ` Keith Busch
2024-11-14 16:47     ` Robert Beckett
2024-11-14 18:00 ` Keith Busch
2024-11-14 18:01   ` Jens Axboe
2024-11-25 11:01 ` Niklas Cassel [this message]
