From: Keith Busch <kbusch@kernel.org>
To: Bob Beckett <bob.beckett@collabora.com>
Cc: Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
Sagi Grimberg <sagi@grimberg.me>,
kernel@collabora.com, linux-nvme@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk
Date: Thu, 14 Nov 2024 11:00:31 -0700
Message-ID: <ZzY6v4d71jliy78w@kbusch-mbp>
In-Reply-To: <20241112195053.3939762-1-bob.beckett@collabora.com>
On Tue, Nov 12, 2024 at 07:50:00PM +0000, Bob Beckett wrote:
> From: Robert Beckett <bob.beckett@collabora.com>
>
> We initially put in a quick fix that limited the queue depth to 1,
> as experimentation showed it fixed data corruption on 64GB
> Steam Decks.
>
> After further experimentation, it appears that the corruption
> is fixed by aligning the small dma pool segments to 512 bytes.
> Testing via desync image verification shows that it now passes
> thousands of verification loops, where previously
> it never managed above 7.
>
> Currently it is not known why this fixes the corruption.
> Perhaps the device is doing something nasty like using an mmc page
> as a cache for the prp lists (the mmc minimum page size is 512 bytes)
> and not invalidating it properly, so that the dma pool change that
> treats the segment list as a stack ends up handing back a previous
> segment from the same cached page.
>
> This supersedes the previous queue depth limitation: it fixes
> the corruption without incurring the 37% performance
> degradation measured with that workaround.
>
> Fixes: 83bdfcbdbe5d ("nvme-pci: qdepth 1 quirk")
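For reference, the proposed fix essentially bumps the small PRP pool's
alignment from 256 to 512 bytes when the quirk is set. A rough sketch of
what that could look like in nvme_setup_prp_pools() (the quirk name and
exact placement here are illustrative, not necessarily what lands
upstream):

static int nvme_setup_prp_pools(struct nvme_dev *dev)
{
	/* Default alignment matches the 256-byte segment size. */
	size_t small_align = 256;

	dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
					     NVME_CTRL_PAGE_SIZE,
					     NVME_CTRL_PAGE_SIZE, 0);
	if (!dev->prp_page_pool)
		return -ENOMEM;

	/* Hypothetical quirk flag for the affected parts. */
	if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
		small_align = 512;

	/* Optimisation for I/Os between 4k and 128k. */
	dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
					      256, small_align, 0);
	if (!dev->prp_small_pool) {
		dma_pool_destroy(dev->prp_page_pool);
		return -ENOMEM;
	}

	return 0;
}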
I had this queued up for the nvme-6.12 pull request, which I'm about to
send out, but I guess we should drop it until we conclude this
discussion. With 6.12 likely to be released on Sunday, this better
mitigation would need to target 6.13, then stable.
FWIW, given the current understanding of the last-entry boundary
chaining idea, QD1 mitigates it by always allocating the PRP list at
page offset 0. So it effectively works around whatever erratum we're
dealing with, albeit with an unsurprising performance penalty.
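To put numbers on the offsets: with 256-byte segments and 256-byte
alignment the small pool hands out list segments at offsets 0, 256,
512, ... within each 4 KiB pool page, while 512-byte alignment
restricts them to 0, 512, 1024, ...; QD1 gets a similar effect by only
ever reusing the offset-0 segment. A throwaway user-space illustration
(not driver code):

#include <stdio.h>

int main(void)
{
	const unsigned int page = 4096, seg = 256;
	unsigned int align, off;

	/*
	 * dma_pool_create() rounds the block size up to the alignment,
	 * so alignment also determines how many segments fit per page.
	 */
	for (align = 256; align <= 512; align *= 2) {
		printf("align %u:", align);
		for (off = 0; off + seg <= page; off += align)
			printf(" %u", off);
		printf("\n");
	}
	return 0;
}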
Thread overview: 24+ messages
2024-11-12 19:50 [PATCH] nvme-pci: 512 byte aligned dma pool segment quirk Bob Beckett
2024-11-12 20:47 ` Keith Busch
2024-11-13 4:31 ` Christoph Hellwig
2024-11-13 18:05 ` Keith Busch
2024-11-13 20:08 ` Robert Beckett
2024-11-14 5:55 ` Christoph Hellwig
2024-11-14 13:10 ` Robert Beckett
2024-11-14 11:38 ` Paweł Anikiel
2024-11-14 12:17 ` Christoph Hellwig
2024-11-14 15:37 ` Keith Busch
2024-11-14 13:24 ` Robert Beckett
2024-11-14 14:13 ` Paweł Anikiel
2024-11-14 16:28 ` Robert Beckett
2024-11-22 19:36 ` Keith Busch
2024-12-09 12:32 ` Robert Beckett
2024-12-09 15:33 ` Paweł Anikiel
2024-12-10 21:36 ` Keith Busch
2024-12-11 10:55 ` Robert Beckett
2024-12-17 11:18 ` Paweł Anikiel
2024-11-14 15:46 ` Keith Busch
2024-11-14 16:47 ` Robert Beckett
2024-11-14 18:00 ` Keith Busch [this message]
2024-11-14 18:01 ` Jens Axboe
2024-11-25 11:01 ` Niklas Cassel