From: Christoph Hellwig <hch@lst.de>
To: Jon Hunter <jonathanh@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
"linux-tegra@vger.kernel.org" <linux-tegra@vger.kernel.org>,
Adrian Hunter <adrian.hunter@intel.com>,
Ulf Hansson <ulf.hansson@linaro.org>,
Thierry Reding <thierry.reding@gmail.com>
Subject: Re: [PATCH 17/17] mmc: pass queue_limits to blk_mq_alloc_disk
Date: Thu, 27 Jun 2024 14:44:20 +0200
Message-ID: <20240627124420.GA11113@lst.de>
In-Reply-To: <9cb2b062-1b37-4d1d-8731-da69c2fe7a74@nvidia.com>
On Thu, Jun 27, 2024 at 01:30:03PM +0100, Jon Hunter wrote:
> I have been testing on both Tegra194 and Tegra234. Both of these set the
> above quirk. This would explain why the max_segment_size is rounded down to
> 65024 in the mmc_alloc_disk() function.
>
> We can check whether this is needed, but if it is, it is not clear if
> or how it can be fixed.
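
For context, 65024 is exactly 65535 rounded down to a 512-byte
boundary, so the value Jon sees is consistent with the quirk's
65535-byte segment limit being rounded down to the logical block
size.  A minimal userspace sketch of that arithmetic (illustration
only, assuming a 512-byte logical block size; round_down() here
mirrors the kernel's power-of-two helper):

	#include <stdio.h>

	/* mirrors the kernel's round_down() for power-of-two alignment */
	#define round_down(x, y)	((x) & ~((y) - 1))

	int main(void)
	{
		unsigned int max_seg_size = 65535;	/* quirk segment limit */
		unsigned int lbs = 512;			/* logical block size */

		/* prints 65024, matching the limit Jon reported */
		printf("%u\n", round_down(max_seg_size, lbs));
		return 0;
	}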
The older kernels did this:
	if (max_size < PAGE_CACHE_SIZE) {
		max_size = PAGE_CACHE_SIZE;
		printk(KERN_INFO "%s: set to minimum %d\n",
		       __func__, max_size);
	}

	q->limits.max_segment_size = max_size;
So, given that these kernels actually worked despite the above
warning, it must be OK(-ish) to just increase the value.  Whether
that is best done by dropping the quirk or by changing the logic in
sdhci.c is something the maintainers who understand the hardware
need to decide.
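
The latter option, keeping the quirk but restoring the old clamping
behavior, would look roughly like the sketch below.  This is an
untested illustration, assuming the mmc core's queue_limits setup
(the "lim" variable this series introduces) is the right place for
it, not a tested patch:

	/*
	 * Hypothetical clamp mirroring the old
	 * blk_queue_max_segment_size() minimum: never advertise a
	 * max_segment_size smaller than a page.
	 */
	if (lim.max_segment_size < PAGE_SIZE)
		lim.max_segment_size = PAGE_SIZE;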
The patch below gives you the pre-6.9 behavior, just without the
boot-time warning, but it might not be what was intended by the
quirk:
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 746f4cf7ab0338..0dc3604ac6093a 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -4721,12 +4721,9 @@ int sdhci_setup_host(struct sdhci_host *host)
 	 * be larger than 64 KiB though.
 	 */
 	if (host->flags & SDHCI_USE_ADMA) {
-		if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC) {
+		if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC)
 			host->max_adma = 65532; /* 32-bit alignment */
-			mmc->max_seg_size = 65535;
-		} else {
-			mmc->max_seg_size = 65536;
-		}
+		mmc->max_seg_size = 65536;
 	} else {
 		mmc->max_seg_size = mmc->max_req_size;
 	}
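
For illustration, this is what the block ends up looking like with
the patch applied; the quirk then only caps max_adma, and 65536 is
already a multiple of the 512-byte logical block size, so there is
nothing left for mmc_alloc_disk() to round down:

	if (host->flags & SDHCI_USE_ADMA) {
		if (host->quirks & SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC)
			host->max_adma = 65532; /* 32-bit alignment */
		mmc->max_seg_size = 65536;
	} else {
		mmc->max_seg_size = mmc->max_req_size;
	}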