From: <Sharp.Xia@mediatek.com>
To: <shawn.lin@rock-chips.com>
Cc: <Sharp.Xia@mediatek.com>,
<angelogioacchino.delregno@collabora.com>,
<linux-arm-kernel@lists.infradead.org>,
<linux-kernel@vger.kernel.org>,
<linux-mediatek@lists.infradead.org>, <linux-mmc@vger.kernel.org>,
<matthias.bgg@gmail.com>, <ulf.hansson@linaro.org>,
<wsd_upstream@mediatek.com>
Subject: Re: [PATCH 1/1] mmc: Set optimal I/O size when mmc_setup_queue
Date: Sun, 27 Aug 2023 00:26:35 +0800 [thread overview]
Message-ID: <20230826162635.617-1-Sharp.Xia@mediatek.com> (raw)
In-Reply-To: <769a67cb-1b32-fd4f-b37e-e3ec4dab5eb9@rock-chips.com>
On Fri, 2023-08-25 at 17:17 +0800, Shawn Lin wrote:
>
>
> On 2023/8/25 16:39, Sharp.Xia@mediatek.com wrote:
> > On Fri, 2023-08-25 at 16:11 +0800, Shawn Lin wrote:
> >>
> >> Hi Sharp,
>
> ...
>
> >>> 1024
> >>>
> > Hi Shawn,
> >
> > What is your readahead value before and after applying this patch?
> >
>
> The original readahead is 128, and after applying the patch it is 1024.
>
>
> cat /d/mmc0/ios
> clock: 200000000 Hz
> actual clock: 200000000 Hz
> vdd: 18 (3.0 ~ 3.1 V)
> bus mode: 2 (push-pull)
> chip select: 0 (don't care)
> power mode: 2 (on)
> bus width: 3 (8 bits)
> timing spec: 10 (mmc HS400 enhanced strobe)
> signal voltage: 1 (1.80 V)
> driver type: 0 (driver type B)
>
> The driver I used is sdhci-of-dwcmshc.c with a KLMBG2JETDB041 eMMC
> chip.
I tested on RK3568 with the sdhci-of-dwcmshc.c driver; the performance improved by 2~3%.
Before:
root@OpenWrt:/mnt/mmcblk0p3# time dd if=test.img of=/dev/null
2097152+0 records in
2097152+0 records out
real 0m 6.01s
user 0m 0.84s
sys 0m 2.89s
root@OpenWrt:/mnt/mmcblk0p3# cat /sys/block/mmcblk0/queue/read_ahead_kb
128
After:
root@OpenWrt:/mnt/mmcblk0p3# echo 3 > /proc/sys/vm/drop_caches
root@OpenWrt:/mnt/mmcblk0p3# time dd if=test.img of=/dev/null
2097152+0 records in
2097152+0 records out
real 0m 5.86s
user 0m 1.04s
sys 0m 3.18s
root@OpenWrt:/mnt/mmcblk0p3# cat /sys/block/mmcblk0/queue/read_ahead_kb
1024
root@OpenWrt:/sys/kernel/debug/mmc0# cat ios
clock: 200000000 Hz
actual clock: 200000000 Hz
vdd: 18 (3.0 ~ 3.1 V)
bus mode: 2 (push-pull)
chip select: 0 (don't care)
power mode: 2 (on)
bus width: 3 (8 bits)
timing spec: 9 (mmc HS200)
signal voltage: 1 (1.80 V)
driver type: 0 (driver type B)
Thread overview: 13+ messages
2023-08-18 2:28 [PATCH 1/1] mmc: Set optimal I/O size when mmc_setup_queue Sharp.Xia
2023-08-24 10:55 ` Ulf Hansson
2023-08-25 7:10 ` Sharp Xia (夏宇彬)
2023-08-25 8:11 ` Shawn Lin
2023-08-25 8:39 ` Sharp.Xia
2023-08-25 9:17 ` Shawn Lin
2023-08-26 16:26 ` Sharp.Xia [this message]
2023-08-28 2:27 ` Shawn Lin
2023-08-28 9:04 ` Ulf Hansson
2023-08-25 12:23 ` Wenchao Chen
2023-08-26 16:54 ` Sharp.Xia
2023-08-25 7:25 ` Sharp.Xia
2023-08-25 7:26 ` Sharp.Xia