From: Per Forlin <per.forlin@linaro.org>
To: Arnd Bergmann <arnd@arndb.de>
Cc: linux-mm@kvack.org, linux-mmc@vger.kernel.org,
linaro-kernel@lists.linaro.org
Subject: Re: mmc blkqueue is empty even if there are pending reads in do_generic_file_read()
Date: Sat, 7 May 2011 12:45:41 +0200
Message-ID: <BANLkTimrN_T-nGws6T6baLPV+sWtFYC6Bw@mail.gmail.com>
In-Reply-To: <BANLkTi=omboE=fh16KSAa__JyG=hARmw=A@mail.gmail.com>
On 4 May 2011 21:13, Per Forlin <per.forlin@linaro.org> wrote:
> On 3 May 2011 22:02, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Tuesday 03 May 2011 20:54:43 Per Forlin wrote:
>>> >> page_not_up_to_date:
>>> >> /* Get exclusive access to the page ... */
>>> >> error = lock_page_killable(page);
>>> > I looked at the code in do_generic_file_read(). lock_page_killable
>>> > waits until the current readahead is completed.
>>> > Is it possible to configure the readahead to push multiple read
>>> > requests to the block device queue?
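(Side note: the readahead window size itself is runtime-tunable, e.g.
via /sys/block/mmcblk0/queue/read_ahead_kb or "blockdev --setra"; that
controls how much is read ahead per batch, though, not how the
resulting requests are queued.)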
>>
>> I believe sleeping in __lock_page_killable is the best possible scenario.
>> Most cards I've seen work best when you use at least 64KB reads, so it will
>> be faster to wait there than to read smaller units.
>>
> Sleeping is ok, but I don't want the read execution to stop (mmc
> going to idle when there is actually more to read).
> I made an interesting discovery when I forced the host max_req_size
> to 64k. The reads now look like:
> dd if=/dev/mmcblk0 of=/dev/null bs=4k count=256
> [mmc_queue_thread] req d955f9b0 blocks 32
> [mmc_queue_thread] req (null) blocks 0
> [mmc_queue_thread] req (null) blocks 0
> [mmc_queue_thread] req d955f9b0 blocks 64
> [mmc_queue_thread] req (null) blocks 0
> [mmc_queue_thread] req d955f8d8 blocks 128
> [mmc_queue_thread] req (null) blocks 0
> [mmc_queue_thread] req d955f9b0 blocks 128
> [mmc_queue_thread] req d955f800 blocks 128
> [mmc_queue_thread] req d955f8d8 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7811230
> [mmc_queue_thread] req d955fec0 blocks 128
> [mmc_queue_thread] req d955f800 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7811492
> [mmc_queue_thread] req d955f9b0 blocks 128
> [mmc_queue_thread] req d967cd30 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7810848
> [mmc_queue_thread] req d967cc58 blocks 128
> [mmc_queue_thread] req d967cb80 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7810654
> [mmc_queue_thread] req d967caa8 blocks 128
> [mmc_queue_thread] req d967c9d0 blocks 128
> [mmc_queue_thread] req d967c8f8 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7810652
> [mmc_queue_thread] req d967c820 blocks 128
> [mmc_queue_thread] req d967c748 blocks 128
> [do_generic_file_read] lock_page_killable-wait sec 0 nsec 7810952
> [mmc_queue_thread] req d967c670 blocks 128
> [mmc_queue_thread] req d967c598 blocks 128
> [mmc_queue_thread] req d967c4c0 blocks 128
> [mmc_queue_thread] req d967c3e8 blocks 128
> [mmc_queue_thread] req (null) blocks 0
> [mmc_queue_thread] req (null) blocks 0
> The mmc queue never runs empty until the end of the transfer. The
> requests are now 128 blocks (the 64k limit set in the mmc host
> driver) compared to 256 blocks before. I don't expect this to improve
> performance much, since the transfers are now smaller than before:
> the latency is minimal, but the extra number of transfers causes more
> mmc cmd overhead.
> I also added prints for the wait time in lock_page_killable (see the
> sketch after the quoted text).
> I wonder if I can achieve a non-empty mmc block queue without
> compromising the mmc host driver performance.
>
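For reference, the "lock_page_killable-wait" prints in the trace above
come from instrumentation along these lines around the
lock_page_killable() call in do_generic_file_read() (mm/filemap.c).
This is a minimal sketch rather than the exact patch; it only assumes
getnstimeofday()/timespec_sub() from <linux/time.h>:

	struct timespec before, after, wait;

	getnstimeofday(&before);
	/* Blocks until the page under I/O (e.g. readahead) is unlocked */
	error = lock_page_killable(page);
	getnstimeofday(&after);
	wait = timespec_sub(after, before);
	pr_info("[do_generic_file_read] lock_page_killable-wait sec %ld nsec %ld\n",
		wait.tv_sec, wait.tv_nsec);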
There is actually a performance increase, from 16.5 MB/s to 18.4 MB/s,
when lowering max_req_size to 64k.
I ran a dd test on a Pandaboard using a 2.6.39-rc5 kernel.
First case, where the block queue goes empty after every request:
root@(none):/ dd if=/dev/mmcblk0p3 of=/dev/null bs=4k count=25600
25600+0 records in
25600+0 records out
104857600 bytes (100.0MB) copied, 6.061107 seconds, 16.5MB/s
Second case, with omap_hsmmc modified to force the request size to
half (128 blocks instead of 256; see the sketch below). This results
in the queue never running empty:
dd if=/dev/mmcblk0p3 of=/dev/null bs=4k count=25600
25600+0 records in
25600+0 records out
104857600 bytes (100.0MB) copied, 5.423362 seconds, 18.4MB/s
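The only change for the second case is a cap on the host's maximum
request size. A minimal sketch of that kind of change in
omap_hsmmc_probe() (drivers/mmc/host/omap_hsmmc.c) is below; the exact
hunk may differ, and the previous max_blk_count value is from memory:

	/*
	 * Cap requests at 64 KiB (128 x 512-byte blocks) so that the
	 * block queue stays populated while each request is in flight.
	 */
	mmc->max_blk_size = 512;
	mmc->max_blk_count = 128;	/* previously much larger */
	mmc->max_req_size = mmc->max_blk_size * mmc->max_blk_count;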
Regards,
Per