linux-mm.kvack.org archive mirror
From: Per Forlin <per.forlin@linaro.org>
To: Arnd Bergmann <arnd@arndb.de>
Cc: linux-mm@kvack.org, linux-mmc@vger.kernel.org,
	linaro-kernel@lists.linaro.org
Subject: Re: mmc blkqueue is empty even if there are pending reads in do_generic_file_read()
Date: Sun, 8 May 2011 18:23:24 +0200	[thread overview]
Message-ID: <BANLkTinDByrdEKrzHPysSP8giHgqFyJWtw@mail.gmail.com> (raw)
In-Reply-To: <201105081709.34416.arnd@arndb.de>

On 8 May 2011 17:09, Arnd Bergmann <arnd@arndb.de> wrote:
> On Saturday 07 May 2011, Per Forlin wrote:
>> > The mmc queue never runs empty until the end of the transfer. The requests
>> > are now 128 blocks (a 64k limit set in the mmc host driver) compared to 256
>> > blocks before. This will not improve performance much, since the
>> > transfers are now smaller than before. The latency is minimal, but
>> > the extra number of transfers causes more mmc cmd overhead.
>> > I added prints to log the wait time in lock_page_killable too.
>> > I wonder if I can achieve a non-empty mmc block queue without
>> > compromising mmc host driver performance.
>> >
>> There is actually a performance increase, from 16.5 MB/s to 18.4 MB/s,
>> when lowering max_req_size to 64k.
>> I ran a dd test on a Pandaboard using a 2.6.39-rc5 kernel.
>
> I've noticed with a number of cards that using 64k writes is faster
> than any other size. What I could not figure out yet is whether this
> is a common hardware optimization for MS Windows (which always uses
> 64K I/O when it can), or if it's a software effect and we can actually
> make it go faster with Linux by tuning for other sizes.
>
Thanks for the tip, I will keep that in mind.
In this case the increase in performance is due to parallel cache
handling. I did a test where I set mmc_max_req back to 128k (the same
size as in the first test with low performance) and increased
read_ahead_kb to 256k:
root@(none):/ echo 256 >
sys/devices/platform/omap/omap_hsmmc.0/mmc_host/mmc0/mmc0:80ca/block/mmcblk0/queue/read_ahead_kb
root@(none):/ dd if=/dev/mmcblk0p3 of=/dev/null bs=4k count=25600
25600+0 records in
25600+0 records out
104857600 bytes (100.0MB) copied, 5.138585 seconds, 19.5MB/s
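The arithmetic behind that dd output can be sanity-checked with a quick shell
sketch (the 5.138585 s elapsed time is taken from the dd run above; dd here
reports "MB/s" using 1024*1024-byte units):

```shell
# 25600 records of bs=4k => total bytes transferred
bytes=$((25600 * 4 * 1024))
echo "$bytes"    # 104857600, i.e. 100 MiB

# Throughput in MiB/s over the measured 5.138585 s: ~19.5
awk -v b="$bytes" 'BEGIN { printf "%.1f\n", b / 5.138585 / (1024 * 1024) }'
```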

Regards,
Per



Thread overview: 10+ messages
2011-04-28 18:39 mmc blkqueue is empty even if there are pending reads in do_generic_file_read() Per Forlin
2011-05-03 13:16 ` Arnd Bergmann
2011-05-03 18:54   ` Per Forlin
2011-05-03 20:02     ` Arnd Bergmann
2011-05-03 20:11       ` Per Forlin
2011-05-04 13:01         ` Arnd Bergmann
2011-05-04 19:13       ` Per Forlin
2011-05-07 10:45         ` Per Forlin
2011-05-08 15:09           ` Arnd Bergmann
2011-05-08 16:23             ` Per Forlin [this message]
