linux-sh.vger.kernel.org archive mirror
From: Simon Horman <horms@verge.net.au>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Magnus Damm <magnus.damm@gmail.com>,
	linux-mmc@vger.kernel.org, linux-sh@vger.kernel.org,
	Arnd Hannemann <arnd@arndnet.de>,
	Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Subject: Re: SDHC Read Performance
Date: Tue, 22 Mar 2011 07:16:12 +0000
Message-ID: <20110322071611.GC23009@verge.net.au>
In-Reply-To: <201103212355.36951.arnd@arndb.de>

On Mon, Mar 21, 2011 at 11:55:36PM +0100, Arnd Bergmann wrote:
> On Monday 21 March 2011 23:38:56 Simon Horman wrote:
> > Write Speed
> > # dd if=/dev/zero of=/dev/mmcblk0 bs=512 count=100000
> > SD1.1:                  2.5 MB/s   <-- Faster than expected
> > SD2.0:                  3.0 MB/s   <-- Faster than expected
> > SDHC Class 2:           2.3 MB/s   <-- Faster than expected
> > SDHC Class 10:          4.0 MB/s   <-- Slower than expected
> > SanDisk SDHC Class 10:  4.3 MB/s   <-- Slower than expected
> > MMC4.0:                 2.3 MB/s   <-- Clocked down to 12 MHz due to
> >                                        driver limitations
> 
> Please see https://lwn.net/Articles/428584/ and
> https://wiki.linaro.org/WorkingGroups/KernelConsolidation/Projects/FlashCardSurvey
> for possible explanations of why your speed is not what you
> think it should be.
> 
> I have written a tool that will give you more conclusive
> data: git clone git://git.linaro.org/people/arnd/flashbench.git
> 
> Please Cc flashbench-results@lists.linaro.org when you have
> measurements from that. You will first need to find out
> the erase block size of each card (typically 4 MB), and then
> pass that to the --open-au --erasesize=${SIZE} --open-au-nr=${NR}
> benchmark to get useful results.
> 
> The write speed for writing full erase blocks (allocation units)
> is normally the best that a card can provide, and you will see
> how it gets worse with smaller block sizes. Try different
> values for ${NR} to find out the maximum that the card can
> sustain at full performance; most cards get really slow as
> soon as they run out of segments (not the case with your
> benchmark, since you write from start to end).

Thanks Arnd,

I'll report back once I have some numbers.
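
For the record, here is the sort of session I have in mind, pieced
together from Arnd's description above and the flashbench README
(the 4 MB erase size is just the typical value he mentions, and
/dev/mmcblk0 stands in for whichever card is under test):

  # git clone git://git.linaro.org/people/arnd/flashbench.git
  # cd flashbench && make

  Guess the erase block size from where access times jump:
  # ./flashbench -a /dev/mmcblk0 --blocksize=1024

  Then measure sustained write speed with ${NR} open allocation
  units, raising --open-au-nr until throughput drops off:
  # ./flashbench --open-au --erasesize=$[4 * 1024 * 1024] \
        --open-au-nr=2 /dev/mmcblk0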



Thread overview: 12+ messages
2011-01-19  0:09 SDHC Read Performance Simon Horman
2011-01-19  3:14 ` Magnus Damm
2011-01-19  8:05   ` Simon Horman
2011-01-20  3:30     ` Simon Horman
2011-01-20  4:01       ` Magnus Damm
2011-01-20  7:07         ` Simon Horman
2011-01-20  8:28           ` Magnus Damm
2011-01-20  8:38             ` Paul Mundt
2011-01-20  8:55               ` Magnus Damm
2011-03-21 22:38         ` Simon Horman
2011-03-21 22:55           ` Arnd Bergmann
2011-03-22  7:16             ` Simon Horman [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20110322071611.GC23009@verge.net.au \
    --to=horms@verge.net.au \
    --cc=arnd@arndb.de \
    --cc=arnd@arndnet.de \
    --cc=g.liakhovetski@gmx.de \
    --cc=linux-mmc@vger.kernel.org \
    --cc=linux-sh@vger.kernel.org \
    --cc=magnus.damm@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank
line before the message body.
This is a public inbox; see mirroring instructions for how to
clone and mirror all data and code used for this inbox, as well
as URLs for NNTP newsgroup(s).