* MMC Driver Throughput
@ 2013-01-30 18:35 Bruce Ford (bford)
From: Bruce Ford (bford) @ 2013-01-30 18:35 UTC (permalink / raw)
To: linux-mmc@vger.kernel.org
Question:
Has any testing been done to determine the maximum data throughput to/from
a MMC device; assuming the MMC device takes zero time to complete tasks?
Put another way - at what level of IOPS does the kernel/driver become the
bottleneck, instead of the storage device?
Sorry if the question is slightly off-topic for this list, but it is MMC driver related.
Worth a shot.
Thx --Bruce
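No measurements were posted in this thread, but as a rough sketch of the kind of experiment the question implies, the following Python microbenchmark times O_DIRECT 4 KiB random reads against a block device and reports IOPS. The device path /dev/mmcblk0, block size, and iteration count are assumptions, not values from the thread, and the result only approximates the kernel/driver ceiling when the device itself is fast enough not to be the bottleneck.

import mmap, os, random, time

DEV = "/dev/mmcblk0"   # assumed MMC block device node; needs root
BLOCK = 4096           # one 4 KiB read per I/O
N = 20000              # number of reads to time

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)  # bypass the page cache
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, BLOCK)  # page-aligned buffer, required by O_DIRECT

start = time.perf_counter()
for _ in range(N):
    # pick a random block-aligned offset and issue one synchronous read
    os.lseek(fd, random.randrange(size // BLOCK) * BLOCK, os.SEEK_SET)
    os.readv(fd, [buf])
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{N / elapsed:,.0f} IOPS, {N * BLOCK / elapsed / 1e6:.1f} MB/s")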
* Re: MMC Driver Throughput
From: Konstantin Dorfman @ 2013-01-31 13:36 UTC (permalink / raw)
To: Bruce Ford (bford); +Cc: linux-mmc@vger.kernel.org
Hello Bruce,
On 1/30/2013 8:35 PM, Bruce Ford (bford) wrote:
> Question:
>
> Has any testing been done to determine the maximum data throughput to/from
> a MMC device; assuming the MMC device takes zero time to complete tasks?
>
> Put another way - at what level of IOPS does the kernel/driver become the
> bottleneck, instead of the storage device?
This will depend on the specific host controller, hardware, and driver.
You can look at it from the other end:
kernel/driver overhead = "raw max TPT of the MMC device" - "lmdd/iozone/tiotest max TPT"
where the raw TPT can be taken from the card vendor's datasheet.
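As a worked example of this subtraction (all figures below are invented for illustration; they do not come from the thread, a datasheet, or a benchmark run):

# Hypothetical numbers only: subtract the measured benchmark throughput
# from the vendor's raw figure to estimate kernel/driver overhead.
raw_tpt_mbs      = 48.0   # assumed vendor datasheet figure (MB/s)
measured_tpt_mbs = 41.5   # assumed iozone max sequential result (MB/s)

overhead_mbs = raw_tpt_mbs - measured_tpt_mbs
print(f"kernel/driver overhead ~= {overhead_mbs:.1f} MB/s "
      f"({overhead_mbs / raw_tpt_mbs:.0%} of raw throughput)")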
--
Konstantin Dorfman,
QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center,
Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation