* [PATCH v5] mmc: documentation of mmc non-blocking request usage and design.
@ 2011-07-06 6:30 Per Forlin
[not found] ` <1309933806-2346-1-git-send-email-per.forlin-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
2011-07-08 22:54 ` J Freyensee
0 siblings, 2 replies; 5+ messages in thread
From: Per Forlin @ 2011-07-06 6:30 UTC (permalink / raw)
To: linaro-dev, Nicolas Pitre, linux-arm-kernel, linux-kernel,
linux-mmc, linux-doc
Cc: Randy Dunlap, Per Forlin
Documentation about the background and the design of mmc non-blocking.
Host driver guidelines to minimize request preparation overhead.
Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Randy Dunlap <rdunlap@xenotime.net>
---
ChangeLog:
v2: - Minor updates after proofreading comments from Chris
v3: - Minor updates after more comments from Chris
v4: - Minor updates after comments from Randy
v5: - Fixed one more comment and Acked-by from Randy
Documentation/mmc/00-INDEX | 2 +
Documentation/mmc/mmc-async-req.txt | 86 +++++++++++++++++++++++++++++++++++
2 files changed, 88 insertions(+), 0 deletions(-)
create mode 100644 Documentation/mmc/mmc-async-req.txt
diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
index 93dd7a7..a9ba672 100644
--- a/Documentation/mmc/00-INDEX
+++ b/Documentation/mmc/00-INDEX
@@ -4,3 +4,5 @@ mmc-dev-attrs.txt
- info on SD and MMC device attributes
mmc-dev-parts.txt
- info on SD and MMC device partitions
+mmc-async-req.txt
+ - info on mmc asynchronous requests
diff --git a/Documentation/mmc/mmc-async-req.txt b/Documentation/mmc/mmc-async-req.txt
new file mode 100644
index 0000000..b7a52ea
--- /dev/null
+++ b/Documentation/mmc/mmc-async-req.txt
@@ -0,0 +1,86 @@
+Rationale
+=========
+
+How significant is the cache maintenance overhead?
+It depends. Fast eMMC and multiple cache levels with speculative cache
+pre-fetch makes the cache overhead relatively significant. If the DMA
+preparations for the next request are done in parallel with the current
+transfer, the DMA preparation overhead would not affect the MMC performance.
+The intention of non-blocking (asynchronous) MMC requests is to minimize the
+time between when an MMC request ends and another MMC request begins.
+Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg and
+dma_unmap_sg are processing. Using non-blocking MMC requests makes it
+possible to prepare the caches for next job in parallel with an active
+MMC request.
+
+MMC block driver
+================
+
+The issue_rw_rq() in the MMC block driver is made non-blocking.
+The increase in throughput is proportional to the time it takes to
+prepare (major part of preparations are dma_map_sg and dma_unmap_sg)
+a request and how fast the memory is. The faster the MMC/SD is
+the more significant the prepare request time becomes. Roughly the expected
+performance gain is 5% for large writes and 10% on large reads on a L2 cache
+platform. In power save mode, when clocks run on a lower frequency, the DMA
+preparation may cost even more. As long as these slower preparations are run
+in parallel with the transfer performance won't be affected.
+
+Details on measurements from IOZone and mmc_test
+================================================
+
+https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
+
+MMC core API extension
+======================
+
+There is one new public function mmc_start_req().
+It starts a new MMC command request for a host. The function isn't
+truly non-blocking. If there is on ongoing async request it waits
+for completion of that request and starts the new one and returns. It
+doesn't wait for the new request to complete. If there is no ongoing
+request it starts the new request and returns immediately.
+
+MMC host extensions
+===================
+
+There are two optional hooks -- pre_req() and post_req() -- that the host
+driver may implement in order to move work to before and after the actual
+mmc_request function is called. In the DMA case pre_req() may do
+dma_map_sg() and prepare the DMA descriptor, and post_req runs
+the dma_unmap_sg.
+
+Optimize for the first request
+==============================
+
+The first request in a series of requests can't be prepared in parallel with
+the previous transfer, since there is no previous request.
+The argument is_first_req in pre_req() indicates that there is no previous
+request. The host driver may optimize for this scenario to minimize
+the performance loss. A way to optimize for this is to split the current
+request in two chunks, prepare the first chunk and start the request,
+and finally prepare the second chunk and start the transfer.
+
+Pseudocode to handle is_first_req scenario with minimal prepare overhead:
+if (is_first_req && req->size > threshold)
+ /* start MMC transfer for the complete transfer size */
+ mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);
+
+ /*
+ * Begin to prepare DMA while cmd is being processed by MMC.
+ * The first chunk of the request should take the same time
+ * to prepare as the "MMC process command time".
+ * If prepare time exceeds MMC cmd time
+ * the transfer is delayed, guesstimate max 4k as first chunk size.
+ */
+ prepare_1st_chunk_for_dma(req);
+ /* flush pending desc to the DMAC (dmaengine.h) */
+ dma_issue_pending(req->dma_desc);
+
+ prepare_2nd_chunk_for_dma(req);
+ /*
+ * The second issue_pending should be called before MMC runs out
+ * of the first chunk. If the MMC runs out of the first data chunk
+ * before this call, the transfer is delayed.
+ */
+ dma_issue_pending(req->dma_desc);
--
1.7.4.1
^ permalink raw reply related [flat|nested] 5+ messages in thread
* Re: [PATCH v5] mmc: documentation of mmc non-blocking request usage and design.
[not found] ` <1309933806-2346-1-git-send-email-per.forlin-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
@ 2011-07-08 22:48 ` J Freyensee
0 siblings, 0 replies; 5+ messages in thread
From: J Freyensee @ 2011-07-08 22:48 UTC (permalink / raw)
To: Per Forlin
Cc: Nicolas Pitre, Randy Dunlap, linaro-dev-cunTk1MwBs8s++Sfvej+rw,
Arnd Bergmann, linux-doc-u79uwXL29TY76Z2rM5mHXA,
linux-mmc-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA, Kyungmin Park, Sourav Poddar,
linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
[-- Attachment #1.1: Type: text/plain, Size: 7109 bytes --]
On 07/05/2011 11:30 PM, Per Forlin wrote:
> Documentation about the background and the design of mmc non-blocking.
> Host driver guidelines to minimize request preparation overhead.
I'd like to make a couple suggestions on the documentation when
documenting actual function names. In general, really state the name of
the function. See below for issues.
> Signed-off-by: Per Forlin<per.forlin-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
> Acked-by: Randy Dunlap<rdunlap-/UHa2rfvQTnk1uMJSBkQmQ@public.gmane.org>
> ---
> ChangeLog:
> v2: - Minor updates after proofreading comments from Chris
> v3: - Minor updates after more comments from Chris
> v4: - Minor updates after comments from Randy
> v5: - Fixed one more comment and Acked-by from Randy
>
> Documentation/mmc/00-INDEX | 2 +
> Documentation/mmc/mmc-async-req.txt | 86 +++++++++++++++++++++++++++++++++++
> 2 files changed, 88 insertions(+), 0 deletions(-)
> create mode 100644 Documentation/mmc/mmc-async-req.txt
>
> diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
> index 93dd7a7..a9ba672 100644
> --- a/Documentation/mmc/00-INDEX
> +++ b/Documentation/mmc/00-INDEX
> @@ -4,3 +4,5 @@ mmc-dev-attrs.txt
> - info on SD and MMC device attributes
> mmc-dev-parts.txt
> - info on SD and MMC device partitions
> +mmc-async-req.txt
> + - info on mmc asynchronous requests
> diff --git a/Documentation/mmc/mmc-async-req.txt b/Documentation/mmc/mmc-async-req.txt
> new file mode 100644
> index 0000000..b7a52ea
> --- /dev/null
> +++ b/Documentation/mmc/mmc-async-req.txt
> @@ -0,0 +1,86 @@
> +Rationale
> +=========
> +
> +How significant is the cache maintenance overhead?
> +It depends. Fast eMMC and multiple cache levels with speculative cache
> +pre-fetch makes the cache overhead relatively significant. If the DMA
> +preparations for the next request are done in parallel with the current
> +transfer, the DMA preparation overhead would not affect the MMC performance.
> +The intention of non-blocking (asynchronous) MMC requests is to minimize the
> +time between when an MMC request ends and another MMC request begins.
> +Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg and
> +dma_unmap_sg
if dma_unmap_sg/dma_map_sg are complete functions, please add a '()' to it.
> are processing. Using non-blocking MMC requests makes it
> +possible to prepare the caches for next job in parallel with an active
> +MMC request.
> +
> +MMC block driver
> +================
> +
> +The issue_rw_rq() in the MMC block driver is made non-blocking.
Could this be made *_issue_rw_rq() please? When I see 'issue_rw_rq()',
I assume it is referring to an entire function with that name. But I am
really thinking this is for functions ending with '_issue_rw_rq()',
right? Like in mmc_blk_issue_rw_rq()?
Actually, if mmc_blk_issue_rw_rq() is the only function, please just use
this.
> +The increase in throughput is proportional to the time it takes to
> +prepare (major part of preparations are dma_map_sg and dma_unmap_sg)
> +a request and how fast the memory is. The faster the MMC/SD is
> +the more significant the prepare request time becomes. Roughly the expected
> +performance gain is 5% for large writes and 10% on large reads on a L2 cache
> +platform. In power save mode, when clocks run on a lower frequency, the DMA
> +preparation may cost even more. As long as these slower preparations are run
> +in parallel with the transfer performance won't be affected.
> +
> +Details on measurements from IOZone and mmc_test
> +================================================
> +
> +https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
> +
> +MMC core API extension
> +======================
> +
> +There is one new public function mmc_start_req().
Is it really meant mmc_start_req*uest*()? That is what I see in core.c.
Also, is this the actual async API being introduced in this work that is
to be used by client drivers? I don't see it being exported with
EXPORT_SYMBOL()/EXPORT_SYMBOL_GPL() like mmc_request_done() is in the
linux-next tree (and I just recently pulled it because I had to fix my
own driver bug :-/).
> +It starts a new MMC command request for a host. The function isn't
> +truly non-blocking. If there is on ongoing async request it waits
> +for completion of that request and starts the new one and returns. It
> +doesn't wait for the new request to complete. If there is no ongoing
> +request it starts the new request and returns immediately.
> +
> +MMC host extensions
> +===================
> +
> +There are two optional hooks -- pre_req() and post_req() -- that the host
Same here...pre_req()/post_req()...are these functions meant to have
'pre_req()' in the name? Please use *_pre_req(). Otherwise, just state
the exact function name.
> +driver may implement in order to move work to before and after the actual
> +mmc_request function is called.
If there is only a couple of mmc request functions being referred to
here, please just type it out.
> In the DMA case pre_req() may do
> +dma_map_sg() and prepare the DMA descriptor, and post_req runs
> +the dma_unmap_sg.
> +
> +Optimize for the first request
> +==============================
> +
> +The first request in a series of requests can't be prepared in parallel with
> +the previous transfer, since there is no previous request.
> +The argument is_first_req in pre_req() indicates that there is no previous
Minor thing...if 'is_first_req' a function or macro, please add the '()'
to it.
And please use *_pre_req()/type-out-exact-pre_req() function please.
> +request. The host driver may optimize for this scenario to minimize
> +the performance loss. A way to optimize for this is to split the current
> +request in two chunks, prepare the first chunk and start the request,
> +and finally prepare the second chunk and start the transfer.
> +
> +Pseudocode to handle is_first_req scenario with minimal prepare overhead:
Please add a blank line here after the 'Pseduocode' statement. I'm only
suggesting it because there are blank lines in the pseudo-code itself to
help improve readability.
> +if (is_first_req&& req->size> threshold)
> + /* start MMC transfer for the complete transfer size */
> + mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);
> +
> + /*
> + * Begin to prepare DMA while cmd is being processed by MMC.
> + * The first chunk of the request should take the same time
> + * to prepare as the "MMC process command time".
> + * If prepare time exceeds MMC cmd time
> + * the transfer is delayed, guesstimate max 4k as first chunk size.
> + */
> + prepare_1st_chunk_for_dma(req);
> + /* flush pending desc to the DMAC (dmaengine.h) */
> + dma_issue_pending(req->dma_desc);
> +
> + prepare_2nd_chunk_for_dma(req);
> + /*
> + * The second issue_pending should be called before MMC runs out
> + * of the first chunk. If the MMC runs out of the first data chunk
> + * before this call, the transfer is delayed.
> + */
> + dma_issue_pending(req->dma_desc);
[-- Attachment #1.2: Type: text/html, Size: 9159 bytes --]
[-- Attachment #2: Type: text/plain, Size: 175 bytes --]
_______________________________________________
linaro-dev mailing list
linaro-dev-cunTk1MwBs8s++Sfvej+rw@public.gmane.org
http://lists.linaro.org/mailman/listinfo/linaro-dev
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v5] mmc: documentation of mmc non-blocking request usage and design.
2011-07-06 6:30 [PATCH v5] mmc: documentation of mmc non-blocking request usage and design Per Forlin
[not found] ` <1309933806-2346-1-git-send-email-per.forlin-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
@ 2011-07-08 22:54 ` J Freyensee
2011-07-09 19:20 ` Per Forlin
1 sibling, 1 reply; 5+ messages in thread
From: J Freyensee @ 2011-07-08 22:54 UTC (permalink / raw)
To: Per Forlin
Cc: Nicolas Pitre, linux-mmc, Venkatraman S, Linus Walleij,
Kyungmin Park, Arnd Bergmann, Sourav Poddar, Chris Ball,
Randy Dunlap
On 07/05/2011 11:30 PM, Per Forlin wrote:
> Documentation about the background and the design of mmc non-blocking.
> Host driver guidelines to minimize request preparation overhead.
Resending this out on the linux-mmc list since that is what I am
subscribed to (and I had html format on so original got blocked).
I'd like to make a couple suggestions on the documentation when
documenting actual function names. In general, really state the name of
the function. See below for issues.
> Signed-off-by: Per Forlin<per.forlin@linaro.org>
> Acked-by: Randy Dunlap<rdunlap@xenotime.net>
> ---
> ChangeLog:
> v2: - Minor updates after proofreading comments from Chris
> v3: - Minor updates after more comments from Chris
> v4: - Minor updates after comments from Randy
> v5: - Fixed one more comment and Acked-by from Randy
>
> Documentation/mmc/00-INDEX | 2 +
> Documentation/mmc/mmc-async-req.txt | 86 +++++++++++++++++++++++++++++++++++
> 2 files changed, 88 insertions(+), 0 deletions(-)
> create mode 100644 Documentation/mmc/mmc-async-req.txt
>
> diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
> index 93dd7a7..a9ba672 100644
> --- a/Documentation/mmc/00-INDEX
> +++ b/Documentation/mmc/00-INDEX
> @@ -4,3 +4,5 @@ mmc-dev-attrs.txt
> - info on SD and MMC device attributes
> mmc-dev-parts.txt
> - info on SD and MMC device partitions
> +mmc-async-req.txt
> + - info on mmc asynchronous requests
> diff --git a/Documentation/mmc/mmc-async-req.txt b/Documentation/mmc/mmc-async-req.txt
> new file mode 100644
> index 0000000..b7a52ea
> --- /dev/null
> +++ b/Documentation/mmc/mmc-async-req.txt
> @@ -0,0 +1,86 @@
> +Rationale
> +=========
> +
> +How significant is the cache maintenance overhead?
> +It depends. Fast eMMC and multiple cache levels with speculative cache
> +pre-fetch makes the cache overhead relatively significant. If the DMA
> +preparations for the next request are done in parallel with the current
> +transfer, the DMA preparation overhead would not affect the MMC performance.
> +The intention of non-blocking (asynchronous) MMC requests is to minimize the
> +time between when an MMC request ends and another MMC request begins.
> +Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg and
> +dma_unmap_sg
if dma_unmap_sg/dma_map_sg are complete functions, please
> are processing. Using non-blocking MMC requests makes it
> +possible to prepare the caches for next job in parallel with an active
> +MMC request.
> +
> +MMC block driver
> +================
> +
> +The issue_rw_rq() in the MMC block driver is made non-blocking.
Could this be made *_issue_rw_rq() please? When I see 'issue_rw_rq()',
I assume it is referring to an entire function with that name. But I am
really thinking this is for functions ending with '_issue_rw_rq()',
right? Like in mmc_blk_issue_rw_rq()?
Actually, if mmc_blk_issue_rw_rq() is the only function, please just use
this.
> +The increase in throughput is proportional to the time it takes to
> +prepare (major part of preparations are dma_map_sg and dma_unmap_sg)
> +a request and how fast the memory is. The faster the MMC/SD is
> +the more significant the prepare request time becomes. Roughly the expected
> +performance gain is 5% for large writes and 10% on large reads on a L2 cache
> +platform. In power save mode, when clocks run on a lower frequency, the DMA
> +preparation may cost even more. As long as these slower preparations are run
> +in parallel with the transfer performance won't be affected.
> +
> +Details on measurements from IOZone and mmc_test
> +================================================
> +
> +https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
> +
> +MMC core API extension
> +======================
> +
> +There is one new public function mmc_start_req().
Is it really meant mmc_start_req*uest*()? That is what I see in core.c.
Also, is this the actual async API being introduced in this work that is
to be used by client drivers? I don't see it being exported with
EXPORT_SYMBOL()/EXPORT_SYMBOL_GPL() like mmc_request_done() is in the
linux-next tree (and I just recently pulled it because I had to fix my
own driver bug :-/).
> +It starts a new MMC command request for a host. The function isn't
> +truly non-blocking. If there is on ongoing async request it waits
> +for completion of that request and starts the new one and returns. It
> +doesn't wait for the new request to complete. If there is no ongoing
> +request it starts the new request and returns immediately.
> +
> +MMC host extensions
> +===================
> +
> +There are two optional hooks -- pre_req() and post_req() -- that the host
Same here...pre_req()/post_req()...are these functions meant to have
'pre_req()' in the name? Please use *_pre_req(). Otherwise, just state
the exact function name.
> +driver may implement in order to move work to before and after the actual
> +mmc_request function is called.
If there is only a couple of mmc request functions being referred to
here, please just type it out.
> In the DMA case pre_req() may do
> +dma_map_sg() and prepare the DMA descriptor, and post_req runs
> +the dma_unmap_sg.
> +
> +Optimize for the first request
> +==============================
> +
> +The first request in a series of requests can't be prepared in parallel with
> +the previous transfer, since there is no previous request.
> +The argument is_first_req in pre_req() indicates that there is no previous
Minor thing...if 'is_first_req' a function or macro, please add the '()'
to it.
And please use *_pre_req()/type-out-exact-pre_req() function please.
> +request. The host driver may optimize for this scenario to minimize
> +the performance loss. A way to optimize for this is to split the current
> +request in two chunks, prepare the first chunk and start the request,
> +and finally prepare the second chunk and start the transfer.
> +
> +Pseudocode to handle is_first_req scenario with minimal prepare overhead:
Please add a blank line here after the 'Pseduocode' statement. I'm only
suggesting it because there are blank lines in the pseudo-code itself to
help improve readability.
> +if (is_first_req&& req->size> threshold)
> + /* start MMC transfer for the complete transfer size */
> + mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);
> +
> + /*
> + * Begin to prepare DMA while cmd is being processed by MMC.
> + * The first chunk of the request should take the same time
> + * to prepare as the "MMC process command time".
> + * If prepare time exceeds MMC cmd time
> + * the transfer is delayed, guesstimate max 4k as first chunk size.
> + */
> + prepare_1st_chunk_for_dma(req);
> + /* flush pending desc to the DMAC (dmaengine.h) */
> + dma_issue_pending(req->dma_desc);
> +
> + prepare_2nd_chunk_for_dma(req);
> + /*
> + * The second issue_pending should be called before MMC runs out
> + * of the first chunk. If the MMC runs out of the first data chunk
> + * before this call, the transfer is delayed.
> + */
> + dma_issue_pending(req->dma_desc);
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v5] mmc: documentation of mmc non-blocking request usage and design.
2011-07-08 22:54 ` J Freyensee
@ 2011-07-09 19:20 ` Per Forlin
2011-07-09 22:10 ` Chris Ball
0 siblings, 1 reply; 5+ messages in thread
From: Per Forlin @ 2011-07-09 19:20 UTC (permalink / raw)
To: J Freyensee
Cc: Nicolas Pitre, linux-mmc, Venkatraman S, Linus Walleij,
Kyungmin Park, Arnd Bergmann, Sourav Poddar, Chris Ball,
Randy Dunlap
Chris, is the non-blocking patchset planned to be merged for 3.1?
James,
On 9 July 2011 00:54, J Freyensee <james_p_freyensee@linux.intel.com> wrote:
> On 07/05/2011 11:30 PM, Per Forlin wrote:
>>
>> Documentation about the background and the design of mmc non-blocking.
>> Host driver guidelines to minimize request preparation overhead.
>
> Resending this out on the linux-mmc list since that is what I am subscribed
> to (and I had html format on so original got blocked).
>
> I'd like to make a couple suggestions on the documentation when documenting
> actual function names. In general, really state the name of the function.
> See below for issues.
>
point taken.
>> Signed-off-by: Per Forlin<per.forlin@linaro.org>
>> Acked-by: Randy Dunlap<rdunlap@xenotime.net>
>> ---
>> ChangeLog:
>> v2: - Minor updates after proofreading comments from Chris
>> v3: - Minor updates after more comments from Chris
>> v4: - Minor updates after comments from Randy
>> v5: - Fixed one more comment and Acked-by from Randy
>>
>> Documentation/mmc/00-INDEX | 2 +
>> Documentation/mmc/mmc-async-req.txt | 86
>> +++++++++++++++++++++++++++++++++++
>> 2 files changed, 88 insertions(+), 0 deletions(-)
>> create mode 100644 Documentation/mmc/mmc-async-req.txt
>>
>> diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
>> index 93dd7a7..a9ba672 100644
>> --- a/Documentation/mmc/00-INDEX
>> +++ b/Documentation/mmc/00-INDEX
>> @@ -4,3 +4,5 @@ mmc-dev-attrs.txt
>> - info on SD and MMC device attributes
>> mmc-dev-parts.txt
>> - info on SD and MMC device partitions
>> +mmc-async-req.txt
>> + - info on mmc asynchronous requests
>> diff --git a/Documentation/mmc/mmc-async-req.txt
>> b/Documentation/mmc/mmc-async-req.txt
>> new file mode 100644
>> index 0000000..b7a52ea
>> --- /dev/null
>> +++ b/Documentation/mmc/mmc-async-req.txt
>> @@ -0,0 +1,86 @@
>> +Rationale
>> +=========
>> +
>> +How significant is the cache maintenance overhead?
>> +It depends. Fast eMMC and multiple cache levels with speculative cache
>> +pre-fetch makes the cache overhead relatively significant. If the DMA
>> +preparations for the next request are done in parallel with the current
>> +transfer, the DMA preparation overhead would not affect the MMC
>> performance.
>> +The intention of non-blocking (asynchronous) MMC requests is to minimize
>> the
>> +time between when an MMC request ends and another MMC request begins.
>> +Using mmc_wait_for_req(), the MMC controller is idle while dma_map_sg and
>> +dma_unmap_sg
>
> if dma_unmap_sg/dma_map_sg are complete functions, please
I'll make it "dma_map_sg() and dma_unmap_sg()"
>>
>> are processing. Using non-blocking MMC requests makes it
>> +possible to prepare the caches for next job in parallel with an active
>> +MMC request.
>> +
>> +MMC block driver
>> +================
>> +
>> +The issue_rw_rq() in the MMC block driver is made non-blocking.
>
> Could this be made *_issue_rw_rq() please? When I see 'issue_rw_rq()', I
> assume it is referring to an entire function with that name. But I am
> really thinking this is for functions ending with '_issue_rw_rq()', right?
> Like in mmc_blk_issue_rw_rq()?
>
> Actually, if mmc_blk_issue_rw_rq() is the only function, please just use
> this.
I agree.
>>
>> +The increase in throughput is proportional to the time it takes to
>> +prepare (major part of preparations are dma_map_sg and dma_unmap_sg)
>> +a request and how fast the memory is. The faster the MMC/SD is
>> +the more significant the prepare request time becomes. Roughly the
>> expected
>> +performance gain is 5% for large writes and 10% on large reads on a L2
>> cache
>> +platform. In power save mode, when clocks run on a lower frequency, the
>> DMA
>> +preparation may cost even more. As long as these slower preparations are
>> run
>> +in parallel with the transfer performance won't be affected.
>> +
>> +Details on measurements from IOZone and mmc_test
>> +================================================
>> +
>>
>> +https://wiki.linaro.org/WorkingGroups/Kernel/Specs/StoragePerfMMC-async-req
>> +
>> +MMC core API extension
>> +======================
>> +
>> +There is one new public function mmc_start_req().
>
> Is it really meant mmc_start_req*uest*()? That is what I see in core.c.
>
> Also, is this the actual async API being introduced in this work that is to
> be used by client drivers? I don't see it being exported with
> EXPORT_SYMBOL()/EXPORT_SYMBOL_GPL() like mmc_request_done() is in the
> linux-next tree (and I just recently pulled it because I had to fix my own
> driver bug :-/).
The patches are not merged yet, to be merged for 3.1 (I hope). This
patch https://patchwork.kernel.org/patch/936842 adds the mmc core
stuff.
>>
>> +It starts a new MMC command request for a host. The function isn't
>> +truly non-blocking. If there is on ongoing async request it waits
>> +for completion of that request and starts the new one and returns. It
>> +doesn't wait for the new request to complete. If there is no ongoing
>> +request it starts the new request and returns immediately.
>> +
>> +MMC host extensions
>> +===================
>> +
>> +There are two optional hooks -- pre_req() and post_req() -- that the host
>
> Same here...pre_req()/post_req()...are these functions meant to have
> 'pre_req()' in the name? Please use *_pre_req(). Otherwise, just state the
> exact function name.
These are members of the mmc_host_ops, added by the same patch.
I can clarify this by writing
+There are two optional members in the mmc_host_ops -- pre_req() and
post_req() -- that the host
>>
>> +driver may implement in order to move work to before and after the actual
>> +mmc_request function is called.
>
> If there is only a couple of mmc request functions being referred to here,
> please just type it out.
mmc_request is referring to the request member of mmc_host_ops.
I can change it to "mmc_host_ops.request()"
>>
>> In the DMA case pre_req() may do
>> +dma_map_sg() and prepare the DMA descriptor, and post_req runs
>> +the dma_unmap_sg.
>> +
>> +Optimize for the first request
>> +==============================
>> +
>> +The first request in a series of requests can't be prepared in parallel
>> with
>> +the previous transfer, since there is no previous request.
>> +The argument is_first_req in pre_req() indicates that there is no
>> previous
>
> Minor thing...if 'is_first_req' a function or macro, please add the '()' to
> it.
It's the second argument in mmc_host_ops.pre_req()
>
> And please use *_pre_req()/type-out-exact-pre_req() function please.
>>
>> +request. The host driver may optimize for this scenario to minimize
>> +the performance loss. A way to optimize for this is to split the current
>> +request in two chunks, prepare the first chunk and start the request,
>> +and finally prepare the second chunk and start the transfer.
>> +
>> +Pseudocode to handle is_first_req scenario with minimal prepare overhead:
>
> Please add a blank line here after the 'Pseduocode' statement. I'm only
> suggesting it because there are blank lines in the pseudo-code itself to
> help improve readability.
I agree.
>>
>> +if (is_first_req&& req->size> threshold)
>> + /* start MMC transfer for the complete transfer size */
>> + mmc_start_command(MMC_CMD_TRANSFER_FULL_SIZE);
>> +
>> + /*
>> + * Begin to prepare DMA while cmd is being processed by MMC.
>> + * The first chunk of the request should take the same time
>> + * to prepare as the "MMC process command time".
>> + * If prepare time exceeds MMC cmd time
>> + * the transfer is delayed, guesstimate max 4k as first chunk size.
>> + */
>> + prepare_1st_chunk_for_dma(req);
>> + /* flush pending desc to the DMAC (dmaengine.h) */
>> + dma_issue_pending(req->dma_desc);
>> +
>> + prepare_2nd_chunk_for_dma(req);
>> + /*
>> + * The second issue_pending should be called before MMC runs out
>> + * of the first chunk. If the MMC runs out of the first data chunk
>> + * before this call, the transfer is delayed.
>> + */
>> + dma_issue_pending(req->dma_desc);
>
>
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH v5] mmc: documentation of mmc non-blocking request usage and design.
2011-07-09 19:20 ` Per Forlin
@ 2011-07-09 22:10 ` Chris Ball
0 siblings, 0 replies; 5+ messages in thread
From: Chris Ball @ 2011-07-09 22:10 UTC (permalink / raw)
To: Per Forlin
Cc: J Freyensee, Nicolas Pitre, linux-mmc, Venkatraman S,
Linus Walleij, Kyungmin Park, Arnd Bergmann, Sourav Poddar,
Randy Dunlap
Hi,
On Sat, Jul 09 2011, Per Forlin wrote:
> Chris, is the non-blocking patchset planned to be merged for 3.1?
Yes, I've merged it to mmc-next now and plan on pushing it in 3.1.
Thanks,
- Chris.
--
Chris Ball <cjb@laptop.org> <http://printf.net/>
One Laptop Per Child
^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2011-07-09 22:11 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2011-07-06 6:30 [PATCH v5] mmc: documentation of mmc non-blocking request usage and design Per Forlin
[not found] ` <1309933806-2346-1-git-send-email-per.forlin-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
2011-07-08 22:48 ` J Freyensee
2011-07-08 22:54 ` J Freyensee
2011-07-09 19:20 ` Per Forlin
2011-07-09 22:10 ` Chris Ball
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox