From: "Per Förlin" <per.forlin@stericsson.com>
To: Konstantin Dorfman <kdorfman@codeaurora.org>
Cc: Per Forlin <per.lkml@gmail.com>,
Venkatraman S <svenkatr@gmail.com>,
"cjb@laptop.org" <cjb@laptop.org>,
"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>
Subject: Re: [PATCH v1] mmc: fix async request mechanism for sequential read scenarios
Date: Mon, 19 Nov 2012 15:32:50 +0100 [thread overview]
Message-ID: <50AA4312.5000703@stericsson.com> (raw)
In-Reply-To: <50AA005B.8050903@codeaurora.org>
On 11/19/2012 10:48 AM, Konstantin Dorfman wrote:
> Hello Per,
>
> Thank you for giving me an example. Below I'm trying to explain some
> logic issues with it and to ask you some questions about your vision of
> the patch.
>
> On 11/15/2012 06:38 PM, Per Förlin wrote:
>> On 11/14/2012 04:15 PM, Konstantin Dorfman wrote:
>>> Hello Per,
>>>
>>> On 11/13/2012 11:10 PM, Per Forlin wrote:
>>>> On Tue, Oct 30, 2012 at 1:19 PM, Konstantin Dorfman
>>>> <kdorfman@codeaurora.org> wrote:
>>>>> Hello,
>>>>>
>>>>> On 10/29/2012 11:40 PM, Per Forlin wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I would like to move the focus of my concerns from root cause analysis
>>>>>> to the actual patch.
>>>>>> My first reflection is that the patch is relatively complex and some
>>>>>> of the code looks duplicated.
>>>>>> Would it be possible to simplify it and re-use the current execution flow?
>>>>>>
>>>>>> Is the following flow feasible?
>>>>>>
>>>>>> in mmc_start_req():
>>>>>> --------------
>>>>>> if (rqc == NULL && not_resend)
>>>>>> 	wait_for_both_mmc_and_arrival_of_new_req
>>>>>> else
>>>>>> 	wait_only_for_mmc
>>>>>>
>>>>>> if (arrival_of_new_req) {
>>>>>> 	set flag to indicate fetch-new_req
>>>>>> 	return NULL;
>>>>>> }
>>>>>> -----------------
>>>>>>
>>>>>> in queue.c:
>>>>>> if (fetch-new_req)
>>>>>> 	don't overwrite previous req.
>>>>>>
>>>>>> BR
>>>>>> Per
>>>>>
>>>>> You are talking about using both waiting mechanisms, the old one (wait on
>>>>> completion) and the new one - wait_event_interruptible()? But how will the
>>>>> done() callback, called in host controller irq context, differentiate
>>>>> which one is relevant for the request?
>>>>>
>>>>> I think it is better to use a single wait point for the mmc thread.
>>>>
>>>> I have sketched a patch to better explain my point. It's not tested; it
>>>> barely compiles :)
>>>> The patch tries to introduce your feature and still keep the same code
>>>> path. And it exposes an API that could be used by SDIO as well.
>>>> The intention of my sketch patch is only to explain what I tried to
>>>> visualize in the pseudo code previously in this thread.
>>>> The outcome of your final patch should be documented here, I think:
>>>> Documentation/mmc/mmc-async-req.txt
>>> This document is ready; I'm attaching it to this mail and it will be
>>> included in the next version of the patch (or a RESEND).
>>>>
>>>> Here follows my patch sketch:
>>>> ........................................................
>>>> diff --git a/drivers/mmc/card/queue.c b/drivers/mmc/card/queue.c
>>>> index e360a97..08036a1 100644
>>>> --- a/drivers/mmc/card/queue.c
>>>> +++ b/drivers/mmc/card/queue.c
>>>> @@ -66,6 +66,8 @@ static int mmc_queue_thread(void *d)
>>>>  		spin_unlock_irq(q->queue_lock);
>>>>
>>>>  		if (req || mq->mqrq_prev->req) {
>>>> +			if (!req)
>>>> +				mmc_prefetch_start(&mq->mqrq_prev->mmc_active, true);
>>>>  			set_current_state(TASK_RUNNING);
>>>>  			mq->issue_fn(mq, req);
>>>>  		} else {
>>>> @@ -79,11 +81,13 @@ static int mmc_queue_thread(void *d)
>>>>  		}
>>>>
>>>>  		/* Current request becomes previous request and vice versa. */
>>>> -		mq->mqrq_prev->brq.mrq.data = NULL;
>>>> -		mq->mqrq_prev->req = NULL;
>>>> -		tmp = mq->mqrq_prev;
>>>> -		mq->mqrq_prev = mq->mqrq_cur;
>>>> -		mq->mqrq_cur = tmp;
>>>> +		if (!mmc_prefetch_pending(&mq->mqrq_prev->mmc_active)) {
>>>> +			mq->mqrq_prev->brq.mrq.data = NULL;
>>>> +			mq->mqrq_prev->req = NULL;
>>>> +			tmp = mq->mqrq_prev;
>>>> +			mq->mqrq_prev = mq->mqrq_cur;
>>>> +			mq->mqrq_cur = tmp;
>>>> +		}
>>>>  	} while (1);
>>>>  	up(&mq->thread_sem);
>>>>
>>>> @@ -109,10 +113,44 @@ static void mmc_request_fn(struct request_queue *q)
>>>>  		return;
>>>>  	}
>>>>
>>>> +	if (mq->prefetch_enable) {
>>>> +		spin_lock(&mq->prefetch_lock);
>>>> +		if (mq->prefetch_completion)
>>>> +			complete(mq->prefetch_completion);
> Since mq->prefetch_completion is initialized only in core.c, after
> mmc_start_req(NULL), we can miss any new request notification that
> arrives before that point.
It's a mistake in the patch.
I meant to check mq->prefetch_pending to know whether we need to wait in the first place.
That's why mq->prefetch_pending is assigned even if mq->prefetch_completion is not set yet.
The patch needs to be updated to take this into account. One could let mmc_prefetch_init() return a value that indicates whether a NEW_REQ is already pending; if so, skip the wait.
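Roughly like this (untested sketch on top of my example patch above; mmc_req_init()/mmc_prefetch_init() are changed to return bool here, and the inline wrapper in host.h would need the same change):

static bool mmc_req_init(struct mmc_async_req *areq, struct completion *compl)
{
	struct mmc_queue *mq =
		container_of(areq->prefetch, struct mmc_queue, prefetch);
	bool pending;

	spin_lock(&mq->prefetch_lock);
	mq->prefetch_completion = compl;
	/* report whether a new request arrived before we started to wait */
	pending = mq->prefetch_pending;
	spin_unlock(&mq->prefetch_lock);

	return pending;
}

and in mmc_start_req():

	if (host->areq) {
		bool new_req = false;

		if (!areq)
			/* true means a NEW_REQ is already pending: skip the wait */
			new_req = mmc_prefetch_init(host->areq,
						    &host->areq->mrq->completion);
		if (!new_req)
			mmc_wait_for_req_done(host, host->areq->mrq);
		if (!areq) {
			mmc_prefetch_start(host->areq, false);
			if (new_req || mmc_prefetch_pending(host->areq))
				return NULL;
		}
		err = host->areq->err_check(host->card, host->areq);
	}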
>>>> +		mq->prefetch_pending = true;
>>>> +		spin_unlock(&mq->prefetch_lock);
>>>> +	}
>>>> +
>>>>  	if (!mq->mqrq_cur->req && !mq->mqrq_prev->req)
>>>>  		wake_up_process(mq->thread);
>>>>  }
>>>>
>>>> +static void mmc_req_init(struct mmc_async_req *areq, struct completion *compl)
>>>> +{
>>>> +	struct mmc_queue *mq =
>>>> +		container_of(areq->prefetch, struct mmc_queue, prefetch);
>>>> +
>>>> +	spin_lock(&mq->prefetch_lock);
>>>> +	mq->prefetch_completion = compl;
>>>> +	if (mq->prefetch_pending)
>>>> +		complete(mq->prefetch_completion);
>>>> +	spin_unlock(&mq->prefetch_lock);
>>>> +}
>>>> +
>>>> +static void mmc_req_start(struct mmc_async_req *areq, bool enable)
>>>> +{
>>>> +	struct mmc_queue *mq =
>>>> +		container_of(areq->prefetch, struct mmc_queue, prefetch);
>>>> +	mq->prefetch_enable = enable;
>>>> +}
>>>> +
>>>> +static bool mmc_req_pending(struct mmc_async_req *areq)
>>>> +{
>>>> +	struct mmc_queue *mq =
>>>> +		container_of(areq->prefetch, struct mmc_queue, prefetch);
>>>> +	return mq->prefetch_pending;
>>>> +}
>>>> +
>>>>  static struct scatterlist *mmc_alloc_sg(int sg_len, int *err)
>>>>  {
>>>>  	struct scatterlist *sg;
>>>> @@ -166,6 +204,12 @@ int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
>>>>  	int ret;
>>>>  	struct mmc_queue_req *mqrq_cur = &mq->mqrq[0];
>>>>  	struct mmc_queue_req *mqrq_prev = &mq->mqrq[1];
>>>> +	spin_lock_init(&mq->prefetch_lock);
>>>> +	mq->prefetch.wait_req_init = mmc_req_init;
>>>> +	mq->prefetch.wait_req_start = mmc_req_start;
>>>> +	mq->prefetch.wait_req_pending = mmc_req_pending;
>>>> +	mqrq_cur->mmc_active.prefetch = &mq->prefetch;
>>>> +	mqrq_prev->mmc_active.prefetch = &mq->prefetch;
>>>>
>>>>  	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
>>>>  		limit = *mmc_dev(host)->dma_mask;
>>>> diff --git a/drivers/mmc/card/queue.h b/drivers/mmc/card/queue.h
>>>> index d2a1eb4..5afd467 100644
>>>> --- a/drivers/mmc/card/queue.h
>>>> +++ b/drivers/mmc/card/queue.h
>>>> @@ -33,6 +33,12 @@ struct mmc_queue {
>>>>  	struct mmc_queue_req	mqrq[2];
>>>>  	struct mmc_queue_req	*mqrq_cur;
>>>>  	struct mmc_queue_req	*mqrq_prev;
>>>> +
>>>> +	struct mmc_async_prefetch prefetch;
>>>> +	spinlock_t		prefetch_lock;
>>>> +	struct completion	*prefetch_completion;
>>>> +	bool			prefetch_enable;
>>>> +	bool			prefetch_pending;
>>>> };
>>>>
>>>> extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *,
>>>> diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
>>>> index 9503cab..06fc036 100644
>>>> --- a/drivers/mmc/core/core.c
>>>> +++ b/drivers/mmc/core/core.c
>>>> @@ -352,7 +352,15 @@ struct mmc_async_req *mmc_start_req(struct mmc_host *host,
>>>>  		mmc_pre_req(host, areq->mrq, !host->areq);
>>>>
>>>>  	if (host->areq) {
>>>> +		if (!areq)
>>>> +			mmc_prefetch_init(host->areq,
>>>> +					  &host->areq->mrq->completion);
>>>>  		mmc_wait_for_req_done(host, host->areq->mrq);
>>>> +		if (!areq) {
>>>> +			mmc_prefetch_start(host->areq, false);
>>>> +			if (mmc_prefetch_pending(host->areq))
>>>> +				return NULL;
> In this case, the mmc thread may be unblocked because done() arrived for
> the current request and not because of a new request notification. In such
> a case we would like the completed request to be handled before fetching
> the new request.
I agree. If we wake up and there is no pending new request, execution should proceed normally by handling the completed request.
> In my code the is_done_rcv flag is used along with the is_new_req flag in
> order to differentiate the reason the mmc thread was woken up.
In my patch the return value of mmc_prefetch_pending() specifies whether there is a pending new request, and thus what should happen next.
If the current request completes and mmc_prefetch_pending() becomes true at the same time, I see there is a problem with my patch: there needs to be a flag to indicate whether the current request is done.
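As a rough sketch of what such a flag could look like (untested; "done_rcv" and the way it gets set from the done() callback are placeholders, not existing fields):

	/* hypothetical per-host flag, set to true from the request's done() path */
	host->done_rcv = false;

	mmc_wait_for_req_done(host, host->areq->mrq);

	if (!areq) {
		mmc_prefetch_start(host->areq, false);
		/*
		 * If both the current request completed and a new request
		 * arrived, handle the completed request first and pick up
		 * the new request on the next call.
		 */
		if (mmc_prefetch_pending(host->areq) && !host->done_rcv)
			return NULL;
	}
	err = host->areq->err_check(host->card, host->areq);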
>
>>>> +		}
>>>>  		err = host->areq->err_check(host->card, host->areq);
>>>>  	}
>>>>
>>>> diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
>>>> index 65c64ee..ce5d03f 100644
>>>> --- a/include/linux/mmc/host.h
>>>> +++ b/include/linux/mmc/host.h
>>>> @@ -15,6 +15,7 @@
>>>> #include <linux/sched.h>
>>>> #include <linux/device.h>
>>>> #include <linux/fault-inject.h>
>>>> +#include <linux/completion.h>
>>>>
>>>> #include <linux/mmc/core.h>
>>>> #include <linux/mmc/pm.h>
>>>> @@ -140,6 +141,13 @@ struct mmc_host_ops {
>>>>
>>>> struct mmc_card;
>>>> struct device;
>>>> +struct mmc_async_req;
>>>> +
>>>> +struct mmc_async_prefetch {
>>>> +	void (*wait_req_init)(struct mmc_async_req *, struct completion *);
>>>> +	void (*wait_req_start)(struct mmc_async_req *, bool);
>>>> +	bool (*wait_req_pending)(struct mmc_async_req *);
>>>> +};
>>>>
>>>> struct mmc_async_req {
>>>>  	/* active mmc request */
>>>> @@ -149,8 +157,33 @@ struct mmc_async_req {
>>>>  	 * Returns 0 if success otherwise non zero.
>>>>  	 */
>>>>  	int (*err_check) (struct mmc_card *, struct mmc_async_req *);
>>>> +	struct mmc_async_prefetch *prefetch;
>>>> };
>>>>
>>>> +/* set completion variable, complete == NULL to reset completion */
>>>> +static inline void mmc_prefetch_init(struct mmc_async_req *areq,
>>>> +				     struct completion *complete)
>>>> +{
>>>> +	if (areq->prefetch && areq->prefetch->wait_req_init)
>>>> +		areq->prefetch->wait_req_init(areq, complete);
>>>> +}
>>>> +
>>>> +/* enable or disable prefetch feature */
>>>> +static inline void mmc_prefetch_start(struct mmc_async_req *areq, bool enable)
>>>> +{
>>>> +	if (areq->prefetch && areq->prefetch->wait_req_start)
>>>> +		areq->prefetch->wait_req_start(areq, enable);
>>>> +}
>>>> +
>>>> +/* return true if new request is pending otherwise false */
>>>> +static inline bool mmc_prefetch_pending(struct mmc_async_req *areq)
>>>> +{
>>>> +	if (areq->prefetch && areq->prefetch->wait_req_pending)
>>>> +		return areq->prefetch->wait_req_pending(areq);
>>>> +	else
>>>> +		return false;
>>>> +}
>>>> +
>>>> /**
>>>> * struct mmc_slot - MMC slot functions
>>>> *
>>>> ....................................................................
>>>>
>>>>
>>>> BR
>>>> Per
>>> I understand your motivation and idea for restructuring the patch. It is
>>> still not clear to me how exactly the mmc thread will be awakened on a new
>>> request notification in your version, but I have another problem:
>>>
>> mmc_request_fn() is called and it calls complete(mq->prefetch_completion) which wakes up the current thread.
>> My patch is just an example. The intention is to make the patch cleaner. But I may have missed some of the HPI aspects.
>
> Is it the lack of the function wrappers that you are using in your example?
It's fine to skip the function wrappers if they make no sense.
What I want to avoid is creating new functions for data_req_start and data_req_wait.
I would rather add this to the existing functions in core.c and keep the changes in block.c and in the flow minimal.
> As I understand your example, you mean to implement the generic logic at the
> core/core.c level by using wrapper functions and leave the final
> implementation for MMC to card/queue.c and for the SDIO layer to
> card/..sdio.. (I'm not too familiar with the sdio protocol implementation).
> Well, it makes a lot of sense.
>
I still have "plans" to add pre/post/async support in the SDIO framework too but I have not got to it.
I would be nice to add your NEW_REQ feature along with the other mmc-async features, to make it usable from SDIO.
> But the devil is in the details - there is a lot of work in
> mmc_wait_for_data_req_done(), the done() callback, and also the error
> handling changes for card/block.c.
Things in core.c could be re-used in the SDIO case. In block.c there is only FS-specific implementation that is not relevant for SDIO.
>
> Do you think that the wait_event() API does not suit the same semantics as
> the completion API?
The waiting mechanism may be wait_event or completion.
I'm not in favor of using both; it would be better to replace the completion with wait_event if you prefer.
My main concern is to avoid adding new API to core.c in order to add the NEW_REQ feature. My patch is an example of the changes that can be done in core.c without adding new functions.
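For reference, a wait_event-based variant of that wait could look roughly like this (hypothetical sketch, ignoring locking and error handling; the struct name and its placement in struct mmc_host are made up here, the flag names are borrowed from your description):

struct mmc_context_info {
	bool			is_done_rcv;	/* set from the done() path  */
	bool			is_new_req;	/* set from mmc_request_fn() */
	wait_queue_head_t	wait;
};

/* returns true if the wake-up reason is a pending new request */
static bool mmc_wait_for_data_req_done(struct mmc_host *host)
{
	struct mmc_context_info *ctx = &host->context_info;

	wait_event(ctx->wait, ctx->is_done_rcv || ctx->is_new_req);

	if (ctx->is_done_rcv) {
		/* if both events fired, handle the completed request first */
		ctx->is_done_rcv = false;
		return false;
	}
	ctx->is_new_req = false;
	return true;
}

The producers would then set the corresponding flag and call wake_up(&ctx->wait) from the done() callback and from mmc_request_fn(), instead of poking a completion.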
>
> We would like to have a generic capability to handle additional events,
> such as HPI/stop flow, in addition to the NEW_REQUEST notification.
> Therefore, an event mechanism seems to be a better design than completion.
>
> I've looked at the SDIO code and from what I can understand, right now SDIO
> is not using the async request mechanism and works at the 'wait_for_cmd()'
> level. This means that such work as exposing MMC core APIs is a major
> change and definitely should be a separate design and implementation
> effort, while my current patch fixes the mmc thread behavior
> and gives better performance for the upper layers.
You are right. There is no support in the SDIO implementation for the pre/post/async feature. Still, I had it in mind when designing "struct mmc_async_req". It's possible to re-use the mmc core parts in order to utilize this support in the SDIO case. I'm not saying we should design for SDIO, but we should at least keep the API in a way that makes it potentially usable from SDIO too. The API of core.c (start/wait of requests) should make sense without the existence of the MMC block driver (block/queue).
One way to test the design is to add a test in mmc_test.c for the pending NEW_REQ feature; mmc_test.c does not use block.c or queue.c.
If such a test case is simple to write in mmc_test.c, it means that the API is at a good level.
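For example, something along these lines in mmc_test.c (pure outline, the test name and plumbing are made up):

/*
 * Outline of a possible mmc_test case: exercise the NEW_REQ path using
 * only the core API, with no block.c or queue.c involved.
 */
static int mmc_test_new_req_notification(struct mmc_test_card *test)
{
	/*
	 * 1. Prepare and start an async request with
	 *    mmc_start_req(test->card->host, &areq, NULL).
	 * 2. From a timer or workqueue, signal "new request pending"
	 *    through the NEW_REQ hook (mmc_prefetch_* in the sketch
	 *    above, or whatever the final API becomes).
	 * 3. Call mmc_start_req(test->card->host, NULL, NULL) and verify
	 *    that it returns NULL promptly instead of blocking until the
	 *    first request completes.
	 */
	return RESULT_OK;
}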
BR
Per
>
>
>>
>> If the API is not implemented the mmc core shall simply ignore it.
>>
>>> We want to expand this event-based mechanism (unblocking the mmc thread
>>> from waiting for the running request) by adding another reason/event: the
>>> URGENT_REQUEST event. When the block layer has an urgent request to execute,
>>> it notifies the mmc layer (similar to the mmc_request() notification) and the
>>> mmc layer handles this urgent request notification by correctly stopping the
>>> running request, re-inserting all not-yet-performed parts back into the I/O
>>> scheduler, and fetching & starting the urgent request.
>>> The reasoning and general idea are documented together with the "New event"
>>> and will be submitted soon. The patch itself will be submitted in a week
>>> or so.
>> I have not considered the use of HPI in my proposal. If the urgent request is first in line in the mmc queue, it will be fetched as the NEW_REQ.
>> Then the current request must be interrupted and returned to the mmc queue or the io-scheduler.
>>
>> I don't see a direct contradiction between the two designs.
>>
>> The main point is to make the NEW_REQ API simple and clear.
>> My example is just an example.
>>
>>>
>>> As for our discussion - to handle both events the mmc layer should be
>>> capable of being awakened not only in the case of no mmc_prefetch_pending()
>>> (in your patch's terms), but also when we have the perfect case of async
>>> requests: one running on the bus and a second one prepared and waiting for
>>> the first.
>>>
>>> I think that in case the SDIO protocol driver needs such functionality as
>>> NEW_REQUEST, one needs to implement a notification to the mmc layer similar
>>> to the mmc_request() code in my patch.
>> SDIO needs to implement the new NEW_REQ API, but the API needs to be clean.
>>
>> BR
>> Per
>>
> Best regards,
> --
> Konstantin Dorfman,
> QUALCOMM ISRAEL, on behalf of Qualcomm Innovation Center,
> Inc. is a member of Code Aurora Forum,
> hosted by The Linux Foundation
>