From: Lina Iyer <ilina@codeaurora.org>
To: Stephen Boyd <swboyd@chromium.org>
Cc: andy.gross@linaro.org, david.brown@linaro.org,
linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
rnayak@codeaurora.org, bjorn.andersson@linaro.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 09/10] drivers: qcom: rpmh: add support for batch RPMH request
Date: Mon, 26 Mar 2018 09:30:59 -0600 [thread overview]
Message-ID: <20180326153059.GB22084@codeaurora.org> (raw)
In-Reply-To: <152121964417.179821.6748540781312701645@swboyd.mtv.corp.google.com>
On Fri, Mar 16 2018 at 11:00 -0600, Stephen Boyd wrote:
>Quoting Lina Iyer (2018-03-08 14:55:40)
>> On Thu, Mar 08 2018 at 14:59 -0700, Stephen Boyd wrote:
>> >Quoting Lina Iyer (2018-03-02 08:43:16)
>> >> @@ -343,6 +346,146 @@ int rpmh_write(struct rpmh_client *rc, enum rpmh_state state,
>> >> }
>> >> EXPORT_SYMBOL(rpmh_write);
>> >>
>> >> +static int cache_batch(struct rpmh_client *rc,
>> >> + struct rpmh_request **rpm_msg, int count)
>> >> +{
>> >> + struct rpmh_ctrlr *rpm = rc->ctrlr;
>> >> + unsigned long flags;
>> >> + int ret = 0;
>> >> + int index = 0;
>> >> + int i;
>> >> +
>> >> + spin_lock_irqsave(&rpm->lock, flags);
>> >> + while (rpm->batch_cache[index])
>> >
>> >If batch_cache is full.
>> >And if adjacent memory has bits set....
>> >
>> >This loop can go forever?
>> >
>> >Please add bounds.
>> >
>> How so? The if() below will ensure that it will not exceed bounds.
>
>Right, the if below will make sure we don't run off the end, but unless
>rpm->batch_cache has a sentinel at the end we can't guarantee we won't
>run off the end of the array and into some other portion of memory that
>also has a bit set in a word. And then we may read into some unallocated
>space. Or maybe I missed something.
>
rpmh_write_batch() checks that the number of requests does not exceed the
maximum, so a single call cannot overrun the cache. A write_batch also
follows an invalidate, which ensures that the batch_cache does not
overflow. I can add a simple bounds check, though it is unnecessary with
the intended use of this API.
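Something along these lines would bound the scan (untested sketch, reusing
the names from the code quoted above; the only change is the limit on the
while loop):

	spin_lock_irqsave(&rpm->lock, flags);
	/* Never walk past the end of the array, even if it is full */
	while (index < 2 * RPMH_MAX_REQ_IN_BATCH && rpm->batch_cache[index])
		index++;
	if (index + count >= 2 * RPMH_MAX_REQ_IN_BATCH) {
		ret = -ENOMEM;
		goto fail;
	}

	for (i = 0; i < count; i++)
		rpm->batch_cache[index + i] = rpm_msg[i];
fail:
	spin_unlock_irqrestore(&rpm->lock, flags);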
>>
>> >> + index++;
>> >> + if (index + count >= 2 * RPMH_MAX_REQ_IN_BATCH) {
>> >> + ret = -ENOMEM;
>> >> + goto fail;
>> >> + }
>> >> +
>> >> + for (i = 0; i < count; i++)
>> >> + rpm->batch_cache[index + i] = rpm_msg[i];
>> >> +fail:
>> >> + spin_unlock_irqrestore(&rpm->lock, flags);
>> >> +
>> >> + return ret;
>> >> +}
>> >> +
>>
>> >> + * @state: Active/sleep set
>> >> + * @cmd: The payload data
>> >> + * @n: The array of count of elements in each batch, 0 terminated.
>> >> + *
>> >> + * Write a request to the mailbox controller without caching. If the request
>> >> + * state is ACTIVE, then the requests are treated as completion requests
>> >> + * and sent to the controller immediately. The function waits until all the
>> >> + * commands are complete. If the request was to SLEEP or WAKE_ONLY, then the
>> >> + * request is sent as fire-n-forget and no ack is expected.
>> >> + *
>> >> + * May sleep. Do not call from atomic contexts for ACTIVE_ONLY requests.
>> >> + */
>> >> +int rpmh_write_batch(struct rpmh_client *rc, enum rpmh_state state,
>> >> + struct tcs_cmd *cmd, int *n)
>> >
>> >I'm lost why n is a pointer, and cmd is not a double pointer if n stays
>> >as a pointer. Are there clients calling this API with a contiguous chunk
>> >of commands but then they want to break that chunk up into many
>> >requests?
>> >
>> That is correct. Clients want to provide one big buffer that this API
>> breaks up into the requests specified by *n.
>
>Is that for bus scaling?
>
Yes.
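For anyone following along, a caller would look roughly like this
(hypothetical client code, only to illustrate how cmd and n relate; the
state value assumes the enum names used elsewhere in this series):

	struct tcs_cmd cmd[5];	/* one contiguous buffer of 5 commands */
	int n[] = { 3, 2, 0 };	/* split into a request of 3, then 2; 0-terminated */
	int ret;

	/* fill in cmd[0..4].addr / .data for the two requests here */
	ret = rpmh_write_batch(rc, RPMH_ACTIVE_ONLY_STATE, cmd, n);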
>> >> + /* For those unsent requests, spoof tx_done */
>> >
>> >Why? Comments shouldn't say what the code is doing, but explain why
>> >things don't make sense.
>> >
>> Will remove..
>>
>
>Oh, I was hoping for more details, not less.
Hmm.. I thought it was fairly obvious that we spoof tx_done so we can
complete the batch when wait_count reaches 0. I will add that to the
comments.
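Roughly, this is what I mean (pseudo-sketch; the field names here are
illustrative rather than the exact ones in the patch):

	/* Cached sleep/wake entries never reach the controller, so nothing
	 * would otherwise call tx_done for them.  Take the same path by hand
	 * so the shared wait_count still drops to zero and the caller's wait
	 * returns.
	 */
	for (i = 0; i < count; i++)
		if (atomic_dec_and_test(rpm_msg[i]->wait_count))
			complete(rpm_msg[i]->completion);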
Thanks,
Lina