From: Matthias Kaehlcke <mka@chromium.org>
To: Raju P L S S S N <rplsssn@codeaurora.org>
Cc: andy.gross@linaro.org, david.brown@linaro.org,
linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
rnayak@codeaurora.org, bjorn.andersson@linaro.org,
linux-kernel@vger.kernel.org, sboyd@kernel.org,
evgreen@chromium.org, dianders@chromium.org,
ilina@codeaurora.org
Subject: Re: [PATCH v2 6/6] drivers: qcom: rpmh: write PDC data
Date: Wed, 12 Sep 2018 15:55:52 -0700
Message-ID: <20180912225552.GM22824@google.com>
In-Reply-To: <1532685889-31345-7-git-send-email-rplsssn@codeaurora.org>

On Fri, Jul 27, 2018 at 03:34:49PM +0530, Raju P L S S S N wrote:
> From: Lina Iyer <ilina@codeaurora.org>
>
> In addition to requests that are sent to the remote processor, the
> controller may allow certain data to be written to the controller for
> use in specific cases, such as the wakeup value when entering idle
> states. Allow a pass-through to write PDC data.
>
> Signed-off-by: Lina Iyer <ilina@codeaurora.org>
> Signed-off-by: Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
> ---
> drivers/soc/qcom/rpmh.c | 28 ++++++++++++++++++++++++++++
> include/soc/qcom/rpmh.h | 6 ++++++
> 2 files changed, 34 insertions(+)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index 0d276fd..f81488b 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -472,6 +472,34 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
> }
> EXPORT_SYMBOL(rpmh_write_batch);
>
> +/**
> + * rpmh_write_pdc_data: Write PDC data to the controller
> + *
> + * @dev: the device making the request
> + * @cmd: The payload data
> + * @n: The number of elements in payload
> + *
> + * Write PDC data to the controller. The messages are always sent async.
> + *
> + * May be called from atomic contexts.
> + */
> +int rpmh_write_pdc_data(const struct device *dev,
> + const struct tcs_cmd *cmd, u32 n)
> +{
> + DEFINE_RPMH_MSG_ONSTACK(dev, 0, NULL, rpm_msg);
At first I was concerned about the message being created on the stack
and sent asynchronously; however, there is no asynchronous access to
the on-stack memory after rpmh_rsc_write_pdc_data() returns.
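
To spell out the concern with a minimal sketch (queue_pdc_message() is
a made-up name, not a real rpmh-rsc function), the unsafe variant would
be one where the controller still touches the message after the caller
has returned:

  int unsafe_pdc_write(const struct device *dev)
  {
          DEFINE_RPMH_MSG_ONSTACK(dev, 0, NULL, rpm_msg);

          /*
           * Hypothetical: only queues the message; an IRQ handler
           * would dereference &rpm_msg.msg later, after this stack
           * frame is gone -> use-after-return.
           */
          queue_pdc_message(&rpm_msg.msg);

          return 0;
  }

Since rpmh_rsc_write_pdc_data() writes the TCS registers before it
returns, rpm_msg is only accessed while the frame is still live, so
the on-stack message is fine here.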
> + struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
> +
> + if (!n || n > MAX_RPMH_PAYLOAD)
> + return -EINVAL;
> +
> + memcpy(rpm_msg.cmd, cmd, n * sizeof(*cmd));
> + rpm_msg.msg.num_cmds = n;
> + rpm_msg.msg.wait_for_compl = false;
> +
> + return rpmh_rsc_write_pdc_data(ctrlr_to_drv(ctrlr), &rpm_msg.msg);
> +}
> +EXPORT_SYMBOL(rpmh_write_pdc_data);
> +
> static int is_req_valid(struct cache_req *req)
> {
> return (req->sleep_val != UINT_MAX &&
> diff --git a/include/soc/qcom/rpmh.h b/include/soc/qcom/rpmh.h
> index 018788d..d5e736e 100644
> --- a/include/soc/qcom/rpmh.h
> +++ b/include/soc/qcom/rpmh.h
> @@ -28,6 +28,9 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>
> int rpmh_mode_solver_set(const struct device *dev, bool enable);
>
> +int rpmh_write_pdc_data(const struct device *dev,
> + const struct tcs_cmd *cmd, u32 n);
> +
> #else
>
> static inline int rpmh_write(const struct device *dev, enum rpmh_state state,
> @@ -56,6 +59,9 @@ static inline int rpmh_ctrlr_idle(const struct device *dev)
> static inline int rpmh_mode_solver_set(const struct device *dev, bool enable)
> { return -ENODEV; }
>
> +static inline int rpmh_write_pdc_data(const struct device *dev,
> + const struct tcs_cmd *cmd, u32 n)
> +{ return -ENODEV; }
> #endif /* CONFIG_QCOM_RPMH */
>
> #endif /* __SOC_QCOM_RPMH_H__ */
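
For context, a hypothetical caller writing a wakeup value could look
like this (sketch only; PDC_WAKEUP_REG and wakeup_val are made up for
illustration):

  struct tcs_cmd cmd[] = {
          { .addr = PDC_WAKEUP_REG, .data = wakeup_val },
  };
  int ret;

  ret = rpmh_write_pdc_data(dev, cmd, ARRAY_SIZE(cmd));
  if (ret)
          dev_err(dev, "failed to write PDC data: %d\n", ret);
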
Reviewed-by: Matthias Kaehlcke <mka@chromium.org>