From: Peng Fan <peng.fan@oss.nxp.com>
To: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Bjorn Andersson <andersson@kernel.org>,
Rob Herring <robh@kernel.org>,
Krzysztof Kozlowski <krzk+dt@kernel.org>,
Conor Dooley <conor+dt@kernel.org>,
Shawn Guo <shawnguo@kernel.org>,
Sascha Hauer <s.hauer@pengutronix.de>,
Pengutronix Kernel Team <kernel@pengutronix.de>,
Fabio Estevam <festevam@gmail.com>, Frank Li <Frank.Li@nxp.com>,
Daniel Baluta <daniel.baluta@nxp.com>,
linux-remoteproc@vger.kernel.org, devicetree@vger.kernel.org,
imx@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, Peng Fan <peng.fan@nxp.com>
Subject: Re: [PATCH v4 4/5] remoteproc: imx_rproc: Add support for System Manager API
Date: Thu, 18 Dec 2025 13:22:30 +0800
Message-ID: <aUOPloYtbZHCV6iw@shlinux89>
In-Reply-To: <aUM6aGbSH9ICOL5f@p14s>

Hi Mathieu,
On Wed, Dec 17, 2025 at 04:19:04PM -0700, Mathieu Poirier wrote:
>On Tue, Dec 16, 2025 at 09:51:02AM +0800, Peng Fan (OSS) wrote:
>
>Why are these not at the bottom of the file with the rest of the
>imx_rproc_plat_ops? That way you wouldn't have to declare
>imx_rproc_sm_detect_mode() at the top of the file...
>
>> +
>> +static int imx_rproc_sm_lmm_check(struct rproc *rproc, bool started)
[...]
>> +	if (ret) {
>> +		priv->flags &= ~IMX_RPROC_FLAGS_SM_LMM_AVAIL;
>> +		dev_err(dev, "shutdown lmm(%d) failed: %d\n", dcfg->lmid, ret);
>> +		return ret;
>> +	}
>
>If you move this call to scmi_imx_lmm_operation() (and the error check) before
>setting priv->flags, you won't have to reset it in the error path.
>
>Let's see how the next revision goes...
Thanks for the review.

V5 has been sent out just now with both comments addressed:
- Moved the SM LMM functions to the bottom of the file with the
  rest of the imx_rproc_plat_ops
- Moved the scmi_imx_lmm_operation() call (and its error check)
  before setting priv->flags, so the error path no longer needs to
  clear the flag; see the sketch below
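
For reference, the reordered shape looks roughly like this (a
minimal sketch, not the exact v5 code: the arguments to
scmi_imx_lmm_operation() are elided and the handling of "started"
is abridged):

static int imx_rproc_sm_lmm_check(struct rproc *rproc, bool started)
{
	struct imx_rproc *priv = rproc->priv;
	const struct imx_rproc_dcfg *dcfg = priv->dcfg;
	struct device *dev = priv->dev;
	int ret;

	/* Do the fallible SCMI call first... */
	ret = scmi_imx_lmm_operation(/* ... */);
	if (ret) {
		dev_err(dev, "shutdown lmm(%d) failed: %d\n", dcfg->lmid, ret);
		return ret;
	}

	/* ...and only set the flag once the call has succeeded. */
	priv->flags |= IMX_RPROC_FLAGS_SM_LMM_AVAIL;

	return 0;
}

With the call first, a failure simply returns before the flag is
touched, so there is nothing to undo.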
Thanks,
Peng
>
>Thanks,
>Mathieu
>
>> +
>> +	return 0;
>> +}
>> +
>> +static int imx_rproc_sm_detect_mode(struct rproc *rproc)
>> +{
>> +	struct imx_rproc *priv = rproc->priv;
>> +	const struct imx_rproc_dcfg *dcfg = priv->dcfg;
>> +	struct device *dev = priv->dev;
>> +	struct scmi_imx_lmm_info info;
>> +	bool started = false;
>> +	int ret;
>> +
>> +	ret = scmi_imx_cpu_started(dcfg->cpuid, &started);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to detect cpu(%d) status: %d\n", dcfg->cpuid, ret);
>> +		return ret;
>> +	}
>> +
>> +	if (started)
>> +		priv->rproc->state = RPROC_DETACHED;
>> +
>> +	/* Get current Linux Logical Machine ID */
>> +	ret = scmi_imx_lmm_info(LMM_ID_DISCOVER, &info);
>> +	if (ret) {
>> +		dev_err(dev, "Failed to get current LMM ID err: %d\n", ret);
>> +		return ret;
>> +	}
>> +
>> +	/*
>> +	 * Check whether rproc is in the same LM as the host core (running Linux).
>> +	 * If yes, use the CPU protocol API to manage rproc.
>> +	 * If no, use the Logical Machine API to manage rproc.
>> +	 */
>> +	if (dcfg->lmid == info.lmid) {
>> +		priv->ops = &imx_rproc_ops_sm_cpu;
>> +		dev_info(dev, "Using CPU Protocol OPS\n");
>> +		return 0;
>> +	}
>> +
>> +	priv->ops = &imx_rproc_ops_sm_lmm;
>> +	dev_info(dev, "Using LMM Protocol OPS\n");
>> +
>> +	return imx_rproc_sm_lmm_check(rproc, started);
>> +}
>> +
>> static int imx_rproc_detect_mode(struct imx_rproc *priv)
>> {
>> /*
>> diff --git a/drivers/remoteproc/imx_rproc.h b/drivers/remoteproc/imx_rproc.h
>> index 37417568a0ade2ae4d6a4e3d0f139ea52b185254..d37e6f90548cec727b4aeb874680b42af85bdbb4 100644
>> --- a/drivers/remoteproc/imx_rproc.h
>> +++ b/drivers/remoteproc/imx_rproc.h
>> @@ -38,6 +38,9 @@ struct imx_rproc_dcfg {
>>  	size_t att_size;
>>  	u32 flags;
>>  	const struct imx_rproc_plat_ops *ops;
>> +	/* For System Manager (SM) based SoCs */
>> +	u32 cpuid; /* ID of the remote core */
>> +	u32 lmid;  /* ID of the Logical Machine */
>>  };
>>
>> #endif /* _IMX_RPROC_H */
>>
>> --
>> 2.37.1
>>