From: Bjorn Andersson <andersson@kernel.org>
To: Maria Yu <quic_aiquny@quicinc.com>
Cc: mathieu.poirier@linaro.org, arnaud.pouliquen@foss.st.com,
linux-arm-msm@vger.kernel.org, linux-remoteproc@vger.kernel.org,
quic_clew@quicinc.com
Subject: Re: [PATCH v5 2/2] remoteproc: core: change to ordered workqueue for crash handler
Date: Fri, 2 Dec 2022 12:16:02 -0600
Message-ID: <20221202181602.sg2pbgl5br2hw2rh@builder.lan>
In-Reply-To: <20221202094532.2925-3-quic_aiquny@quicinc.com>

On Fri, Dec 02, 2022 at 05:45:32PM +0800, Maria Yu wrote:
> Only the first detected crash needs to be handled, so change to an
> ordered workqueue to avoid unnecessary multiple active works at the
> same time.

In cab8300b5621 ("remoteproc: Use unbounded workqueue for recovery
work") Mukesh specifically said that it was required that multiple
remoteproc instances should be allowed to recover concurrently.

Is this no longer the case? Or am I perhaps misunderstanding the
nuances of the different work queue models?
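
For my own clarity, here is a minimal sketch of how I read the two
models. It is not from the patch; the queue names, the init function
and the comments are mine, so treat them as assumptions about the
intended behaviour rather than the driver code:

#include <linux/init.h>
#include <linux/workqueue.h>

/*
 * Unbound queue: work items from different remoteproc instances may
 * execute concurrently on separate worker threads.
 */
static struct workqueue_struct *concurrent_wq;

/*
 * Ordered queue: an unbound queue with max_active = 1, so queued work
 * items run strictly one at a time, in queueing order, across all
 * users of the queue.
 */
static struct workqueue_struct *serialized_wq;

static int __init wq_model_sketch_init(void)
{
	concurrent_wq = alloc_workqueue("sketch_concurrent_wq",
					WQ_UNBOUND | WQ_FREEZABLE, 0);
	serialized_wq = alloc_ordered_workqueue("sketch_ordered_wq",
						WQ_FREEZABLE);
	if (!concurrent_wq || !serialized_wq)
		return -ENOMEM;

	return 0;
}
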
> This will reduce the pm_relax unnecessary concurrency.

I'm not sure I understand this sentence unless I remove the word
"pm_relax". Was it added by mistake?

If so, is the support for concurrent recovery really unnecessary?
I know we have cases where we spend time in the recovery process just
waiting for things to happen, so allowing recovery to run concurrently
between instances sounds like a good idea.
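
To make that concern concrete, here is a rough sketch of two instances
crashing around the same time. The instance names, the handler and the
helper function are hypothetical, purely for illustration:

#include <linux/workqueue.h>

/* Hypothetical per-instance recovery works, for illustration only. */
static struct work_struct adsp_recovery_work;
static struct work_struct modem_recovery_work;

static void recovery_fn(struct work_struct *work)
{
	/* stop, reload firmware, restart the remote processor ... */
}

static void report_two_crashes(struct workqueue_struct *recovery_wq)
{
	INIT_WORK(&adsp_recovery_work, recovery_fn);
	INIT_WORK(&modem_recovery_work, recovery_fn);

	/* Both instances crash close together and queue their work. */
	queue_work(recovery_wq, &adsp_recovery_work);
	queue_work(recovery_wq, &modem_recovery_work);

	/*
	 * On the current WQ_UNBOUND queue the two handlers may run in
	 * parallel; on an ordered queue the modem recovery cannot start
	 * until the adsp recovery work has returned, even if that work
	 * is only sleeping while waiting for firmware loading.
	 */
}

Within a single instance, duplicate crash reports are already collapsed,
since queue_work() does nothing while the previous recovery work is
still pending, so the ordering change would only affect works queued by
different instances.
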
Regards,
Bjorn
>
> Signed-off-by: Maria Yu <quic_aiquny@quicinc.com>
> ---
> drivers/remoteproc/remoteproc_core.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index c2d0af048c69..4b973eea10bb 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -2728,8 +2728,8 @@ static void __exit rproc_exit_panic(void)
>
>  static int __init remoteproc_init(void)
>  {
> -	rproc_recovery_wq = alloc_workqueue("rproc_recovery_wq",
> -					    WQ_UNBOUND | WQ_FREEZABLE, 0);
> +	rproc_recovery_wq = alloc_ordered_workqueue("rproc_recovery_wq",
> +						    WQ_FREEZABLE, 0);
>  	if (!rproc_recovery_wq) {
>  		pr_err("remoteproc: creation of rproc_recovery_wq failed\n");
>  		return -ENOMEM;
> --
> 2.17.1
>